Neuroscience and Technology in National Security, Intelligence and Defense: The Right Measure of Knowledge…The Right Measure of Action.

I keenly read recent posts on the NeuroLaw Blog and the Manitoban, and I greatly appreciate these groups’ interest in, and illustration of, our work.  Our intent is not to advocate the use of a particular neuroscientific technique or technology (neuroS/T) in national security, intelligence and defense (NSID), but rather to illustrate that any and all ethical address and analysis must begin with and proceed from fact(s).  The fact is that neuroS/T, like any science and technology (S/T), can and will be used in service of NSID agendas, not only by the United States, but by other nation-states and individual actors whose aims are not aligned with the values of the USA and its allies. This finding arises from our ongoing work in surveillance of the field and the way(s) that neuroS/T is – and potentially can be – used and misused.  It is from this reality that we promote a stance of preparedness and an ethic of responsible intent and action.

Fundamental to this stance is an ongoing commitment to studying the ways that neuroS/T can be employed in NSID; but we emphasize the need for an equally strong commitment to invest in ethico-legal studies that are explicitly intended and designed to address the realities of this field, its contingencies and consequences, and, from such authentic representation and analyses, to develop guidelines, direct engagement, and inform policy and laws that govern the conduct, scope and tenor of research and its applications (both domestically, and on the world stage).  That’s quite an effort, and may amount to effecting a sea-change, but given the trend toward revivifying US “Big Science” incentives, inclusive of the brain sciences, and the increasing advancement of neuroS/T worldwide, we believe that such investment of time, effort and funding is important, necessary and urgent. And we call for more than “drop in the bucket” fiscal support of such efforts, if a sea-change is to occur.

In many recent blogs, our work is juxtaposed with that of Dr Curtis Bell, and I think that this juxtaposition does not accurately represent our respective positions.  I applaud Dr Bell’s efforts, and each and all of those neuroscientists who commit to a petition that expresses their moral compass and convictions.  I wholeheartedly support Dr Bell and those signatories whose actions evidence their perspectives and commitment, and I urge strong moral prudence in the use of any and all neuroS/T in NSID (and in any other applications in the public domain). Where Dr Bell and I differ is not in our mutual call for moral and ethical probity, but rather in the type and nature of the actions assumed to uphold this position. Dr Bell has stated that “…it’s not enough just to study the issue of ethics.” He’s right; and I argue against any exercises of mental onanism and ethical navel-gazing. To paraphrase Hannah Arendt, mere reflection without action is an empty endeavour and a waste of time. But frank and critical reflection must precede action, if action is to be apt, balanced and judicious. Thus, I call for some – but certainly not all – neuroscientists and neuroethicists to be actively involved in the discussion and debate as informed, experienced experts at those tables where guidelines and policies are made; to work proactively to provide lenses and voices that report what neuroscience can and cannot do; and to participate in the formulation of directives that shape and govern the ways that neuroS/T should – and should not – be utilized.

I worry that there will not be adequate or sufficient representation of the community at the discussion and decision tables, and that the guidance of neuroS/T will thus be less than completely and/or accurately informed, both as regards the science and its ethico-legal and social implications. If a researcher does not want to be involved in NSID studies, it is easy to abstain – simply do not respond to RFPs.  However, if a neuroscientist does not wish her/his work to be used or even misused, then what… don’t publish? Nonsense, of course; the reality is that the days of ivory-tower restriction and siloing are long gone, given the advent of online publishing, open access, creative commons, and internet dissemination of information.  Thus, results are “out there” and rapidly and easily available, and we, as neuroscientists and neuroethicists, cannot passively dictate by whom, how, and in what ways the information we release is used, misused, or even frankly abused for ends that are inconsistent with the aims and ends of our intent. Perhaps more than ever before, this openness of scientific knowledge, techniques and technologies demands a greater, proactive, and ongoing responsibility of scientists (and ethicists) in steering the ways that science – and ethico-legal issues and frameworks – are communicated, regarded, and appropriated.

So then, how are we, as a community, to effect a difference and engage in the steerage and possible trajectories of our work?  It is in this light – and in both 1) acknowledgment of the (scary) fact that neuroscientific research and its outcomes and products can be, and are being, viewed by various groups with nefarious intent, and 2) recognition of our need to assume a moral high ground, both as a community of scientists and ethicists and as representatives of our field(s) as a public good for the public good – that I advocate participatory engagement of neuroscientists and neuroethicists in the discourse.  Dr Bell and I do not disagree; our respective input offers complementary flavours that provide necessary balance to a provocative brew of science, ethics and public involvement. I envision this not as an “either/or”, but as a “both/and” set of approaches, which I think are needed so as to be cooperatively and reciprocally supportive of technically sound, and morally strict, direction of neuroS/T research and use.

Until we can actively assert a consensus on what neuroscience has shown, and what it means for the human condition, human ecology and human predicament, we are wise to be attentive to the realities of that ecology and predicament, and to be prepared for the ways that the knowledge and tools of this field can be employed to exert power over others. Michel Foucault saw this potential, as did Hans Lenk, Hans Jonas, Jürgen Habermas, Martin Heidegger, and Herbert Marcuse, and we should not ignore their insight and wisdom, but rather must take their erudition as a call to actively enter, engage and affect the scope, tenor and trajectory of the discourse and its outcomes.

I am fond of using the Thomistic perspective on the Aristotelian notion of “practical wisdom”, which asserts: Recta ratio speculabilium; recta ratio agibilium – “the right measure of knowledge to compel the right measure of action”.  Indeed, human conflict, the capacity for cruelty, and the use of the newest knowledge and capabilities in conflict and war are written in our history.  Aristotle recognized this, as did Aquinas, Foucault, Jonas, and many others.  We do as well. It is in this recognition that we invoke neuroethics in its two traditions as a human endeavour for human endeavour, and hope that what we learn about the brain, morality and human ecology can be leveraged in ways that afford practical wisdom from our bio-psychosocial past and present to guide our future. It is to these ends that our group remains dedicated and involved, and to which we invite those in the community of neuroscientists and neuroethicists with similar interests to participate, under the guidance of their own moral compass.

The Use of Neuroscience and Neurotechnology (NeuroS/T) in Interrogations and Questioning: Part 2 – Developing Ethico-legal Objectives, Frameworks, and Proposed Criteria for Studying and Using NeuroS/T in National Security Settings

In last week’s blog, I described the work that Rhiannon Bower and I are doing to examine the possibility, potential, and problems of using neuroscience and technology (neuroS/T) in military and legal interrogation(s). Clearly, there are a number of issues, problems and concerns that jump to the fore, but at this point we’re focusing upon the following two.  First is whether such neuroS/T is mature enough to be used in these ways.  Verdict: Perhaps some are (e.g., particular neuropharmacologic agents); many are not (e.g., neuroimaging and transcranial magnetic stimulation). This will require both further assessment of the viability of extant neurotechnologies, and ongoing identification and analyses of gaps in information, capability and administrative structures to provide oversight of these current and emerging tools and techniques. This latter point brings us to the second area of interest, namely whether ethico-legal systems are in place that are realistic and mature enough to guide, direct and govern such possible use and/or non-use. Verdict: They are not; at least not to the extent that we believe necessary and sufficient to address and account for the contingencies spawned by rapid advancement in neuroS/T and the pull exerted upon its use and employment by a variety of market, political and social forces. This is where the proverbial “rubber hits the road” as regards the ways that pragmatic evaluations of the capabilities and limitations of neuroS/T are translated into practical parameters for the ways that these approaches can, should, and/or should not be utilized.

In light of this, we are attempting to develop algorithmic protocols for studying and using neuroS/T that:

  1. Reflect and substantiate technical rectitude.
  2. Reflect appropriate moral analyses of use and outcomes.
  3. Afford ethico-legal bases to guide/direct both the use of neuroS/T and its outcomes within extant judicial frameworks and guidelines.
  4. Engage technical and ethical concepts to revise/develop pertinent laws to ethico-legally govern any use of neuroscience and neurotechnology in such circumstances.

From these studies, we are developing a proposed set of criteria for using neurotechnologies in national security settings. These tentative criteria (sketched schematically after the footnote below) include:

  1. That use of the neurotechnology in question incurs less harm to the subject than other available interrogation methods.
  2. If an individual poses a realistic and immediate threat of severe harm to others, the most effective technology – and least harmful among these – should be utilized.
  3. The use of such neurotechnologies must be admissible in a court of law under Daubert (i.e., reliability) rather than merely Frye (i.e., general acceptance) standards.*
  4. If neurotechnology is used, only information pertinent to an ongoing investigation should be obtained, and this should be stored in official police and/or government records.
  5. There must be other corroborating evidence to substantiate prosecution (outside of evidence gathered by neurotechnologies) as is necessary based upon maturity and reliability of techniques (see 3).
  6. A court order must be issued to authorize the use of neurotechnologies in these circumstances (see 2 and 3).
  7. Applying these technologies in a preventive or predictive manner remains practically problematic; such applications should not be implemented until the techniques are further developed and adequate ethical frameworks are addressed/generated.

* We are also examining other ethico-legal frameworks and standards to enable a more internationally relevant approach to using neuroS/T in such ways (see Caroline Rödiger: Neurolaw: Hype or Reality?).
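
To make the logical structure of these tentative criteria easier to inspect, here is a minimal sketch – offered purely for illustration – that reads them as a conjunctive checklist, written in Python. All class, field, and function names are hypothetical assumptions made for this sketch; nothing below represents an implemented protocol, legal test, or operational tool.

    # Illustrative sketch only: names and fields are assumptions,
    # not an implemented protocol or legal standard.
    from dataclasses import dataclass

    @dataclass
    class ProposedUse:
        less_harm_than_other_methods: bool         # criterion 1
        immediate_severe_threat: bool              # criterion 2
        meets_daubert_reliability: bool            # criterion 3
        collection_limited_to_investigation: bool  # criterion 4
        corroborating_evidence_available: bool     # criterion 5
        court_order_issued: bool                   # criterion 6
        preventive_or_predictive_use: bool         # criterion 7 (disqualifying)

    def satisfies_tentative_criteria(case: ProposedUse) -> bool:
        """Read the criteria conjunctively: every condition must hold,
        and preventive/predictive use is excluded outright."""
        return (
            case.less_harm_than_other_methods
            and case.immediate_severe_threat
            and case.meets_daubert_reliability
            and case.collection_limited_to_investigation
            and case.corroborating_evidence_available
            and case.court_order_issued
            and not case.preventive_or_predictive_use
        )

Reading the criteria this way makes plain that they are intended to be jointly necessary rather than individually sufficient, and that the seventh criterion functions as a categorical exclusion rather than a weighted factor.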

We are also working to develop policy recommendations that are aimed at supporting fiscal investment in building sustainable infrastructures that:

  1. Engage research to evaluate if and how neuroscience and neurotechnology could be used in NSID.
  2. Develop a stance of preparedness with respect to the potential military and law enforcement uses of/for neuroS/T.
  3. Establish multi-disciplinary bodies to formulate ethico-legal guidelines and protocols to monitor/oversee/regulate the use of neurotechnologies both in the US, and internationally.

The goal of this enterprise is not to be merely dismissive or proscriptive, but rather to be critically perceptive, and keen to the potential for innovation and the viable ways that neuroscience and neurotechnology could be developed, used and/or misused, to what ends, and by whom.


Asking Important Questions About the Use of Neuroscience and Technology in Interrogations and Questioning

The axiomatic goal of national security is the protection of the population. Toward this end, knowledge of real and potential threats is crucial both to preventing events that place the population at risk, and to squelching events before they escalate into scenarios of large-scale harm. Intelligence is a vital part of any national security agenda, and accurate information is the key to successful intelligence. Since interrogation is often essential to acquiring accurate information, and in light of expressed difficulties in interrogating subjects in the Global War on Terror (GWOT), there is increasing interest in – and concern about – developing and using neuroscience and neurotechnologies (neuroS/T) to enable more effective interrogation in domestic and international settings, both of which may present complex cross-cultural issues and problems.

Techniques and technologies that have been identified as having possible utility for obtaining information that could be important to intelligence efforts include:

1) a variety of neuropharmacologic agents, including substances that induce feelings of affiliation (such as the neurohormone oxytocin), mood-altering drugs (such as anti-anxiety agents and dopamine transport inhibitors), and drugs that produce a state of elation or euphoria (such as some of the opiates, and amphetamines such as methylenedioxymethamphetamine – MDMA – commonly referred to as ‘ecstasy’); and

2) neurotechnologic devices and approaches, such as certain types of neuroimaging, and forms of magnetic and/or electrical nerve and brain stimulation (such as TMS).

Emerging technologies have historically and predictably been used in military and national security programs, and neuroscience and its technologies inarguably represent cutting-edge developments that have viable potential in such security agendas. Therefore, dismissing the possible employment of neuroS/T – based upon fear of misuse and/or ethical qualms – does not reflect historical precedent, and may be unrealistic.  But before we go any further down this road, let me state three important premises: First, it’s likely that sooner or later (and I’d wager on sooner) neuroS/T will be used in interrogations. Second, neuroS/T, like any scientific approach and tool, has potential for misuse or frank harm, and so identifying the nervous system and brain as target sites through which to leverage information – including by the infliction of pain and suffering – is a reality that must be faced. Third, it’s probable that other individuals and/or groups are also thinking about these very same ideas, and these folks may not be friendly to the US and its allies.

From these premises, let me offer a three-fold stance: First, a realistic perspective on these possibilities is necessary, not least of which is acknowledgement of the actual capabilities and limitations of the neuroS/T used – and of the ethico-legal issues generated by apt or inapt use, or blatant abuse. Second, we need to avoid the so-called fallacy of two wrongs, and not ‘do something’ (or at least not do something cavalierly, or without appropriate reflection and regard) just because “…everybody else might”. Third, we do, however, need to be prepared for the contingencies and realities of such uses of neurotechnology, but must do so in ways that are also ethico-legally sound.

Working in our group, Rhiannon Bower is addressing these issues and problems, and has posed the following questions that are important to defining and shaping the conduct of neuroS/T research and use in these ways.

  • Is there some “sanctity of mind” that negates the use of such approaches, regardless of how suspect an individual may be?
  • Or, are there particular circumstances under which certain “advanced neuroscientific methods” may be employed to obtain information from individuals who may be aware of – or pose – significant danger to the populace?
  • Does the use of neuroS/T incur greater or lesser risk and harms than other interrogation methods?
  • Are there limits to the ways that neuroS/T should be used in such situations, and if so, how should such criteria be developed and enforced?

Of course, there are claims that neuroscientific methods should not be employed in interrogation – or in national security agendas at all – because of the potential for misuse, and/or the view that using neuroS/T in these ways would incur violations of inherent human rights that the US has vowed to protect.  We recognize and respect the validity of such claims, and in light of this, Bower has envisioned three possible options:

  1. Abstaining from implementing neuroS/T in any/all national security agendas and situations.
  2. Utilizing neuroS/T in only specific situations/conditions that would dictate – and ethico-legally justify – the need for this level of intervention.
  3. Making (appropriate) neuroS/T approaches available and employable in all national security endeavors, including interrogations, in accordance with defined ethico-legal parameters.

When considering these options, it is important to bear in mind that the appointed goal of interrogation for US national defense is not to cause harm without purpose, but rather, to uphold and protect the rights of the greater population (namely the right to life). But, as history has shown, law enforcement and military authority can be misappropriated and abused, and these possibilities must be taken into account and mitigated.

Thus, we are focusing upon:

1) Whether to base ethical decisions upon the spirit of the law, which might allow such uses of technology, or whether such approaches would be considered so morally problematic that it would be preferable to ban the development and implementation of these techniques and technologies altogether.

2) Whether guidance and governance should entail a neuroethics of military operations – or a military ethics applied to the use of neuroscience and neurotechnology.

3) Whether some (extant or new) combination of both approaches might need to be addressed and articulated, and what such a set of ethico-legal parameters would obtain and entail.

These are deep and broad questions, and we’re working to flesh out the trajectories and scenarios that each might foster, and from these possibilities to develop answers that are meaningful to national security agendas and the public at large.

In the main, we advocate that neuroS/T continue to be studied for its potential viability – specifically, to decrease harm during interrogations deemed necessary in national security, intelligence, and defense operations.

But we advocate sensitivity to what we call “footfall effects”: namely, that it’s not a question of impeding the momentum or even the pace of forward progress (because that may be difficult, if not impossible, to do); rather it’s a question of where each forward step falls so as to tread wisely with appropriate lightness, and remain upright and balanced on one’s feet…both in the course of usual events, and if pushed or stricken.

Next week: Focus, objectives and proposed criteria for studying and using neuroS/T in interrogations and other national security settings

To my readers:

I’ve been a bit out of the loop and offline for a few weeks, as I was in the midst of setting up my new “home away from home” as Fulbright Visiting Professor of Neuroscience, Neurotechnology, and Ethics at the Human Sciences Center of the Ludwig Maximilians Universität – working both in Munich and at their Peter Schilffarth Institute in Bad Tölz, Germany – where I’ll be through the end of February 2012.

I’ll be working with colleagues here to address some interesting developments in adaptive assistive neurotechnologies, integrative neurosciences, and the neuroethical issues they foster, and will be lecturing here at the Uni, as well as in Berlin and Bonn.

Thanks for your patience while I was setting up shop (and our new apartment), getting into the swing of my lecture schedule, empirical studies and meetings with students, and getting my brain used to flipping back and forth between German and English throughout the day.

I’ll get back to blogging later this week or early next.  ’til then… Prosit und herzliche Glückwünsche aus Bayern! (Cheers and best wishes from Bavaria!)

“G”

Neurosecurity: Definition, Scope, and Potential

Neurosecurity can be defined as studies and applications of:

(a) the concepts, practices, guidelines and policies dedicated to (i) identifying socio-political and military threats to neuro-psychiatric information and function, and (ii) preserving the integrity of both neuro-psychiatric information and neuro-psychiatric function of persons, groups and populations

and

(b) neuroscientific techniques and neurotechnologies to affect, manipulate and/or control neurological structures and/or functions of individuals, groups and/or populations in the service of national defense, and/or military objectives.

As history illustrates, new developments in science have had – and will continue to have – particular appeal for use in security and defense agendas, and this is certainly the case for neuroscience and its related (neuro)technologies – i.e., neuroS&T.

A 2008 report by the ad hoc Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Science Research in the Next Two Decades of the National Research Council of the National Academy of Sciences, entitled “Emerging Cognitive Neuroscience and Related Technologies”, addressed the state of neuroscience as relevant to the (1) potential utility for defense and intelligence applications, (2) pace of progress, (3) present limitations, and (4) threat value of such science. Stating that “…military and intelligence planners are uncertain about the likely scale, scope, and timing of advances in neurophysiological research and technologies that might affect future U.S. warfighting capabilities,” the Committee essentially defined the state of the field in its assertion that “…for good or for ill, an ability to better understand the capabilities of the body and brain will require new research that could be exploited for gathering intelligence, military operations, information management, public safety and forensics.”

The brain and its functional activities of cognitions, emotions, and behaviors – which, when taken together, can be considered “mind” – represent both a new frontier in scientific research and a viable target-of-opportunity for the employment of science and technology to affect and manipulate these functions.  Both of these domains are laden with ethical, legal and social issues, questions, and problems, and many of these stem from the miscommunication and misappropriation of neuroscientific information, and/or unrealistic assessments of neurotechnological capabilities.

As the 2008 Committee report noted, there is a fair amount of “… pseudoscientific information and journalistic oversimplification related to cognitive neuroscience,” and so any consideration of the possible use of neuroS&T for national security, intelligence and defense (NSID) would need to parse facts from the fiction about what these approaches actually can and cannot do. The goal is not to be merely dismissive, but rather to be critically perceptive, and keen to the potential for innovation and viable ways that neuroS&T could be developed, used and/or misused, to what ends, and by whom.

Simply put, the brain and nervous system can – and will – be engaged to effect outcomes relevant to NSID operations, and some of these efforts will most certainly be undertaken by countries other than the United States and its allies. Thus, it’s crucial to remain keenly aware of international research programs that could be used in ways that pose obvious threat(s) to security and defense. Obviously, surveillance of international research, development, testing, and evaluation (RDTE) is necessary, but it’s not sufficient to guard against such potentially negative and harmful uses of neuroS&T. Instead, it would be practically wise to develop a stance of national/public protection that is based upon preparation, resilience, and in some cases intervention to prevent the advancement of certain RDTE trajectories.

Such an agenda will require the coordinated efforts of scientists, engineers, ethicists, sociologists, futurists, and the public (although issues about the pros and cons of transparency of governmental research then come to the fore), and should conjoin the academic, corporate and governmental sectors (the so-called ‘triple helix’ of the scientific estate) in this enterprise. This is not new; we need only look at the Manhattan Project and the ‘Space Race’ for examples of this estate in practice.  But that framework, while viable, may need some ‘tweaking’ to enable a more convergent approach that allows for stronger collaboration between the sciences and the humanities. I argue that this involvement of both the humanities and the public (at least to some reasonable extent) is important because any real effect – both domestically and internationally – can only be leveraged through guidelines, laws and policies that are sensitive to ethical and social effects, issues, and problems. But international policies don’t guarantee cooperation.

So any meaningful efforts in neurosecurity must sustain an active research program – both to delve into the potential capabilities and limitations of neuroS&T, and to enable ongoing analysis and evaluation of possible future applications of this science and technology in order to empower ethical decisions and actions that can – hopefully – prevent the occurrence of risks and harms.