This article originally appeared on Global Policy Journal.
Nayef Al-Rodhan explores the implications of the development and acquisition of human enhancement technologies for military purposes.
Human enhancement technologies are expanding the frontiers of biotechnology and changing the nature of warfare, international relations and geopolitics.
Human enhancement refers to the suite of techniques that alter the human body beyond its normal healthy state. While ‘therapy’ is meant to ‘fix’ or ‘heal’ something damaged, enhancement technologies aim to stimulate and augment the human body beyond its natural capacities. Some of the possibilities on the near horizon, such as “personality pills”, super-intelligent machines or gene therapy to block normal aging, come with extremely disruptive side effects.
As is frequently the case with technological innovation, the origins of enhancement technologies are closely linked to military research. Soldiers equipped with devices for increased muscle strength, better pain management or extra-alertness make ideal combatants. Yet whilst administering pills that enable stress resistance or erase post-traumatic stress might seem like ideal quick fixes, they raise profound ethical and security concerns.
In their most extreme form, such techniques could push us beyond what it means to be human, effectively bringing us to the brink of transhumanism. Transhumanism challenges the very notion of the human condition as fixed, rooted and constant. Interventions to improve our bodies, modify our pleasure centres, eradicate pathogenic conditions, enhance cognitive functions or extend life will eventually alter emotions (e.g. fear) that are the result of millennia of evolution.
The rise of the super soldier – at any cost
The search for performance optimization via human enhancement in the military is not new. Stimulant drugs have been used in the army for decades. For instance, amphetamine, a synthetic drug which boosts the neurotransmitters adrenalin and noradrenalin, became widely available to US troops in the 1960s for its effects in enhancing alertness and physical endurance. More recently, in an effort to find safer alternatives, the military has switched to modafinil, a drug first used by US troops during the 2003 invasion of Iraq. Modafinil acts as a psycho-stimulant, enhancing vigilance and overall cognitive and physical performance even in sleep-deprived individuals. It is estimated that the UK Ministry of Defence purchased 24,000 modafinil tablets in 2004.
Apart from the use of such enhancements in the military, we are increasingly witnessing the rise of technologies that can alter human biology irreversibly, especially by incorporating technology within the human body. Such technologies are radically different from those of previous eras, as they are much more invasive and potentially irreversible, marking a new phase in the quest to create super soldiers.
The US Defense Advanced Research Projects Agency (DARPA) is now at the forefront of developing enhancement technologies. In the early 1990s, DARPA acquired an avid interest in biology. DARPA’s turn to biotechnology and biomimetics (drawing inspiration from nature, the animal world and metabolic flexibility) is now well on track and garnering growing federal funding. For the fiscal year 2015, DARPA’s proposed budget request was $2.915 billion, a steady increase from previous years.
The projects for human augmentation resulted from the recognition that, even with the most sophisticated weapons, war remains dependent on soldiers who are subject to physical, cognitive, or psychological vulnerabilities. This sentiment was openly expressed by the Agency, which stated that the human being was “the weakest link in Defense systems”.
Techno-integration became critical to achieving this purpose. This requires creating a symbiotic coupling between men and machines in order to enhance physical and cognitive fitness. For a long time this mostly concerned restorative medicine, but more recent advances in neural integration raise the real possibility that the peripheral nervous system could be coupled with advanced technology via a simple plug. An extreme form of invasive technology currently being explored is a micro-processing chip which can be implanted beneath the skull and manipulated remotely.
Experiments with so-called “non-invasive brain stimulation” at the US Air Force Research Lab, made public in early 2014, tried a new technique to keep soldiers awake and alert with electric shocks. The results were promising: the electro-stimulation tested much better than the mere use of caffeine. The doses of electrical current were carefully controlled and succeeded in making soldiers wide-awake, refreshed and alert for as long as 30 hours. Although still at an experimental stage, the initiative suggests that the end of military effectiveness will be used to justify whatever scientific means are available.
The basics of neuro-stimulation now allow us to employ methods to boost our ability to learn, pay attention to the environment, recall information, take risks or exercise self-control. The knowledge we already have of the frontal cortex permits us to understand how to influence cognitive processes. Two major approaches are Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (tDCS); the latter is already in use by the US military to improve the performance of drone pilots. However, scientists caution that TMS and tDCS can produce many unintended effects. The military’s ambitions could soon catch up with neuro-stimulation technology to the extent that soldiers’ reactions, responsiveness and emotionality could be pre-programmed with precision. They could become faster, more agile, more alert, more receptive, quicker learners, more disciplined, more docile or, if needed, less empathetic.
Other projects pursued by DARPA in partnership with various universities across the United States include programs such as: “Accelerated Learning”, “Crystalline Cellulose Conversion to Glucose” (enabling humans to eat grass and other non-digestible plants), “Human-aided Optical Recognition” (neuro-optical binoculars to detect threats), “RealNose” (extra sensors to detect chemicals as accurately as a dog) and “Z-Man” (allowing humans to climb up walls like lizards).
While DARPA officially claims these projects are without invasive mental or physical effects, controversies abound and many questions about their long-lasting implications remain open.
Human nature
Human nature is frail, vulnerable and less adaptable than that of other species. It is therefore not surprising that DARPA would explicitly defend human enhancement projects based on a pragmatic calculation of cost, time and military effectiveness: “the idea is not simply to replace people with machines, but to team people with robots to create a more capable, agile and cost-effective force that lowers the risks of US casualties.”
Implications for international relations and geopolitics
With these developments, questions of law, international competition and ethics will become more prominent as both states and societies will have to respond to these technologies and their risks of spinning out of control.
Enhancement raises many ethical ‘red flags’: how far will the imperative of “military necessity” go in justifying biotechnological enhancement that would otherwise be considered unacceptable? Could soldiers become dehumanized tools, coerced into whatever is necessary to wage war? Are safety considerations taken into account, and are norms of ethical medical conduct extended to all enhancement technologies? Moreover, it will be critical to explore whether enhancement is reversible and to what extent a transhumanist soldier can switch back to the ‘pre-enhanced’ state.
Considerations of risks from enhancement and transhumanism have been largely absent from the military, but it is high time the military gave more consideration to the ethical aspects of enhancing soldiers. These should cover both the long-term consequences for soldiers’ health and the inequalities created between enhanced and non-enhanced soldiers, since enhanced soldiers might eventually need to be treated differently from average, non-enhanced soldiers. Questions of responsibility will ensue as well. Should an enhanced soldier run out of control, who will be accountable: the soldier, or the engineers and medical teams that enhanced him?
Pressure could soon mount for the US to have an ethical review of its enhancement programs, an expectation that is easier to foresee in a country where demands for accountability can be consequential even in an institution as secretive as the Army. However, this might not be the case everywhere, which brings the need for global discussions and standard-setting for enhancement technologies.
Human enhancement will be disruptive for the entire military establishment and have far-reaching international relations and geopolitical consequences. At a unit level, war-fighters might be enhanced differently, or selectively, thus creating a class of enhanced vs. “normal” soldiers. This will affect morale and unit cohesion drastically, potentially causing resentment in some and a false sense of entitlement in others. Such asymmetry of capabilities will also be reflected in international competition and international law, where countries benefiting from advanced enhancement technologies will possess an advantage over those that continue relying on non-enhanced soldiers.
In the nearer future, the implications of human enhancement for international relations could provoke reactions similar to those prompted by the extensive use of drones by the United States. While one country might regard enhancement as justifiable, appropriate and defensible, others could perceive it as an unjust use of capabilities. This will further exacerbate the sense of illegitimacy in war and of disproportionate material and human loss.
At the same time, and as was the case with other technologies, it is not improbable that a race to develop and acquire human enhancement technologies will emerge among many other countries in the coming decades, further complicating international conflict resolution, codes of conduct and international law. In addition, given the potential effects of these technologies on emotions, remorselessness and physical power (for instance through the use of powered exoskeletons), it should be expected that the level of brutality in warfare could increase significantly, complicating the implementation of international treaties and post-conflict reconciliation and reconstruction efforts.
This article originally appeared on openDemocracy.
Four interlocking elements shape the global system: the neurobiological substrates of human nature (providing a more complex account of human nature) and the persistence of global anarchy, which today coexists with conditions of instant connectivity and interdependence.
In an era of widespread decentralization, formation of regional blocs, and popular uprisings, the role of states will continue to evolve dramatically. While they will without doubt remain pivotal, their nature and the ways in which they deploy power are in profound transition.
In parallel to these developments, the discipline of International Relations can now benefit from a more complex understanding of human nature than what was previously held as perennially true. The role of rationality and egoism, long touted by the Realist school as critical to our understanding of human and state behaviour, has become subject to significant criticism.
Neuroscience has contributed greatly to providing a more nuanced view of humans and their neurochemistry. More circumspect accounts of human nature show that emotionality in fact plays a much more prominent role than previously believed, overturning received conceptions of the foundations of interstate relations. A strong case can be made for the emotionality of states alongside a greater appreciation for the role of emotions in individual thought. These conceptions substantially undermine classical Realism, in which the structure of IR itself was taken to be both zero-sum and analyzable in terms of pure rational self-interest.
Alternatively, the theory of Symbiotic Realism adheres to our best neurobiologically-informed understanding of human nature, and offers the potential for a more collaborative conception of International Relations through the use of just power.
One important tenet of Symbiotic Realism is the acknowledgment that emotional vulnerabilities are shared by all parties, and that these can be orchestrated for good or for ill. While the human nature of classical Realism was fundamentally that of a pure rational egoist, Symbiotic Realism acknowledges the importance of symbiotic relationships in which both parties benefit from their willingness to interact cooperatively and compete in a non-conflictual way.
As such, Symbiotic Realism recognizes four interlocking elements which shape the global system: the neurobiological substrates of human nature (which provide a more complex account of human nature) and the continuing persistence of global anarchy, which today coexists with conditions of instant connectivity and interdependence.
Emotionality, individuals, and states
Neuroscience and advanced brain-scanning technology have helped to elaborate our understanding of human nature in at least two important ways. The first is to downgrade the role of reason in human decision-making, in large part by demonstrating the immensely important role of emotions. The second is to name and characterize aspects of the ego that do not manifest straightforwardly in terms of self-interest or power-seeking. With regard to the first of these, there is growing consensus in both neurological and psychological research that human beings have long overestimated the role of reason in their thoughts. Reason has an important role, but comes into play more rarely than is usually understood, and typically only after emotions have had their say.
The circumstances necessary for reason can best be realized where just power is consistently employed. The term “just power” is defined here as the exercise of power that respects human dignity and international norms, is savvy with regard to current global conditions, and protects the national interest. In these conditions, emotions will inevitably be present and have causal efficacy, but their effects will be accommodated rather than downplayed or ignored. Just power generates stability as well as a wider recognition of the equal availability and legitimacy of this stability.
This consideration does not override the basic tenet of international politics that self-interest is the fundamental attribute of human nature nor the argument about emotionality. This self-interest evolved according to selection pressures in precisely the same ways as all other features of human beings, and these attributes are marked by a strong inclination towards self-preservation. The fundamental nature of these emotions also highlights the importance of group inclusion and a narrative of identity in fully developed human beings. Therefore, these attributes might broadly be construed as egoist in the sense that they are required for individual human flourishing, yet they simultaneously indicate an irreducible interdependence of people which undermines a simplistic conception of self-interested rational actors.
Although states differ in many ways from individuals, it is worth noting that the decisions that inform interstate relations are ultimately in the hands of individual human beings, even in cases of collaborative decision-making. Evidence for the emotionality of states is ubiquitous once we realize that genuine existential threats to states are far less common than challenges to a state’s self-conception. In contemporary events, it is often issues with a state’s self-conception that result in conflict.
For example, the desire for vengeance across generations is very difficult to characterize in terms of (purely) rational actors, but is sufficiently emotionally compelling to motivate some of the world’s longest-standing and most intractable conflicts.
Modern states, power, and sustainability
The game-theoretic interpretation of classical Realism was characterized by a structural situation in which each actor was forced to act egoistically in order to avoid being taken advantage of or defeated by free-riders. Typically these actors were seen as rational, egoistic states, and the zero-sum assumption that underpinned this idea meant that one party’s gain implied another’s loss.
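The structural trap described above can be made concrete with a standard two-player payoff table. The following sketch is a hypothetical prisoner's-dilemma-style illustration with invented payoff numbers, not an example drawn from the article itself:

```python
# Toy two-actor security game: each state chooses to "cooperate" (restrain
# itself) or "defect" (arm / free-ride). Payoff numbers are illustrative.
PAYOFFS = {  # (A's move, B's move) -> (A's payoff, B's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),  # the restrained actor is exploited
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual arming: both are worse off
}

def best_reply(opponent_move: str) -> str:
    """A's payoff-maximizing move given B's move (the game is symmetric)."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Whatever B does, defection pays more for A, so purely egoistic actors
# end at (1, 1) even though (3, 3) was available to both.
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
```

The point of the sketch is structural: each actor's individually rational move leads both to an outcome inferior to mutual cooperation, which is the "forced egoism" the game-theoretic reading of Realism assumes.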
Symbiotic Realism also recognizes the inherent propensity of actors to be egoistic, yet in a more accommodative manner: it implies a wider appreciation for cultural synergy and recognizes the possibility of moving beyond a zero-sum scenario.
Globalization has greatly increased the interdependence between actors in areas such as environmental integrity, the stability of financial markets or the control of nuclear proliferation. This theory remains realist in the sense that it acknowledges an important role for rational self-interest, but Symbiotic Realism is better attuned to the realities of an interdependent world and emphasizes that mutual benefits should be possible in collaborative circumstances.
Cultural borrowing has been a source of great gain for centuries and now the opportunities for such shared benefits are more readily available than ever. Despite the significantly anarchic circumstances of contemporary interstate relations, connectivity and increasing interdependence now ensure that more intercultural exchanges are inevitable, and that problems of governance will arise (and are already arising) that cannot be resolved unilaterally. To put this in a simple scenario: suppose that actor “A” discovers a highly advanced and effective technology for mitigating carbon pollution, while actor “B”, but not “A”, has the resources and infrastructure to implement this technology successfully. In an arrangement in which both A and B make absolute gains (that is, both gain more than they lose if the technology is shared), Symbiotic Realism can overcome the zero-sum limitations of Realism. The pressing policy objective for the future will thus be to create the conditions in which such good-faith arrangements are encouraged and implemented.
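The A/B technology-sharing scenario can be expressed as simple arithmetic. The utility numbers below are hypothetical, chosen only so that both sides gain; they are not taken from the article:

```python
# Hypothetical utilities for the carbon-mitigation scenario: actor A holds
# the technology, actor B holds the implementation capacity.
status_quo = {"A": 10, "B": 10}

# If A shares the technology and B deploys it, suppose A receives a
# licensing benefit while B captures most of the deployment value.
shared = {"A": 10 + 4, "B": 10 + 7}

gains = {actor: shared[actor] - status_quo[actor] for actor in status_quo}

# Absolute gain: every actor ends up better off than before, even though
# B gains more than A. A zero-sum lens cannot represent this outcome,
# since it assumes one party's gain must be the other's loss.
assert all(g > 0 for g in gains.values())
assert sum(gains.values()) > 0  # the interaction created value overall
```

The contrast with a zero-sum reading is that the gains do not need to be equal, only positive for everyone, which is exactly the "absolute gain" condition the article names.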
Just power includes conceptions of “hard,” “soft,” and “smart” power, with additional parameters of respect for human dignity, and a basic guarantee of justice and compliance with international law. These are the necessary conditions for this good faith to become the norm between states. Power conscientiously exercised in this way provides assurances to all the parties in the system and to would-be collaborators that their contributions will not be used unfairly. In order to be sustainable in our radically interdependent world, uses of power must be demonstrably just, as the misuse of power quickly destabilizes interstate relations.
The recent reporting of extensive torture in the name of security, and the violation of international norms, should be examined in exactly this light. Such actions radically undermine the possibility of good faith agreements in the international theatre.
While Realism asserts an almost exclusive focus on the balance of power, with an implicit assumption about the malign intentions of other powers, Symbiotic Realism is more nuanced and points to the inescapable interdependence now predominant in the international system. The new climate of international relations imposes new mechanisms of deploying power. Manifestations of power that uphold robust regard for human dignity and respect for international norms enable the trust that is the sine qua non for mutually beneficial decisions. When such just power is exercised and recognized to be operational, the conditions are created for collaboration and the possibility of absolute gain among actors.
This article originally appeared on Journal of Public Policy Blog.
Studies of human behavior and psychology have received extensive attention in public policy. Economists, social theorists and philosophers have long analyzed the incentives of human actions, decision making, rationality, motivation, and other cognitive processes. More recently, the study of happiness furthered the debate in public policy, as many governments brought up the necessity for new measures of social progress. The discussion was bolstered when the UN passed a critical resolution in July 2011 inviting member countries to measure the happiness of their people as a tool to help guide public policies. It was also hoped that discussions about happiness would serve to refine the wider debate about the UN Sustainable Development Goals for 2015-2030 and the standards for measuring and understanding well-being. The World Happiness Report, a recent initiative, attempts to analyze and rate happiness as an indicator to track social progress.
These recent initiatives serve as reminders that sound public policies must evolve in strong connection with an understanding of human psychology, emotions and the sources of happiness and satisfaction. Nevertheless, there are further invaluable insights from neuroscience that have remained less explored. Contemporary neuroscientific research and an understanding of the predispositions of our neurochemistry challenge classical thought on human nature and inform us of fundamental elements that must accompany good governance.
Nature and Nurture
Are we intrinsically good or bad? Are we born with innate morality or with a blank slate? The question of the original endowments of human beings has intrigued philosophers since at least Plato’s day. The notion of anamnesis, or recollection, is foregrounded in several of the Dialogues, and serves as a kind of digression in a number of others. The notion of innate ideas was subsequently popularized in Western philosophy and reemerged with thinkers as influential as Descartes.
At the other end of the spectrum, John Locke came to be known as the most ardent critic of these concepts, believing that there was no evidence for innate ideas whatsoever. Instead, he advocated a tabula rasa, or blank slate, image of the mind. The Lockean challenge to innate ideas represented a healthy exercise of philosophical parsimony and an important step forward but, at the same time, it led to another dichotomy between innate and acquired aspects of human nature more generally.
This debate, however, missed some crucial insights. While Locke was right to eschew particular innate ideas, his lack of familiarity with evolutionary theory and neuroscience prevented him from grasping aspects of human nature that are inherited and universal and grounded in our shared neurochemistry.
Famously, Locke discredited innate ideas by arguing that logical and mathematical truths, which make the best candidates for innate ideas, are by no means universally accepted. If such ideas were innate, there should be no obstacle to all human beings recognizing their truth immediately. Though he mostly confines his discussion to “children and idiots,” similar themes have been expressed by those who advocate concepts and paradigms of cultural relativity.
Moral notions, in particular, which in contemporary times have been demonstrated to vary significantly from one culture to another, stand as evidence against the innateness of ideas. More generally, Locke intended to prove that there was no principled way to distinguish between innate ideas and those acquired through the process of reasoning (induction or deduction). Since the means to make such a distinction were missing, the defender of innate ideas would have to demonstrate that certain ideas could not have been acquired by reason.
Modern neurological studies have bolstered Locke’s position, proving the plasticity of the brain and hence its susceptibility to influence. What Locke could not appreciate, however, was that the same neurochemistry that allows significant flexibility and makes human beings malleable to their environment also predisposes them in certain basic ways. Our neurochemistry is our lowest common denominator, and this brings a nuanced counterargument to Locke with an appeal to the universality of emotions: because emotions are neurochemically mediated, they are present across cultures as part of our genetic inheritance. This surely does not suggest that specific ideas are universal, too; in that regard, Locke’s thesis remains largely intact.
Contemporary neuroscience does, however, point to an element of human nature that is naturally inherited, overturning the theory of a pure tabula rasa or any theory that resorts entirely to nurture to explain human nature. Moreover, more recent evidence of “genetic memory” also demonstrates the presence of inherited intuitions that we possess at birth. The theory of our inborn “numerosity” explored by neuropsychologist Brian Butterworth further shows how numerical attributes are encoded in the human genome, inherited from our ancestors. Therefore, while distinct notions of right or wrong are largely absent from our genetic endowment, mounting evidence in neuroscience shows that some minimal inborn attributes do exist, and the most common and fundamental manifestation of these is the goal of survival.
Predispositions and Dispositions
Our basic suite of emotions is oriented towards our survival and typically functions at a subconscious level, preempting our idiosyncratic cultural conditioning. At the very minimum, human beings are equipped with a set of basic instincts coded by our genetics, which inevitably and repeatedly guide us toward actions that will ensure our survival (or that we calculate as most beneficial for survival at a specific time).
Emotions have increasingly been studied as important in our decision-making processes and in our construction of principles. Importantly, these emotions are not entirely deterministic with regard to behavior. Rather, the complexity of human behavior results from the interplay between general inherited instincts and factors contingent on our individual existences in certain sociocultural settings. This is a central insight in my theory of a predisposed tabula rasa : our nature is highly malleable and readily “written upon” by experience, but it is also and most powerfully predisposed toward self-preservation. Emotions are at the core of this predisposition. This means that there is a certain fundamental emotional commonality in the predisposition with which we begin our lives.
At the same time, the malleability of our nature ensures that our dispositions will also be profoundly influenced by familial, social, and cultural exposure. This understanding has immediate political implications: given that human beings significantly become what they are as a consequence of their environment and their social contexts, creating conditions of good governance, support and fairness is critical. As I have written before, human beings are not born intrinsically good or bad but rather amoral: their moral compasses will vary and shift (to a large extent) in response to external conditions. In the same vein, the emotions that form part of our inheritance can be appealed to for both good and ill throughout the course of our lives. The demagogue who would rally people toward violence or radical social destabilization is counting precisely on such emotional instincts to override rational thinking. Being cognizant of such vulnerabilities should make us both more vigilant against those who would use our emotional responses and more sympathetic to those acting predominantly and unknowingly out of fear.
Emotionality, Rationality, and Morality
The longstanding dichotomy between innate ideas and a blank slate parallels a related dichotomy between emotions and rationality. From what has already been hinted above, this dichotomy often leads to a distortion and oversimplification of emotions and their role. But, as we acquire a more nuanced appreciation of an inherited set of emotions as neurochemically mediated, material and instinct-oriented, it becomes clear that the strict division between emotions and rationality is equally misleading. This is in part because even “basic” emotions, long maligned as obstacles to clear rational thought, have more recently been demonstrated to be significantly inferential. Emotions need to be recognized as significant guides to our behavior, and this is also valid for those minimal emotions associated with survival.
The role conventionally given to rationality, on the other hand, has frequently been overestimated both in terms of its ubiquity and its power. A strong tradition of glorifying rationality has all but vilified anything pertaining to emotions as precarious and menacing. Nevertheless, once emotions and their neurochemical underpinnings are properly reevaluated, a new picture emerges. Emotions have been our constant companions and, as evidenced by scientific research, rational reasoning is in fact less common than usually assumed. Many of our cognitive biases remain controversial, and modern psychology still has limited means to unlock all the unknowns of our brain. However, it is clear that emotions are critical, and the priority of emotions over reason in typical decision-making is increasingly considered a commonplace of psychology.
The theory of predisposed tabula rasa accommodates these results while providing grounds to understand morality as a higher reflective achievement, not inherent to our nature, and in clear correlation to the highly specific circumstances in which the individual lives. As already suggested, our common emotional background is best understood as amoral and capable of being developed for positive ends or manipulated for negative ones. We can thus arrive at a theory of human nature that both explains our inherited aspects in terms of natural selection and leaves sufficient scope for the agency of human beings to develop in relation to their circumstances.
Considering all these insights is also critical for public policy. An understanding of our minimal predispositions provides a guide for ensuring the basic conditions under which humans are most likely to acquire the interest in social cooperation and morality. The understanding of human nature as a predisposed tabula rasa informs us that survival is the most fundamental human instinct coded in our genetics and that, when imperiled, it is likely to trump everything else. Furthermore, the malleability of our neurochemistry is a powerful reminder that public policies must work towards preventing injustice, humiliation and insecurity, and more generally, any conditions that are likely to exacerbate our egoistic and survival-oriented behavior.
This article originally appeared in Georgetown Journal of International Affairs Blog.
Emerging technologies and their possible implications for ethics, security, and even human existence have increasingly gained ground in the past two decades. Some innovations have resulted in obvious security and existential threats: a world with nuclear arms, for example. The potential of other technological shifts, however, has been more mixed. Biotechnologies, genetic engineering, and stem cells have given rise to controversial debates in which advocacy groups on both sides have convincingly put forward pros and cons. The Internet has revolutionized everything from markets to family communication in ways both beneficial and harmful. The age of artificial intelligence (AI) has shown itself to be similarly Janus-like in its potential to alter our lives both positively and negatively. On the one hand, AI has demonstrated its usefulness in predictive speech and typing software, robotics, and unmanned aircraft technology. On the other, these and many other AI-enabled platforms raise profound concerns about oversight.
AI is also unique among emergent technologies because it can learn and evolve without human input. This fact alone demands a policy approach that recognizes not only the immediate implications of AI itself but also what might happen because of the potential range of resultant technologies. In short, AI poses challenges for security and policymaking not merely of magnitude but of precedent. Further, AI forces us to consider our relationship with technology in ways that were never previously relevant—including the possibility of entering into competition with, and even being superseded by, our own creations.
The advent of AI brings with it numerous implications for the futures of global security, conflicts, and human dignity. The extensive use of drones, both for military and commercial purposes, is a rightly controversial current debate. But the uses of AI in unmanned aircraft are mere glimmers of what is to come. In the later stages of the industrial revolution, industrialization in factories rendered some jobs previously performed by human beings obsolete. AI appears to portend the inevitable complete removal of human beings from combat scenarios in numerous military-strategic areas.
AI applications facilitate real-time adaptation to contingencies without requiring the presence of people on the ground. Unmanned drones, for instance, are used to provide continuous surveillance, and small robots are deployed in missions to counter improvised explosive devices. U.S. Army researchers are now working to develop intelligent robots that can successfully navigate different environments by following voice commands and instructions from a human. Furthermore, the U.S. Defense Advanced Research Projects Agency (DARPA) launched an AI program in 2013 to help integrate machine-learning capacity into a wide variety of military weapons. Other teams of scientists are exploring ways to create robots with a moral compass and an in-built sense of right and wrong, able to pick the ethical course of action on the battlefield.
Two immediate consequences of this transition to battlefield AI are especially noteworthy. The first reflects the relative ease of convincing the public or another decision-making body to engage in violent conflict in cases where the use of AI technology assures minimal human casualties. President Obama’s strategy to “degrade and ultimately destroy” the Islamic State in Iraq and Syria (ISIS), for example, explicitly sought to avoid committing further American troops on the ground; wars that do not involve risk of bodily harm to soldiers are much easier sells to both the public and to government bodies. These assurances are potentially problematic not only because they tend to work against even the most circumspect evaluation of a war’s justness, but also because they encourage a point of view that underestimates the destabilizing effect of all military engagements, regardless of battlefield casualties. This point of view often overlooks warfare’s terrible track record of noncombatant casualties and harm to nonmilitary parties. The history of recorded warfare demonstrates that far more civilians than soldiers have died as a result of military engagements, a trend that has significantly worsened in the era of modern technology. This fact alone should evidence a need for additional reflection about the part AI will play in the future of warfare.
A related area of concern is the role of judgment regarding entry into and conduct during interstate conflict (jus ad bellum and jus in bello). Any AI machine expected to make decisions in war should pass some variation of the Turing test, devised by British mathematician Alan Turing in 1950 to assess whether a particular machine exhibits intelligence equivalent to or beyond that of a human. But the worry is that a robotic soldier or a sufficiently sophisticated AI drone could easily pass a version of the Turing test and yet utterly fail to uphold jus in bello’s fundamental commitment to noncombatant immunity, or jus ad bellum’s supposed principle of non-aggression. Therefore, if AI is to play a role in military engagement, this potential must be closely monitored and constrained by international norms.
Second, as I have previously argued, a heavy reliance on AI machines would create further inequalities in war because such technologies are unequally available to countries. The outcome of interstate conflict would become far more directly a matter of superior technology and of which nations or peoples have the resources to attain it. This availability gap could exacerbate and reinforce preexisting global inequalities. It could also conceivably result in asymmetric battlefield casualties, with countries that have access to AI technology suffering fewer human losses than those that do not. Other questions about AI’s use and application are relevant too. Could conscious machines be sensitive to human welfare? Could they replicate the human motivation to cooperate in order to avoid the “state of nature,” which Hobbes defined as a state of perpetual war lacking an effective higher authority to arbitrate disputes? How can we expect robots to understand, relate to, and execute the basic norms of social cooperation and political order?
Beyond its potential military applications, the nature and use of AI should also be monitored and regulated in non-combat settings. AI has achieved an almost ubiquitous presence in our everyday lives in the machines and applications we use in the workplace, at home, and beyond. Learning software, like the popular “Swype” texting keyboard—an app that learns a user’s tendency to use particular words and phrases and predicts what the user is trying to say or is about to say next—is an example of the sort of AI that is coming to play a significant role in everyday life. A similar technology, developed by Intel, is responsible for the speech-assistance software used by British physicist Stephen Hawking, whose degenerative ALS left him unable to speak without machine assistance after 1985. Nevertheless, while acknowledging the benefit he receives from AI, Hawking has voiced concerns that complete AI could bring about the end of the human race. With the capacity to learn and improve at near-limitless rates, full AIs would quickly become superior to human beings, constrained as we are by long and slow evolutionary processes.
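The kind of learning such predictive-typing software performs can be illustrated with a toy model (a deliberate simplification, not any actual app's algorithm): count which word a user most often types after another, and suggest that word next.

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Toy predictive-text model: counts word bigrams seen so far
    and suggests the most frequent follower of the current word."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, word):
        followers = self.bigrams.get(word.lower())
        if not followers:
            return None  # word never seen before: no suggestion
        return followers.most_common(1)[0][0]

predictor = NextWordPredictor()
predictor.learn("see you soon")
predictor.learn("see you tomorrow")
predictor.learn("see you soon")
print(predictor.suggest("you"))  # → soon ("soon" seen twice, "tomorrow" once)
```

Real products add far more (longer context, personalization weights, decay of stale habits), but the principle is the same: the model's suggestions shift as the user's own writing accumulates.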
While the dystopian vision of runaway or out-of-control AI still appears like something out of science fiction, today’s rate of technological innovation serves as a reminder that we may be headed in that direction. The collective of hackers and activists known as Anonymous has demonstrated the fearsome capacity of AI programs even at their current stage of development: at the outset of the Arab Spring in 2011, leading members of the group clogged the networks of Tunisia’s governing regime. Within 24 hours, the websites of the president, the prime minister, and the Tunisian stock exchange had been brought down. Simple AI can learn to evade spam filters and fraud detection and to disguise itself as various forms of online protocol. And these features are minimal compared with the more advanced capabilities to which AI might lead—the ability of a fully AI machine to make strategic decisions about which governments to isolate or which weapons systems to activate, for instance.
Regardless of how close to or far from the realization of such capabilities we are, the fact that the possibility exists in principle should motivate dialogue and careful control over the development of AI. Alongside environmental degradation and large-scale human rights violations, artificial intelligence represents yet another critical challenge that requires interstate collaboration and the shoring up of international law to preserve the safety and dignity of human beings in both our contemporary and future world.
This article originally appeared in YaleGlobal Online.
Blogs continue to wield influence; governments and bloggers could coordinate on regulations to increase the potential.
The internet and global interconnectivity, while often taken for granted, have changed the face of social reality. Weblogs, more commonly known as blogs, have emerged and in many ways manifest both extremes of that positive and negative potential.
Because blogs have tremendous potential to be used either for good or ill, they could be dubbed a new avatar of a power group supplementing the old. The modern expression of the separation of powers in the executive, legislative and judicial branches became known as the three estates, later to be followed by a “fourth estate” in the form of the media. The designation has often been contested simply because the media does not implement policy or mandate particular activity, yet these criticisms miss the larger point. The essence of “estates” as used here refers to sources of power.
When the term “fourth estate” was coined by Edmund Burke and referred to by Thomas Carlyle, their astute observation was that the press had come to wield an equal or occasionally greater power to influence policy than the original three state powers.
The internet multiplied this power, providing the possibility for previously unheard voices to gain an audience as well as providing another check on the power of the other estates. This led me in 2007 to designate blogs as the fifth estate.
The revelations of Edward Snowden are a resounding example. The evidence he provided about the extent and mechanisms of US state surveillance has sparked overdue global discussions about the limits of privacy in the age of the Internet, as well as closer investigations into the legal and technical aspects of spying and surveillance. Blogs have emerged almost imperceptibly, especially as so much content is non-political. Still, blogs represent a tremendous capacity for the masses to disseminate information and opinions, encouraging public participation and interest in politics; in many countries such views can be expressed openly, without censorship, barriers or editorial boards. This realization has started to cause anxiety in some countries with poor records on civil liberties. In China, for instance, blogs like “China Change” have emerged as sources of news and commentary on human rights and civil society issues in the country.
Blogs have also been bolstered by more frequent contributions from experts and have shown themselves to be the least constrained forum. Examples come from established journalists, members of parliament, political parties from different ends of the political spectrum and key figures in global politics such as John Kerry.
In a hyper-capitalist environment dominated by media giants, the means available to independent journalism have narrowed considerably. The advent of blogs has reinvigorated such possibilities of independence, giving not only journalists but anyone with access to the internet the capacity to express views and disseminate information. At the same time, some adverse effects have been recorded, as so-called netizens and bloggers covering political events or revolutions in real time later became targets of backlash. Recently, Avijit Roy, an influential Bangladeshi-born American blogger and a persistent critic of Islamist radicals, was hacked to death in Dhaka.
As a mechanism of positive policy reform, blogs continue to face challenges:
Perception of the blogosphere: Despite general acknowledgment that freedom from the influence or constraints of major media channels, and from ideological bias, is a favorable quality, blogs often suffer from the concern that their authors lack journalistic experience or other relevant credentials.
Absence of oversight: Questions are raised about blogs’ lack of editorial review and insufficient fact-checking mechanisms. Such shortcomings leave readers in a dilemma. Yet well-researched and reviewed information from dominant media outlets can be prone to biases, too. Doubts also emerge from the perception of the blogosphere as a source of entertainment and “light” information, rather than a contributor of serious content. Credibility suffers further when bloggers join schemes like the “paid blogger program,” committing to endorse companies or products in exchange for money.
A source of polarized views: Without oversight and checks, blogs can serve the morally dubious intentions of those who aim to spread propaganda, radicalize readers or exacerbate antagonisms. For readers who deliberately seek out only blogs that reinforce their views, such content ceases to be a source of understanding.
Sensitive or dangerous information: Blogs can disrupt society, business and government activities, for example through disclosures of secret information. Apple, for instance, reportedly filed a lawsuit against bloggers who communicated confidential company information on their blogs. Other blogs disseminate information or blueprints for constructing weapons of mass destruction or propagate anarchist messages. In more traditional media, such concerns would be checked by journalistic integrity and institutional oversight.
A primary countermeasure to these negative implications is education. The ways in which readers encounter and relate to information is dramatically influenced by their education as well as their awareness of the pitfalls relating to the information source.
Furthermore, serious bloggers should welcome expert guest commentary, critical feedback and open dialogue in their blogs. Only through education and critical engagement can readers become more demanding and circumspect, which in turn improves the quality of blogs.
The question of oversight-free authorship remains the prevailing concern, and people must become critical readers with a heightened sensitivity to unjustified positions or unsubstantiated claims.
Other regulatory steps are also necessary to limit the extreme abuses of blogs. The question of absolute anonymity has a downside from the viewpoint of global security. Anonymity can protect activists working in the world’s most brutal areas, but can also allow rogues or criminals to spread ideas without being easily tracked.
Governments must combat bloggers engaging in deliberately radicalizing rhetoric, employing hate speech, or engaging in criminal activity including human trafficking or pornography.
These recommendations might raise concerns about censorship and the right to free speech, but just as there are reasonable limits to free speech in public life, the same logic and degree of regulation should apply in the digital domain. Establishing such limits in an even-handed way is inherently difficult, but that does not mean they should not be sought and imposed.
The blogosphere must function as an extension of the public space, where people can be held accountable and liable for their actions as well as potentially investigated for threats of violence or criminal activity. Nevertheless, the plurality of legal systems and the many interpretations of freedom of speech or hate speech remain a persistent challenge for the blogosphere. Conundrums are bound to arise because the internet is a global medium: removing some content will be problematic, especially if the servers are located in countries where those messages are not illegal. As an information stream that reveals public opinion largely free from outside influence, the capacity of blogs for shaping attitudes positively is tremendous. Governments must ensure that the power of blogs is cultivated and implemented in collaborative ways, with a view to preserving peace and human dignity. Contributors, too, must become more proactive and committed to integrity and responsible content. The idea of a bloggers’ code of ethics, proposed a few years ago, deserves renewed consideration.
Undoubtedly, the future of information holds high potential for blogs. Their political relevance is only expected to expand.
The question is not whether or not the influence of the fifth estate will increase, but what form this influence will take and what regulatory mechanisms are necessary to implement to cultivate blogs’ positive potential.
This article originally appeared in the Scientific American Forum.
Brainlike computer chips, smart pharmacology and other advances offer great promise but also raise serious questions that we must deal with now.
SA Forum is an invited essay from experts on topical issues in science and technology.
Editor's Note: This essay was produced in coordination with the World Economic Forum.
In the past four decades technology has fundamentally altered our lives: from the way we work to how we communicate to how we fight wars. These technologies have not been without controversy, and many have sparked intense debates that are often polarized or embroiled in scientific ambiguities or dishonest demagoguery.
The debate on stem cells and embryo research, for example, has become a hot-button political issue involving scientists, policy makers, politicians and religious groups. Similarly, the discussions on genetically modified organisms (GMOs) have mobilized civil society, scientists and policy makers in a wide debate on ethics and safety. The developments in genome-editing technologies are just one example that bioresearch and its impact on market goods are strongly dependent on social acceptance and cannot escape public debates on regulation and ethics. Moreover, requests for transparency are increasingly central to these debates, as shown by movements like Right to Know, which has repeatedly demanded the labeling of GMOs on food products.
Ethical and regulatory challenges
On March 4 the World Economic Forum released its list of the top 10 emerging technologies for 2015. It includes advances that aim to resolve some of the ethical debates posed by an earlier generation of technologies as well as others that will bring about new ethical and regulatory challenges. The notion of “emerging” technology does not necessarily mean that all such advances are new or revolutionary by themselves. Some have already been around for years or, in various forms, for decades (for example, fuel-cell vehicles, artificial intelligence, the digital genome, additive manufacturing methods). They are now transitioning to a new phase, however, becoming more widely used or incorporated in consumer goods. In one way or another all these technologies are bound to gain more ground in coming years.
Precise genetic-engineering techniques will likely solve some of the main controversial elements in the GMO debate—for example, the fact that genetic engineering was neither precise nor predictable. The range of procedures associated with GM crops is precise in the initial process of cutting and splicing genes in the test tube. But the subsequent steps are uncontrolled, and mutations can occur that alter the functioning of the natural genes in potentially harmful ways. A technique that achieves greater accuracy and predictability over genetic mutations is, of course, a net improvement on conventional GMOs. It is, however, critical that this technique is properly studied and implemented in a sustainable way and that it doesn’t just give renewed legitimacy to genetic engineering in agriculture.
More accuracy is also expected in the operation of drones with the adoption of sense-and-avoid equipment. This will have unequivocal security benefits, helping unmanned aerial vehicles avoid collisions with other drones or piloted aircraft. The critical offshoot of this innovation is that it will encourage and enable the operation of a larger number of drones, a development that should be both welcomed (for instance, China flies drones to help fight pollution) and anticipated, as the growth in dangerous drone flights around populated areas appears to be outpacing regulation.
Autonomous systems, artificial intelligence (AI) and robotics, while already decades-old technologies, will continue to expand their functionalities and enter new eras of continual specialization. More intuitive, emergent AI could change speech and conversational software with unprecedented precision, helping millions of people and also redefining the way we command and interact with computers.
Robots as intelligent as humans
New-generation robotics will increasingly have more autonomy and capacity to react without preprogramming, which complicates current debates on robotics: The trust and reliance invested in a robot will have to be greater, bringing robots closer to the point of being on par with us. Neuromorphic chip technology further illustrates this. It is among the most revolutionary developments in AI and a radical step in computing power. Mimicking the intricacies of the human brain, a neuro-inspired computer would work in a fashion similar to the way neurons and synapses communicate. It could potentially learn or develop memory. This would imply that, for instance, a drone equipped with a neuromorphic chip would be better at surveillance, remembering or recognizing new elements in its environment.
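The neuron-and-synapse signaling such chips emulate in silicon can be sketched with the classic leaky integrate-and-fire model, shown here in a heavily simplified form with illustrative constants:

```python
# Minimal leaky integrate-and-fire neuron, the basic unit neuromorphic
# hardware implements: membrane "voltage" leaks toward rest, integrates
# incoming current, and the neuron fires when a threshold is crossed.
# Constants are illustrative, not taken from any real chip.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = v * leak + i          # leak a fraction, then integrate input
        if v >= threshold:        # threshold crossed: fire...
            spikes.append(t)
            v = 0.0               # ...and reset the membrane
    return spikes

# A steady weak input takes several steps to accumulate to threshold,
# so the neuron converts input intensity into spike timing.
print(simulate_lif([0.3] * 10))  # → [3, 7]
```

Learning in such systems typically comes from adjusting the "synaptic" weights between many of these units based on spike timing, which is what gives neuromorphic designs their capacity to remember and recognize patterns.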
Immediate ethical red flags emerge, however: Building neuromorphic chips would create machines as smart as humans, the most intelligent species on the planet. These technologies are demonstrations of human excellence, yet computers that think could be devastating for our species and, as Marvin Minsky has put it, they could even keep humanity as pets.
The interest in smart machines is now also pursued in additive manufacturing methods, which are increasingly integrating smart materials into manufacturing. These materials could adapt, change properties, interact or respond to their environments. With 4-D printing, which takes into account the transformation that occurs over time, some materials will adapt and repair by themselves without maintenance, or they could be preprogrammed to disintegrate on their own. This will raise new questions of standardization, traceability and copyright.
More radical disruptions will occur once the technology transitions to the organic world, making it possible to assemble biomaterials that evolve and develop on their own, design cancer-fighting robots that would release antibodies only on contact with cancerous cells, and so on. The moment of the print button for biology is nearing. Effectively, this could also mean that in the not-too-distant future smart pharmacology will permit us to receive a continuous supply of antidepressants or neuroenhancers every time our dopamine level drops. The ethical consequences of such developments should be thought through. Having our emotions controlled in detail by smart machines would pave the way for dangerous forms of dependence and for new understandings of our humanity and the emotions that define us.
Genome-based treatment, based on wider and cheaper availability of genome data, will provide new ways to customize the therapeutic protocol and enhance our control over diseases and medical treatment. The speed, accuracy and costs of genome-reading have changed dramatically in just a matter of years: A decade ago this process was a billion-dollar effort, whereas today the price has dropped sharply to around $8,000. In cancer treatment, for instance, this will allow transitioning from broad-spectrum chemotherapies to more individualized diagnoses and the targeting of specific malfunctioning genes. As we are truly starting to gain more precise tools to fight life-threatening diseases, a range of other issues arise. Pervasive global inequalities will still prevent millions of people from enjoying the benefits of such treatments, even in a context of decreasing costs of genome sequencing. Furthermore, a range of security and privacy risks associated with the storage of genome data will invariably arise and require protective mechanisms, especially as such databases are often shared for security reasons (for example, between international police forces), increasing the possibility of hacking or abuse by authorities.
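The shift from broad-spectrum to genome-informed treatment can be sketched as a lookup from a patient's detected gene variants to matching targeted therapies; the variant names and drug pairings below are purely illustrative, not medical guidance:

```python
# Hypothetical variant -> targeted-therapy table, for illustration only.
TARGETED_THERAPIES = {
    "EGFR_L858R": "EGFR inhibitor",
    "BRAF_V600E": "BRAF inhibitor",
}

def select_treatment(patient_variants):
    """Match sequenced variants against the table; fall back to
    broad-spectrum treatment when no targeted option matches."""
    matches = [TARGETED_THERAPIES[v] for v in patient_variants
               if v in TARGETED_THERAPIES]
    return matches or ["broad-spectrum chemotherapy"]

# Only the variant with a known targeted therapy drives the protocol.
print(select_treatment(["BRAF_V600E", "TP53_R175H"]))  # → ['BRAF inhibitor']
```

The real clinical pipeline is vastly more complex (variant calling, annotation, trial evidence), but the sketch captures why cheap sequencing enables individualized protocols: the decision keys off the patient's own genome rather than the cancer's broad category.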
Inevitably, the emerging technologies of the future will redefine our understanding of biology, the material world and manufacturing. The implications will further extend into geopolitics and global balances of power. Fuel-cell vehicles are finally expected to make their way to the market and reduce dependency on oil as well as emissions that contribute to climate change. In the long term, this will accentuate the vulnerability of oil-dependent economies and recalibrate geopolitical relations. Recyclable thermoset polymers, reportedly discovered by accident, will dramatically change fabrication and manufacturing, leading to new standards in industries. Globally, the advent of distributed manufacturing is bound to lead to a reassessment of the meaning of value chains and infrastructure. Rather than ship parts of a given product, some companies will simply trade information, leaving it to the customer to finalize the product’s manufacture. A suite of other technologies such as 3-D printing, informatics and robotics are enabling a paradigm shift to a dematerialized future with endless possibilities for customization.
Changes ahead
As always, we must welcome innovation and the benefits it brings us. But we must also remain committed to sustainable development, taking into account issues of inequality, human dignity and inclusiveness. Finally, this year’s top emerging technologies also remind us of the importance of political commitment. Take the example of the transition toward fuel-cell vehicles: it will require huge infrastructural adaptations and conversions. In fact, it’s estimated that if the U.S. spent the same amount of money it took to put a person on the moon ( $100 billion in today’s dollars ), the shift to hydrogen-powered cars and refueling stations that pump hydrogen would be significantly eased. Often the technology itself is available but only a massive exercise of political will can bring about change.
Some technologies might progress independently of political support. But good governance , examinations of dual-use risks and ethical considerations must still remain guiding posts at all times. Ultimately, how we approach the regulation of emerging technologies will inevitably have wide implications—not only for security and ethics but for our definition of human dignity and the equality of individuals.
This article originally appeared in the World Economic Forum Blog.
In the past four decades, technology has fundamentally altered our lives: from the way we work, to how we communicate, to how we fight wars. These technologies have not been without controversy, and many have sparked intense debates, often polarized or embroiled in scientific ambiguities or dishonest demagoguery.
The debate on stem cells and embryo research , for example, has become a hot-button political issue, involving scientists, policy-makers, politicians and religious groups. Similarly, the discussions on genetically modified organisms (GMOs) have mobilized civil society, scientists and policy-makers in a wide debate on ethics and safety. The developments in genome-editing technologies are just one example that bio research and its impact on market goods are strongly dependent on social acceptance and cannot escape public debates of regulation and ethics. Moreover, requests for transparency are increasingly central to these debates, as shown by movements like Right to Know , which has repeatedly demanded the labelling of GMOs on food products.
Ethical and regulatory challenges
The World Economic Forum’s list of top 10 emerging technologies of 2015 includes those that aim to resolve some of the ethical debates posed by an earlier generation of technologies, as well as others that will bring about new ethical and regulatory challenges. The notion of “emerging” technology does not necessarily mean that all such technologies are new or revolutionary by themselves. Some have already been around for years or, in various forms, for decades (e.g. fuel-cell vehicle, artificial intelligence, digital genome, additive manufacturing methods). However, they are now transitioning to a new phase, becoming more widely used or incorporated in consumer goods. In one way or another, all these technologies are bound to gain more ground in the years to come.
Precise genetic engineering techniques , one of the highlighted technologies, will likely solve some of the main controversial elements in the GMO debate, for example the fact that genetic engineering was neither precise nor predictable . The range of procedures associated with GM crops is precise in the initial process of cutting and splitting genes in the test tubes. But the subsequent steps are uncontrolled and some mutations can occur and alter the functioning of the natural genes in potentially harmful ways.
A precise technique that would achieve greater accuracy and greater predictability over genetic mutations is, of course, a net improvement on conventional GMOs. It is, however, critical that this technique is properly studied and implemented in a sustainable way and that it doesn’t just give renewed legitimacy to genetic engineering in agriculture.
More accuracy is also expected in the operation of drones with the adaptation of the Sense and Avoid equipment. This will have unequivocal security benefits, helping to avoid collisions of drones with other drones or piloted systems.
The critical offshoot of this innovation is that it will encourage and enable the operation of a larger number of drones, a development which can be both welcomed (for instance, China flies drones to help fight pollution ) and anticipated, as the growth in dangerous drone flights around populated areas appears to be developing ahead of regulations.
Autonomous systems, artificial intelligence (AI) and robotics, while already decades-old technologies, will continue to expand their functionalities and enter new eras of continuous specialization. More intuitive, emergent AI could change speech and conversational software with unprecedented precision, helping millions of people and also redefining the way we command and interact with computers.
Robots as intelligent as humans
New-generation robotics will increasingly have more autonomy and capacity to react without pre-programming, which complicates all current debates on robotics: the trust and reliance invested in a robot will have to be greater, bringing us closer to the point of being on a par with robots. Neuromorphic chip technology further illustrates this. This is among the most revolutionary developments in AI and a radical step further in computing power. Mimicking the intricacies of the human brain, a neuro-inspired computer would work in a similar fashion to the way neurons and synapses communicate, and potentially be able to learn or develop memory. This would imply that, for instance, a drone equipped with a neuromorphic chip would be better at surveillance, remembering or recognizing new elements in the environment.
However, immediate ethical red flags emerge: building neuromorphic chips would create machines as intelligent as humans, the most superior and intelligent species in the universe. These technologies are demonstrations of human excellence yet computers that think could be devastating for our species and, as Marvin Minsky has put it, they could even keep humanity as pets.
The interest in smart machines is now also pursued in additive manufacturing methods , which are increasingly integrating smart materials into manufacturing. These materials could adapt, change properties, interact or respond to their environments. With 4D Printing , which takes into account the transformation that occurs over time, some materials will adapt and repair by themselves, without maintenance, or they could be pre-programmed to disintegrate on their own. This will raise new questions of standardization, traceability and copyright.
More radical disruptions will occur once the technology transitions to the organic world, making it possible to assemble biomaterials that evolve and develop on their own, or to design cancer-fighting robots that release antibodies only on contact with cancerous cells. The "print button" moment for biology is nearing. Effectively, this could also mean that in a not-too-distant future, smart pharmacology will allow us to receive a constant supply of anti-depressants or neuro-enhancers every time our dopamine level drops. The ethical consequences of such developments should be thought through: having our emotions controlled in detail by smart machines would pave the way for dangerous forms of dependence and new understandings of our humanity and the emotions that define us.
Genome-based treatment, built on wider and cheaper availability of genome data, will provide new ways to customize therapeutic protocols and enhance our control over diseases and medical treatment. The speed, accuracy and cost of genome-reading have changed dramatically in a matter of years: a decade ago the process was a billion-dollar effort, while today the price has dropped sharply to around $8,000. In cancer treatment, for instance, this will allow a transition from broad-spectrum chemotherapies to more individualized diagnosis and the targeting of specific malfunctioning genes. Yet just as we are gaining more precise tools to fight life-threatening diseases, a range of other issues arises. Pervasive global inequalities will still prevent millions of people from enjoying the benefits of such treatments, even as the costs of genome sequencing fall. Furthermore, security and privacy risks associated with the storage of genome data will invariably arise and require protective mechanisms, especially as such databases are often shared for security reasons (e.g. between international police forces), increasing the possibility of hacking.
Inevitably, the emerging technologies of the future will redefine our understanding of biology, the material world and manufacturing. The implications will extend further into geopolitics and global balances of power. Fuel cell vehicles are finally expected to reach the market, reducing dependency on oil and the emissions that contribute to climate change. In the long term, this will accentuate the vulnerability of oil-dependent economies and recalibrate geopolitical relations. Recyclable thermoset polymers, reportedly discovered by accident, will dramatically change fabrication and manufacturing, leading to new industry standards. Globally, the advent of distributed manufacturing is bound to lead to a reassessment of the meaning of value chains and infrastructure: rather than ship parts of a given product, some companies will simply trade information, leaving it to the customer to finalize the manufacture of the product. A suite of other technologies, such as 3D printing, informatics and robotics, is enabling a paradigm shift to a dematerialized future with endless possibilities for customization.
Changes ahead
The Forum's list of top 10 emerging technologies for 2015 alerts us to important changes on the horizon for all sectors. As always, we must welcome innovation and the benefits it brings us. But we must also remain committed to sustainable development, taking into account issues of inequality, human dignity and inclusiveness. Finally, this year's top emerging technologies also remind us of the importance of political commitment. Take the example of the transition towards fuel cell vehicles: it will require huge infrastructural adaptations and conversions. In fact, it is estimated that if the US government spent the same as it did putting a man on the moon ($100 billion in today's dollars), the shift to hydrogen-powered cars and hydrogen filling stations would be significantly eased. Often, the technology itself is already available; it takes a massive exercise of political will to bring about change.
Some technologies might progress independently of political support. But good governance, examinations of dual-use risks and ethical considerations must still remain guiding posts at all times. Ultimately, how we approach the regulation of emerging technologies will inevitably have wide implications – not only for security and ethics, but for our definition of human dignity and the equality of individuals.
This article originally appeared in ISN.
What geopolitical factors helped transform Geneva into a global economic and diplomatic center? For Nayef Al-Rodhan, two of them stand out – the city’s role as a safe haven during the two World Wars, and its ability to provide a needed ‘coordination point’ during the Cold War.
Introduction
On 19 May 2015, Geneva will celebrate the two-hundredth anniversary of its accession to the Swiss Confederation. This occasion provides an opportunity to reflect on how the past two hundred years have transformed Geneva's relationship to Switzerland and Geneva's role in the world. With a population of less than 200,000 inhabitants, Geneva is a global and multicultural city, a hub for humanitarian diplomacy, an epicenter for banking and trading, and it ranks behind only Zurich and Vienna in global measures of quality of life.
Alongside New York, Geneva has also become one of the most active locations for multilateral diplomacy. It hosts 30 international organizations, including the European headquarters of the United Nations, 250 international non-governmental organizations and 172 permanent missions. In total, the international sector in Geneva employs over 28,000 people. Geneva is a center of humanitarian action, education, peacekeeping, security and nuclear research. This critical mass of mandates makes the city uniquely relevant in world politics.
The story of how Geneva acquired this role is tightly connected to the history of power politics in Europe, the distinct advantages of Swiss neutrality and the evolution of international diplomacy. Two hundred years ago, Geneva was treated as an object of geopolitics and bartered away at the Congresses of Paris and Vienna in order to establish a post-Napoleonic equilibrium on the European continent. This geopolitical role was retained until the Inter-War Period. Today, Geneva is often described as “the diplomatic capital of the world” and is an important node in the global economy. Two factors explain this remarkable transformation: 1) the role of the city as a “safe haven” that could offer intact infrastructure and ‘business as usual’ during the two World Wars and 2) its role as a hub of political and economic coordination between the West and the Soviet Union during the Cold War.
Paris, Vienna and Geneva
The year 1815 marked the end of a fifteen-year period of French rule over Geneva. After Napoleon’s troops were driven from the city following his defeat at Leipzig in 1813, the Swiss federal assembly voted to integrate Geneva, Neuchâtel and the Valais into the Confederation, leading to the signing of the Treaty for the Admission of Geneva on 19 May 1815.
On Geneva’s part, the move for admission was primarily a geopolitical calculation. In an era of empires and nation-states, Geneva recognized that city-states would require a larger entity to provide for their defense and survival.
At the Congresses of Paris and Vienna, Geneva won support for its desire to become a part of Switzerland. Represented by the diplomat Charles Pictet de Rochemont, Geneva received seven communes from the Pays de Gex and twenty-four communes from Savoy. Both France and the Kingdom of Sardinia ceded territories for this purpose, according to the Treaty of Paris of 1815 and the Treaty of Turin of 1816 .
Geneva achieved its objectives because they were in line with the geopolitical aims of the great powers of the day. At the same time, those great powers guaranteed the city’s neutrality which helped it to become an important setting for international cooperation.
Fifteen years after Geneva became the twenty-second canton of Switzerland, Swiss philanthropist Jean-Jacques Sellon created the Society for Peace. Another 33 years later, Geneva became the seat of the International Committee of the Red Cross (ICRC) and witnessed the signing of the first international humanitarian treaty, the Geneva Convention, in 1864.
A global capital
The first attempts at formal international cooperation in Geneva were not resoundingly successful. The League of Nations, which came into existence in 1920, was headquartered in the city – first in the Palais Wilson and then in the purpose-built Palace of Nations. Though it ultimately failed to prevent the slide towards the Second World War, the League was not without its successes : for instance, the work performed by the International Labour Organization, the International Refugee Organization and the Health Organization helped to raise Geneva’s stature in the interwar period.
Geneva attained even greater significance, however, in the post-war period, when many high-level negotiations and diplomatic summits began to take place in the city. These included the 1954 Conference on Indochina, the post-war meeting of the Allies in 1955, the Reagan-Gorbachev Summit in 1985, START negotiations in 2008-2009, and the ongoing high-level talks on the Iranian nuclear program. For its contributions to international peace and stability, Geneva-based organizations and personalities have received no fewer than sixteen Nobel prizes, most of them for peace. The first was awarded to Henry Dunant, the founder of the ICRC; the most recent went to the Intergovernmental Panel on Climate Change.
Geneva, however, is not only a global diplomatic capital but also an important node in the global economy. In particular, it has become a center for the global trade in raw materials. More than 500 multinational corporations trade in raw materials from Geneva, accounting for approximately 10% of the city's (and the canton's) GDP. Geneva-based corporations trade over 700 million tons of oil per year, exceeding the volumes of the City of London (approximately 520 million tons) and Singapore (440 million). 80% of all Russian oil is traded through the city, as is approximately 20% of all cotton. Some estimate that a third of the global trade in oil, cereals, cotton and sugar, as well as half of the global trade in coffee, is also directed through Geneva.
Geneva has risen to become an important geopolitical city for a variety of reasons. During the First World War, Switzerland, and hence also Geneva, was able to offer “business as usual” to international trading firms. During the 1920s, the first cereal traders, such as André, came to Geneva, primarily to be close to their main customer, Nestlé. On top of this, several Ottoman and later Turkish traders found it convenient to establish trading subsidiaries in the region of Lausanne, located on the route of the Orient Express between London and Istanbul.
Furthermore, Geneva began to benefit from the image of neutrality bestowed upon the city by the international organizations which increasingly established their headquarters there. Yet it was perhaps Geneva’s role as a “safe haven” (and its intact infrastructure) during the Second World War that attracted the most business to the city.
By the Cold War, Geneva was already well known throughout the world as a 'neutral' trading location. As a result, it was in Geneva that economic and political coordination between the West and the Soviet bloc came to be orchestrated. It also continued to function as an economic safe haven. Indeed, it was to Geneva that Egyptian cotton traders transferred their activities during the Nasser era, as did many Arab oil traders after the oil crisis of 1973-1974.
Swiss meta-geopolitics
Undeniably, one of the reasons why Geneva is so international is because the European headquarters of the UN and its agencies are located in the city. This reflects Switzerland’s long-standing commitment to provide federal and cantonal support to the United Nations. Most recently, this took the form of a generous loan at preferential rates for the renovation of the UN’s Palais des Nations, covering almost 50% of the costs (approximately 300 million Swiss francs). Nowhere else does the UN benefit from such facilities and this level of support.
Over decades, Geneva has established a well-defined identity as a city of peace and an ideal meeting place for diplomats – whether in the field of humanitarian action, disarmament, climate change or other concerns. In recent years, activities in other sectors, such as the crude oil trade, have increased the city’s international renown. While Geneva faces competition as a global economic and diplomatic center from cities in Asia, Africa and Latin America – some of which are becoming prominent regional centers of dialogue and diplomacy – it is unlikely that the city’s stature will diminish anytime soon.
Using the framework of meta-geopolitics , the following table discusses the geopolitical strengths and imperatives of “International Geneva.”
Issue Area
Geopolitical Realities and Dilemmas
Social and Health Issues
Excellent services, quality of life and an ideal location for diplomats and expats.
Geneva is a central location for global governance regarding social issues, public health, employment, youth, education and other areas.
Domestic Politics
Swiss neutrality, highly stable and democratic; but the initiative to curb the number of foreigners is perceived as a major setback for the city and country (although these regulations do not affect the staff of international organizations in the UN family).
Economics
Trade hub, hosting both private companies and inter-governmental organizations in the areas of trade, development and labour.
Environment
The city and canton of Geneva place strong emphasis on energy-saving and a clean environment. In line with Swiss environmental policies, Geneva has strict standards for agricultural biodiversity, waste management and water management.
Geneva is a center for environmental diplomacy and climate change dialogue (e.g. the UN Environmental Programme is located here).
Science and Human Potential
High-profile universities, excellent research centers in medicine, chemistry, physics and other sciences.
Numerous UN research centers and institutes are located in Geneva (e.g. UNITAR).
Military and Security Issues
Geneva is a key centre for disarmament diplomacy, including the Conference on Disarmament, and hosts numerous NGOs and think tanks with unique profiles in security studies, small arms and demilitarization.
International Diplomacy
Unique strength as global meeting point for international diplomats, activists and NGOs.
Issue Area
Imperatives and future trajectories
Social and Health Issues
High quality of life, among the best in the world (ranked above London), will keep the city attractive for foreign companies.
Domestic Politics
Greater openness to foreign workforce, imperative for more facilities for expats.
Economics
Increasing importance as trading center for petrol and other commodities, growing importance in cereals trading, insurance companies, consultancies and shipping.
Low inflation gives strength to the economy.
A simple and strict tax system, with some tax discounts for companies, helps attract companies and investors (rates from 3.5% to 14.1%, compared with around 30% in London).
Environment
N/A
Science and Human Potential
Continued investment in sciences and research. Excellent universities and highly skilled workforce on the local market are expected to attract even more foreign companies.
Military and Security Issues
N/A
International Diplomacy
Geneva will retain a prominent place in global diplomacy, yet the future of "International Geneva" strongly correlates with the future of the UN system.
This article originally appeared in the Global Policy Journal.
The fast-evolving processing power of computers hardly surprises anyone today. It was predicted five decades ago by the co-founder of Intel, Gordon Moore, in what is now widely known as Moore's Law. He postulated that processor speed (and overall processing power) would double every 18 months, and that the number of transistors on an integrated chip would double at the same pace. The law gained such popularity that it became something of a self-fulfilling prophecy, as chip fabricators raced to make processors faster, smaller and simultaneously cheaper.
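To see how quickly the doubling rule compounds, consider a back-of-the-envelope projection (an illustration added here, not from the original article; the starting figure of 2,300 transistors is Intel's 4004 chip of 1971):

```python
# Toy projection of Moore's Law: counts doubling every 18 months.
# The 2,300-transistor starting point (Intel 4004, 1971) is illustrative.

def projected_transistors(start_count, start_year, end_year, doubling_months=18):
    """Project a transistor count forward assuming a fixed doubling period."""
    months_elapsed = (end_year - start_year) * 12
    doublings = months_elapsed / doubling_months
    return start_count * 2 ** doublings

# Forty years of 18-month doublings turns thousands of transistors
# into hundreds of billions -- faster than chips actually progressed,
# which is one way to see why the 18-month pace could not hold forever.
print(round(projected_transistors(2300, 1971, 2011)))
```

Real 2011 processors carried a few billion transistors, not hundreds of billions, which already hints at the plateau discussed next.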
In the past decade, this trend appears to have reached a plateau: processing speed barely doubled between 2000 and 2009. This has prompted conclusions that the end of Moore's Law, anticipated for a while now, is nearing. To keep up with demand for increased processing power, big companies will have to invest much more in research, potentially driving up the prices of processors.
While Moore's Law is losing its accuracy, the search for supercomputing has not faded. Moving beyond conventional computing, with its already impressive power, quantum computing is part of a revolutionary new generation of computer research that aims to surpass not only limitations of speed but also the technical limits of chip-making materials. Whatever speed can be imagined with conventional computers, it is nowhere near what quantum computing is expected to achieve.
In the 1980s, the notion that quantum physics could be used to perform computations simultaneously on massive amounts of information emerged for the first time. The quantum computer is considered a "seventies child", as its conceptual foundations were first laid during the late '70s and early '80s. Interest in developing such a machine, with unprecedented speed and agility, was revived in the mid-1990s, when computer theorists began to explore the possibilities of building quantum computers. Highly ambitious researchers placed overly optimistic bets that quantum computers could be in use by 2010. To date, scientists have yet to create an operational quantum computer, but this has not slowed research and development. "The Holy Grail of supercomputing" is drawing increasing interest and investment: NASA, IBM, Google and D-Wave Systems are among the most important actors in the field, and more recently the National Security Agency joined the ranks by pledging $80 million for basic research in quantum computing.
What is so special about quantum computing?
Unlike a classic computer, quantum computers do not work in an orderly, linear manner. Conventional computers function according to binary logic, using 1s and 0s ("either/or" distinctions) and stringing together combinations of these. By contrast, quantum computing uses quantum bits, or qubits, which are realized in quantum particles such as electrons or atomic nuclei. This gives quantum computers unique functionalities: qubits interact with each other through entanglement, allowing the machine to explore every existing possibility at the same time. Qubits are placed in a state of "superposition", in which they hold the values 1 and 0 simultaneously rather than one or the other. In this regard, quantum computing goes a step beyond what is possible classically, as qubits can be in more than one state at a time.
This means that quantum computers would be capable of huge calculations and enormous processing power. They could surpass conventional computers in speed, racing through problems that would take other systems eons to solve.
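A minimal sketch (an illustration added here, not from the article) shows both ideas numerically: a Hadamard gate puts a single qubit into an equal superposition of 0 and 1, and describing n qubits classically requires tracking 2^n amplitudes, which is where the exponential advantage comes from:

```python
import numpy as np

# A single qubit is a 2-component complex vector; |0> = (1, 0).
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate sends |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ ket0
print(abs(superposed) ** 2)  # equal 50/50 measurement probabilities

# Simulating n qubits classically needs 2**n amplitudes -- the exponential
# state space that conventional machines cannot keep up with.
for n in (10, 50, 300):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

At 300 qubits the number of amplitudes already exceeds the estimated number of atoms in the observable universe, which is the intuition behind the "eons to solve" claim above.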
The ongoing research is also charting new ground in materials science and our understanding of material properties. For example, a leading start-up in quantum computing, D-Wave Systems, claims that certain metals, such as niobium (a soft metal that becomes superconducting at low temperatures), are key to the development of the quantum processor. Moreover, other recent breakthroughs in silicon-wrapped quantum technology show again that more thorough investigation of the properties of chemical elements can unlock the unknowns that have delayed progress.
Quantum computers, once fully functional, will mark the ultimate frontier in computing, able to make calculations billions of times faster. Their extraordinary features also prompt immediate considerations about their social and security implications. In a not-too-distant future, when the quantum leap reaches an operational stage, we can expect a series of groundbreaking uses. For a start, quantum computers could help scientists find cures for cancer, advance research on Alzheimer's disease, or find distant planets; they could be used to simulate or test political and military scenarios and inform policymakers about possible outcomes. But by far the greatest interest (and investment) so far has been drawn by the promise of quantum computing in the area of cryptography.
Quantum computers could potentially be capable of breaking public key encryption, which protects almost all private communication online. Not surprisingly, the US spy agency, the NSA, has been at the forefront of efforts to develop a supercomputer that could crack most keys used for encrypted communication. Its sponsored research project, called "Penetrating Hard Targets", aims to build a computer that could break almost all forms of encryption protecting medical, business, e-commerce, banking and government records in the world. Clearly, if successful, this would be the ultimate 'Big Brother moment' for the agency. Today, long encryption keys (particularly for sensitive information) are very difficult to break, taking up to several years, but a quantum computer could accelerate the process, making it millions of times faster. Conversely, since qubits cannot be cloned, a code encrypted by quantum means would be virtually impossible to hack, and hacking would mostly be a concern if the attacker also had access to a quantum computer.
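To make the encryption point concrete, here is a toy illustration (added for this edition, not from the article, with deliberately tiny numbers): RSA-style public keys rest on the difficulty of recovering two secret primes from their published product, a task whose classical cost explodes with key length but which Shor's quantum algorithm could perform in polynomial time:

```python
# Toy illustration of the factoring problem behind RSA-style public keys.
# Real keys use primes hundreds of digits long; these numbers are tiny.

def trial_division_factor(n):
    """Classically recover p, q from n = p * q by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

# Instant for a toy modulus, but infeasible classically for 2048-bit moduli --
# while a large-scale quantum computer running Shor's algorithm would not be
# slowed nearly as much by the key length.
p, q = trial_division_factor(3233)  # 3233 = 53 * 61, a textbook-sized example
print(p, q)
```

The asymmetry is the whole point: doubling the key length barely inconveniences a quantum attacker while crippling a classical one, which is why post-quantum cryptography is already an active field.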
Racing for the supercomputer
The development of quantum computing remains highly disputed and is advancing slowly due to a combination of scientific unknowns and mixed reactions in the academic community and industry. A persistent obstacle has been the challenge of instability and vulnerability. Quantum computers combine computing with quantum mechanics, an extremely complex and still mysterious branch of physics. On top of this, as calculations take place at the quantum level, no outside interference (such as light or noise) is permissible, since it would cause the qubits to collapse and disrupt the calculations. This makes quantum computers extremely expensive to build and maintain.
However, as elusive as the search for the supercomputer might be, it has sparked a competition in which both states and private actors have stakes.
US defence agencies have been investing in quantum computing research for over a decade, and other countries have gradually entered the race as well. China, Russia and several European states are now investing in quantum research, and Canada's Institute for Quantum Computing at the University of Waterloo is over a decade old. In late 2013, the UK government announced it would spend £270 million to build a network of quantum computing centres.
Security Implications
The construction of a functional quantum computer means much more than simply winning the innovation race; it has clear national security relevance. At the current stage of development, the race is fought at an academic level, where researchers work in interdisciplinary labs to shrink transistors to the quantum scale.
However, as many have pointed out, science is now inevitably done in global collaborative frameworks, and it is difficult to say whether there are guaranteed paybacks for individual nations. Ultimately, the Herculean efforts and funding that defence agencies pledge often pass through private industry and will benefit the commercial sector too, not only the government.
Quantum computing will have very disruptive effects, both nationally and internationally. It will have implications for information security, impacting both symmetric-key and public-key algorithms. If spying and mass surveillance are already impressively effective with the more limited means now in place, quantum computing will enable unprecedented breaches of privacy and access to confidential data in businesses, hospitals, banks and governments worldwide. The NSA no longer hides its support and sponsorship for the development of quantum computing, which could be used to crack any encryption system in the world. Hand-in-hand with the race for the supercomputer is the race to 'own' the internet and gain virtually unlimited access to information. Quantum communication will redefine how we communicate, making data transfer faster and more capable, since quantum computers can process enormous amounts of information with high encoding and decoding speeds.
The amount of distrust already existing over questions of privacy both domestically and between governments is only expected to surge, creating further domestic and diplomatic frictions and accelerating competition between states. A likely scenario is that with functional quantum computers, some governments will speed up the investment for the creation of other, cryptography-capable computers . At the same time, this competitive situation will leave behind less resourceful countries, widening a digital gap that is already stark.
The unique potential of quantum computers could also give an unmerited temporary advantage to some individuals, retailers or groups over others. Quantum computers could dramatically improve stock market predictions, benefiting wealthy financial institutions. This is not an imminent risk, since the fees for access to quantum computing will be staggering, yet the possibility of quantum computing entering Wall Street is not to be dismissed.
Alongside its numerous security risks, quantum computing offers a set of unique opportunities for humanity and states. From better logistical optimization to DNA sequencing and better predictions in global warming and weather forecasting, quantum computing offers new potential to tackle global challenges, improve healthcare, find cures for diseases, and solve optimization, labour or economic problems (including in agriculture and water management). The application of quantum computers to optimization problems could be especially useful in the defence and space sectors, where it could significantly improve the speed and accuracy of operations. A quantum computer could calculate ideal paths for travel on land or in the air, and it could improve code verification dramatically. Indeed, software verification is a key element in the defence industry's push for quantum computers, especially as complex software systems are increasingly at the heart of defence applications. The F-35 joint strike fighter, for instance, carries more than 10 million lines of code, and quantum computers could be employed for code validation and verification.
Google also hopes that quantum computers could be used to build better and faster robots and more sophisticated artificial intelligence. Their use could also extend to aviation: in instances such as snowstorms, quantum computers could help find optimal alternative routes instantly. NASA has also shown interest in quantum computing, and its Quantum Artificial Intelligence Laboratory is exploring the likely applications of quantum computing in space. In addition to optimization during space missions, such as better planning and scheduling, the lab is working on improving the operations of NASA's Kepler mission, which searches for habitable, Earth-sized planets. Current computational methods, which use heuristic algorithms to identify transit signals from smaller planets, find only approximate solutions, whereas a quantum computer could perform data-intensive searches among the more than 150,000 stars in the spacecraft's field of view.
Emerging technologies for renewable energy are also taking the power of quantum computing into account: California's renewable energy program aims to use "smart grids" or "quantum grids", networks of quantum computers that allow higher efficiency in the input and output of energy. Qubits could also be deployed in solar panels to replace current photovoltaic cell technology or in quantum batteries, and quantum dots could be embedded as semiconducting material, revolutionizing the energy sector.
Quantum computing is possibly a final threshold of scientific marvel, bringing unparalleled precision and accuracy to computing. Given the extremely sensitive functions it can perform, it is critical that research and dissemination are conducted responsibly, with a view to harnessing its positive contributions and preventing it from becoming merely a tool for enhanced surveillance and endless control.
This article originally appeared in the Open Mind Blog.
When Aristotle famously claimed that the good life was made neither in a summer nor in a day, he implied that the best life was a life committed to contemplation. The question of what gives meaning to life has been central to philosophical inquiries for millennia. While no definitive answer has emerged to settle this fundamental question, a look into the neurochemistry underlying our feelings, thoughts and behaviour charts new ground in this exploration. Moreover, it hints at ways in which gratification is relevant both to society and to the world at large.
A long history of philosophical inquiries
Existentialist thinkers emphasized the possibility of individuals being able to generate meaning through their actions. Sartre ’s statement that existence precedes essence is a rejection of the possibility that there could be any externally derived purpose to human life. It is simultaneously an argument that one’s life is given meaning through specific actions. Nietzsche ’s philosophy carries a similar credo: that defining one’s life creatively according to chosen actions makes a person who they are.
The ideas that one must live an "authentic" life and make choices that harmonize with a robust conception of the self are powerful, but they leave unaddressed the question of what exactly promotes authenticity. It is here that contemporary neuroscience can offer further insights, as a shared neurochemistry implies similar needs for achieving gratification. The human brain is "hard-wired" to seek pleasure and avoid pain, to repeat acts that achieve gratification and to avoid actions that cause discomfort. This process, which I have previously called neurochemically mediated gratification, implies that at a fundamental level human beings are all seeking similar things. The difficulty is that at an individual level such gratification might find expression in destructive actions, such as sustaining an addiction or engaging in criminal activity. It is thus crucial for societal policies to be fashioned and implemented with these challenges in mind.
Neurochemical gratification: creating the right circumstances
Our gratification is experienced neurochemically, irrespective of what prompts it. All of our feelings, emotions and experiences have a physical component insofar as they are mediated by neurochemistry. With the benefit of advanced scanning technology, we can observe that different mental processes change regional blood flow and chemistry in the brain. As such, we generally seek to feed our neurotransmitters (the chemical messengers transmitting signals within the brain) and boost the "feel good" chemical dopamine. It is likely that, in time, other neurochemicals relevant to various cognitive processes and gratifications will be identified and their specific actions known. To date, however, we know that the neurotransmitter dopamine, involved in reward processes in the brain, informs us which of our actions are more conducive to gratification and which are not. Yet what exactly contributes to each individual's gratification and dopamine levels can vary a great deal.
Not only is our gratification experienced on a personal neurochemical level but it is also attuned to our respective family and socio-political environments. In my paradigm of emotional amoral egoism, I discussed the relationship between our neurochemical underpinnings, the role of circumstances, morality and good governance. The fundamental feature encoded in our genetics is survival, meaning that the main driver of our actions will almost always be based in this instinct. Actions that are influenced by other drivers have a margin of fluctuation in strong alignment with our environment, including our moral compass and propensity for moral acts.
Like human nature itself, our neurochemical make-up is modifiable, meaning that there is significant room for the environment to influence and mould both the motivators of our neurochemical gratification and our behaviour. We will try and test many experiences but will predominantly choose to repeat those actions that gratify us in some way, no matter how unrealistic or influenced by our own perceptions they might be. Our gratification is highly individualistic and experienced subjectively, but it is also fluid and can be ‘instructed’ to a certain extent by the environment, repeated experiences, and exposures. This also implies that our neurochemical gratification might not be exclusively constructive, as we can opt for behaviour that is harmful to ourselves or others, such as forms of addiction or violence. The upside of this alterability, however, is that the foundations for this gratification can be influenced and turned into constructive forms of behaviour that meet societal expectations. Here, good governance plays a tremendous role. We might not be intrinsically moral, generous or altruistic, but living in a setting where basic survival and dignity needs are met will enhance our capacity for reflection, which is in turn required for conscious moral acts.
Conversely, living in fear, deprivation, injustice and insecurity precludes morality in most cases, and prompts survival-driven acts. Harmful excesses of any kind promote a form of personal gratification that is very likely to affect both individuals and society at large. Ultimately, however, the meaningfulness of existence is individualistic and results from whatever brings each one of us the most sustainable neurochemical gratification (SNG). What we can hope and strive for, collectively, is to create environments in which SNG comes from activities and beliefs that strike a balance between our personal wishes and acceptable values, both domestically and globally.
Neurochemistry teaches us that at the most basic level, we are fundamentally hardwired for survival and pre-programmed to ‘feel good’, often irrespective of what factors constitute the sources of our gratification or, in some cases, of their social acceptability. To keep this gratification sustainable in a social and political setting, family, education and society need to create mechanisms whereby individuals associate gratification with behaviour that is positive and constructive both for the individual and for society. Anything from social norms to media outlets, educational systems or entertainment industries contributes to the way gratification is defined. In order to ensure functional social orders, it is crucial that gratification is linked to constructive behaviour, such as social responsibility, work ethic, lawfulness, empathy, tolerance and mutual respect.
This article originally appeared on The Montréal Review.
Man will only become better when you make him see what he is like.
(Anton Chekhov)
The levels of sophistication of science to date might not have fully grasped ‘what man is like’ in neurobiological terms, yet Chekhov’s instinct was sound: acquiring an accurate portrayal of human nature is a prerequisite for creating conditions that respect human dignity and morality. Attempts at moral education which fail to take into account the fundamental neurochemical elements of human nature are bound to prove unsuccessful. In some cases, they may even have undesired effects, as they can lead to unreasonable expectations.
Amorality of man
Cumulative intellectual history offers a wide variety of characterizations of human nature, from those that attribute to human beings a full set of innate ideas to the well-known Lockean tabula rasa. The discussions have also often oscillated between polar contrasts, presenting human nature as either fundamentally good or bad.
The origins of this debate go back to antiquity and various cultures and religions, and relatively recently to Rousseau and Hobbes. Rousseau posited that men, in the original state of nature, were basically good, unselfish and pure. In contrast, for Hobbes, in the state of nature man was intrinsically self-interested, acting for his own well-being and in a manner strictly determined by natural, pre-existent desires and needs to avoid discomfort. The implicit tone of these perspectives (optimistic vs. pessimistic) has informed political philosophy and theories of government for centuries.
While there is a grain of truth in a number of these accounts, contemporary research, and neuroscientific insights in particular, adequately demonstrates that both of these extremes distort what is in fact the case, and that both share a common mistake: underestimating the significance of the neurochemical underpinnings of human nature. When this error is recognized, it immediately becomes clear that circumstance and background conditions inform moral development to a much greater degree than previously appreciated.
Rather than choosing between dichotomous notions of moral vs. immoral, I argue that humans are essentially amoral. The notion of amorality implies that we are neither products of pure free will, nor entirely of genetics. Humans are born with what I have called a predisposed tabula rasa, free of any innate ideas but possessing certain predilections for survival coded by genetics. Therefore, we come into the world with a set of basic survival instincts which do not operate as conscious motivators but more like inbuilt biological microchips tuning us for survival.
Several crucial aspects then weigh on our conduct, actions and propensity to act morally or immorally, of which the environment (such as education, or social and cultural context) and exogenous conditions are crucial. Indeed, man’s moral compass is greatly shaped by circumstances, since little moral behavior can be expected in immoral environments, where choosing moral actions would be detrimental to one’s own survival.
Egoism as the only innate endowment
The predispositional aspect of my neurophilosophical theory of human nature is informed by Darwinian selection pressure. The drive for survival of the self—a basic kind of egoism—is a principal motivation for human beings, as it must be for the evolutionary process to function. The presence of this survival instinct thus cuts against the idea of a truly blank slate.
The supposition of additional innate ideas, however, and in particular the advocacy of an innate morality is demonstrably false given the tendency of moral development to vary widely but predictably with regard to background conditions: were morality innate, we should expect to find, contrary to fact, that the most harrowing and most stable social circumstances contribute equally to the development of a moral compass and regard for human dignity.
The amorality of untutored human beings thus leaves them significantly, though not entirely, at the mercy of the circumstances and social context in which they find themselves. To a large extent, therefore, our moral compass, guiding us to be good or bad, is shaped by our perceived self-interest at a given time. Within this underlying framework of action, guided by self-interest, human motivation is further shaped by other environmental factors and by emotionality. Emotionality is not a peripheral aspect of our human nature or an occasional distortion of it; rather, it is formative in our development, constitutive of our moral lives, and has clear neurochemical foundations.
The Centrality of Emotion
Human experience is mediated by emotions, and these emotions, in turn, are mediated by neurochemistry. This general observation is strengthened in a preliminary way through intercultural comparison of emotional expression, which demonstrates their similarity across social and cultural frameworks. It is further bolstered by contemporary neuroscience showing that emotions are fundamentally material: the neurochemicals responsible for these observed states can now be specified and described with a high degree of sophistication, although much more will be known in the future about their nature, diversity and mechanisms of action.
These findings lay the scientific bedrock for rethinking longstanding assumptions regarding the role of rationality and its dominance over emotions. On this traditional model, human beings were conceived of as rational actors, only occasionally subject to flights of irrationality in those rare moments when emotions overtook them. Since at least Plato, this picture, which idealizes those with the greatest rational self-mastery, has been held up as the ideal. Kant’s fixation with the law of rationality shares this inheritance. This stark divide between one’s moral duty as rationally derivable and one’s emotional human sympathies has left an indelible impression on Western moral philosophy. More recently, however, given our understanding of the frequency and power of emotional influence, this basic structure has begun to be challenged.
As continues to be poignantly illustrated by theorists and scientists, it is very often the emotions, rather than rationality, which determine human behavior. Demonstrations of this include now well-known neuroscientific experiments showing that decisions are often made before the fully conscious (and thus rational) mind knows what is being decided. Jonathan Haidt provides an apt description of this process in his metaphor of the elephant and rider, where the emotional self is represented by the elephant, and rationality by the rider: if you wish to change the direction of the duo, the best strategy is to appeal to the elephant. This is not to say that the rider (rationality) can never override the more impulsive elephant (emotions), but the best science shows this to be the exception rather than the rule.
These accounts significantly harmonize with my concept of emotional amoral egoism. Haidt’s example further softens the traditional dichotomy between emotionality and rationality to the effect that emotions should be understood as cognitive: emotional experience is deeply implicated in most of our thought-processes and inferences, rather than being an encumbrance to them.
Contemporary neuroscientific research has confirmed these ideas. When the areas of the brain that support these social emotions—particularly the prefrontal cortex (PFC)—are underdeveloped or damaged, the emotions associated with sociality are either severely truncated or absent altogether. Extensive evidence further shows that such individuals have little moral understanding or regard for morality broadly defined. These clear connections between the capacity to experience particular emotions and brain function, on the one hand, and between brain function and morality on the other, cement the inextricable nature of our neurochemistry and our moral and socio-emotional capacities.
They thus serve to demonstrate that our neurochemistry is the lowest common denominator: the minimal endowment human beings have at birth both gears them initially purely for survival and leaves them highly susceptible to the influence of their respective environments. With this understanding of our human nature, as emotional, amoral and egoistic, we must weigh alternative policies and approaches to social organization, especially given the emotional and deeply visceral nature of identity issues. This will be critical if we are to improve our capacities for moral and political cooperation and generate sustainable domestic and global peace and prosperity. It can be achieved through reciprocity at both transnational and transcultural levels, through mutual respect, equality, justice and the guarantee of human dignity for all, at all times, and under all circumstances.
Recent intellectual history has made considerable strides in acknowledging certain forms of institutionalized discrimination as well as the unjustifiable privilege of certain cultures over others. Our epistemology, however, has in general continued as if its schools—predominantly empiricism and rationalism—are preoccupied with the nature of knowledge per se , rather than the nature of knowledge as conceptualized within a specific society or cultural tradition.
Consequently, epistemology has been slow to see its own limitations, and slow to acquire a basic understanding of our shared cognitive architecture. The outcome of improved understanding in this regard would go far beyond academia. It would serve as a more profitable and equitable foundation for international relations. Two crucial features of this new edifice will be humility and an appreciation for underestimated commonality.
The classical positions of both the empiricist and the rationalist schools remain well-entrenched. Empiricism continues to praise sensory experience and the data gathered from such experience. The purists of the rationalist school, for their part, emphasize the role of reason in all knowledge acquisition, as they remind us of the frequent fallibility of our sensory apparatus. This dichotomy has served as the subject matter for long-standing philosophical controversies. Happily, there are now tools to bridge this conceptual chasm. Neuro-Rational Physicalism (NRP) provides a basis for understanding how sensory experience, emotionality, and rational inference are much more intimately related than has previously been appreciated. The relevance of these epistemological debates is not only scholarly but also political. A better understanding of the foundation of knowledge is critical to affirming the role of our limitations and consequently in demonstrating that all “truths” must be respected.
The Best Aspects of Two Traditions
Neuro-Rational Physicalism and empiricism share the view that sensory data is a source of knowledge. Using contemporary neuroscientific research, however, NRP argues for a much more pervasive role for inference. This is because individual perceptions are colored by the sensory apparatus through which they are perceived, and this apparatus, in turn, is significantly formed by unique spatio-temporal and cultural influences.
NRP also diverges from those rationalists who claim that there is innate knowledge. Instead, NRP advocates for “a predisposed tabula rasa”, which implies that the human mind is minimally equipped with egoistic survival instincts. We are born without innate notions of good or bad, moral or immoral; what we do possess is a survival instinct coded in our genetics, which motivates us to act toward our survival at all (or most) times. As we are spatially and temporally situated beings, all knowledge gained is subject to the influence of the mechanisms of knowledge acquisition, and the character of these mechanisms is dynamic and influenced by circumstances.
As Jonathan Haidt has argued at length, even apparently direct sensory input and emotional experience have a cognitive dimension; knowledge is partially “given” by the world but also simultaneously worked upon by the mind of the individual to whom it is given. Because of this, what counts as knowledge by acquaintance will vary with the life narratives and resultant dispositions of each individual.
Members of the ancient Stoic school were thus closer to the truth than they realized in claiming the emotions to be judgments: whatever the case may be with regard to our capacity to control our emotions, neuroscientific research now demonstrates the inferential role in emotional experience. The ancients did not have the advantages of modern brain imaging and other contemporary research tools, which led them at times to oversimplify consciousness and our mental processes.
It is now known that emotional “decisions” occur and inform behaviour prior to rational awareness of these decisions. Ground-breaking neuroscience experiments in recent years have shown that emotions are in fact dominant in our decision-making process. In this regard, modern neuroscience has been able to reverse postulations of philosophers from previous eras, including the idea that the human mind is incorporeal and distinct from the human body, as Descartes famously argued. Quite the contrary: neuroscientists like Antonio Damasio have shown that decisions are often made by the brain after certain options are marked as more “emotionally salient” than others. Through his work with patients who lacked the part of the brain where emotions are generated, he observed not only that they could not feel emotions, but also that they could not make decisions. Damage to the prefrontal cortex was detrimental to decision-making abilities precisely because the emotional machinery was missing.
A dominant trend in philosophy and psychology since their earliest days has been to underestimate the ubiquitous nature of our emotions, their inferential structure, and their functional efficacy. NRP addresses these oversights by giving a fundamental role to the sensory experience emphasized by empiricists, while arguing that this experience itself involves the process of inference focused upon by the rationalists. NRP further creates the conceptual space for emotions to play the powerful role they can be seen to take in neuroscientific research.
The Place of Presupposition
As explained above, inference is critical in how we acquire and manipulate knowledge. This premise gives significant weight to the sources from where our inferences are drawn. The conclusions we make are informed by certain presuppositions, which makes knowledge indeterminate since it is tightly dependent on the nature of those initial presuppositions. This is reminiscent of a relativist stance, yet this is not necessarily the case.
Our world is a world of fact, but our knowledge, which is unavoidably situated within particulars, always strikes a glancing blow at these facts. Put differently, while there are objective facts concerning the physical world, there is no non-perspectival knowledge of these facts. This carries the crucial implication that knowledge has a strong likelihood of being incomplete or containing inaccuracies.
As Gettier has famously shown, one can hold a justified true belief and nonetheless seem to have been right only through a kind of luck. The rhetorical question he raised was whether having the right conclusion—though inferred from a mistaken premise—should count as knowledge. The question of whether true opinion is sufficient for knowledge can be traced back to Plato. While debate goes on over so-called Gettier problems, the important upshot for NRP is the critical role played by premises in the acquisition of knowledge.
Because the sources of our inferences are always grounded in our respective particular conditions, the premises from which we operate should be thought of as eccentric to a certain degree, and hence subject to distortions that result in our knowledge being incomplete. Our knowledge is indeterminate, both temporally and spatially, and to a certain degree culturally constrained. It is a daunting task to prove our truths beyond any doubt – at least with the scientific methodologies we have to date; rather, some of our knowledge can be more accurately described as “possible truths subject to proof”.
Physicalism and Knowledge in the World
Comprehending that the ways we acquire knowledge are culturally mediated would be a profound step in softening rigid categories of “otherness” present in our globalized world. The recognition that the situated nature of one’s own knowledge renders it incomplete creates conceptual space for accepting the validity of knowledge formed in different cultural settings and removes the temptation for ranking systems of thought hierarchically. This recognition is as important as it is difficult to promote, especially as numerous policy-makers or ideologues are keen to perpetuate ideas of otherness, garnering political capital or power from such divisions.
As an educational agenda, this legitimization of varying forms of cultural thought, and the humility entailed by seeing one’s own knowledge as provisional rather than absolute, could go a great distance towards cross-cultural understanding. Neuro-Rational Physicalism provides a deep justification for this process. The physical nature of mental events—traceable through brain chemistry imaging—implies that repeated experiences and emotional inputs become entrenched to the extent that the individual will become unwilling to disrupt them. This understanding has two weighty consequences. First, the stimuli that make up our sensory experience and the ideas to which we are exposed are enormously influential in determining our comprehension and behavior patterns. Second, the entrenched chemical processes make us reluctant to question the premises we take on board and from which we do our reasoning. Therefore, in spite of the provisional, best-available-explanation nature of our knowledge, we are often tempted to take our premises to be objectively true.
Understanding the biases embedded in our ‘truths’, and the neurochemical foundations of our long-held beliefs, has political and transcultural implications. Transcultural differences may exist, but those who believe they hold an “ultimate truth” are not only mistaken but also dangerous to peaceful coexistence. The long-held animosities between the West and the Islamic world, and the persisting ‘national humiliation’ narratives embedded in Chinese strategic culture and perpetuated through national curricula, are two resounding yet not isolated examples of how knowledge and prejudice are furthered at times with little critical reflexivity.
This epistemological project of deconstructing the foundations of knowledge and, subsequently, its limitations, needs to permeate the public space. The best way to achieve this is to start precisely in those places where forms of knowledge are cultivated: schools, and to a lesser extent, the media and the entertainment industry. Revisited curricula and historical narratives which help promote a vision of our limited knowledge and of the plurality of truths are a promising start for greater transcultural understanding and a more functional, and thus sustainable, peaceful and progressive, global order.
This article originally appeared in ISN.
Strong statist positions and a fixation on state sovereignty once inhibited progress toward more just and effective models of global governance. However, there can be no denying that globalization has not only led to the unprecedented transformation of our societies, but also the role that states play in the international system. Yet, even as states gradually share more responsibilities with corporations, sub-national entities and international organizations, their structural significance still remains indisputable – particularly when it comes to finding near-term solutions for better modes of global governance. This should result in a more equitable and representative international state system to which global governing structures will remain accountable.
The Existing Structure and Its Limitations
Traditional paradigms typically devoted almost exclusive attention to the narrow interests of states, even around issues that were global in nature. This understanding gradually grew obsolete, and many of the post-war multilateral institutions were formed to address those challenges which cannot be solved by unilateral state decisions. With its numerous funds and programs, the United Nations is the best known and furthest reaching of these institutions – even though important work is increasingly being undertaken by a variety of Non-Governmental Organizations (NGOs) that are not directly influenced by national interests. Yet such developments have by no means guaranteed a significant improvement in global governance as it is currently practiced. UN Resolutions continue to be violated and international laws are regularly breached. To take the most straightforward example, the UN Security Council (UNSC) remains structurally tethered to the interests of its five permanent members. And because these five states retain formidable veto powers, the threat of unilateral decision making remains very much in place.
Accordingly, the veto system in its current form is a major impediment to more effective forms of global governance and is ill-suited to the economic and political realities of our times. Moreover, international failure to reform the system is solidifying global governance around a paradigm that reflects the power balances of the mid-20th century rather than the present day. Without immediate and profound reform, the credibility of the international community will remain severely hamstrung. To overcome this, the effectiveness and influence of the UNSC can be enhanced in several ways, and states will have to reach a consensus over how it should be amended. Options include shifting to majority consensus among the five permanent members, extending veto powers to additional states and regional blocs, and abolishing the veto altogether.
Without meaningful reform, the present veto system will continue to impede timely prevention of and/or intervention against large-scale human rights violations and war crimes. However, experience has shown that such changes occur slowly, over an indeterminate length of time, despite intense political pressure. The call for reform of the UNSC was subtly initiated in the ‘90s and hit its stride in the last decade. It has concentrated around the efforts of groups like “Uniting for Consensus”, the G4 or the African Union’s Ezulwini Consensus, which consecutively put forward proposals for a more representative UNSC. The process of reform remains to date “painfully slow” and a decisive structural change has yet to materialize. Consequently, it might be more realistic to push for changes within the existing system, such as those recently suggested by France. Earlier this year, Paris urged its fellow permanent Security Council members to refrain from using their veto powers in situations involving mass atrocities. In doing so, France has merely reconfirmed that the status quo might be harmful not only to the UN’s standing and legitimacy, but to humanity as well.
Creating the Right Conditions
A further challenge involves ensuring and enhancing the accountability of existing global governance institutions. Entities such as the International Monetary Fund (IMF) and the World Trade Organization (WTO) continue to implement policies with far-reaching effects, while those most adversely affected by them quite often have little means of recourse. As Thomas Pogge has argued, WTO policies have systematically reinforced global inequality and thus exacerbated global poverty rather than ameliorating it. Poverty, marginalization, and huge gaps in socio-economic development all raise barriers to good global governance. Financial institutions and mechanisms have an important role to play in addressing these issues, but only greater levels of accountability and increased respect for human rights will overcome them. Quite often even NGOs, while not under the direct control of states, do an inadequate job of representing the most vulnerable members of society and instead take directives from their most favorably positioned members.
These considerations call for two principal amendments to the current system. The first is the creation of a universal citizens’ charter that guarantees human dignity regardless of ethnic, religious or national affiliations. The second is that states’ internal representational structures will have to be improved so that the fundamental rights of all sections of society are better protected. Because radical economic disparity within a nation compromises the capacity for representation, and hence good internal governance, global governance will also subsequently be debilitated. Accordingly, improved internal governance within states will result in less economic disparity, and is a sine qua non for improved global governance and the upholding of human rights.
Setting Minimum Criteria
While attempts to promote human rights beyond the nation-state predate it, the 1948 Universal Declaration of Human Rights has led the way in formal efforts to recognize and protect them. Yet it is only in more recent times that crises occurring outside the spectre of warfare have been recognized as a fundamental challenge to global human rights. Economic policies that systematically reinforce global disparities, and actions that threaten global economic stability (like those leading up to the financial crisis of 2008), have always compromised human dignity and should be constrained by the aforementioned universal citizens’ charter. Cultural arrogance, marginalization and exclusionary practices also add to what is effectively a cluster of problems.
Historically entrenched divisions and transcultural misunderstandings perpetuate problems whose solutions must go beyond the national level. Such challenges to human welfare and dignity throw into stark relief the interconnectedness of all peoples and the remote consequences of local actions, as well as the need for efficient institutional solutions that are able to bypass long-standing power struggles between select nations. At its minimum, good global governance, as implemented by multilateral institutions, must work by a set of criteria that are general enough to allow for distinct cultural interpretations to coexist. A sustainable agenda for global governance must be clearly guided by:
Global governance suffers from both ineffectiveness and a crisis of legitimacy. Credible and sustainable global governance requires that institutions cease to work predominantly to the advantage of those groups or nations already in positions of power. It must create the conditions for those in less fortunate positions to have stable and meaningful social lives. Progress has begun on these fronts with the partial integration of multilateral agencies and non-state governing bodies. However, progress continues to occur at a rate far slower than that of globalization and is inadequate to address the emerging global challenges. Placing these criteria at the centre of developing forms of global governance will assure that such institutions and partnerships do not lose sight of their intended purpose.
This article originally appeared in the Global Policy Journal.
Nayef Al-Rodhan argues for a globally inclusive educational program that promotes cultural security and understanding.
There are all kinds of moral truths that see the world from different perspectives, and none of them is necessarily more right than the others. This underscores the significance of education: alongside family structure and cultural context, education has the capacity to influence every aspect of how we think about the world. In our context of unprecedented globalization, it is crucial to put this powerful tool to use in the interest of tolerance and cultural understanding, in ways that foster harmonious co-existence and cultural synergies. When the fundamental importance of education becomes fully appreciated, it can be revitalized and adapted to encourage open-mindedness, inclusion and cooperation.
Educational Hurdles to Overcome
It is worth pausing to consider the reasons for the lingering lack of emphasis on education. Its general importance has not, of course, been lost on intellectuals through the ages: Plato made a (rather infamous) strict educational regime fundamental to his Republic. Bentham and Mill, despite their differences, both recognized education as the most direct route to realizing the utilitarian goal of maximizing happiness for the greatest number of people. John Dewey argued at length that education is crucial to democracy. Yet the notion of a global education, one that takes globalization, its impacts, its promises and its challenges as its main subject matter, remains seriously underdeveloped; two principal issues must first be confronted. The first is a debilitating form of parochialism in which parties fail to see the value in learning the ways of the “other”. The second is a naïve conception of personhood, which fails to appreciate the all-encompassing influence of environment, including education, in the development of a human being.
From a purely theoretical point of view, a position that embodies these two issues is untenable. As philosophers have long remarked, the absence of external influence simply leaves a void to be filled by some sort of pure internal causation, perhaps of the sort Aristotle had in mind when he claimed that a stone that moves is moved by a stick, and the stick in turn is moved by a man. But what moves the man? This is a question often posed by contemporary thinkers, materialists in particular.
Theories of psychology and neurochemistry, as well as theories of mind and emotion, have been especially interested in answering this question. My account of a predisposed tabula rasa, a “mind” equipped with a minimal suite of survival instincts demanded by natural selection and otherwise open and liable to be determined by circumstances, harmonizes with contemporary neuroscientific research and suggests that what motivates a human being depends greatly upon his or her experience and exposure. Neuroscience also informs us that our knowledge is mediated by neurochemistry: it is not fixed or objective but alterable and incomplete, shaped by both our interpretations and our environment. Education thus plays a central role both in determining our social dispositions and in global affairs: it teaches us to uncover the many biases in our respective forms of knowledge, appreciate our own limitations and respect the ‘truths’ of others.
The Content of Education for a Globalized World
The premise that we learn the most about ourselves by learning about others might sound like a platitude, but its significance continues to be underappreciated and the concept remains under-applied. When students first encounter different mythologies, not only do they come to understand others more thoroughly, but they also become capable of assessing the role that mythology, as well as dogma, has played in their own culture. Such multicultural study creates the conditions for a more tolerant and self-critical attitude while instilling a greater understanding of how cultures have evolved. However, this outcome does not occur often enough, because in order to assimilate mythology in this way, students must also be cautioned against the false but pervasive view of essentialism. A diverse cultural education must also emphasize intra-cultural variety and the malleability of individual human beings when their cultural and social contexts shift. Such learning is enriching on another level as well: it teaches us that our histories are intertwined. It shows that our ‘civilizations’ are not as separate as popular discourse would have us believe, but rather developed through constant mutual borrowings. Most importantly, transcultural education reveals that human history is a cumulative effort in which no culture can claim a monopoly on achievement; each is indebted to others for their contributions. We need to move towards an educational paradigm that promotes an ‘ocean model of civilisation’: a metaphor for human civilization conceived as a whole, like an ocean into which different rivers flow and add depth.
Perhaps most significantly of all, education must be updated to present information objectively, fairly and in a balanced manner. As is well known, education has too often been a venue for indoctrination in which half-truths or outright falsehoods are perpetuated. Familiar cases include assertions of the inferiority of the “other”, manifested in the language used to characterize intercultural relations. More insidiously, and ubiquitously, facts relating to violent conflict have long been distorted or blatantly suppressed. For example, the Gulf of Tonkin incident involved deliberate deception regarding the presence of North Vietnamese boats and false claims that North Vietnamese forces later initiated hostilities. While it is now a well-documented case, at the time the situation was far less clear. The dissemination of this type of disinformation is widespread and badly skews our understanding of history.
Beyond such deception and mischaracterization with regard to specific episodes in history and international relations, education in its current form is woefully inadequate concerning certain types of information crucial to global coexistence. The general notion that many wars are just, and perhaps that there is even a kind of nobility to many wars, is not sufficiently confronted. Were it more widely taught that the wars of the last 100 years have killed far more civilians than combatants (roughly three innocent bystanders for every two soldiers), justifications for war would be far fewer. Furthermore, the statistics of modern warfare show an even worse ratio of civilian to combatant deaths, in spite of all the advances in battlefield technology and bluster about “targeted drone strikes”. This is, of course, only one very specific example, but it demonstrates that education is the best means for altering people’s perspectives and thereby challenging the many unjust features of the status quo.
More generally, education holds the key to greater empowerment of women and marginalized populations, and will be the principal weapon in the fight against global concerns such as poverty, injustice and inequality. Providing individuals with the requisite understanding of their place in our contemporary, globalized world, and giving them the autonomy to exercise greater control over their own lives, should figure high on our list of enshrined social and political rights.
Education has the capacity both to foster the tolerance and cooperative mentality essential to the future of humanity, and to build psychological barriers between peoples and reinforce divisive dogmas. It is for this reason that education programs must get the attention they deserve. As globalization accelerates, de-emphasizing nationalist agendas and parochialism, alongside an emphasis on mutual understanding and appreciation of cultural diversity, will be crucial.
Sustainable security for humanity can only be achieved if education is made a priority by states and their societal institutions, including educational bodies, the media, the entertainment industry and political discourse. Political discourse deserves particular scrutiny: electioneering sound bites are meant to unite and excite the electorate and are thought of as temporary, but in fact they leave significant, lasting and harmful attitudes in the minds of the electorate on various domestic and global issues.
The way forward
An ideal educational program that protects the national identity and heritage of states while being globally inclusive and promoting cultural security and understanding should include the following eight features:
- Empowerment and development of inclusive national narratives
- Global knowledge of cultures and histories
- Cultural respect and understanding
- Communication, exchange and exposure
- Global citizenry through responsible media and political statements
- Global values and equality
- Avoidance of dehumanization of the other and abuse of knowledge
- Openness to other moral truths and views.
Educational practice must be updated to track and promote current and emerging challenges. It is the single most powerful tool for pushing back against an always-looming state of nature, and for promoting a more just, secure, equitable, prosperous and sustainable global order.
This article originally appeared in the World Economic Forum Blog.
The latest scientific advances will soon enable us to take charge of evolution itself. Synthetic biology is a new form of engineering that involves the creation of complex, new biological systems. It is the result of the confluence of knowledge in life sciences, engineering and bio-informatics, and the most promising innovations in this new field – genetic design, protein manufacture and natural product synthesis – could have a revolutionary impact on our lives, particularly with regard to the production of energy and medicine. It brings with it gigantic opportunities and risks.
Early innovations may include personalized, genome-specific medications for the treatment of cancer and degenerative diseases such as Parkinson’s and Alzheimer’s, and pro-environmental bacteria designed to counter the effects of pollution; picture a microbe that ‘eats’ the toxicants in a contaminated body of water. As an alternative to existing, limited energy sources, we could also engineer the mass production of cellulosic ethanol – a renewable plant-based biofuel that produces very low carbon emissions.
On the other hand, synthetic biology could also prove extremely hazardous. Even among enlightened citizens and scientists, there is a great deal of concern surrounding the field, and rightly so. These innovations have tremendous potential for good, but without proper regulation they could be devastating. Certain DNA products have a huge capacity for virulence or pathogenicity: Mad Cow Disease is caused by nothing more than a prion, a tiny protein smaller than a virus, yet its effects are potentially devastating.
These new DNA products are highly consequential for health and global security, yet it would not require much for a rogue state or scientist to duplicate the technologies involved. The core knowledge is not difficult to acquire, and an entity with more than amateur understanding that was intent on misusing it would not find it hard to do so. Even among engineers with good intentions, working with nanomaterials is extremely risky: particles created without the necessary oversight can sometimes be small enough to integrate into normal DNA sequences on contact, producing unforeseen mutations.
In addition to these security threats, the rise of synthetic biology poses a series of ethical considerations. On a philosophical level, I believe that man is an emotional, amoral egoist. Our moral compass is steered by the frameworks in which we find ourselves, and we are governed primarily by self-interest and emotional motivations. These motivations, combined with biological innovations are now leading us towards personal enhancements, both physical and cognitive.
The cognitive enhancements are far more problematic, not least because the mind defines who we are. Within a decade or so we will have the ability to enhance not only our cognitive abilities but also our emotionality (or lack thereof). While we might like to pretend otherwise, emotions are physical, cellular and subcellular neurochemical events; once we understand this better, and we already know quite a bit, we should be able to influence mood.
The very concept of biological and cognitive enhancements poses significant questions. Who will be enhanced, and will this create a dangerous societal divide between the enhanced and the non-enhanced? With the chronic gap between rich and poor citizens now the top trend in this year’s Outlook, could synthetic biology result in even more dangerous inequalities both within and across societies? Do parents have the ethical and legal right to design their babies the way they want to, or should there be bio-ethical oversight approval mechanisms? These are serious concerns, and they operate at the state level in addition to affecting individuals.
The protective response required is not easy, but necessary. We must aim at creating oversight mechanisms that mitigate risks without stifling innovation. Because of the diverse national and commercial interests involved, oversight can only be provided by a powerful multistakeholder organization – one that can hold states to account, as well as non-state actors, from biotechnology companies to individual scientists.
Above all, we should remember that human nature is an uncertain variable. The idea that we have innate morality competes with the brutality, inequality, and everything else that fills the history of our species. We must never be complacent about the virtues of human nature – thus the need for very stringent governance paradigms for these extremely powerful new tools.
Author: Nayef al-Rodhan is an Honorary Fellow of St. Antony’s College at Oxford University and Senior Fellow and Centre Director of the Centre for the Geopolitics of Globalisation and Transnational Security at the Geneva Centre for Security Policy.