THE FUTURE OF MORALITY

July 1, 2012

Can we have a science of morality?

What is right and what is wrong?  What are good and evil?  These questions about the origins of morality, ethics and justice have been the subject of philosophy for millennia, but never science.  Unlike philosophy, science demands that any claims made about the universe be not only logically consistent, but supported by testable evidence as well.  A science of morality would therefore require empirical data across the full range of relevant spatial scales, from the micro-level of the individual person to the macro-level of our entire species.  An insurmountable obstacle up until now has been that data at the micro-level are inaccessible, locked within the minds of individuals.  For more than a century the prevailing view among philosophers and scientists alike has been that these data will remain forever out of reach – that the inner workings of the mind are inherently subjective, with no prospects of ever being observable.  So while a great deal of work can be done by making micro-level inferences about individual minds from macro-level observations of human behavior, scholars have so far been critical of any notion that a science of morality might emerge alongside psychology, sociology, anthropology, and the other social and behavioral sciences.  But a handful of thinkers believe that this may soon change as a result of the exponential progression of technology.

One of these thinkers is Sam Harris.  In his 2010 book, The Moral Landscape, Harris makes a strong case for a future science of morality.  He argues that morality is a function of wellbeing and suffering, and that because wellbeing and suffering are a product of our neurological machinery, morality must therefore be measurable at the level of the brain.  On this view, a science of morality is both a logical and an inevitable extension of the neurological and mental health sciences.

In this essay I am going to argue that although Harris’s Moral Landscape is based upon a futuristic vision of the sciences and technologies related to the human brain, this vision is not nearly futuristic enough.  Harris’s arguments are not wrong per se, but rather are incomplete because like other cognitive scientists he is still implicitly basing his analysis on the assumption that human biology is immutable.  Harris is right to assume that the science of morality will be a brain science, but he is wrong to assume that in the future human brains will be no different than they are today.  By the end of this century we will have the technology to dramatically modify how our brains work, and the moral implications of re-engineering our minds are nothing short of staggering.

The impending availability of empirical data at the level of the brain means that age-old questions of right and wrong, and of good and evil, will become scientific questions in the near future.  A science of morality is indeed in the offing.  But when we abandon the assumption of biological immutability we open the door to a more fundamental debate than simply what is moral: we can begin to ask what should be moral, and why.

Let me begin by providing some conceptual context for Harris’s Moral Landscape.

PART 1

Wellbeing and Suffering: The Bedrock of the Moral Landscape?

The Moral Landscape by Sam Harris

Harris argues that what is morally good and bad can be measured in terms of the suffering or wellbeing of conscious creatures.  This is a consequentialist view of morality in which the value of an action is determined by its outcome, and is reminiscent of the late-18th Century moral philosophy of Jeremy Bentham, the father of utilitarianism (the most familiar flavor of consequentialism) who argued that “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure”.  For Bentham, the most moral action is the one that maximizes utility – meaning the net surplus of pleasure over pain – for the largest number of people over the longest period of time.

There are two flavors of utilitarianism: act and rule.  Act utilitarianism, as the name implies, evaluates the utility produced by specific events or actions – acts.  Rule utilitarianism, by contrast, evaluates the utility produced by procedures, principles and laws – rules.  This is essentially a difference between microscopic and macroscopic units of analysis.  While Harris often discusses the morality of specific acts, his Moral Landscape has a decidedly macroscopic focus.
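To make the act/rule distinction concrete, here is a toy sketch in Python.  Everything in it – the functions, the hypothetical rule, and the (pleasure, pain) numbers – is invented purely for illustration; it is not drawn from Bentham or Harris:

```python
def act_utility(outcomes):
    """Act utilitarianism: score one concrete action by summing the
    pleasure-minus-pain it produces for each affected person."""
    return sum(pleasure - pain for pleasure, pain in outcomes)

def rule_utility(rule, situations):
    """Rule utilitarianism: score a general rule by the total utility
    of the actions it prescribes across many situations (the macro view)."""
    return sum(act_utility(rule(s)) for s in situations)

# A hypothetical rule ("always tell the truth") applied to two situations,
# each yielding invented (pleasure, pain) pairs for the people affected.
always_tell_truth = lambda situation: situation["honest_outcomes"]
situations = [
    {"honest_outcomes": [(5, 1), (3, 0)]},  # honesty helps two people
    {"honest_outcomes": [(2, 4)]},          # honesty hurts one person
]
print(rule_utility(always_tell_truth, situations))  # 5: net utility of the rule
```

The micro/macro difference is simply which object gets scored: a single act, or the rule that generates acts.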

Harris offers a powerful thought experiment to illustrate why wellbeing and suffering represent what he considers to be the bedrock of morality.  Imagine a universe in which the largest number of conscious creatures suffer as much as possible for as long as possible.  Harris calls this universe The Worst Possible Misery for Everyone.  “That’s bad”, Harris says, and if you don’t agree, “then I don’t know what you’re talking about … and what’s more, I’m pretty sure you don’t know what you’re talking about either.”

Since any other configuration of the universe is, by definition, morally superior to The Worst Possible Misery for Everyone, this hellish scenario represents the deepest valley on a landscape of possible configurations of the universe where elevation corresponds to moral value as measured by wellbeing and suffering.  By forcing us to admit that some configurations of the universe are demonstrably better than others with this simple thought experiment, Harris claims we have no choice but to abandon the notion that all arrangements of human society are morally equivalent.  And if some societies really are morally better or worse than others, Harris argues, then it must be science that tells us the good from the bad.  To use one of his characteristically evocative examples, we should be able to determine scientifically that a society in which women are treated like chattel and disfigured with battery acid in honor disputes really is worse than one in which women enjoy all of the same rights and liberties as men.  The latter society in this case would represent a higher peak on the moral landscape, other things being equal.

There are several standard arguments against utilitarian views of morality.  First, critics observe that utilitarian morality is calculating rather than principled, and they suggest that in some extreme situations such calculations become abhorrent.  Would it be moral, for example, to endlessly torture an innocent child if doing so could somehow guarantee the wellbeing of an entire city?  Ursula K. Le Guin examines this nightmarishly Faustian bargain in her famous short story, The Ones Who Walk Away from Omelas.  Second, critics observe that it is impossible to measure all actions and their outcomes with the single metric of utility, and that in any case people would not agree about these measurements since personal preferences and tastes vary.  It is impossible, for example, to compare the utility of a rollercoaster ride with the utility of a hot fudge sundae since some people prefer ice cream to adrenaline rushes and vice versa.  Similarly, some people (whom we label as conservatives) value individual liberty above all else and believe that a society’s utility will be maximized where we each have the freedom to succeed or fail on our own, whereas others (whom we label as liberals) value a social contract in which modest but obligatory sacrifices of liberty and property are used to maximize the collective good.

Harris’s 21st Century utilitarianism is somewhat resistant to these criticisms because he argues that at the level of the brain it really is possible to measure the wellbeing and suffering of conscious creatures.  In fact, neurological science is progressing so rapidly that this may be possible not just in principle but in practice in the not-too-distant future.  With a brain scanner of adequate power, we could actually determine whether the utility of a particular rollercoaster ride outweighed the utility of a particular hot fudge sundae.  A future science of morality, Harris suggests, will therefore dispense with proximate measures of utility such as how people report they feel (we already know that self-reporting is rife with inconsistency and error), and instead will be able to measure wellbeing and suffering directly at the level of the brain.

Demystifying Subjectivity

To understand the plausibility of this vision of the future, it is useful to reason by analogy.  Think of the brain of a robot such as Star Trek’s famous android, Data.

Star Trek’s Data (© Paramount Pictures)

Data’s brain is a fantastically powerful computer that is sophisticated enough to give rise to his mind.  Today it is widely accepted that we will one day have computers of such power, and that we will indeed use them to create artificial intelligences (AI) like the mind of Data.  Now consider that we could readily observe everything happening inside Data’s brain – every byte of memory, every computation, every bit of sensory input.  We can do this with computers today, so there is no reason to think we would not be able to do so in the future, even with computers that are trillions of times more powerful.  After all, an iPhone is millions of times more powerful than the first personal computer 40 years ago, and yet each and every one of the billions of 0s and 1s it processes every second is observable.  So too would be the quintillions of 0s and 1s coursing through Data’s brain every second.  Although this is a massive amount of information, we could in principle analyze it and thereby come to understand precisely how Data’s conscious mind arises from all those 0s and 1s.
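It may help to see what total observability looks like in even the simplest simulated “brain”.  The Python sketch below is a deliberately trivial stand-in – two neurons with made-up weights, nothing like Data’s actual design – but the principle scales: every intermediate value of a computation can be logged and inspected, no matter how large the machine:

```python
# A toy two-neuron "brain"; the weights and input are arbitrary assumptions.
weights = [[0.2, -0.5], [0.7, 0.1]]
state = [1.0, 0.0]                    # "sensory" input

trace = []                            # we can record every computation...
for i, row in enumerate(weights):
    activation = sum(w * s for w, s in zip(row, state))
    trace.append((i, row, state[:], activation))
    state[i] = max(0.0, activation)   # simple rectified response

for step in trace:                    # ...and nothing inside is hidden from us
    print("neuron %d: weights=%s input=%s -> activation %.3f" % step)
```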

So are biological brains any different?  There is no compelling reason to think so.  Biological brains are extraordinarily powerful computers, but computers all the same.  True, biological brains process information differently than the computers we currently design, but ultimately they are still just information-processing machines that generate outputs in response to inputs.  In fact, given our current understanding of the physical universe it is impossible to coherently argue that our conscious minds could be anything other than the product of our neurological information-processing machinery.  A mature science of the brain will allow us to observe precisely how conscious minds arise from neurological machinery, just as we can now observe the workings of an iPhone.  Whether we will, in fact, be capable of understanding what we observe is a separate question, but so far there is no evidence to suggest that the workings of the human brain are too complex to be understood.

The idea that consciousness itself will one day be observable has remarkable implications, foremost of which is the demystification of subjectivity.  By Harris’s lights, simply recognizing that the day will come when subjective experiences are knowable as objective biological facts destroys any concepts which require that subjectivity remain forever unobservable.  Moral relativism is the most explicit casualty here, but constructivist notions of value and meaning are also on the chopping block.  On this topic, Harris points out that it has always been possible to “speak objectively about a range of ontologically subjective facts.”  As an example, he notes that the sensation of pain is a subjective fact that is perfectly open to scientific examination.  But Harris also argues that the advance of the neurological sciences will one day allow us to observe subjective experiences like the pain of a paper cut, the color of an orange, or the taste of chocolate as they occur in the brain.  We will be able to do more than just discuss these subjective experiences – they will become objective biological facts.  And if we can fully understand how the machinery of the brain gives rise to the conscious experiences we call pain and color and flavor, then why not also the experiences that we call thoughts, feelings, and knowledge?  Why not meaning?  Why not value?

Scientific Values

Even if the brain scanners of the future allow us to see what our values are as biological facts, does this mean science can tell us what we ought to value?  Critics have long bristled at the notion that science can tell us what our values should be.  This distaste is rooted in a concept known as the fact-value distinction, which traces its origins most clearly to David Hume’s famous is-ought problem.

David Hume

Hume suggested that an ought (a value) cannot logically be derived from an is (a fact), and this has come to be a rather widely-held belief in both popular and academic culture.  Harris rejects the fact-value distinction and instead argues that values are simply a special kind of fact – namely, they are empirical claims about the circumstances under which the wellbeing of conscious creatures is maximized within a society.  This is another way of saying that values exist only in our minds, which is to say in the collection of biological facts that lies in the meat between our ears.

Harris extends his criticism of Hume’s is-ought problem by asserting that science is actually “in the values business” because the very idea of facts is itself predicated upon a special set of a priori values.  Harris lists honesty, logic, parsimony, mathematical elegance, and respect for evidence as axiomatic scientific values.  After all, what logical argument can we make to convince someone to respect logic?  What evidence could we use to convince someone that they should respect evidence?  Facts mean nothing to a person who does not value honesty, logic, parsimony, mathematical elegance, or evidence.  Facts, Harris argues, begin with values rather than the reverse.

It seems to me that Harris’s list reduces to a single a priori value: consistency.  Whenever we use the words honesty and logic we are invoking the idea of consistency – specifically, the idea of consistency with an external reality that exists independently of ourselves.  After all, it is impossible to remain consistent with reality while being dishonest, illogical, unparsimonious, mathematically inelegant, and disrespectful of evidence.  Science represents humanity’s best effort to be consistent, and I suspect the reason it has been so phenomenally successful at generating useful knowledge is because as far as we can tell the universe itself is perfectly consistent.

A science of morality, then, would be our best attempt to describe human values consistently, meaning as honestly, logically, and parsimoniously as possible, and with as much mathematical elegance and respect for evidence as possible.  Harris argues that the best way to undertake this enterprise is to observe the human brain.  To his credit, Harris acknowledges that the human brain isn’t the only conscious system to which this logic applies, but he nevertheless appears to assume that the brains of the future will experience wellbeing and suffering more or less the same as Homo sapiens do today.  As I will discuss shortly, the fact that this assumption is false raises a host of fascinating and evocative moral questions.

At bottom, Harris’s Moral Landscape is a philosophy in which the a priori valuing of consistency (science) allows us to develop a body of knowledge (facts) that accurately represents the universe in which we find ourselves, and included among these facts are the everyday moral values such as liberty, equality and justice that serve as reliable rules for maximizing wellbeing and minimizing suffering.

PART 2

Human Values Today

Harris’s Moral Landscape is based on the premise that we will one day be able to measure wellbeing and suffering at the level of the brain, but, as Immanuel Kant famously observed, we don’t get to choose the things we find pleasurable and painful.  So then where do our values come from?

Why do nearly all people in all societies, both today and throughout human history, agree that theft and murder are immoral?  Why do most people prefer to be free than to be slaves?  Why do we like hot fudge sundaes and roller coaster rides better than typhoid and tuberculosis?  What makes joy and wellbeing good things, and pain and suffering bad things?  To a remarkable degree, the answer is that our moral values are preferences that derive directly from our biology.

Human behavior, like all other animal behavior (albeit more complex and nuanced), has been shaped by evolution eon after eon to ensure our survival and reproductive success.  And our values – what we desire and admire, what we detest and deplore, what we reward and what we punish – determine our behavior.  Hot fudge sundaes taste good because our ancestors evolved to crave certain types of sugar, salt and fat.  Food poisoning is miserable because headaches, nausea and vomiting helped our ancestors recover by immobilizing them and purging their digestive systems of contaminants.


At the physical level, if we want to experience pleasure today we must seek out specific experiences that stimulate our nervous systems appropriately: we must drive to an ice cream parlor and buy a hot fudge sundae, perform an exhilarating activity such as riding a roller coaster, find a partner with whom to have sex, or ingest a certain quantity of alcohol or some other drug.  And if we want to avoid suffering today, we must take care not to become injured or ill, or if we do then we must seek relief through modern medicine.

Procuring pleasure and avoiding pain at the psychological level are much more subtle and sophisticated endeavors, but they are no less firmly rooted in delivering whatever stimuli our hominid brains require in order to experience specific states of mind.  Whether it is the satisfaction of a job well done or the elation of creative expression, whether the delicious joy of succeeding in the face of adversity or the deep fulfillment of romantic intimacy and love, we must forever conspire to find ourselves in circumstances that tickle our brains just so.

Subtle or simple, complex or crude, the terms of our wellbeing and suffering are almost entirely dictated by the design of the Homo sapiens brain that evolved to ensure our ancestors’ survival.  There are many wonderful books about how human behavior (and its underlying values) has been sculpted by evolution, and I will not attempt to summarize them here.  I will, however, share one particularly strong excerpt from Patricia Churchland’s superb book, Braintrust: What Neuroscience Tells Us About Morality.

“The truth seems to be that values rooted in the circuitry for caring—for well-being of self, offspring, mates, kin, and others—shape social reasoning about many issues: conflict resolutions, keeping the peace, defense, trade, resource distribution, and many other aspects of social life in all its vast richness. Not only do these values and their material basis constrain social problem-solving, they are at the same time facts that give substance to the processes of figuring out what to do—facts such as that our children matter to us, and that we care about their well-being; that we care about our clan. Relative to these values, some solutions to social problems are better than others, as a matter of fact.”

The social sciences and humanities have done a superb job exploring the diversity of human values across both individuals and cultures over the last century.  Nonetheless, the absolute range of human moral values is exceedingly narrow, and its boundaries are in large part (if not entirely) determined by the biology of the Homo sapiens brain.  It therefore seems clear that the true bedrock of Harris’s Moral Landscape is not wellbeing and suffering per se, but rather the evolved structure of the brains that experience those states of mind.

Human Values Tomorrow

The branch of philosophy known as normative ethics asks what we ought to value, and by extension how we ought to act. If wellbeing and suffering become observable biological facts at some point in the future, then according to Harris these questions of normative ethics will have scientific answers. The reason why is that if our goal is to maximize the wellbeing experienced by human beings, then the best way to achieve that goal will be a function of the architecture of the Homo sapiens brain. And observing and analyzing the brain falls within the purview of science.

Why should we not hit our thumbs with hammers all day if we wish to maximize wellbeing? The answer is a scientific one: because we are designed to experience physical injury as unpleasant. So the question of which values and actions will reliably maximize wellbeing for Homo sapiens brains is a scientific one.

Going a step further, we could ask the same question for any conscious being.  There might, for example, be a race of aliens or androids whose brains are designed in such a way that the experience of hitting their thumbs with hammers is positively orgasmic.  In that case, their normative ethics would be different from ours, but the questions of what they ought to value and how they ought to act in order to maximize their wellbeing would remain scientific ones whose answers could be found by observing and analyzing their brains.  From Harris’s consequentialist perspective, questions of normative ethics have scientific answers because we can analyze the consequences that any given values or actions produce for the conscious beings in question, even if the consequences are radically different for beings with varying brain architectures.
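A minimal sketch of this architecture-dependence, with valences invented out of whole cloth for the sake of the example:

```python
# Hypothetical valences: how two different brain designs score the same acts.
human_brain = {"hammer_to_thumb": -9, "watch_sunset": +4}
alien_brain = {"hammer_to_thumb": +9, "watch_sunset": 0}

def best_action(brain):
    """What this being 'ought' to do, consequentially speaking, is simply
    whichever action its own architecture scores highest."""
    return max(brain, key=brain.get)

print(best_action(human_brain))  # watch_sunset
print(best_action(alien_brain))  # hammer_to_thumb
```

Same consequentialist rule, different architectures, different normative answers.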

But what happens when we have the ability to either redesign our own brains or design entirely artificial ones? How would consequences continue to be our moral compass once we were able to choose how consequences affect us? How would we decide whether watching a sunset is good or bad if we had control over how we reacted to the experience?

Here we move beyond the question of what we ought to do in order to maximize wellbeing, to a meta-ethical question: what ought to give rise to wellbeing in the first place? This is another way of asking, what values should minds hold? To extend Harris’s metaphor of the Moral Landscape, we are no longer talking about choices that determine elevation in terms of wellbeing and suffering but rather choices about the laws of physics that define the landscape’s geology itself.

Today we are able to exert only very modest control over what creates wellbeing and suffering within our neurological machinery.  While we may modify how our brains react to stimuli with education, and to a lesser extent with mental exercise such as meditation, our strategy for maximizing wellbeing has overwhelmingly been to modify the environment that our minds experience, rather than to modify how our minds experience the environment.  We are, in Harris’s words, “constantly trying to create and repair a world that our minds want to be in.”

But what if the same technological advances that allow us to peer inside the neurological machinery of our brains also allow us to change that machinery?  What if that technology gives us not only the power to observe, but to exert complete control over how our conscious minds experience the world?  If we had such power, then what should we choose to find pleasurable and painful?


Today a well-educated and disciplined person might successfully condition their brain to experience pleasure from, say, eating a vegan diet, or recycling, or donating blood.  And similarly, such a person might have succeeded in conditioning their brain to experience discomfort from engaging in or even witnessing animal cruelty and other forms of violent and destructive behavior.  But what if in the future we can reprogram our brains as easily as we reprogram a computer today?  Might we not choose, for example, to make celery taste like tenderloin steak and carrots taste like hot fudge sundaes?  And more poignantly, shouldn’t we choose to make beef taste awful, since it is bad for our health, bad for cows, and bad for the environment?  Might we not choose to make recycling and donating blood ecstatic, and shouldn’t we choose to make failure to recycle and donate blood excruciating?
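If values really were as editable as software, the scenario above would amount to something like rewriting a lookup table.  The sketch below is a playful and wholly hypothetical rendering of that idea – real reward circuitry is of course nothing this simple:

```python
valence = {                     # how the unmodified brain scores each stimulus
    "celery": -1, "tenderloin_steak": +8,
    "beef": +7, "recycling": 0, "donating_blood": 0,
}

# After hypothetical reprogramming:
valence["celery"] = valence["tenderloin_steak"]  # celery now tastes like steak
valence["beef"] = -10                            # beef now tastes awful
valence["recycling"] = +9                        # recycling becomes ecstatic
valence["donating_blood"] = +9                   # so does donating blood

print(valence)  # the "moral" pull of each stimulus is now a design choice
```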

As mind-blowing – quite literally – as the idea of reprogramming the brain might be, there are two even more explosive facts to consider.  First, the technology will almost certainly have arrived by the end of this century, and perhaps as soon as the 2050s if the exponential growth of biotechnology, information technology and computing continues unabated.  And second, for many of us this will be within our lifetimes.  It is worth taking a few moments to let these facts sink in.  The implications are as profound as any that humanity has ever faced.

At the very least, the ability to exert control over the nature of consequences will detach morality from its anchorage to the fixed form of the Homo sapiens brain and transform it into a vastly more complex and iterative proposition.  Ironically, once we are no longer confined to the basic genetic blueprint of the human brain we may see the emergence of a truly robust moral relativity – one based on biophysical rather than cultural differences.

AI and Tabula Rasa

We might start by imagining a hypothetical “original position” for such choices, following the example of John Rawls, because unless we wipe the slate clean we would simply be choosing new values based on those we currently hold.  But can we, in fact, start from a tabula rasa, or blank slate?  Are there any values that we could endorse on the basis of logic alone – perhaps some that are similar to Immanuel Kant’s categorical imperatives?  Or would a truly blank slate simply paralyze us?

I do not pretend to have the answers, but it may be useful to reason through these questions by analogy again here.  Consider that when we finally do create AI, we will have to program how their minds experience wellbeing and suffering because as a tabula rasa an AI could not decide on its own values.  After all, what would an AI even do in the absence of any values?  Human beings and other animals are intrinsically motivated to act because our ancestors had to be perpetually on the lookout for both sources of sustenance and sources of danger.  Any individuals unfortunate enough to be born without such motivation would surely have failed to survive and reproduce to pass such a maladaptive trait on to the next generation.  As a result, any normal person quickly becomes uncomfortable if they sit still and do nothing for very long.  Boredom alone is an unpleasant experience, to say nothing of the physical consequences of sitting motionless for any great length of time, which range in severity from muscle stiffness and limbs falling asleep to deep vein thrombosis, bedsores, dehydration and starvation.  But why should an AI experience boredom?  In fact, why should a tabula rasa AI wish to do anything at all?  You and I wake up in the morning because evolution programmed our brains to do so.  An AI, on the other hand, would only wake up in the morning if it were programmed with the necessary motivation, and this motivation would have to be determined by a core set of criteria for how to respond to stimuli.  I think we have no alternative but to call these criteria values.  So what should those values be?
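A bare-bones agent loop makes the problem vivid.  In the sketch below (all of it invented for illustration), an agent with no value function has no basis for ranking its options – including the option of doing nothing – and so it simply does not act:

```python
def choose(actions, value=None):
    """Pick an action by ranking the options with a value function."""
    if value is None:                # a true tabula rasa: no preferences at all
        return None                  # no ranking is possible, so no action
    return max(actions, key=value)   # values are what make behavior possible

actions = ["wake_up", "stay_in_bed", "explore", "sit_motionless"]

print(choose(actions))  # None: the blank slate is inert
print(choose(actions, value=lambda a: {"explore": 5, "wake_up": 3}.get(a, 0)))
# 'explore' -- but only because we, the designers, supplied those numbers
```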

If the concept of playing God applies anywhere, it applies here.  Would it be moral, for example, to create an AI capable of experiencing pain and suffering if we had the power not to?  If you look closely, this is another way of asking, is suffering itself moral?

Now shift gears and apply this same question to future humans who have complete control over their neurological machinery: what if we could end pain and suffering of all kinds at the level of the brain?  Would there remain any basis for a morality predicated on the minimization of suffering?

I suspect that once we have the technological capability to do away with discomfort of all kinds, we will have to redefine the meaning of human suffering because there will be no unpleasant experiences; only pleasant ones and more pleasant ones.  Notions of pain and pleasure as negative and positive positions on either side of a neutral center-point seem likely to give way to a notion of wellbeing that starts from zero and only goes upward.  We already have adjectives to describe various points along such a scale of wellbeing, from mere gratification and satisfaction to deep contentment and fulfillment, from simple joy to sublime ecstasy.  And if we are talking about a future in which we can redesign and reprogram our brains, we will undoubtedly discover states of wellbeing of a variety, quality and quantity that are scarcely imaginable today.

Happiness Pills and Units of Analysis


Paradoxically, eliminating pain while maximizing pleasure may not be an entirely good thing.  Several years ago I asked Sam what the moral implications of a Happiness Pill might be – a drug much more potent than, say, heroin, and whose high never ended.  His answer was that other sources of unhappiness would inevitably creep up on us if we were to forsake our responsibilities to provide either for ourselves or for those whom we care about for too long.  But this does not really answer the question; it merely kicks the can a little further down the road by saying that such a pill will only work until it doesn’t.

So let me ask again: what if a Happiness Pill could impose such a state of bliss upon me that no physical trauma or insult from reality could shake me from it?  So someone breaks my arm – if I were incapable of feeling discomfort and incapable of experiencing the displeasure of anxiety about the future, why would this injury bother me?  So my children are starving – why would their plight bother me if I were incapable of experiencing concern or anguish about the suffering of others?

The example of Happiness Pills raises three important considerations about a future science of morality predicated upon the maximization of wellbeing: 1) temporal units of analysis; 2) spatial units of analysis; and 3) the role that suffering plays in defining wellbeing.

First, with respect to temporal units of analysis, consider that pain and suffering are the mechanisms evolution has used to ensure that we take care of ourselves.  If taking a Happiness Pill led me to abandon all self-preservation, I would die within a matter of days.  If the moral course of action is to maximize wellbeing, surely Happiness Pills are immoral because a week of bliss does not outweigh a lifetime of other experiences.  Our temporal unit of analysis must therefore be the long term, and efforts to maximize long-term wellbeing may inherently conflict with efforts to maximize short-term pleasure.
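The arithmetic behind this conclusion is worth making explicit.  The wellbeing numbers below are pure invention, but they show how the verdict on the Happiness Pill flips once the accounting window stretches from a week to a lifetime:

```python
WEEKS_IN_LIFE = 52 * 80      # assume a roughly 80-year lifespan

ordinary_week = 10           # assumed baseline wellbeing per week
blissful_week = 10000        # the pill's enormous, short-lived high

normal_life = ordinary_week * WEEKS_IN_LIFE   # 41,600 units over a lifetime
pill_then_death = blissful_week * 1           # one glorious week, then nothing

print(normal_life > pill_then_death)          # True: the long view wins
```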

For example, all conscious creatures on Earth experience pain because it has enormous survival value.  This is because pain is an extremely effective mechanism for modulating behavior to prevent injury, both reactively (as when we reflexively recoil from a hot surface) and proactively (as when we learn not to eat things that have made us ill in the past).  Many important – if mundane – bodily functions are facilitated by pain, such as realizing when we have something in our eye, or rolling over in our sleep when our muscles begin to cramp or fall asleep.  In fact, individuals with congenital analgesia who are unable to experience physical pain suffer from rapid physical deterioration as a consequence of self-induced injuries, and seldom live beyond the age of 25 as a result.  These injuries can range in severity from biting the inside of their mouths and failing to move sufficiently while they are sleeping, to using broken limbs without realizing their bones are fractured.  Analgesia, or insensitivity to pain, can be caused by infectious diseases as well as congenital ones.  Victims of leprosy, for example, lose fingers and toes because the nerve damage caused by the disease results in a loss of feeling in the extremities, which are then easily damaged by accident – often so severely they require amputation.

So while a person might wish to be rid of emotional discomforts like stage fright or social anxiety as well as the excruciating pain of physical or psychological trauma, would we be wise to eliminate pain altogether given that it has such utility?  Perhaps not.  Indeed, we can even learn to savor discomfort within certain contexts.  Exercise and education are good physical and mental examples respectively: unlike mere drudgery and toil, these efforts are intentional sacrifices of short-term comfort that represent an investment whose returns take the form of long-term wellbeing.

There are responses to this objection, of course.  We might, for example, reprogram the sensation of pain to be merely informative, like color or flavor, rather than excruciating.  Nevertheless, the overarching point of this objection is that a consequentialist morality requires a temporal unit of analysis that is large relative to the lifespan of the conscious beings in question.

Second, with respect to spatial units of analysis, pain and suffering help us empathize with others.  In the scenario above, taking the Happiness Pill prevented me from empathizing with my own children, and so perhaps if we lost the capacity to experience pain and suffering we might also lose our capacity to recognize and respect the conscious experiences of others – not just our loved ones, but strangers and animals as well.


There may be inherent contradictions here as well: between efforts to maximize the wellbeing of individuals and efforts to maximize the wellbeing of everyone.  A consequentialist morality therefore requires a large spatial unit of analysis.  The wellbeing of the individual person alone is not enough; rather, the collective wellbeing of many (or even all) conscious creatures together must be included within the scope of moral concern.

(As a brief aside, let me point out that conflicts of interest based upon temporal and spatial units of analysis are not at all hypothetical, but are in fact central to every political, social, economic and environmental policy debate of our time.  This is because the two major political affiliations – liberal and conservative – hold opposing views about which units of analysis to use: conservatives prioritize the short-term and the individual, whereas liberals prioritize the long term and the collective.  I have written about this at length in Letter to a Conservative Nation.)

Third, pain and suffering have always been fundamental features of the human condition.  If we were to eliminate them, wouldn’t we also lose part of our essential humanity?  What would hard work mean if working were no longer hard?  What would it mean to make sacrifices if we were incapable of suffering from our losses?  What would courage or bravery mean to us if we were unable to feel pain or fear?  Would we not lose something important, something poetic – perhaps even beautiful – if the word tragedy ceased to be meaningful?  These are all ways of asking the same question: what does wellbeing mean without suffering to help define it?  Would a scale of moral value with only a positive vector – pleasant and more pleasant – be sufficient?

A Consensus for the Present, not the Future

As part of the Edge New Science of Morality Conference in 2010, Harris and a panel of six other luminaries of the cognitive and behavioral sciences drafted a consensus statement about morality as a subject of scientific inquiry.  The statement comprises eight assertions, summarized as follows:

  1. Morality is a natural phenomenon and a cultural phenomenon
  2. Many of the psychological building blocks of morality are innate
  3. Moral judgments are often made intuitively, with little deliberation or conscious weighing of evidence and alternatives
  4. Conscious moral reasoning plays multiple roles in our moral lives
  5. Moral judgments and values are often at odds with actual behavior
  6. Many areas of the brain are recruited for moral cognition, yet there is no “moral center” in the brain
  7. Morality varies across individuals and cultures
  8. Moral systems support human flourishing, to varying degrees

The full statement offers a more complete explanation of each of the above assertions, and is well worth reading.  But while this statement correctly recognizes the human brain as the basis of morality, it incorrectly assumes that morality could have no other basis.  In doing so, the statement effectively confines all discourse on the subject of morality to the parameters established by Homo sapiens biology.

The point I mean to make here is that the question of which specific moral code will maximize wellbeing and minimize suffering for a particular brain design is very different from the question of which moral principles – if any – apply to all conscious systems.  By assuming human biology to be forever immutable, Harris and his fellow cognitive scientists seem to be unintentionally conflating these two very distinct questions.

What Happens When a Leopard Can Change Its Spots?

We will transcend the limits of our biology over the course of this century.  Once that process begins, our moral discourse will inevitably shift away from discussions of what is good for Homo sapiens to what is good for conscious systems, and I suspect that our current notions of wellbeing and suffering will cease to serve us so well in that new discourse.

Male Amur Leopard (credit: Wildlife Heritage Foundation, UK)

Jeremiah 13:23 in the New International Version of the Bible asks, “Can the Ethiopian change his skin, or a leopard its spots?”  The belief that living things cannot change their nature has been a reasonable position to take throughout human history, and indeed throughout the history of life on Earth.  But the game has changed.  With the advent of modern science and technology, the past is no longer a reliable predictor of the future.

Will human nature be a reliable compass in our search for a universal morality?  Was it ever?  Perhaps our human nature has actually blinded us to deeper moral truths.  What might a genuinely universal morality look like?  On what values might it be built?

I will venture to suggest only one axiomatic value that does seem to logically undergird any coherent notion of morality: life.  Creatures must be alive in order to have conscious experiences, irrespective of the consequences of any particular stimuli, and on this view life is categorically good because it makes conscious experience possible, while death is categorically bad because it precludes conscious experience.  It also follows logically that if we are to value life we must use large spatial and temporal units of analysis.  But beyond these very general suppositions, we are in uncharted territory.

Questions for a Future Science of Morality

Let me close by sharing some of the more provocative moral questions I have so far encountered while exploring the idea that we may one day transcend our inherited biology.

  1. When we have the power to transcend our biology and choose our values, what will happen if we choose poorly?

It should go without saying that biological transcendence has the potential to be a truly terrifying prospect.  Today, when individuals find destructive behaviors pleasurable we identify them with labels such as psychopathic and evil.  Will we continue to do so?  Psychopathy is conditional upon the possibility that the acts of one person may cause another person to suffer, which is why mowing down pedestrians in a stolen car is a horrific crime in the real world but reasonably innocuous adult entertainment in the Grand Theft Auto series of video games.  But what will happen when the boundaries of the two worlds dissolve?

The Matrix (© Warner Bros. Pictures)

By the time we have the technology to create AI and reprogram our brains, we may also have perfected virtual reality technologies.  Video games such as Grand Theft Auto will then be fully immersive, like The Matrix – so realistic that they are indistinguishable from reality.  Would it be immoral to kill another player inside such a video game?  If so, why?  If the victim is not truly harmed, where is the crime?

Even if the victim of an assault is harmed, what if they have elected to value being harmed?  Consider a cannibal who kills and eats another person for sexual gratification.  Almost everyone alive today would unequivocally condemn such behavior as evil because of the suffering and harm inflicted upon another person.  But what if the victim’s values are so alien that he or she consents to such treatment – in fact, desires this more than anything else in life?  As it happens, we need not venture into the distant future where values are mutable in order to examine this ghastly scenario.  It has already happened, and rather famously, in Germany in 2001.

  2. What are the moral implications of cognitive enhancement at the social level?

When technology allows us to improve our brains, the wealthy will have access to that technology sooner than the poor.  This scenario is likely to unfold well before we have the power to reprogram our brains entirely. The development of nootropic drugs designed to enhance memory and concentration, for example, is already well underway.  Will this widen the class gap and further entrench disadvantaged individuals, communities and nations in the cycle of poverty?  When these drugs become widely available, will parents have an obligation to provide such enhancements to their children in order to give them the best chance for success in a competitive world?  The morality of our global market economy has already been intensely challenged and criticized (Michael Sandel’s book, What Money Can’t Buy: The Moral Limits of Markets, is a superb introduction to this topic).

What will happen when technology begins to tilt the playing field in favor of the wealthy not just socially, economically and politically but biologically as well?  The meaning of self-made success is bound to change, and perhaps not for the better.

  3. Is there something special about the values we inherited from our ancestors that justifies preserving them?

Beyond their raw utility, could there be an inherent aesthetic value in preserving our evolved values?  Perhaps, but we must be cautious here of the naturalistic fallacy – the notion that any behavior or quality that arises naturally must be good.  We know better, of course: nature is, as Lord Tennyson so vividly put it, “red in tooth and claw”.  Many of our human impulses are indeed base and destructive.  Tens of millions of men in Saudi Arabia and Afghanistan, for example, really do enjoy objectifying and subjugating women – a predilection to which human males seem biologically prone.  (Thankfully, with a modicum of education and diligence most men are quite capable of reprogramming their brains to supersede the most bestial of our hominid instincts.)  We should therefore be wary of lauding human nature carte blanche.

  4. What rights will AI have?

Will we extend the same notions of harm – and the right to be free from it – to AI?  Without a physical body or the capacity to experience pain or suffering, the only meaningful sense in which a conscious mind might be harmed is through deprivation or incapacitation.  At a basic level, this might mean denial of sensory input.  At a higher level, we might be most concerned about denial of the opportunities and capabilities necessary to fully experience wellbeing in all its forms.  Might we, for example, balk at the notion of enslaving AI?

Amartya Sen and Martha Nussbaum argue that the true basis of justice lies in the capability of individuals to live fulfilling lives, and so from our current perspective freedom might seem like a strong candidate for an axiomatic value.  But what if we created an AI who craved nothing in this world but to be a perfect slave?  Even more chillingly, if we are planning on having AI perform services for human beings, would it be moral to program them otherwise?  As I already mentioned, we will need to program AI with the motivation to act, otherwise they will take no action at all.  And if we expect them to serve and obey, they will need to be programmed to value service and obedience.  Is there a moral path through this morass?  Or is it possible that service and obedience are inherently immoral?

(For those interested in the implications that AI has for morality, ethics and justice, Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen is an excellent place to start, as is Machine Ethics by Michael Anderson and Susan Leigh Anderson (editors).)

  5. What are the implications of biological transcendence and mutable values for the predominant theories of justice?

Five major conceptions of justice dominate the current discourse: distributive justice (based on teleological ethics, which includes consequentialism and utilitarianism), procedural justice (based on deontological ethics, and centering upon freedoms and rights), justice as virtue (or desert, as in what is deserved), justice as fairness (from John Rawls), and justice as capabilities (pioneered by Amartya Sen and extended by Martha Nussbaum).  There is far too much material to cover here, so I will simply note once more that each of these schools of thought seems to be entirely predicated upon the nature of the Homo sapiens brain.  In fact, it is quite a challenge to even make sense of these schools of thought when the biological basis of human wellbeing and suffering is removed from the picture.  There is clearly much work to be done here.

  6. What are the moral implications of mutable memories?

Human memory is far from perfect.  We don’t remember details clearly, we cannot always control what we are able to recall, and we cannot choose to forget specific memories at all.  Our memories fade and change over time, and can even be manipulated by people with sufficient training – as when attorneys cross-examine witnesses.  But what will happen when technology grants us the ability to both perfectly remember and perfectly forget?

Certainly there are legal implications.  In fact, we are already seeing the impact that the ubiquitous presence of cameras on mobile phones – a crude form of perfect visual recall – is having on law enforcement.  We have already begun to outsource parts of our memory to handheld computers in the form of mobile phones, and developments are accelerating.  Google, for example, recently demonstrated its new Glass technology where eyeglasses worn by users capture a continuous first-person-perspective video stream that can be shared with others.

Google’s Glass technology (credit: Paul Sakuma/AP Photo)

Futurists predict that within several decades we will record sights and sounds with technologically augmented eyes and ears and store them on devices within our own bodies, and ultimately within augmented brains that are permanently connected to online data storage services.  This will bring new meaning to the term eye-witness testimony.

Are there moral implications?  Will it be a good thing or a bad thing to have a flawless memory of our entire lives?  Will that not make it easier to ruminate over every mistake, obsess over every perceived slight, and incriminate others for slip-ups and blunders that we should just let slide?   And will it be a good thing or a bad thing to delete painful memories?  Do our most painful memories not sometimes help us to be more thoughtful, more kind, more compassionate, more productive people?

The perfection of memory will also make it more difficult to lie, and lie-detection will undoubtedly be perfected by the time we begin to transcend our biology.  Will individuals retain the right to avoid self-incrimination?

  7. What will happen when we can upload and download memories?

Virtual realities like The Matrix will allow us to form new memories based on conscious experiences that never actually happen in the real world.  But what about after we have formed memories, whether from real or virtual experiences?  Surely if it is all just data, we will be able to share memories in the future much as we share photos and videos today.  What are the moral implications of such technology?  How many memories can individuals share before they cease to be different people?  Virtually all of our ideas about morality are predicated upon individuality because it is a fundamental feature of our Homo sapiens brains.  What will happen to morality when individuality is optional?

  8. When we have the power to modify the brain, will there continue to be standards and norms for intelligence, mental health, and morality?

Today the majority of human brains are similar enough to one another that we can speak meaningfully about standards and norms, and therefore about deviations and abnormalities that we variously describe as gifts, disabilities and mental illnesses (although there is still enough variation around the mean to make the definitions of many mental illnesses contentious).

Diagnostic and Statistical Manual of Mental Disorders


But once we can modify our brains, the similarities among individuals that create standards and norms may well disappear, and any notions of mental health and illness could disappear along with them.

Will it then be moral to allow a person to choose to be unintelligent, or to let them freely adopt any values they please?  What if a person chooses to adopt a type of brain that cannot subsequently make intelligent choices?  Would it be moral to allow individuals to accidentally trap themselves (or be trapped by others) in a state of stupidity?  What if I am tricked by a sadist into becoming a masochist?

By analogy, consider that one reason why suicide is illegal in some countries is that we tend to consider people who truly wish to kill themselves as being mentally ill.  If an otherwise healthy individual is not suffering from a terminal illness or chronic pain, most societies hold that any genuine intent to commit suicide signals an unsoundness of mind that voids that person’s freedom of choice.

  9. What will happen to human society when technology gives us control over biological attributes that are currently immutable, such as eye color, skin color, hair color, gender, sexual orientation, size, and so on?

So far I have discussed only the modification of mental attributes, and our world would certainly be a very different place if we could control personal preferences such as the type of person we find sexually attractive or how happy we are with our own appearance.  But certainly by the time we have the power to modify the biology of our brains we will also have (and perhaps long since have had) the ability to drastically modify our bodies.  Today a plethora of crucial civil rights issues hinge upon the fact that our biology is not currently a choice.  But what happens when that is no longer the case?  Will racism persist in a world where we can change the color of our skin as easily as we change our clothes today?  What would racism even mean when we can choose to have feathers or fur instead of skin?  Will we prejudge others more or less when all physical attributes are a matter of personal choice?

  10. Is morality an artifact of evolved mammalian brains?

A number of philosophers, notably Paul and Patricia Churchland and Daniel Dennett, have suggested that most of the mental states human beings consciously experience – beliefs, choices, sensations, concepts, feelings, and even values – are an illusion.  According to this view, Homo sapiens indulge in folk psychology to describe our mental states: we construct narratives and rationalize how and why we behave the way we do in response to the world around us.  But we are deluding ourselves.  Since the 1980s, neuroscience research has shown that brain scanners can consistently identify the moment a person makes certain choices before he or she is conscious of doing so (in some cases several seconds before).

If this view is correct, neuroscience will likely reveal that few or none of the ideas we have about ourselves and our motives are true, and that the values and choices upon which morality is founded may simply be a complex and convincing fiction.

American Beaver (credit: Wikimedia Commons)

Morality would then be just another sophisticated tool produced by evolution to help our mammalian ancestors survive and procreate – part of what Richard Dawkins has called our extended phenotype.  If true, the defining features of human nature – hate and love, anguish and joy, agony and ecstasy, value and meaning – are to Homo sapiens what beaver dams and lodges are to beavers.  This would not be the first humbling fact revealed by science, though it would certainly be among the most spectacular.

Harris has also challenged the validity of the conventional view of morality built upon choices and values in his recent book Free Will, in which he argues that free will is an illusion.  Without free will – namely, the freedom to make choices based on values – we must rethink what morality means from the ground up.  Our actions still have consequences without free will of course, but our notions of merit and culpability may need to be radically revised.

Together with a score of behavioral sciences now firmly rooted in evolutionary biology, these two lines of reasoning are beginning to obviate the traditional study of the origins of morality known as meta-ethics.  Like the theories of justice I mentioned previously, nearly all brands of meta-ethics implicitly (if not explicitly) take the biology of the human brain for granted, and so it is difficult to imagine how they will continue to be relevant once the era of biological transcendence arrives.  By failing to see beyond our biology, most meta-ethics remain chained in Plato’s cave, as it were, puzzling over shadows.

Shake Your Windows and Rattle Your Walls

Changes are coming of a magnitude like nothing humanity has ever faced before, and they are coming in the lifetimes of many of the people likely to be reading these words.

Neurons firing (credit: Dr. Jonathan Clarke)

In each of the preceding questions we see that the consequentialist view of wellbeing as the arbiter of moral values breaks down once we have a choice over how our brains respond to their environment.  The questions I have raised here are ones that I believe we are obligated to ask, not just because they make for lively academic debate but because we will need real answers to them far sooner than most of us imagine.
