This is going to make my hair itch for weeks. Dan Jones investigates the problems of moral choice writ large for NewScientist (26 September 2015, paywall) and comes up with a doozylicious problem, at least to my mind. First, he covers the basics: intuitive moral sentiments are those gut reactions you have learned for local situations – you see it, you act. These serve well when the situation is, ah, local, or better put, when the effects of your action are limited to the local (geographical) area – and, although it’s not stated, presumably to a limited span of time as well.
And then there is what Harvard neuroscientist Joshua Greene calls “manual mode”, where the situation calls for deliberate consideration. The decision may not be quick, but it may more often be correct, especially if the intuitive reaction yields an improper result. Manual mode appears to be more appropriate for situations where the choice, correct or not, will have a far-reaching effect.
He covers a bit of history, such as the British history of abolition (it involves shame), and then moves on to modern movements, which also utilize shame, which brings us to this:
However, harnessing the power of rational reflection, collective identity and shame may not be the only options for would-be moral revolutionaries. In their book Unfit for the Future, philosophers Ingmar Persson of the University of Gothenburg in Sweden and Julian Savulescu of the University of Oxford argue that our moral brains are so compromised that the only way we can avoid catastrophe is to enhance them through biomedical means.
In the past few years, researchers have shown it might actually be possible to alter moral thinking with drugs and brain stimulation. Molly Crockett of the University of Oxford has found that citalopram, a selective serotonin reuptake inhibitor used to treat depression, makes people more sensitive to the possibility of inflicting harm on others. Earlier this year, for instance, Crockett and colleagues found that participants who had taken citalopram were willing to pay twice as much money as controls to prevent a stranger from receiving an electric shock (Current Biology, vol 25, p 1852).
I leaned back and wondered, Is this the loss of moral choice?
Of course that raises moral questions in itself – who to treat, how, and at what age? But Persson and Savulescu argue that if the techniques can be shown to change our moral behaviour for the better (who or what defines “better” is another question), then there are no good ethical reasons not to use them. Take the issue of consent, which children could not provide. “The same is true of all upbringing and education, including moral instruction,” says Persson.
But wouldn’t biomedical moral enhancement undermine responsibility by turning us into moral robots? Persson and Savulescu argue that biomedical treatment poses no more threat to free will and moral responsibility than educational practices that push us towards the same behaviour.
Assuming this were practical across a large segment of the population – it’s not, yet – can I agree with Persson & Savulescu that this is no different from moral instruction? I’m finding it difficult.
Education is the provision of known true facts (as best we can know them) and processes to sentient beings in order to facilitate better actions. In other words, the brain is altered by the impact of knowledge. However, as sentient, self-aware beings, we have at least the potential to understand why we react as we do to the world – to understand, say, how increasing greenhouse gases causes worldwide climate change. If the administration of a drug produces a change in reactions comparable to that produced by knowledge, well, how is it working? The cited example is interesting in that it suggests an increase in empathy, but I have to wonder whether it would have a similar impact in manual mode.
Yet, unless one believes in a deterministic model of the universe, I see a difference: the person subjected to education, general or moral, is still making a choice – a choice to believe, or disbelieve, the evidence, the processes, or even the inclinations of God, and to judge whether the results of these actions are beneficial for themselves and for those they impact in the non-local area. Is the same true of the person with the medicated morality? As I think about it (with my head-cold-bound brain), it seems more and more fantastical to think a medication can change morality. To be sure, the cited study appears to have modified the intuitive moral mode; would it also affect the manual mode?
Is it coercion? Is shame coercion? Yes, and yes. Which is impermissible?
Another question: if a drug can make us “more moral”, does this imply there is a morality of some certainty, known by our bodies even if not articulated by our philosophers? Or is it simply a matter of interpretation: sure, the behavior is modified by the drug, but whether that behavior is more or less moral depends on the interpretation put on the action?
Yep, the hair will be itching for weeks. Let me know what you think.