Renowned author and secular scientist Sam Harris hosted an essay contest a few months ago. This post explains why I couldn't participate (because I agree with him) and what is wrong with the winning entry.
A little background on Sam... he's an American philosopher and neuroscientist at UCLA. In 2004 his book The End of Faith appeared on The New York Times Best Seller list and stayed there for 33 weeks. He's made a career of shooting religious fish in barrels in public debates across the world, thus earning him a place among the four horsemen of the anti-apocalypse, and becoming a major figure in the so-called New Atheist movement.
Ok, so the essay contest concerned his book The Moral Landscape, the central argument of which goes like this:
Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
He's already responded to previous criticism here, and none of his detractors' arguments are fatal. The truth is that textbook meta-ethical, moral-skeptical, and utilitarian attacks fall flat. So then, why is Sam's argument not more persuasive? That's the interesting question.
Why The Winning Argument Fails
Ryan Born, a philosophy teacher from Georgia, was chosen as the winner. Now keep in mind that Sam didn't choose the winner; Russell Blackford was the judge. When I heard this, I predicted he would choose a simplistic philosophic essay, because Blackford never understood Harris in the first place! Anyhow, Ryan's argument is this:
Sam's science of morality presupposes answers to fundamental questions of morality and value.
The fundamental questions of moral philosophy, Ryan's bread and butter, are but spirals of pontification fit for essays in obscure academic journals. The interesting moral questions are the ones that are not direct consequences of axioms: once you've assumed a particular moral superstructure, all the interesting philosophical work is over. The interesting problems are the ones where you must employ science to win any ground, and both Sam and I use "science" in the general sense of the set of all rigorous subjects, including philosophy, a point Ryan clearly missed.
The bottom line is that at some point we need to make a choice: axioms cannot be derived, and this doesn't invalidate the resulting system of deduction itself. Minimizing the suffering of conscious creatures is the best choice for an axiom I've heard yet, and I'm comfortable ignoring the arguments of those who wouldn't assume it!
Why Sam's Argument is Not More Persuasive
Because navigating the "Moral Landscape" is hard work and people are lazy. The lack of a clear program for how exactly we would use science to answer moral questions is problematic and makes people queasy. There are too many variables to isolate, and no feasible number of double-blind studies could account for them all; people want exact answers, not probabilistic ones.
Since Sam posits a version of utilitarianism, morality becomes an optimization problem. And as in all hill-climbing problems, you need to know which direction to travel. It might be the case that computing the heights and angles of the surface of the moral landscape is not tractable or even possible, i.e. the moral metric is an uncomputable function. That is, if in the course of exploring the moral landscape we find ourselves standing on a hill surrounded by fog with no peaks in sight, which direction do we take? Are we deterred when we must descend in order to reach higher peaks? Additionally, there may be fundamental limits to the accuracy of simulating the global neuronal weather system, and even if there aren't fundamental limits, the necessary compute power wouldn't be available for decades or even centuries. All of these are valid concerns, but they shouldn't stop us. This is clearly a difficult endeavor, but one I believe is worth the attempt.
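The local-maximum worry above can be made concrete with a toy sketch. Everything here is illustrative, not a model of any real moral metric: a made-up one-dimensional "landscape" with a low peak and a high peak, and a greedy climber that can only see one step in either direction (the fog). Starting near the low peak, it gets stuck there, because every visible move goes downhill.

```python
import math

def landscape(x):
    # Purely illustrative terrain: a low local peak near x = 2
    # and the global peak near x = 8.
    return 3 * math.exp(-(x - 2) ** 2) + 6 * math.exp(-(x - 8) ** 2)

def greedy_climb(x, step=0.5):
    """Move uphill while a neighboring step improves the score."""
    while True:
        best = max([x - step, x + step], key=landscape)
        if landscape(best) <= landscape(x):
            return x  # no visible uphill direction: stuck in the fog
        x = best

print(greedy_climb(0.0))  # stops at the low local peak, x = 2.0
print(greedy_climb(6.0))  # stops at the higher peak, x = 8.0
```

The climber's final position depends entirely on where it starts; to do better it would have to accept going down before going up, which is exactly the navigational problem a science of morality would face.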
Two Interesting Points
Sam doesn't help himself by ignoring zero-sum moral problems. People are impatient and want answers to pragmatic problems in the world today: those that involve weighing the wellbeing of one group against another, as governments weigh the needs of their own citizens.
Compassion may be an Earthling-centric value. A universal science of morality on a par with, say, physics should be equally possible to practice on any planet. But we can conceive of a sentient species that evolved without compassion. For instance, on the Borg home-world, moral science wouldn't be the same as on Earth.
However, we have made a well-informed choice: compassion is our axiom, and we can ignore those who wouldn't assume it, including evil aliens and contrarian philosophy teachers :)