In the real world, morality is very obviously a learned, communal, social thing. Children are taught moral rules and attitudes by their parents and teachers, by movies and television, by friends and by bullies. Different communities have different moral rules. And there is, of course, disagreement on particular rules and values within a community. (Is abortion wrong? How about assisted dying? Is it wrong to eat meat from factory farms? What about hunting?)
Philosophers going back to Plato have understood the communal, social morality we live with as connected to and dependent on a separate, perfect, objective and correct morality. The opinions a particular person or community might have about the morality of slavery, theft, property rights, or divorce are correct just insofar as they accord with objective morality.
Plato thought that the good had a real and literal existence outside the physical world, along with beauty, love, and triangularity. Whether horseness and mud-ness similarly existed, he was less sure. Modern philosophers have mostly abandoned his metaphysics, but the belief in a non-social objective morality remains widespread.
One nice thing about this view is that it explains how and why we have disagreements about morality. If I passionately believe that abortion is always wrong and you disagree, it is not very satisfying to say that it’s wrong for me and right for you. If I believe abortion is wrong, I am not just stating my personal code of conduct; I am expressing the view that no-one should have abortions.1 If there is one perfect, objective, and correct morality, then we are disagreeing about what it requires, and if we could figure out the right answer, one of us would be obliged to change their mind and maybe their behaviour.
I think that objective morality does not exist, and that the ‘moral realists’ who believe in it face an unresolvable dilemma. Either objective morality is empty, or it is unmotivating. This dilemma is the main thing the parable of the Tiger and the Ape was trying to show. To get any content for objective morality, one needs rules that every rational being would have to accept. But there are no such rules (‘do what you think is best’ doesn’t count), since no substantive moral rule can guarantee agreement. Attempts to derive moral rules along the lines of ‘do unto others as you would have them do unto you’ fail. Ethical egoism is the most obvious consistent counterexample, but animal rights is another one. How do you apply the golden rule if you don’t know who counts?
You cannot base objective morality on intuitions, because moral intuitions are too various and there are no rules (not even ‘don’t torture babies’) that command universal assent. I won’t belabor this point, but do moral realists really think that Jeffrey Dahmer would agree about torture if he thought it through a little more?
But I’m not going to get into that stuff any further here. What I want to talk about is the positive case for moral relativism.
The kind of moral relativism I support holds that there are multiple moral systems that rational agents can follow, but that there is no objective standard which can show that any one of them is correct. This kind of moral relativism is quite difficult to falsify. It is clear that, as a matter of fact, there are different moral systems that people follow. So the question is whether there is some way of sorting out a correct moral system from those on offer. We’ve seen my reasons for thinking not.
Now, I will happily concede that there are objective grounds for preferring some moral systems over others. That is because a moral system could be inconsistent or ineffective. A moral system that makes contradictory commands or is logically incoherent maybe shouldn’t be adopted given rationally superior alternatives. And moral systems that have certain underlying values (e.g. promote the greatest good for the greatest number) but which include rules that actually interfere with those goals (e.g. torture all lawbreakers to death) are similarly unappealing. The practical moral apparatus that implements the underlying principles can be broken, and those moralities objectively suck, because they suck according to their own standards.
Not only that, I think that for you and me, it may well be the case that there is a best moral system. That is because people have individual values and preferences, and the moral system that best reflects an individual’s values and preferences will be better suited to that individual. Moreover, moral systems in the real world are embodied in social norms. That is, societies attach social rewards and punishments to following or deviating from the accepted moral code. So anyone who values getting along well in society has some reason to obey society’s moral code, and perhaps also to personally adopt it. I, for example, accept something pretty close to rule utilitarianism in my personal morality. I just don’t think that everyone is obligated to be a rule utilitarian, or that anyone who disagrees with me is making a mistake. Our moralities may be incompatible.
For social apes like us, who suffer empathetic pains at seeing other people suffer and who must live in complex social groups, an altruistic morality is a good fit. For a tiger, less so.
There’s the rub. Values and preferences are inherently subjective. A moral system that I will accept based on my values and preferences will not seem compelling to a tiger, or a serial killer, or a Nazi. So long as their preferences are internally consistent, on what grounds could we prove that their preferences ought to be abandoned?
If I cannot claim that my morality is objectively correct, what grounds do I have for following it at all? Are my principles not empty wind? Hardly. My principles are exactly that: mine. By calling them principles, I universalize them in the sense that I desire them to be followed by others. Insofar as I share values and preferences with others, I may be able to persuade them to follow my code, if it better realizes their values and preferences. But there are no ‘stance-independent’ reasons I can use to persuade them, only the implications of their own beliefs and values together with facts relevant to the efficacy of their practical moral apparatus.2
Back in the real world, moral systems are endorsed and enforced by societies. A community of people endorses and adopts a moral standard. They do this because the individuals are close enough in their outlooks that they can accept a shared system. Despite our many differences, we apes are much alike. Most of us value fairness, compassion, freedom, and the welfare of self and others in the community. The moral disputes we have mostly concern the relative weight to give our conflicting preferences.
When rival moral systems confront one another, there might be no fact of the matter as to which is correct in an objective sense. This is, I think, the main intuitive objection to moral relativism. It seems crazy to say that there is nothing wrong with Nazism or the Khmer Rouge. But as I observed above, there are objective grounds for preferring some moral systems over others. I think it is actually quite easy to show that the Khmer Rouge were immoral even for a moral subjectivist. The Nazis were also immoral, but they are a bit more interesting to discuss.
The Khmer Rouge were Marxists, and consequently they believed in something pretty close to utilitarianism, as I do. The Khmer Rouge’s beliefs and principles were intended to create peace and prosperity. Instead they killed millions and plunged the country into destitution. They were evil by their own standards and mine. Their practical moral apparatus, the beliefs and instrumental valuations about what would further their moral goals, was catastrophically flawed.
The Nazis are also pretty clearly failures in the same way the Khmer Rouge are, however one might understand their moral framework. But in addition to that, their moral framework is an exclusivist one: for the Nazis, most human beings don’t count for much, or for anything at all.
This raises a concern. Is the problem with Nazism really just that they did a bad job of creating a thousand-year Reich? Isn’t there something objectionable about Nazism in virtue of the desire to commit genocide in the first place? Isn’t it wrong that they thought genocide was good?
Yes, it is wrong to think genocide is good: to me. And to most people. The norms of agents are definitive of right and wrong, because right and wrong are inherently subjective. Genocide was good from the Nazi point of view. There is no rational argument to persuade a Nazi that Jews deserve to live, except that they might be useful for a little while.
But we are not obliged to consider Nazism from an objective point of view. There is no such thing in morality. The only view we have is our own. The existentialists were right about that. We have nowhere else to turn: we have to decide moral questions for ourselves.
But we are not untethered in these judgements! We are moral animals. We value our own welfare, and that of others. The foundation of subjective morality is individual values, individual preferences, individual judgements of good and bad. The rational system of our considered moral judgements is built on these intuitions. We have these intuitions: they are as real as love and hate, pain and pleasure, vision and smell.
Torturing an animal does not seem wrong to a tiger. But it will always seem wrong to me. It seems wrong because I judge the suffering of animals to be bad. I am free to do so. I also cannot help it. I condemn those who act wrongly by my lights. The fact that my principles are not rational obligations on all agents is completely irrelevant to my interest in furthering those principles. It just means I would be wise to associate with agents who share my principles or something close to them.
And that I should be ready to hit the tiger with a rock.
1. So, actually, I can perfectly well universalize my preferences to others without there being an objective standard making my preference true. But we’ll get there later.
2. By practical moral apparatus, I mean those factual beliefs that determine our instrumental values and the normative commands that depend on those factual beliefs. These are not at all insignificant. The difference between a Marxist and a libertarian can easily be a matter of factual disagreement. The normative injunction to kill the Kulaks, or abolish the welfare system, depends on a factual belief in the efficacy of that action in furthering a deeper moral goal.
Morality is a personal understanding of best practices when dealing with other creatures. Ethics is formalized, usually shared, morality.
Morality is not something we are reasoned into. Morality is something we are socialized into. We can’t logically convince someone to share our intuitions, but we can set up our society to reinforce the intuitions we want to impart.