Beckoning

In my opinion, it is well worth studying ethics and metaethics to answer questions like “What is morality?” and “Now that I have a strong case for what morality is, am I bound by it?” To answer these questions, one can draw on evolutionary psychology and philosophy, among other disciplines.

One aspect of ethical inquiry that I believe is widely under-appreciated is answering the question: “What is my brain’s current algorithm for determining how I feel about moral issues?” Most moral inquiry, it seems, focuses on why we have our current beliefs, whether they are right or wrong (or whether there is such a thing as a right or wrong moral belief), and what we should do about them. But for now, put those questions aside. I want to focus on the seemingly much simpler question of what exactly our current moral beliefs are.

When we encounter a stimulus with moral weight (say, someone is murdered, or someone makes a breakthrough in cancer research), our brains quickly execute an algorithm that produces a moral emotion as output. This algorithm can be very complicated. It certainly isn’t as simple as “Moral approval if it promotes the greatest good, moral disgust otherwise”, or “Moral approval if it abides by the rules of this old book, moral disgust otherwise”, or “Moral approval if my community feels moral approval when this happens, moral disgust otherwise”.
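To make the contrast concrete, here is a toy sketch of what those three overly simple candidate algorithms would look like if actually written out. This is purely illustrative, not a claim about how cognition works; the Event fields and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical stand-in for a morally charged stimulus."""
    net_wellbeing_change: float  # crude "greatest good" score
    permitted_by_old_book: bool
    community_approves: bool

# Each one-line rule below is far too simple to match how moral
# emotions are actually generated; that is exactly the point.

def utilitarian_rule(e: Event) -> str:
    return "approval" if e.net_wellbeing_change > 0 else "disgust"

def scriptural_rule(e: Event) -> str:
    return "approval" if e.permitted_by_old_book else "disgust"

def conformist_rule(e: Event) -> str:
    return "approval" if e.community_approves else "disgust"
```

Whatever the real algorithm is, it presumably takes far more inputs than any of these, weighs them unconsciously, and is not available for direct inspection.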

We can test our moral algorithm by considering hypothetical situations and imagining how we would feel. We have a long history of categorizing those results, and we can predict with high accuracy how we’ll feel in a given hypothetical. In addition, our moral beliefs strongly inform our moral algorithm. But they cannot fully determine it, just as beliefs about what makes us happy cannot fully determine what actually makes us happy. As with all emotions, most of the process that generates moral attitudes from emotionally charged situations is unconscious. Think of how difficult it is to understand the algorithm underlying basic emotions: we are notoriously bad at knowing what makes us happy, what we want, or even what we like.

I recently had an epiphany about the nature of my moral algorithm. For me, morality is in large part a beckoning. It is a desire for everyone in the world to appreciate beauty with me. It is “Hey! Come look at this awesome thing!” Whatever moves the world towards this end feels morally good to me, and whatever opposes it feels morally bad. I feel that this sense of beckoning may be the dominant part of my sense of morality.

My sense of beckoning is not a belief. It is a desire and a goal. I also have strong beliefs that inform my moral compass, like the belief that people should be free and well fed. But where moral beliefs come from is another matter entirely.

Perhaps we can learn a lot about what morality means for us when we set aside questions about belief, and instead reflect on what our moral algorithms are like.