In trying to understand natural worldviews, one of the things that keeps coming up is the need to develop a system of morals. Ethics is generally part of religious worldviews, but it seems generally accepted that in a natural worldview ethics is developed from reasoning, logic, and some sort of shared values. For example, sentience is often considered valuable, so that particular shared value can form the basis of an ethics.
After looking at several rational, naturalistic approaches to defining values and morals, I’m left wondering about the general approach taken in developing these frameworks. One of the common elements seems to be that they are based on an assumption that the process can be approached as a rational exercise. In other words, given one or more shared values as described earlier, one can then apply some sort of logical or rational reasoning or thought process to develop an ethical framework.
The problem with this is that modern psychology and cognitive science have pretty clearly shown that humans are not generally rational, logical thinkers. Mind you, that doesn’t mean that rational thought plays no role at all, simply that it’s only one of several things that go into our mental processing, especially regarding things like setting values.
Seems like this raises a question: Should we really expect human ethical frameworks to be based on rational thought instead of taking into account all that it means to be human? In other words, why should we expect it to be possible to rationally establish human values and morals when humans themselves are not rational? *
For example, humans are fundamentally social beings; this is a central part of who we are. Therefore, it seems a mistake to consider the value of another being only in terms of their characteristics, and not include the relationships others may have with them. While relationships are not purely irrational, human relationships are generally so complex that describing them in a purely rational manner is not currently possible. Thus, if we want a rational framework for our ethics, we probably can’t simply include human relationships.
Consider, for example, how sentience is sometimes used as the basis for a value system. Some ethical frameworks start by assuming that sentience is something that we clearly value. We consider humans, the most sentient beings, to be the most valuable. Based on that, we can build ethical systems that seek to optimize the well-being of sentient beings. Working this out, some people conclude that a sophisticated mammal, such as a gorilla, has more value than a human with a severe mental handicap. This is a logical progression of the framework built on valuing sentience.
However, this ignores the value that one human may have to another human because of their relationship.
For example, when considering something like the value of an infant with a severe mental handicap or an adult overcome with dementia, it seems like their value includes others’ relationships with them. Some argue that, because such individuals do not exhibit sentience, support should be withdrawn from them and the resources used elsewhere. But, for example, terminating my handicapped child or withdrawing support from my parent with dementia does not affect only them. Because of the very deep relationship that exists with them, actually built into my brain and part of who I am as a human, such actions affect me, too.
It may be possible to assert a strictly logical framework and override the love I feel for others, but wouldn’t that come at the expense of denying some aspect of my own humanity? Seems like this is where the real death of sentience would happen, at least in part — in ourselves, in a part of our own humanity. This strikes against some of the most fundamental aspects of what it means to be healthy relational beings, in addition to rational beings.
Now, overly compassionate actions may consume resources that could have been used to help others, but lack of resources isn’t really the fundamental problem today. We have enough resources as a species to take care of everyone; the problem is that resources are distributed non-uniformly. What’s really missing is a level of selflessness and love, on a global scale, to make sure that everyone is taken care of.
From that standpoint, would we really be better off by limiting our love when we can make some rational argument against it? Or would it be better to err on the side of loving too much, of sometimes being too compassionate? It seems like the latter is more likely to lead to the greatest good.
* Consider the difference between humans and Vulcans in Star Trek, for example. Basing an ethical framework strictly on rationality would seem to be appropriate for Vulcans, but as illustrated by the oft-highlighted differences, not for humans.
Image: Jax House [CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons