The Question of Animal Sentience and Consciousness

Written by Jacy Reese, Sentience Institute Co-Founder & Research Director. Edited by Kelly Witwicki. Many thanks to Oliver Austin, Antonin Broi, Guillaume Corlouer, Peter Hurford, Tyler John, Caleb Ontiveros, Jose Luis Ricon, Brian Tomasik, and Jay Quigley for reviewing and providing feedback.

We launched Sentience Institute in June 2017 “to build on the body of evidence for how to most effectively expand humanity’s moral circle, and to encourage advocates to make use of that evidence.” Our aim is to expand the moral circle to include all sentient beings, but the term sentience opens up a host of important philosophical and empirical questions about exactly which beings we’re talking about.

Most of what we do doesn’t depend on the specifics

Sentience Institute’s current focus is on expanding humanity’s moral circle to include farmed animals like chickens, fish, cows, and pigs. Our survey results suggest that 87% of US adults agree that “Farmed animals have roughly the same ability to feel pain and discomfort as humans.” Presumably an even higher percentage would agree that they have at least some level of sentience. There also seems to be consensus in the field of neuroscience that there is strong evidence for the sentience of many nonhuman animals.

This suggests that most common definitions and theories of consciousness agree that expanding the moral circle to all sentient beings includes expanding it to farmed animals, with a possible exception for the farmed animals with the simplest nervous systems, such as shellfish and insects.

If we had to commit Sentience Institute as an organization to a single definition of sentience, we would say it’s simply the capacity to have positive and negative experiences, usually thought of as happiness and suffering. This is narrower than the most common definition of consciousness in philosophy, under which a being is conscious if there is “something it is like to be that organism” (details in the second section of this post). Usually the term consciousness includes capacities beyond happiness and suffering, such as the experience of seeing or visualizing a color. Sentience Institute chose to focus on sentience, a specific kind of consciousness, because most people who have given significant thought to the topic see sentience, rather than all conscious experience, as what is morally relevant.

There are numerous theories of what exactly consciousness is. Is consciousness just a set of processes (functionalism)? Is it behavior (behaviorism)? There is substantial disagreement among researchers about which theory is most likely correct, and there is even disagreement about whether there is an objectively correct answer at all, as intuition suggests there is (the second part of this blog post will argue that there is not).

While we are always ready to update our beliefs on this matter, for now we think there is significant evidence, across these perspectives, that nonhuman animals as simple as shrimp and insects have at least a small degree of sentience, or at least a small probability of being sentient. This suggests they deserve much more moral consideration than they receive today, and thus that we need to substantially expand humanity’s moral circle to include an extremely large number of beings. This position is all we need for most of our organizational decisions at Sentience Institute, at least for now.

Tentative views on the nature of sentience and consciousness

I had personally given a lot of thought to this topic even before co-founding Sentience Institute, so I’d like to spend the rest of this post detailing my current views, even though we’re not yet willing to commit the organization to a philosophical position on the topic. We hesitate to adopt an organizational view given such strong disagreement among researchers, but finding the correct answer here could substantially affect our research decisions and strategic views, so we would like to explore it more in the long run.

My overall tentative view is this: Consciousness (including sentience) does not exist in the common-sense way people intuitively believe it does. Specifically, we cannot merely discover whether a being is conscious — a possibility implied by popular questions like, “Is this robot conscious?” — the way we might discover whether an atom is gold or silver based precisely on its atomic number. (Of course we could create a precise definition that allows us to determine this for consciousness, as I will elaborate later, but current definitions are insufficient for categorizing entities as conscious or not.) People may have an intuition that consciousness is a real, substantive thing in the universe (whatever similar term one prefers), but I think that intuition is unreliable and ultimately mistaken.

The view I wish to defend is often referred to as eliminative materialism, described by the Stanford Encyclopedia of Philosophy as the claim that “our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist.” I believe Brian Tomasik has written the best articulation and defense of this view in his essay, “The Eliminativist Approach to Consciousness.” We seem to agree on essentially all the details, though we approach the topic with different emphases and terminology.

I’m also fine with the term consciousness reductionist, and I think the difference between this and eliminativism is semantic, or at least only a matter of rhetorical strategy. Consciousness reductionism is the idea that we can reduce consciousness to other phenomena, and eliminativism means we can just leave out discussion of consciousness and simply refer to those other phenomena. It might be advantageous in our discussions to use one of these framings over the other, but to the best of my knowledge, they both communicate the same empirical view of the universe.

I also endorse illusionism, meaning consciousness is an illusion in the way stage magic is an illusion, though I prefer not to use this term when possible because it doesn’t seem very well defined to me. I use the term sometimes because it’s a useful, intuitive way of gesturing at my precise views, and it’s the terminology Luke Muehlhauser at the Open Philanthropy Project uses in his report.

In philosopher David Chalmers’ popular framework of views on consciousness, I identify as a type-A materialist or a type-A physicalist. This means I don’t believe there’s a hard problem of consciousness. In other words, I believe “there is no epistemic gap between physical and phenomenal truths; or at least, any apparent epistemic gap is easily closed.”

Finally, I’m a consciousness denier, though it’s important to clarify that I’m not denying that I have first-person experiences of the world. I am fully on board with, “I think, therefore I am,” and the notion that you can have 100% confidence in your own first-person experience. What I reject is the existence of any broader set of things in the universe that we can objectively refer to as consciousness, based on standard, vague definitions like “something it is like to be that organism” or our intuitions and first-person experiences. I don’t think consciousness is an objective (i.e. attitude-independent) reality in this intuitive sense.

The core argument for eliminativism

With any of the common definitions of consciousness, I’m an eliminativist. The three most common definitions (or at least ways of pointing to consciousness) are:

  • “Something that it is like for the organism to be itself”
  • “If you have to ask, you’ll never know.”
  • A definition using personal examples, such as saying, “It’s the common feature between seeing the color red, imagining the shape of a triangle, and feeling the emotion of joy.”

But all of these (and other definitions, especially circular ones that just define consciousness with reference to another vague concept like “awareness”) share a fatal issue — they lack precision. They are insufficient for me to write a computer program that takes all objects in the universe and categorizes them as conscious or nonconscious. Contrast this with a precise definition like, “An even number is an integer which is a multiple of two.” I could easily write a computer program (assuming I have sufficient programming skill and a sufficiently powerful computer) that takes any integer and divides it by two. If the result is an integer, it is even. If the result is not an integer, it is not even. Thus the definition of even is precise for all integers.
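
As a minimal illustration of that contrast, here is a sketch in Python; the function names and the error framing are mine, purely for illustration:

```python
def is_even(n: int) -> bool:
    # Precise definition: an even number is an integer that is a multiple of two.
    # One line settles the question for every integer there is.
    return n % 2 == 0


def is_conscious(entity) -> bool:
    # No analogous line can be written. "Something it is like to be that
    # organism" supplies no test to apply to the input, so there is nothing
    # to implement; the blocker is the definition, not the engineering.
    raise NotImplementedError("no precise definition of consciousness to encode")
```

The asymmetry is the point: the first function is trivial once the definition is precise, while the second is blocked not by missing data or computing power but by the absence of any criterion to encode.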

So with a seemingly common-sense question like, “Is an insect conscious?” we can’t discover an answer even with the very best neuroscience, philosophy, and all other knowledge about the universe, because we do not have a precise definition of consciousness. This is the same way we would struggle to answer, “Is a virus alive?” We can’t hope to find an answer, or even give a probability, to these questions without giving a new, more exact definition of the term. Maybe we could define alive as “able to reproduce” (an oversimplification, of course); then, assuming we’re on the same page about the definition of reproduce, I could probably (in theory) write a computer program that categorizes everything in the universe into “alive” and “not alive.”
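
In the same spirit, here is a hedged sketch of that stipulated classifier; can_reproduce is a hypothetical test, and specifying it is where all the real definitional work would go:

```python
def can_reproduce(entity: dict) -> bool:
    # Hypothetical test. A real version would need its own precise criteria,
    # e.g. does reproduction that requires a host cell (as with a virus) count?
    return entity.get("reproduces", False)


def is_alive(entity: dict) -> bool:
    # Stipulated definition: alive means "able to reproduce."
    # Once the definition is fixed, the answer is determinate under it.
    return can_reproduce(entity)


print(is_alive({"name": "oak tree", "reproduces": True}))   # True under this definition
print(is_alive({"name": "rock", "reproduces": False}))      # False under this definition
```

Whether a virus comes out “alive” now turns entirely on how we choose to specify can_reproduce, which is exactly the kind of choice, rather than discovery, described above.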

This issue is a straightforward consequence of any imprecise definition — let’s call it the imprecision argument in favor of eliminativism — yet scientists, philosophers, and others frequently throw out questions like “Is an insect conscious?” with the expectation that we might one day be able to discover an answer, which is simply not possible if we don’t have criteria in place for evaluating whether an insect is conscious.

To be clear, I don’t think people who believe consciousness exists (i.e. people who disagree with me here) are merely mistaken about how hard the question is, or underestimating the resources it would take to answer it. Instead, I’m arguing that questions like, “Is an insect conscious?” are actually impossible to answer. This is different from, say, math — where questions like “What is 32985 times 54417?” are hard, but not impossible, to answer, especially given certain technologies like handheld calculators.

We can’t expect that sort of conclusive answer with consciousness because we can’t say exactly what it would mean for the insect to be conscious, the way we can say that a square root of a number is a number that, when multiplied by itself, gives the original number. It might be very clear to you that you yourself are conscious — if we’re using a deictic definition — but that doesn’t speak at all to whether a virus, an insect, or even another human is conscious.

Objections

“We have some positive examples of consciousness (e.g. adult, awake humans) and some negative examples (e.g. rocks). Therefore, we can make arguments by analogy about whether other objects are conscious based on the features they share with these examples.”

First, how do you really know those are valid examples? What in the definition of consciousness allows you to conclude anything about any of those objects — except for your own consciousness, alone, if you use the deictic definition?

Second, the existence of examples doesn’t by itself allow you to make estimates about other objects. This is only possible when you know there’s an underlying, objective definition and are simply estimating how it applies. For example, if I show you 10 shapes on a standardized test that I say are “nice” and 10 shapes that I say are not “nice,” then you can make an educated guess about whether another shape is “nice” based on the features it shares with the positive and negative examples. But you can only do that because there’s an implication that I have some objective definition I’m using behind the curtain to decide whether shapes are nice. With consciousness, I see no reason to believe there’s such a behind-the-curtain definition that exists but simply hasn’t been shared with you yet, despite how common usage of the term suggests there is one.

“Okay, sure, consciousness isn’t objective in that sense. But neither is an everyday term like, say, mountain. Are we supposed to stop saying mountain because not everyone agrees on a precise definition?”

Of course not. The term mountain is useful because our everyday discussions don’t depend on fine granularity. When I ask, “Is there a mountain in New York City?” you can intelligently respond that no, there is no mountain. But if I ask you, “Is there a mountain in San Francisco?” then you might need to clarify how tall a peak must be to qualify as a mountain because Mount Davidson, at 282 meters above sea level, may or may not make the cut.

The issue with consciousness is that our definitions (e.g. “what it is like to be”) are not nearly precise enough to match common usage (the implication that there’s an objective, discoverable answer). When someone asks, “Is this robot conscious?” in 2030, we might very well be dealing with a Mount Davidson situation. So we need a more precise definition in order to provide a reasonable answer. And getting that precise definition isn’t a process of discovery any more than it’s a process of discovery to decide the cutoff for mountain height — it’s a process of making up a definition, perhaps based on convenience or trying to get as close as possible to people’s intuitions.

To reiterate, I’m not denying that your first-person experience exists any more than I’m denying that Mount Davidson (or any other set of atoms that we speak of, like Mount Everest or Mount Diablo) exists. I’m just denying that a vague definition of mountain would let us identify every mountain and non-mountain in the universe, just as our vague definitions of consciousness don’t let us identify every conscious and non-conscious entity even with all the scientific tools and knowledge we can imagine. So I am a mountainhood eliminativist (i.e. mountainhood anti-realist) — I don’t think we could possibly discover whether certain peaks are mountains given that there’s not a precise definition of mountains. This might be unintuitive, but per the arguments above, it seems entirely correct to me. If someone decided to open a line of scientific inquiry dedicated to finding out whether or not certain peaks are mountains, I don’t think they could succeed or make any progress on that question in itself because they don’t have a definition to guide their inquiry. Of course, if you presupposed a cutoff for mountain height, such as 300 meters, or even a range of cutoffs over which we’re uncertain, such as 200 to 400 meters, then I’d be much closer to mountainhood realism.
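
To make the cutoff point concrete, here is a trivial sketch; the cutoff values are arbitrary stipulations, which is exactly the point:

```python
def is_mountain(height_m: float, cutoff_m: float) -> bool:
    # Once a cutoff is stipulated, mountainhood is decidable.
    # The work was in choosing the cutoff, not in discovering it.
    return height_m >= cutoff_m


MOUNT_DAVIDSON_M = 282  # elevation given earlier in the post

print(is_mountain(MOUNT_DAVIDSON_M, cutoff_m=200))  # True
print(is_mountain(MOUNT_DAVIDSON_M, cutoff_m=300))  # False
```

Nothing about Mount Davidson changes between the two calls; only our stipulation does, and no amount of further scientific inquiry could tell us which stipulation is the “true” one.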

I think this same argument applies to all imprecise terms, such as the terms for elements like gold and silver before humans had a precise definition of those elements (i.e. their atomic numbers). It also applies to moral properties that serve as “useful fictions,” like freedom and honor, but I’m not planning to write blog posts railing against the way people use those terms. With a question like “Is a citizen in a high-surveillance democracy free?” people tend to recognize that, while science and logic can inform our answer, we cannot find an objective answer without a more precise definition — some baseline of what exactly “free” means. This is unlike discussions of “consciousness” and “sentience,” where I see a lot of resources wasted on, and moral decisions based on, the assumption that these properties are discoverable.

“That’s an interesting argument that consciousness doesn’t exist. But I have a better argument that consciousness exists. In fact, I have first-person experience that consciousness exists. This is superior to any logical or empirical argument.”

I’m not denying your first-person experience. Your first-person experience is an interesting and important feature of the world. If that’s all you mean when you argue consciousness exists, then I’m in full agreement. But usually people think of consciousness as a broader set of things in the world that is at least happening in the brains of billions of humans, if not also in many nonhuman animals. That broader set is not something of which you (or I) could possibly have first-person experiences, unless you’re suggesting some kind of mental connection between you and all of those other beings that enables you to experience their experiences.

But I am sympathetic to the strong intuition felt by many people that the broader set exists. I’ve felt that intuition too! However, I’ve since cast it off for two reasons.

First, I don’t think humans have reliable intuitions about this kind of deep question. Humans didn’t evolve by making judgments about, and getting feedback on, deep questions like the nature of sentience, quantum physics, molecular biology, or any other field that wasn’t involved in the day-to-day life of our distant ancestors. And you probably haven’t had the opportunity to develop reliable intuitions about deep questions during your own lifetime, though if you’ve spent years studying philosophy or similar disciplines, then perhaps you’ve developed some decent intuitions on these topics.

Second, I think there’s good reason to expect us to have an intuition that consciousness exists even if it doesn’t. The idea of an objective property of consciousness is in line with a variety of intuitions humans have about their own superiority and special place in the universe. We tend to underestimate the mental capacities of nonhuman animals; we struggle to accept our own inevitable deaths; and, even with respect to other humans, most of us overestimate our own cognitive ability (as in the Dunning–Kruger effect). Consciousness realism is the same sort of phenomenon: it places our mental lives in a distinct, special category, which is something we strongly desire, but a desire for something to be true doesn’t make it true!

Moving forward with sentience research

My view here fortunately doesn’t curtail the development of a deeper understanding of sentience or consciousness, nor does it curtail our ability to make progress on better understanding exactly which beings should be included in our moral circles. In fact, it facilitates this. If we’re stuck on consciousness being a real phenomenon, we’ll tend to waste a ton of effort musing about its existence or directly looking for it in the universe.

There’s an unfortunate cyclical effect here: our misguided intuition fuels vague terminology and leads philosophers and scientists to work hard to justify that intuition — as they have for centuries — which in turn perpetuates the misguided intuition. I believe that if we can get past this mental roadblock, accept the imprecision of our current terminology, and accept that there is no objective truth about consciousness as it’s currently defined, then we can make meaningful progress on the two questions that are actually real and important: What exactly are the features of various organisms and artificial beings, and which exact features do we morally care about?

If we want to call those features, or some other set of features, “sentience,” that could be similar to, though not as elegant as, the way we started using “gold” to refer to an element with a certain atomic number that roughly mapped onto the proto-scientific vague definition of “gold.” But we could also drop the term, the way we’ve eliminated “élan vital” in favor of discussing specific biological features. I don’t have a strong view on the best rhetorical strategy.

Cataloguing mental features and refining our moral views on them might be a project Sentience Institute tackles in the future, though as stated earlier, we seem to have a good enough understanding of them right now for the purpose of advocating for the rights and welfare of farmed animals, wild animals, and artificial beings, if the latter are at some point judged to be sentient (with sentience defined via moral concern). Even in the case of artificial sentience, we don’t need an exact understanding of the features sentient artificial beings will have in order to advocate on their behalf. Because our moral circle is currently very restrictive, advocating for its expansion gets us closer to the proper size, even if we’re not sure exactly what that size is.

There are also two important implications of eliminativism outside of sentience research. First, it reduces the likelihood of moral convergence, because one way moral convergence could happen is if humanity discovers which beings are sentient and that discovery serves as our criterion for moral consideration. This should make us more pessimistic about the expected moral value of the far future given humanity’s continued existence, which in turn makes reducing extinction risk a less promising strategy for doing good.

Second, it seems to usually increase the moral weight people place on small and weird minds, such as insects and simple artificial beings. This is because when you see sentience as a discoverable, real property in the world, you tend to care about all features of various beings (neurophysiology and behavior, but also physical appearance, evolutionary distance from humans, substrate, etc.) because these are all analogical evidence of sentience. However, if you see sentience as a vague property that’s up to us to specify, then you tend to care less about the features that seem less morally relevant in themselves (e.g. physical appearance, evolutionary distance). If an insect has the capacity for reinforcement learning, moods like anxiety, and integration of mental processes, then the eliminativist is freer to just say, “Okay, that’s a being I care about. My moral evaluation could change based on more empirical evidence, but those mental features are things I want to include as some level of sentience.” In other words, eliminativism places a burden on people who want to deny animal sentience: they need to point to a specific, testable mental feature that those animals lack.

These implications of the eliminativism question make me see it as one of the most important theoretical questions in Sentience Institute’s purview.


Featured image: close-up of the face of Larry the cow, at Farm Sanctuary in New York. Image credit Jo-Anne McArthur / We Animals.
