Defining Sentient Joy
Ben Romney proposes a gradient rubric for quantifying the moral weight of human and non-human beings based on their capacity for happiness rather than intelligence. The rubric combines six measurable parameters—observed emotional behavior, neuron count, self-awareness, memory capacity, potential, and external utility—to produce a score from 0 to 100. Romney argues this framework could guide ethical decisions about animal welfare, self-driving car algorithms, and expanding our circle of moral concern to the twenty quintillion animals sharing our planet.

Ben Romney is a senior software engineer at Qualtrics. Outside of his professional career, he pursues his interest in moral philosophy. He presented at MTAConf 2020 on his paper, “A Gradient Rubric for Human and Non-Human Utility,” available at bromney.com/ethicspaper.pdf.

Romney’s work focuses on expanding ethical considerations beyond the human species to encompass animals, plants, and potentially robots, all in service of maximizing global happiness. He advocates for a utilitarian approach, proposing a gradient rubric to quantify various life forms’ capacity for happiness, aiming to prioritize efforts to improve the world for all sentient beings.
Transcript
Ben Romney
So welcome to this breakout session on defining sentient joy. My name’s Ben Romney, and I’m presenting this based on a paper I wrote called “A Gradient Rubric for Human and Non-Human Utility.” You can find that at my website, bromney.com/ethicspaper.pdf. Just as a little background about myself: I’m a senior software engineer at Qualtrics. That’s my day job. By night, I’m a dabbling moral philosopher. So thanks for giving audience to some of my ideas today.
Ben Romney
All right. So when we imagine our planet’s ideal future, we must not restrict our circle of concern to the human species alone. I read a recent estimate that there are twenty quintillion, or twenty billion billion, animals sharing the planet with us. Many of those are insects, but several billion, I believe, are mammals just like us. And then there are also plants, which were brought up in the chat during David Pierce’s discussion, that may have moral relevance through their capacity to experience happiness. Robots, who knows if our current robots can experience happiness, but if they can, there’s definitely moral relevance there. So given the huge number of life forms out there, there’s tremendous opportunity for ethical progress in this sphere.
Ben Romney
So usually with humans, people align with Jeremy Bentham’s dictum: everyone to count for one, nobody to count for more than one. But there’s been less conversation about how best to quantify the value of a non-human life. For example, does the life and happiness of a gorilla count as much as that of a human? How would it compare to that of a chicken, or an ant? Identifying the relevant parameters and then building a rubric that counts utility accordingly would enable us to prioritize our efforts as we try to make the world a better place for all its sentient beings, and to live more ethically than we do now.
Ben Romney
So my thesis is this: the maximization of global happiness requires a utility function that counts both human and non-human beings. To this end, I propose a gradient rubric by which we may approximate various life forms’ capacity for happiness. Throughout this presentation, we’ll hit three points. First, the importance of estimating the capacity for happiness, as opposed to other attributes like intelligence; I argue that the capacity for happiness is the most important thing to estimate. Second, a proposal of a rubric that takes in several parameters and outputs a score of an organism’s capacity for happiness. And then we’ll talk about the rubric’s implications.
Ben Romney
So I take the utilitarian position that the maximization of global happiness is the ultimate good. I also believe that many times the most efficient way to improve global happiness is to alleviate suffering where it exists. Some people might ask, why not intelligence? Many people justify keeping animals in factory farms on the grounds that we’re more intelligent than they are. But if you take that attribute as your guiding factor, we should remember that Google DeepMind’s AlphaGo computer recently defeated the best human Go player in the world and could be considered more intelligent than humans, at least in that narrow sense. And in the coming decades, we’ll certainly see AI that can run the gamut of human abilities and be more intelligent generally. But it’s unlikely that such machines will have the capacity to experience happiness and suffering. We certainly hope to be able to find out whether they can, because that’s an important thing to know. But I think it’s unlikely that they can feel happiness as much as humans or other animals can at this point. So if intelligence is our guiding attribute, we should have no qualms about allowing superintelligent machines to treat humans as we have treated non-human animals. For moral consistency, it might be wise to follow David Pierce’s recommendations from the previous discussion.
Ben Romney
And from a historical perspective, happiness and suffering are inextricably tied to life on planet Earth. At the dawn of life, four billion years ago, when the first single-celled organisms emerged, they were motivated to move toward particles that were helpful and shy away from particles that were harmful. Those primitive emotions of pleasure and pain were foundational to what it means to live, at least on this planet. And that gives some insight into how important these two emotions, happiness and suffering, are.
Ben Romney
The main problem with holding the capacity for happiness as our guiding star when assigning moral weight to different entities is that it’s difficult to measure. It’s like the philosophical zombie problem. You can’t know with any certainty that I am conscious. I might just be a zombie that does and says all the things a human would do and say, but inside, the lights aren’t on. You can reasonably assume that since I appear to operate the same way you do, I am sentient just like you are. But in reality, the only thing you know is that you yourself are conscious. Similarly, we can make assumptions that other people can experience happiness and suffering, and so can other animals, and perhaps plants and robots. And we can use our observations, along with a few other parameters that go into this rubric, to inform our calculation of utility.
Ben Romney
So with that, here’s the utility function I propose: U = O + N + S + M + P + E. We’ll go over each of these six parameters, but they stand for observed emotional behavior, neuron count, self-awareness, memory capacity, potential, and external utility. Each of these six attributes is more measurable than consciousness itself; they’re less hard problems to solve. And when we add them up, we get a maximum utility score of 100. The Buddha would be a hundred, and, I don’t know, a grain of rice would be zero. So that’s the utility function, and now we’ll go through each of the parameters.
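As a rough sketch, the utility function could be implemented as a small scoring helper. The category caps below follow the point values given later in the talk (30, 20, 20, 15, and 10); the 5-point cap on external utility is inferred from the 100-point total rather than stated explicitly, and all function and variable names here are illustrative, not from the paper.

```python
# A minimal sketch of the proposed utility function U = O + N + S + M + P + E.
# Caps per the talk: observed emotional behavior 30, neuron count 20,
# self-awareness 20, memory capacity 15, potential 10; external utility's
# cap of 5 is inferred from the 100-point maximum. External utility is the
# only parameter the talk allows to go negative.

CAPS = {"O": 30, "N": 20, "S": 20, "M": 15, "P": 10, "E": 5}

def utility(o: float, n: float, s: float, m: float, p: float, e: float) -> float:
    """Clamp each parameter to its category maximum and sum to a score."""
    total = 0.0
    total += min(max(o, 0), CAPS["O"])  # observed emotional behavior
    total += min(max(n, 0), CAPS["N"])  # neuron count
    total += min(max(s, 0), CAPS["S"])  # self-awareness
    total += min(max(m, 0), CAPS["M"])  # memory capacity
    total += min(max(p, 0), CAPS["P"])  # potential
    total += min(e, CAPS["E"])          # external utility; may be negative
    return total
```

With maximal inputs the function returns 100 (the Buddha), and an entity scoring zero everywhere returns 0 (the grain of rice).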
Ben Romney
So for the first one, observed emotional behavior. If we want to understand another human, the first thing we do is observe their behavior: their vocal sounds, facial expressions, bodily movements, and so on. We can do similar things with animals, plants, and robots. We can ask these questions: to what degree does the entity respond to pain? And to what degree does the entity seek out pleasure? It’s sometimes remarkable how happy a puppy can be. But also, if you’ve ever been to a factory farm or a slaughterhouse (I actually did some research at one during university), it can evoke a powerful sense of emotion on the other end of the spectrum when you see some of the things going on there. So you can assign a maximum of 15 points to each of these two questions, for 30 points total in this category.
Ben Romney
And then neuron count. Humans have 86 billion neurons. As we know, computational power increases as the number of transistors on a microchip increases. Similarly, we could assume that as the number of neurons in a brain increases, the amount of consciousness, or capacity to experience happiness, would also increase. As a thought experiment, you could imagine removing one neuron at a time from a brain; by the time you get to zero, consciousness would most likely cease to exist. So clearly there is some tie between neuron count and consciousness. An African elephant has 257 billion neurons, about three times that of a human. To normalize to our maximum value of 20 for this category, we divide the neuron count by 12.85 billion. And it’s important to note here that I assume all neurons are equal, but in further iterations of this rubric we might want to weight cerebral cortex neurons, or other neurons, a little higher.
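To make the normalization concrete, the neuron-count parameter could be sketched as a one-line scoring function. The divisor of 12.85 billion comes from the talk; treating the score as linear with a hard cap at 20 is my reading of the scheme, and the function name is illustrative.

```python
# Sketch of the neuron-count parameter N: linear in neuron count, capped
# at 20 points. The divisor 12.85 billion is chosen so that an African
# elephant's 257 billion neurons lands exactly at the cap (257 / 12.85 = 20);
# a human's 86 billion neurons score roughly 6.7.

def neuron_score(neurons: float, cap: float = 20.0, divisor: float = 12.85e9) -> float:
    return min(neurons / divisor, cap)
```

One design consequence of the cap: any brain, or brain-like system, with more neurons than an African elephant still tops out at 20 points in this category.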
Ben Romney
All right, self-awareness. It would be difficult to identify with pain or pleasure if you didn’t have a sense of self, at least to some extent. One way to measure that is the mirror test, where you stick a dot on the forehead of an animal and put it in front of a mirror. If it tries to scratch off the dot, we know it recognizes itself. Humans, dolphins, killer whales, bonobos, chimpanzees, Asian elephants, magpies, pigeons, ants, and at least one species of fish have passed this test. Cats and dogs, interestingly enough, cannot recognize themselves in a mirror. It’s also important to note that a blind human wouldn’t recognize themselves in a mirror either, so this test isn’t sufficient to quantify self-awareness completely. Hopefully we’ll develop other tests and, in the end, be able to add up to a possible twenty points in this category.
Ben Romney
All right, memory capacity. John Locke argued that the essential thing for personal identity is a capacity for memories that connect a person to their past self. For humans, most of the happiness and suffering we experience is actually in relation to things that have happened to us in the past. If I don’t remember something painful that happened to me years ago, that’s less relevant than something I remember every day and that sticks with me. So extra care should be taken with entities that have high memory capacity. The maximum possible score for this category is 15.
Ben Romney
Next, potential. Potential is the ability to become an entity with a high degree of capacity for happiness in the future. This is a little less direct; it’s relevant over time. But it’s important, because without this criterion there’d be little ground for valuing the life of a newborn baby, who doesn’t have memory capacity or self-awareness to the extent of an adult. Because of potential, a baby still gets at least some points in this category. Similarly, an adult human who’s in a permanent coma may have a lower score for potential. The total here is 10 points.
Ben Romney
And external utility is the final parameter. This is important for endangered species: when the last member of an endangered species passes away, it brings the rest of the world a little more sadness. It’s also important for people in comas, like we talked about in the last section. They might not be experiencing happiness in their coma, but their continuing existence brings happiness to their family, so there’s an external benefit to other individuals’ happiness. This is the only parameter with a possible negative value. Mosquitoes that carry malaria spread suffering, and criminals may have negative utility in this category. So that’s one thing to note.
Ben Romney
So here’s the final rubric with the max point values. It’s important to note that this is just a model, and other moral thinkers may assign points differently or have other categories. I just intend for this to lay the groundwork for the discussion, and details can be refined as they come to light. I just got a chat notification. Let’s see. Okay, cool. So yeah, we’ll have time for some questions at the end.
Ben Romney
So here are a few select examples: a human scores 86, a gorilla 65. One thing to note is that this is meant to be used at the individual level, not the species level. It’s entirely possible that certain gorillas might have a higher score than certain humans. That might not sit easy with some people, but it’s a possibility this rubric leaves open, and it upends Bentham’s dictum, everyone to count for one. Unfortunately, Bentham was never clear on his scope in the animal kingdom, so this rubric does justice to the elephant in the room, and to all the other species as well. Plants, I think, I would place somewhere around a five-to-ten score; robots today, maybe a Roomba, might be somewhere around a one or two. I should note that I didn’t run any peer-reviewed studies to gather these scores; a lot of them are just best guesses at this point. Neuron count is the only one where I’m pretty confident I got the right numbers. But those are a few examples of what happens when you crunch the numbers through the equation.
Ben Romney
And so, applications. I leave it up to the reader to draw their own lines with the scores. Some readers may decide not to eat anything with a utility score above forty, or maybe not to squish anything that has a positive score. There are lots of different applications. And it’s important to have an objective measure now that we’re encoding these algorithms into our everyday lives, for example self-driving cars and the trolley problem. We would hope the engineers at Tesla have thought this through and don’t just use a random number generator when they decide how a car is going to react in an emergency situation. Also, looking at human and animal rights over the centuries, it’s clear that we are improving and extending our circle to other species: for example, in 1971 the US outlawed horse consumption. Cows and pigs have scores similar to horses, so for moral consistency it would make sense that in the near future we would ban the slaughter of those and similarly scoring animals for food consumption as well. And as we continue to develop more accurate models, we will continue to make the world a better place for all its sentient beings. So that’s it for the slides.
Ben Romney
I already have a few questions; feel free to add yours to the chat. Okay, this one’s from Caleb Jones: what are your thoughts about applying this rubric to superorganisms, like ant colonies, biomes, ecosystems? Yeah, so, some of the biggest organisms: I think the largest is a fungus in Oregon, and there’s also an aspen grove here in Utah that is quite extensive. I do think there can be some shared consciousness between entities. Again, that’s really hard to measure. Like I said, this isn’t a perfect rubric, and I think we’d have to adjust it, but we could maybe consider each ant, or each aspen root, to be a neuron, and add those up. But yeah, I do think those would be important to include.
Speaker 2
Thanks, everyone. Sorry I didn’t leave a little more time, but I hope that was interesting and helpful for everyone. Okay, thanks.