On Twitter, Steven Marlow has asked me to justify the exclusion of current AI systems from our system of rights without invoking the fact that they’re not human or that they don’t have feelings. Josh Gellers seconded the motion, adding that it’s going to be a hard nut to crack. This post is my attempt to crack it. Though I do personally believe that one reason not to give robots rights is that they don’t have inner lives like we do, I will leave this aside and see if I can answer Steven’s question on his terms. I’ll explain why, being what they are, these systems can’t have rights.
Keep in mind that, when thinking about AI, I am for the most part interested in the question of whether transformer-based artificial text generators like GPT-3 can be considered “authors” in any meaningful sense. This intersects with the robot-rights issue because we know how to recognize and respect (and violate!) the moral and legal rights of authors. If an AI can be an author, then an AI can have such rights. To focus my inquiries, I normally consider the question: can a language model assert “the moral right to be identified as the author” of a text? Under what circumstances could it legitimately do so? My provisional answer is: under no circumstances. That is, I would exclude GPT-3 (a currently available artificial text generator) from moral consideration and from our system of rights. I take Steven to be asking me how I can justify this exclusion.
Remember that I’m not allowed to invoke the simple fact that GPT-3 is not human and has no inner life. We will take that as trivially true for the purpose of this argument. “Currently excluded,” asks Steven, “based on what non-human factors?”
I do, however, want to invoke the fact that, at the end of the day, GPT-3 is a machine. We exclude pocket calculators from moral consideration as a matter of course, and I have long argued that the rise of “machine learning” isn’t actually a philosophical game-changer. Philosophically speaking, GPT-3 is more like a TI-81 than a T-800. In fact, I won’t even grant that the invention of microprocessors has raised philosophical questions (including ethical questions about how to treat them) that are any deeper than those raised by the invention of the abacus. All that has happened is that the mechanism and the interface have changed. The calculation is automated rather than carried out by hand, and instead of setting up the system with beads that we have to count ourselves (and interpret as 1s, 10s, 100s, etc.), we can provide the inputs and receive the output in symbols that we understand (but the machine, crucially, does not). GPT-3 itself is just a physical process that begins with an input and mechanically generates an output.
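To make the mechanical picture concrete, here is a minimal sketch of the loop a text generator runs. The vocabulary and the `next_token` lookup are my own toy stand-ins for GPT-3’s billions of trained weights; the point is the shape of the process: tokens in, tokens out, with no step at which anything is wanted or understood.

```python
# A toy sketch of mechanical text generation. The lookup table is a
# hypothetical stand-in for a trained network; the point is the shape
# of the process, not the scale.

TOY_MODEL = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def next_token(tokens):
    """Return the 'most probable' next token given the context.
    In a real transformer this is a forward pass; here, a lookup."""
    return TOY_MODEL.get(tokens[-1], "<end>")

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        token = next_token(tokens)
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate("the"))  # -> "the cat sat on the cat sat on the cat sat"
```

Scale this up (a vastly bigger “table”, computed by a neural network, with some randomness in the choice of each token) and you have GPT-3. Nowhere in the loop is there a place where a right could do any work.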
It shouldn’t have rights because it has no use for them. It neither wants nor needs rights. Giving it rights would not improve its existence. (Following Steven’s rules, I’ll resist the temptation to say that it has no “existence”, properly speaking, to improve. I’ll just say that even if it did, or in whatever sense it does, giving it a right would not contribute to it.) I simply don’t have any idea how to give rights to an entity that neither wants nor needs them. Tellingly, it isn’t demanding any either.
In a certain sense, GPT-3 is excluding itself from our system of rights. It is simply not the sort of thing (to honor Steven’s rules I’m not going to say it’s not a person) that can make use of rights in its functioning. Human beings, by contrast, function better when given a certain set of rights. We are constantly trying to figure out which rights are best for our functioning (what some people call “human flourishing”) and we certainly don’t always get it right. Sometimes we have to wait for people who lack the rights they need to come to want them. Then they ask for them and, after some struggle, we grant them. Whenever we get this right, society functions better. When we get it wrong, social life suffers.
But none of these considerations are relevant in the case of robots or language models. There is just the question of making them function better technically. To put it somewhat anthropomorphically: beyond more power, better sensors, and stronger servos, what robots need is not more privileges but better instructions. That’s what improves them. Giving them freedom isn’t going to make them better machines.
A good way to think of this is that machines don’t distinguish between their physical environment and their moral environment. They are “free” to do whatever they can, not whatever they want, because they want for nothing. A chess bot can’t cheat because it doesn’t distinguish between the physics of the game and its rules. It can’t even conceive of moving a chess piece in a way that violates the rules. (GPT-3, however, doesn’t know how to play chess, so it can’t cheat either.) For the bot, this space of freedom, the freedom to break rules, doesn’t exist. There is no difference between what is legal and what is possible, as the sketch below makes concrete. And that’s why robots can’t have rights. Fortunately, like I say, they don’t want them either.
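Here is a minimal sketch of a (deliberately mindless) chess bot, assuming the python-chess library; the bot itself is my own stand-in, playing uniformly at random. Nothing in it checks for or punishes cheating. The question cannot arise, because the bot selects from a menu on which only legal moves appear.

```python
# Why a chess bot "can't cheat": its action space just is the set of
# legal moves. Assumes the python-chess library (pip install chess).
import random
import chess

def bot_move(board):
    """Pick a move. Illegal moves aren't considered and rejected;
    they never appear as options at all. Legal = possible."""
    return random.choice(list(board.legal_moves))

board = chess.Board()
while not board.is_game_over():
    board.push(bot_move(board))

print(board.result())  # a lawful (if aimless) game, e.g. "1/2-1/2"
```

Notice, too, what would improve this bot: not a new entitlement, but a better move-selection function. That is the previous paragraph’s point, in code.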
How did I do?