Beyond the Hard Problem: We Can't Make Progress on AI Rights If We Can't Define Key Concepts Like Intelligence
Lessons on defining AI personhood from the drafting of the Refugee Convention.
Prompt: A Robot at the UN
When I started working on the hypothetical question of a right to personhood under the law for advanced AI systems and/or robots (a legal identity, as we human rights lawyers call it), I little realized how thoroughly my life would come to be dominated by deep philosophical questions about intelligence and free will. I thought the computer scientists, neuroscientists and philosophers would hand me a list of attributes, leaving lawyers with the task of turning this list into a law defining AI personhood. Next would come an AI “bill of rights,” then a series of laws balancing AI rights against those of humans. After that would come the integration of AI rights into existing law, and so on.
When I began working on robot rights three years ago, my model for the drafting process of a robot rights treaty was the Ad Hoc Committee on Statelessness, which met at Lake Success in early 1950, headed by Eleanor Roosevelt, to draft what would become the international Refugee Convention. Along with the US Constitution, the Declaration of Independence, the Magna Carta and the Declaration of the Rights of Man and of the Citizen, the Refugee Convention is probably one of the ten most important secular documents in the world. It defined the word “refugee,” not just in international law, but in the national laws of countries like Canada, the UK, the United States, and many others. Hundreds of millions of people have relied on the Convention for their right to asylum. Yet the team drafting the Convention was quite small: mostly diplomats and lawyers. Reading their debates today, one is struck by the informal nature of the conversation and its lack of rigour or structure. Politics hung in the background, as the global consensus following World War II was already souring into the mistrust of the Cold War. A trained philosopher would have found much to fault in the result. But you enter the 1950s with the refugee law you have, not the one that might have been drafted with a little more foresight into how terribly important it would become.
One of the biggest challenges facing the Ad Hoc Committee was to define the term “refugee,” which at the time, like the term “robot” today, had no fixed legal definition. “Refugee” was a colloquial term referring to a displaced person who could not go home. Under the Convention (and the subsequent national laws of most states, which are modelled on the Convention), a refugee must not only be displaced and unable to return home, he or she must be “persecuted” “on account of” his or her race, religion, nationality, political opinion or “membership in a particular social group.”
Thus, the somewhat nebulous concept of “persecution” has come to dominate refugee law ever since, guiding the fate of hundreds of millions of people and showing the enormous power of words and concepts when they are made into law. Thousands of court decisions have refined the concept of “persecution” over the years, and thousands of trained, expert lawyers continue to argue its meaning in tens of thousands of cases around the world. Thousands of articles and books have been written on the meaning of “particular social group” alone.
Signing the Refugee Convention…and deciding the fate of millions.
This is what lawyers do. The world needed a way to sort people into the categories of “refugee” and “migrant,” so lawyers came up with a concept, “persecution,” to distinguish between the two. One group, the refugees, gets the right of asylum; the other group, the migrants, gets deported.
When it comes to AI personhood under the law, we’re not even at step 1. We’re at step -1.
Until the scientific and philosophical community can arrive at a consensus on the nature of intelligence, consciousness, free will and other key concepts, and how these relate, if at all, to having moral status, I now fear that decisions about a future law of AI personhood will be left to public opinion. While this doesn’t necessarily spell disaster, it’s likely that public opinion on AI personhood will be divided, perhaps hopelessly so, leading to social unrest and even violence. While expert consensus doesn’t always lead to good laws (see: the climate crisis), it’s a necessary prerequisite to having a snowball’s chance in hell.
Moral Personhood Versus Personhood Under the Law
But why is determining the moral status of AI important for AI personhood under the law? Why not just declare some AIs to be people and have done with it? To take the example of the Refugee Convention again, the group at Lake Success was very aware that refugee law needed to be as fair as possible. Not every displaced person could remain in their adopted country; some people would need to be deported. “Merely economic reasons,” they decided, were not enough. Gender and sex weren’t included as grounds. Climate change wasn’t mentioned at all. Did the drafters make a horrible mistake? We seem to spend a lot of our time arguing about this, 73 years later. That’s precisely the point.
While it’s possible for society to survive arbitrary and amoral laws relating to some things (zoning regulations, graduated taxation, or restrictions on turning right on red, to take just three examples), history tells us that arbitrary legal categories for personhood that don’t also correspond to moral categories end in massive social upheaval and war. See the transatlantic slave trade and the American Civil War. So let’s not do that again.
We have to get this right.
Throughout history, the law has greatly benefited from the fact that “human” is a pretty stable category. While disputes still arise over edge cases (monkeys, fetuses, dead bodies), modern law has done pretty well by dividing the world into humans, who have rights, and objects, which don’t. Having drawn a circle around the legal universe of things with rights, the law can devote most of its time to the problem of balancing the rights of humans against the rights of other humans. Sure, all humans have basic rights, but do they all have the same civil and political rights? What about the rights of humans in groups, like states, corporations and churches? What about leaders, do they get special rights? The problem with AI and robot rights is that “AI” and “robot” are nowhere near as stable as the category “human”; they are descriptive, literary terms. Put another way, we can’t ban AGI until we have a legal definition of “AGI,” just as we can’t ban “ugly buildings” while the beauty of brutalism remains firmly in the eye of the beholder.
A Lack of Scientific Consensus Makes for Bad Law
Take sentience and consciousness. There is no expert consensus on which attributes might give rise to moral consideration, let alone rights, in a machine. Experts are divided on the importance of qualities like consciousness, intelligence and free will. There’s no agreement on whether AI or robots need to be more “like us” to qualify for rights, or what being “like us” even means. Many experts seem to agree that consciousness, or “having experiences,” is necessary for AI and/or robots to be “people,” but there is enormous disagreement over what consciousness is and how to measure it. As philosopher David Chalmers explains, this is the “hard problem.”
Or what about intelligence? Human society, including the law, tends to be more concerned with the welfare of “intelligent” animals with brains like our own. Putting aside whether this intuition is right or wrong, human-like intelligence will therefore likely be relevant. But there is no consensus on what “general intelligence” is, and consensus is needed to avoid granting legal status to a narrow but powerful intelligence like AlphaGo. While measuring general machine intelligence (AGI) seems like it should be easier than identifying machine consciousness, it’s far from settled, and a robust philosophical debate on the nature of intelligence seems to be everywhere these days. A group of AI experts, including diverse voices like Gary Marcus and Yoshua Bengio, recently got together to define “general intelligence.” Basing their definition on something called the Cattell-Horn-Carroll theory of intelligence, they define general intelligence in an AI thus:
AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.
While I see the appeal, especially as a sufficient condition rather than a necessary one, there would be problems with using this definition as part of a test to decide which robots get rights. As someone who graduated from the American public school system, I’m pretty sure I wouldn’t qualify if I weren’t lucky enough to have been born human.
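To make the codification problem concrete, here is a minimal, purely hypothetical sketch in Python of how such a definition might be turned into a test. The domain list, the scores and the simple averaging rule are my illustrative assumptions, not the experts’ actual methodology; the point is only to show why a narrow system like AlphaGo would fail a generality test that a broadly proficient system might pass.

```python
# A hypothetical sketch of scoring "general" intelligence as an average of
# proficiency across distinct cognitive domains, loosely inspired by the
# Cattell-Horn-Carroll approach. Domain names, scores, and the averaging
# rule are illustrative assumptions, not the actual proposed test.

COGNITIVE_DOMAINS = [
    "general_knowledge", "reading_and_writing", "math",
    "reasoning", "working_memory", "long_term_memory",
    "visual_processing", "auditory_processing", "processing_speed",
]

def generality_score(proficiency: dict[str, float]) -> float:
    """Average proficiency (0-100) across all domains; untested domains count as 0."""
    return sum(proficiency.get(d, 0.0) for d in COGNITIVE_DOMAINS) / len(COGNITIVE_DOMAINS)

# A narrow but superhuman system: world-class in one domain, absent elsewhere.
narrow_system = {"reasoning": 100.0}
print(f"Narrow system: {generality_score(narrow_system):.1f}")  # ~11.1: not "general"

# A broadly proficient system at "well-educated adult" level in every domain.
broad_system = {d: 85.0 for d in COGNITIVE_DOMAINS}
print(f"Broad system:  {generality_score(broad_system):.1f}")   # 85.0
```

Even this toy version makes the legal difficulty plain: every domain, weight and passing threshold in such a test is a contestable policy choice, exactly the kind of choice the Ad Hoc Committee faced when it settled on “persecution.”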
In an article in the Atlantic, psychologist Eric Turkheimer explains that human intelligence does not seem to be primarily determined by genetics. If that is the case, will AI intelligence ever be determined by model architecture alone, or will interacting with its environment produce radically different qualities in an AI, just as some, but not all, of the robots in the TV show Westworld became conscious by interacting with their environment? Blaise Aguera y Arcas agrees that intelligence is environmental, and goes on to argue that the key ingredient is already present in LLMs, but only if they are put in symbiosis with other potentially intelligent beings. While this approach may be correct, it will be hard to use as the basis for a law dividing robots into categories of rights; that is, it will be hard to codify. I’m also worried we may be headed away from consensus rather than towards it.
The same goes for free will, agency, a sense of self, and the numerous other attributes often raised in the debate over AI moral consideration: there is little expert consensus on how these concepts relate to one another or to intelligence and consciousness, how they might be identified or measured in an AI, or whether they even matter.
When Public Consensus Matters More
While expert consensus is always critical to guide public debate, that amorphous and dangerous creature, “the public,” will increasingly interact with AI and robots, and will begin to develop an intuitive consensus of its own. When it came to the Refugee Convention, the law was set by experts with little regard for public opinion. I would argue that the lack of public consultation at the time was an enormous mistake, the effects of which we are still living with today. Nobody on the Ad Hoc Committee seems to have thought about the long-term consequences of granting asylum to all persecuted people, no matter how large that population might become. At the time of drafting, for example, the Refugee Convention applied only in Europe.
With robot rights, I fear the process will happen in reverse. An intuitive, public consensus will lead to laws, whether experts like those laws or not. The number of AI experts in the world is very small relative to the total number of opinionated humans, and at some point the AIs may begin to add their voices to this debate in a more organized and consistent way. AI experts thus have a limited period of time to discuss AI moral status within the safe haven of academic spaces, before it explodes into the public domain and gets decided by, as Henry Shevlin puts it, “folk judgment.” Use this time wisely.
Eleanor Roosevelt and Gary Cooper at Lake Success, New York, in 1950.




