Why Corporation-Style "Legal Personality" Is a Red Herring for AI Personhood (but Legal Identity Is Not)
We don't need to twist principles designed for corporations to grant rights and duties to the robots in our lives.
Prompt: Draw a robot red herring in the style of Audubon
Announcing our new paper: “How Should the Law Treat Future AI Systems: Fictional Legal Personhood vs Legal Identity” (forthcoming in the Case Western Journal of Law, Technology and the Internet)
arXiv version: https://arxiv.org/abs/2511.14964
Here is a summary:
Cars, chickens, trees and those statues on Easter Island (called Moai) all have one thing in common. They are all categorized as objects under the law, without rights or duties. You cannot sue a chicken in court, only its owner, and chickens can’t vote. Robots and chatbots are also objects under the law, without rights or duties. As Cullen O’Keefe and Ketan Ramakrishnan point out in a recent article for Lawfare, “the law, as it exists today, imposes duties on persons. AI agents are not persons...”
Yet, with humans already dating AI boyfriends and robotics companies hard at work on companion robots, many lawyers, philosophers and scientists debate whether some future robots should have rights and duties, and if so, why. Will the object-status of robots cause problems for human rights? Will there be a “responsibility gap” in our laws, with out-of-control AI that cannot be held responsible for its actions? Should a human have the right to marry a robot, or should robot slavery be allowed? And what about robot rights? Is it wrong to treat conscious, autonomous robots like objects? What about robots that merely seem to be conscious? And how are the robots likely to feel about it?
There is a possible solution to this problem. Some religious idols and sacred rivers have fictional legal personhood status, even though they are not human. This allows them to have certain “rights” that can be enforced by humans, similar to how parents can enforce the rights of their babies. Yet, the category of fictional legal personhood is a tricky one. Ever since the medieval Church said a monastery was a “fictitious person,” lawyers and philosophers have been arguing over what that means: “fiction” as in manufactured, or as in imaginary? Some experts argue animals have been unfairly excluded, while the rights of rivers have been hard to enforce.
Confusingly, corporations, states, and many other formalized groups of humans also have fictional legal personhood, yet they are clearly in a very different category from animals, rivers, or religious idols. Fictional legal personhood is also a device to accomplish public policy goals, and it can therefore be taken away by the courts for reasons of public policy, making it a bad fit for basic rights like freedom of speech or due process. Corporate personhood is primarily used to limit the liability of the humans who own the corporation (meaning that applying it to robots risks inadvertently shielding the tech industry from accountability). In 2017, a European Parliament Resolution suggested that a form of “electronic personality” might be needed for future AI. This famously prompted a global backlash, and the debate continues, with many experts weighing the pros and cons of fictional legal personhood for AI, and some exploring whether there could be a hybrid approach that would somehow confer the benefits without the downsides.
But what if there were another way to grant personhood status to robots? We propose that there is. AI-enabled robots and AI entities that meet certain indicators in the future, such as consciousness and the ability to make and keep promises, could obtain non-fictional personhood status, just like a human. In human rights law, non-fictional personhood is fundamental to all humans as beings with dignity, and the mechanism by which a state recognizes it is called “legal identity”: the term of art for the fact of being recognized as a (non-fictional) person under the law.
Humans obtain a legal identity through civil registration and the issuance of government ID. Getting every human on earth a legal identity via a birth certificate or other document is a target of the UN’s Sustainable Development Goals (target 16.9) and a major goal of both the World Bank and individual governments around the world. Registering qualifying robots and AI as people under the law would recognize their fundamental dignity, their ability to enter into and honor their agreements, and their capacity to take part in their own governance. It would automatically grant them fundamental rights (though these rights may differ from those of humans). Of course, their rights would be balanced against the rights of humans, just as human rights are frequently balanced against one another.
With a legal identity, there would be nothing fictional about the personhood of qualifying robots. They would join our societies as full and equal members. Legal identity for future AI persons would remove the dangerous conflation between AI companies and their creations, and lead to a durable framework of rights and duties for AI persons, one that could not be dissolved by the courts if and when it became inconvenient for humans. (Note, however, that recognizing the rights of future robots and AI persons does not mean that human rights won’t matter. On the contrary, rights-balancing frameworks exist in all legal systems to reconcile conflicts between rights-holders: think of the way that traffic laws curtail your freedom to drive where you please. It’s perfectly permissible to limit the rights of robots in order to balance their needs against our own. In fact, this may prove a more durable route to alignment than the alternatives: it simply means that robots cannot be turned into property at whim.)
Of course, weighty questions remain. Which robots or AI systems should qualify to be persons — what might be the indicators for AI legal identity? What body should be empowered to judge and decide? What rights and duties should future AI systems recognized as persons have? How might their duties be enforced under the law and balanced against the inalienable rights of humans? What about humans who categorically reject robot rights? What about robots or AI systems that do not satisfy the indicators? None of these questions can be answered in a day, or in a blog post.
We say more in the paper. For example, on the question of robots and AI systems that do not meet the indicators: bear in mind that personhood is not the same thing as moral status. The law does not consider animals to be persons. There are still very meaningful animal welfare laws — it is just that the law does not construe these as specifying duties that you owe to the animals. So if you worry that AI systems may be sentient before they can give and keep their word, we’ve got you covered: nothing we say speaks against the moral status of sentient beings that are not persons.
We hope you find the paper helpful. TL;DR: we don’t need to get bogged down by the fuzzy logic of fictional legal personhood in order to grant rights and duties to robots. We just need to adapt the system of legal identity and fundamental rights that we already have, and have worked so hard to achieve.
Of course, all this remains highly speculative. There is not yet any robot or AI system that qualifies for such a status. Maybe there never will be: we are not forecasters. But if some do, there will be nothing fictional about their need for rights.


