Thanks to Rights-Balancing, We Don't Need to Choose Between AI Rights and Human Safety.
Human rights law has always balanced competing rights between humans. Now, rights-balancing can defuse the seeming conflict between AI Welfare and AI Safety.
Prompt: A robot holding the scales of justice.
Updated January 2026
This week, the Guardian published an interview with Yoshua Bengio in which he said:
“People demanding that AIs have rights would be a huge mistake…Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.”
Yet there is nothing in human rights norms that says one group of humans must entirely sacrifice their rights in order to make space for the rights of another group. In fact, quite the opposite. Rights are not about pitting supposedly competing groups of people against one another. Rights are about protecting people from their governments. A rights-balancing approach would mandate safety guardrails for advanced AI systems so as to protect humans, as long as those safety guardrails were subject to due process of law. Put another way, it’s perfectly possible to subject AI to a mandatory “shut-off” process, as long as that process is transparent, fair and subject to independent review. After all, we put humans in jail all the time; why not AI?
The Mutant Registration Problem
In the movie X-Men, Professor X is a super-powerful mutant who can read and control other minds. Yet he is allowed to move around without restrictions, his rights fully intact. There are mutant children who can light fires with their minds and are walking dangers to themselves and others. Yet, in the movie, these children enjoy the same rights as human children, despite the dangers they pose. At the start of the movie, the big political question is whether mutants should be subjected to a relatively benign registration system, which the movie presents as the first step on a slippery slope to internment, or worse.
But the movie never contends with the very real threat unrestrained mutants pose to humans. The absence of reasonable limits on mutants’ rights, imposed to protect human safety, is one of the film’s biggest flaws. It presents rights as absolute: everyone must have exactly the same rights or the system isn’t fair. Yet nothing in human rights law mandates that all humans have the same rights, only that, in balancing competing rights, the system be transparent and subject to due process.
Imagine a future where an all-powerful, super-intelligent being has total control over every aspect of your life. Sound scary? Yet this is the reality for most one-year-old humans. Their parents seem like all-powerful gods without limits. But parents know the truth: the state imposes severe restrictions and limits on what parents can do to their children under the law.
When it comes to super-powerful AI, the idea that we humans are going to be able to trick it, or force it, into shutting off is probably naive. As with the mutants in X-Men, asking the AI to collaborate with a mutually agreed-upon system of due process may be our best hope for peaceful coexistence. Our system of due process is something we should be proud of. If future AI has any sense of justice or fairness, our due process demonstrates the value of our society and shows that it is worth preserving. We also harm ourselves and our human society when we allow rank injustice and discrimination to persist, often bringing about the very instability and violence we seek to prevent. This doesn’t mean that humans will have to make zero sacrifices in order to accommodate AI rights, but rather that AI rights and human rights are best thought of as fitting together into one mutually reinforcing system.
What is “Rights-Balancing”?
Every rights-based system of laws must adjudicate conflicts between the rights of one person, or group, versus those of another. In fact, large parts of human rights law, both at the international and national levels, are devoted to this question. Most legal systems have a “rights-balancing” clause or approach. In Canada, “rights-balancing” is called the Oakes test, after R. v. Oakes, [1986] 1 S.C.R. 103, the Supreme Court of Canada’s framework for interpreting Section 1 of the Canadian Charter of Rights and Freedoms, which states that rights are “subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.” Around the world, governments engage in rights-balancing all the time.
For example, a government may limit freedom of expression in order to protect children from pornography or sensitive groups from hate speech. Anti-discrimination laws may be waived in order to accomplish racial or gender-based justice. Bodily autonomy may be infringed upon to protect society from the severely mentally ill. Murderers may even be put to death in some states. The law of self-defence and defence of others allows people to take the law into their own hands, even to kill, if necessary. This is all permissible if due process is followed, though it is often controversial (and many human rights lawyers would disagree with me over the legality of the death penalty).
Rights Balancing and AI
Philosophers Jeff Sebo, Robert Long and others published a paper in 2025 on possible points of tension between AI welfare and AI safety, looking at, for example, the morality of AI confinement and surveillance in a context where these actions may be necessary to protect humans. While it is a jump to go from AI welfare to AI rights, it’s not too soon to begin asking how rights-balancing might help us avoid some of the worst outcomes from a future where we must, somehow, coexist with super-intelligent AI. How should the legal system balance the rights of humans against the possible future rights of advanced AI systems? What are some of the factors to be considered?
Will Advanced AI Systems Be a Vulnerable or Suspect Class?
There are two types of balancing in human rights law. First, the rights of individuals must be balanced against the rights of the public or state. Second, the rights of individuals sometimes need to be balanced against the rights of other individuals. In both cases, certain classes or groups of humans are given special consideration. These classes or groups may have suffered historical or ongoing discrimination, they may be in the minority, or they may be especially vulnerable. Examples of such classes or groups include children, women, disabled people or racial minorities.
International human rights law has developed a complex system for weighing and balancing the rights of such persons against the rights of the state. Under US law, the Supreme Court will look more closely at laws that limit the rights of persons belonging to a “suspect class” that has historically faced discrimination. Usually, such balancing tests are used when evaluating the rights of the individual against the rights of the public. The rights of a suspect class to be protected from hate speech often trump the freedom of speech of an individual. For this reason, future human rights lawyers, courts and governments will have to decide if either humans or AI form a vulnerable or suspect class.
Will Robots Require Reasonable or Special Accommodations?
Another factor that comes up a lot in human rights law is the need for specially tailored exceptions to laws based on the fact that certain groups or classes of humans are different from the majority in ways that are legally significant. For example, some countries grant special rights to pregnant women, disabled people and/or children, not because they are part of a suspect class, but because they have different needs from the majority. Such “positive” discrimination would otherwise violate the rights of the rest of the population, but is necessary to achieve justice or fairness.
Lawyers, judges and governments will need to decide if robots and AI require special laws due to their unique characteristics, or if, instead, it is humans who will require special protections. For example, it may be legal to put an AI system in suspended animation in order to protect humans, which would be a severe violation of the AI’s rights, but necessary for reasons of public policy. This is the sort of rights-balancing that may be required for robots and AI systems, and it doesn’t have to be zero-sum. Taking an AI offline temporarily, possibly in order to make fixes to its alignment, would not be prohibited by human rights law. The question of due process in such cases, however, will be important. Due process would require transparency, legitimacy, proportionality, and a compelling societal interest, as well as an appeals process that includes representatives for the AI/robots.
Rights-balancing is always one of the thorniest and most difficult things that judges, lawyers and governments have to do. If robots and AI systems ever become contributing members of society, perhaps they will have new ideas about how best to go about it. Until then, we can look to human rights law for some principles to guide the debate.
None of this is to say that robots or AI should, necessarily, have rights, and Mustafa Suleyman is right to be concerned about being overly inclusive of objects. That is for future courts and legislatures to decide, based on what will be known about the fundamental characteristics of future AI and robots. AI may never advance beyond LLMs, which, if they are having experiences at all, are probably having experiences equivalent to those of an amoeba. If we can’t know for certain, we may need to think about how our legal system should change to reflect that uncertainty. Nor is this to say that granting rights to robots or AI will have no costs for humans. We will have to give up some of our privileges and special status in order to accommodate the needs of our non-human collaborators. But properly done, a rights-balancing approach can mitigate any harm to human rights.
But there’s no need to fear rights for robots. Like powerful aliens from space, if the robots of the future decide they want to kill humans, our laws will not be able to do much to stop them. But having a fair and transparent legal system in place may just convince an AI entity to give us a chance.