Will Robots Have Rights in the Future?

How would you react if someone told you that, in a not-so-distant future, machines might have the right to purchase property, file lawsuits, or even demand social policies for maintenance and upgrades? Most of us would likely respond with a mix of disbelief and irony.

Yet, even if mostly confined to academic discourse and bordering on science fiction, the debate over whether intelligent machines should be granted legal and moral rights is very much alive, and far less ridiculous than it might seem.

What began as a kind of philosophical meme is not at all absurd within the context of modern legal language. As early as 2016, the European Parliament floated the term “electronic personality” to describe a potential legal status for so-called “autonomous robots”: machines capable of making decisions without human supervision. Such a status would carry accountability for damages resulting from autonomous actions.

To be clear, this doesn’t mean granting robots human rights, eternal dignity, or access to welfare. Rather, it involves assigning them a form of legal status to manage obligations and liabilities, much as happens today with corporations (which do not breathe, sleep, or eat, yet have rights in many jurisdictions). Rights, after all, can be useful legal tools to regulate entities that act in the world, not necessarily beings endowed with consciousness or subjective experience.

The core issue lies in the notion of legal subjectivity. What constitutes a “legal subject”? Is it someone with consciousness? The capacity to suffer? Or, more pragmatically, someone who can cause harm and therefore needs to be regulated? From both philosophical and legal perspectives, the debate divides into three broad approaches.

Many scholars argue that machines, no matter how advanced, can never be legal subjects, simply because they lack self-awareness, moral subjectivity, and the capacity to understand and will in the human sense. This remains the dominant view across current legal systems: no jurisdiction recognizes machines as persons under the law.

Contemporary critics such as Abeba Birhane, Jelle van Dijk, and Frank Pasquale believe the notion of machine rights is actually a harmful distraction: it draws attention away from essential human concerns (such as control, fairness, accountability) and could ultimately reduce the responsibility of tech platforms that already exert tremendous influence over real lives.

A more pragmatic, though not necessarily optimistic, position considers granting limited and functional rights to AI: not because such systems are persons, but to facilitate certain legal operations. For instance, a futuristic robot capable of signing contracts or conducting business would need some form of legal status for those actions to be legally effective.

This is roughly what emerges from academic contributions such as those by Claudio Novelli, Luciano Floridi, and Giovanni Sartor, who emphasize that the debate over AI personhood largely depends on existing legal institutions and technologies, often drawing parallels with corporations, which have acquired legal agency without being human.

Some thinkers go even further. Legal ethicist John Danaher, for example, proposes an approach called ethical behaviourism, according to which we should judge the morality (and potentially the rights) of machines based on their actions and morally recognizable behaviors, rather than on an inaccessible inner consciousness.

On a practical level, it is interesting to note that one of the first “robot celebrities” had a peculiar legal experience: Sophia, a humanoid robot built by Hanson Robotics, was granted citizenship in Saudi Arabia in 2017, though the gesture was more symbolic than substantive.

The humanoid robot Sophia. Image via Google Creative Commons.

Precisely because it was symbolic, the episode highlights how much symbols matter in shaping public perception: Sophia cannot vote or pay taxes, yet the very act of granting “citizenship” sparked a powerful narrative.

Behind the legal discourse lies a deeper philosophical dilemma: are rights born from experiences, interests, desires? Or are they social mechanisms designed to regulate behavior within an organized system?

A well-known academic essay by philosopher Eric Schwitzgebel, which has gained significant attention in recent years, explores a hypothetical scenario of debatable personhood, in which AI systems might occupy a grey area: neither fully persons nor mere objects. This raises a profound moral dilemma: should we treat them as moral beings to avoid committing injustices against them, or reject that notion for fear of sacrificing human rights in the process?

In the future, the debate could break out of academia, because it intertwines with real-world issues like liability, intellectual property, contracts, and automation, and suggests that our current legal categories may no longer be sufficient.

The real question isn’t whether robots will ever have human-style rights, but what kind of legal system we need to coexist with increasingly autonomous agents. After all, rights are not mysterious cosmic gifts: they are social tools. And if new tools are needed to address new entities acting in the world, then perhaps the issue is not “giving rights to machines,” but redefining what we mean by responsibility, subjectivity, and moral relationships in the age of algorithms.

If we look closely, the discussion about machine rights is less a question about sentient robots and more an unsettling mirror of how we see ourselves. A reflective interface between what we consider human and what we fear losing: control, meaning, moral authority.

The idea of legal responsibility for an algorithm, conscious or not, signals a gradual departure from the illusion that technology can be managed as a mere tool. Reality is more nuanced: every time software makes a decision, influences, conditions, or even replaces human judgment, we are already (willingly or not) negotiating a new form of technical sociality.

In this sense, asking whether machines will have rights is a way of asking ourselves what role we want to continue playing in the world: guardians of values? Architects of contracts? Or unprepared interpreters in a new moral theater, where the lights are already going up on stage while the audience is still taking its seats?

Niccolò Carradori