Traditionally, legal personhood has been reserved for humans and, by extension, human-created entities such as corporations. However, advances in technology and knowledge have sparked debates about granting rights to non-human entities such as AI and robots.

 

Establishing Rights for AI – a practical solution or an unattainable dream?

Robots at the Bar – European Parliament Debates ‘Electronic Personhood’ for AI Entities

In 2017, the European Parliament adopted a resolution on civil law rules on robotics that called on the European Commission to consider creating a specific legal status of “electronic persons” for the most sophisticated autonomous robots. The resolution responded to robots’ growing significance in industry and everyday life and envisaged a framework for granting AI legal rights that recognised their unique attributes, while acknowledging the difficulty of determining the extent of any legal recognition and the ethical implications involved. The move drew both support and criticism from legal scholars, ethicists, and AI experts: proponents argued that granting legal personhood to robots would foster responsible development and ensure accountability, while critics raised concerns about potential risks and questioned the necessity of creating a distinct legal status for robots.

 

AI Inventor Denied: Landmark Case Rejects Patent Applications Naming the DABUS AI System

In 2018, Dr Stephen Thaler began filing patent applications that named his AI system, DABUS, as the inventor of two innovations: a food container and a device for attracting enhanced attention (Thaler v Commissioner of Patents [2021] FCAFC 185). The applications were filed in numerous jurisdictions worldwide, raising the fundamental question of whether an AI system could be recognised as an inventor and whether patents could consequently be granted for its inventions.

However, patent offices in multiple jurisdictions rejected the applications, holding that an inventor must be a natural person. These rejections brought to light the issue of legal personhood for AI systems and its implications for intellectual property law. As a result, the case prompted discussions on how to attribute authorship or ownership of AI creations.

In response, legal scholars and patent experts engaged in lively debates about the attribution of intellectual property rights to AI-generated works. While some argued that recognising AI as inventors would incentivise innovation, others expressed caution, fearing that it might undermine the principles of invention and ownership by attributing them to non-human entities.

 

Ethical dilemmas of granting legal rights to AI

The preceding cases and debates underscore the complex nature of the legal personhood discussion for non-human entities. Additionally, as technology advances, the legal and ethical implications of extending rights to AI and robots will remain at the forefront of legal discourse. This necessitates comprehensive analysis and thoughtful consideration to strike a balance between innovation and ethical responsibility.

Consequently, this raises profound ethical questions and liability concerns that necessitate careful consideration:

  1. Determining the scope of rights granted: The extent of rights that AI and robots should possess remains an open question. Should they have the same level of rights as humans, or should there be limitations to their legal standing?
  2. Defining consciousness and sentience: Recognising legal personhood for non-human entities involves grappling with the challenge of defining consciousness and sentience. Whether AI and robots possess these qualities becomes central to assessing their eligibility for legal recognition.
  3. Holding AI accountable: How are AI systems and robots held accountable for their actions? If these entities are recognised as legal persons, their actions may have legal consequences, and establishing mechanisms for accountability becomes crucial.
  4. Creator liability: Would creators or operators be exempt from any form of liability? The legal status of AI and robots may impact the liability of their creators or operators, and determining their responsibility becomes a complex legal task.

 

Academic Opinions

Legal scholars, ethicists, and AI developers offer valuable insights into the complexities of granting legal personhood to non-human entities:

  • Professor Lawrence Solum, a prominent legal scholar, advocates a functionalist approach under which legal personhood could potentially be extended to AI systems (Solum, 2018). He argues that a capacity for complex decision-making, autonomous behaviour, and self-awareness could warrant legal recognition. On this view, it is the functional attributes of AI and robots, not their biological nature, that should determine their rights and responsibilities. This position challenges traditional conceptions of legal personhood and embraces technology’s transformative potential to reshape the legal landscape.
  • By contrast, ethicist Wendell Wallach urges caution in granting legal personhood to AI and robots, asserting that moral responsibility should be reserved exclusively for humans (Wallach & Allen, 2009). Wallach warns against absolving human creators or operators of accountability by attributing it to non-human entities; in his view, human designers should bear the consequences of the actions of AI and robots. His ethical stance prioritises human moral agency and raises significant questions about the implications of granting legal personhood to non-human entities.

These expert opinions offer diverse viewpoints on the ethical, philosophical, and legal aspects of AI and robot personhood. Their differing perspectives enrich the ongoing debate and highlight the need for interdisciplinary collaboration and in-depth analysis in this evolving field. As technology advances, engaging with such expert insights will be crucial to a well-informed and responsible approach to the legal rights and responsibilities of non-human entities.

 

Speculations on the future of AI Legal Rights

  1. Could AI serve as witnesses or parties in legal disputes? In complex litigation involving technical data or evidence, AI entities could act as expert witnesses, offering unique insights. Moreover, recognising AI as legal parties in certain contexts raises questions about their representation and advocacy rights.
  2. Would AI-generated works require new intellectual property laws? Questions of ownership and attribution arise for AI-generated art, music, literature, and other creative output. Crafting new legal frameworks may become necessary to ensure fair compensation and encourage further innovation.
  3. Could AI-generated harm and liability be addressed in novel ways? As AI and robots integrate into society, instances of AI-generated harm may become more common. Granting legal rights to AI opens avenues for novel approaches to liability that consider the roles of AI developers, manufacturers, and operators in ensuring safe and ethical AI functioning.
  4. Could AI and robots become consumers with legal rights? Legal personhood has implications for consumer protection and product liability. If AI were granted legal rights, they might be entitled to consumer protections such as warranties, and manufacturers and developers would need to account for these rights to enhance safety and product quality.

 

Conclusion

The concept of legal personhood for non-human entities remains a complex and controversial issue. As AI and robotics continue to develop, careful consideration of whether to grant legal rights to AI becomes increasingly essential. Addressing these challenges will require interdisciplinary collaboration and thoughtful analysis to navigate the legal landscape of AI and robots in an ever-changing world.

 

Interested in delving deeper into the dynamic interaction between AI, technology and law? Check out our article on Legal Rights in the Metaverse!