Visa A. J. Kurki and Tomasz Pietrzykowski (eds.), Legal Personhood: Animals, Artificial Intelligence and the Unborn. The Law and Philosophy Library, Volume 119 (Springer International Publishing, 2017), ix + 158 p.

2018 ◽  
Vol 9 (3) ◽  
pp. 187
Author(s):  
Oliver Wookey

Author(s):  
Ugo Pagallo

Scholars have increasingly discussed the legal status(es) of robots and artificial intelligence (AI) systems over the past three decades; however, the 2017 resolution of the EU Parliament on the ‘electronic personhood’ of AI robots has reignited the debate and even rendered it ideological. Against this background, the aim of the paper is twofold. First, it shows how today's discussion on the legal status(es) of AI systems often leads to misunderstandings regarding both the legal personhood of AI robots and their status as accountable agents establishing rights and obligations in contract and business law. Second, the paper claims that whether the legal status of AI systems as accountable agents in civil, as opposed to criminal, law makes sense is an empirical issue, which should not be ‘politicized’. Rather, a pragmatic approach seems preferable, as shown by methods of competitive federalism and legal experimentation. In light of the classical distinction between primary and secondary rules of the law, examples of competitive federalism and legal experimentation aim to show how the secondary rules of the law can help us understand what kind of primary rules we may wish for our AI robots. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.


2018 ◽  
Vol 12 (1) ◽  
pp. 81-87 ◽  
Author(s):  
Jan Zibner

Kurki, V. A. J.; Pietrzykowski, T. (eds.). (2017) Legal Personhood: Animals, Artificial Intelligence and the Unborn. Springer International Publishing, 158 p.


Author(s):  
Christian List

The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.


2021 ◽  
Vol 8 ◽  
Author(s):  
Eric Martínez ◽  
Christoph Winter

To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future. Roughly one-third of participants endorsed granting personhood and standing to sentient AI (assuming its existence) in at least some cases, the lowest proportion of any group surveyed, and rated the desired level of protection for sentient AI as lower than for all groups other than corporations. We further investigated and observed political differences in responses; liberals were more likely than conservatives to endorse legal protection and personhood for sentient AI. Taken together, these results suggest that laypeople are, by and large, not in favor of granting legal protection to AI, and that the ordinary conception of legal status, similar to codified legal doctrine, is not based on a mere capacity to feel pleasure and pain. At the same time, the observed political differences suggest that previous findings regarding political differences in empathy and moral circle expansion apply to artificially intelligent systems and extend partially, though not entirely, to legal consideration as well.


2021 ◽  
Author(s):  
Antônio Anselmo Martino ◽  
Eduardo Magrani ◽  
Lourenço Ribeiro Grossi Araújo ◽  
Yuri Alexandre dos Santos ◽  
Henry Colombi ◽  
...  

Author(s):  
Tijana T. Ivancevic ◽  
Bojan Jovanovic ◽  
Sasa Jovanovic ◽  
Milka Djukic ◽  
Natalia Djukic ◽  
...  

Author(s):  
Eliza Mik

Cyclical advancements in artificial intelligence (AI) are usually accompanied by theories advocating the granting of legal personhood to sophisticated, autonomous computers. This chapter criticizes such theories as incorrect—a possible result of legal scholars being seduced by incomprehensible technical terminology, sensationalistic stories in the popular press, and ‘creative’ photo filters that transform our faces into animals. Discussions as to when computers should be recognized as persons are, logically, outside of the scope of intellectual property law. The granting of legal personhood is not premised on the existence of consciousness, intelligence, or creativity. Recognizing an entity as a legal person is a normative choice dictated by commercial expediency, not the result of fulfilling any technical criteria. While it is necessary to acknowledge the blurring of borders between art and (computer) science, as well as the increase in the technological sophistication of the tools used by authors and inventors, it is also necessary to state that even an exponential increase in ‘computer creativity’ will not sever the link between the computer and its user. Before discarding the idea of legal personhood for ‘creative algorithms’ once and for all, the chapter explores the relationships between autonomy and creativity. In particular, it places technical terms such as ‘AI’ and ‘autonomy’ in their original context and criticizes uninformed attempts to imbue them with normative connotations.


Author(s):  
Martin Partington

This chapter discusses the role both of those professionally qualified to practise law, solicitors and barristers, and of other groups who provide legal/advice services but who do not have professional legal qualifications. It examines how the regulation of legal services providers is changing and notes new forms of legal practice. It also considers how the use of artificial intelligence may change the ways in which legal services are delivered. It reflects on the adjudicators and other dispute resolvers who play a significant role in the working of the legal system, and on the contribution made by law teachers, in universities and in private colleges, to the formation of the legal profession and to the practice of the law.
