The Algorithmic Learning Deficit: Artificial Intelligence, Data Protection, and Trade

2020 ◽  
Author(s):  
Svetlana Yakovleva ◽  
Joris V. J. van Hoboken
2021 ◽  
pp. 212-230
Author(s):  
Svetlana Yakovleva ◽  
Joris van Hoboken

2019 ◽  
Vol 5 (2) ◽  
pp. 75-91
Author(s):  
Alexandre Veronese ◽  
Alessandra Silveira ◽  
Amanda Nunes Lopes Espiñeira Lemos

The article discusses the ethical and technical consequences of artificial intelligence (AI) applications and how citizens can use the European Union's data protection legal framework to defend themselves against them. This question falls within the larger European Union Digital Single Market policy, which is concerned with how AI intersects with personal data protection. The article has four sections. The first introduces the main issue by describing the importance of AI applications in the contemporary world. The second explains some fundamental concepts of AI. The third analyses the ongoing European Union policies on AI and the Council of Europe's proposal on ethics applicable to AI in judicial systems. The fourth section concludes by debating the current legal mechanisms for protecting citizens against fully automated decisions under European Union law, in particular the General Data Protection Regulation. The conclusion is that European Union law is still under construction when it comes to providing effective protection to its citizens against automated inferences that are unfair or unreasonable.


2021 ◽  
Author(s):  
Julian Heim

Data is the core of Internet-based business models. Ever since Facebook's takeover of WhatsApp, European antitrust law has faced the question of how to deal with data-driven mergers, especially those involving the well-known Internet giants ("FANG"). Under what circumstances can the possession of, and access to, data serve as a basis for finding market power as a prohibition criterion? Which competitive effects of data-based market power are to be feared in horizontal, vertical, and conglomerate mergers? How can commitments remedy this form of market power? The work takes into account technical developments such as artificial intelligence, as well as data protection aspects.


2019 ◽  
Vol 6 (1) ◽  
pp. 205395171986054 ◽  
Author(s):  
Heike Felzmann ◽  
Eduard Fosch Villaronga ◽  
Christoph Lutz ◽  
Aurelia Tamò-Larrieux

Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human-computer interaction and human-robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.


2020 ◽  
Vol 44 (4) ◽  
Author(s):  
V. A. Savchenko ◽  
◽  
O. D. Shapovalenko

The article examines key artificial intelligence technologies with a view to using them to protect information. It shows that there is currently no general concept of artificial intelligence in cybersecurity: the most important artificial intelligence methods applicable to cybersecurity have not been defined, and the role these methods can play in protecting organizations in cyberspace has not been established. As the key idea for applying artificial intelligence in cybersecurity, the article proposes using technologies and methods that facilitate the detection of, and response to, threats based on sets of cyber attack statistics. Network security and data protection are identified as the priority areas for the use of artificial intelligence.


2020 ◽  
Author(s):  
Frederik Zuiderveen Borgesius

Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit crime, who will be a good employee, who will default on a loan, and so on. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. This paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. It shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination, and that data protection law could also help to defend people against discrimination. Proper enforcement of non-discrimination law and data protection law could help to protect people; however, the paper shows that both legal instruments have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of the current rules can be improved and explores whether additional rules are needed, arguing for sector-specific, rather than general, rules and outlining an approach to regulating algorithmic decision-making.


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Emre Kazim ◽  
Denise Almeida ◽  
Nigel Kingsman ◽  
Charles Kerrigan ◽  
Adriano Koshiyama ◽  
...  

The publication of the UK's National Artificial Intelligence (AI) Strategy represents a step-change in the national industrial, policy, regulatory, and geo-strategic agenda. Although there is a multiplicity of threads to explore, this text can be read primarily as a 'signalling' document. Indeed, we read the National AI Strategy as a vision for innovation and opportunity, underpinned by a trust framework that has innovation and opportunity at the forefront. We provide an overview of the structure of the document and offer an emphasised commentary on various standouts. Our main takeaways are:
Innovation First: a clear signal that innovation is at the forefront of the UK's data priorities.
Alternative Ecosystem of Trust: whether the UK's regulatory-market norms become the preferred ecosystem depends on the required regulatory system and delivery frameworks.
Defence, Security and Risk: security and risk are discussed in terms of the utilisation and governance of AI.
Revision of Data Protection: the signal is that the UK is indeed seeking to position itself as less stringent regarding data protection and the necessary documentation.
EU Disalignment (Atlanticism?): questions are raised regarding a step back in terms of data protection rights.
We conclude with further notes on data flow continuity, the feasibility of a sector approach to regulation, legal liability, and the lack of a method of engagement for stakeholders. Whilst the strategy sends important signals for innovation, achieving ethical innovation is a harder challenge and will require a carefully evolved framework built with appropriate expertise.


Legal Ukraine ◽  
2021 ◽  
pp. 6-24
Author(s):  
Kseniia Zhyhalova

The purpose of the study was to demonstrate the particular legal and objective reasons why it is necessary and expedient to advance the legal regulation of the development and use of Artificial Intelligence (AI) in Ukraine. Chapter 1, 'Understanding Artificial Intelligence', gives examples of AI applications along with doctrinal and diverse legal definitions of AI. Chapter 2, 'Necessity and Expediency of Legal Regulation of Artificial Intelligence in Ukraine', shows the necessity of legal regulation and exemplifies gaps in current legislation; it demonstrates that it is paramount to establish the protection of IP rights within AI legal relationships in Ukraine, and it also analyses particular issues of AI in relation to national, international, and social security, as well as questions of data protection. Chapter 3, 'Conclusion', demonstrates that the absence of specific AI regulation could lead to numerous problems for the public and private sectors, the economy, businesses, and citizens. Key words: Artificial Intelligence (AI), legal regulation of AI, intellectual property (IP) protection, national security, protection of human rights and freedoms, data protection.

