An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices

2021 ◽  
pp. 1-26
Author(s):  
Gauthier Chassang ◽  
Mogens Thomsen ◽  
Pierre Rumeau ◽  
Florence Sèdes ◽  
Alejandra Delfin

We propose a comprehensive analysis of existing concepts of AI coming from different disciplines: psychology and engineering tackle the notion of intelligence, while ethics and law seek to regulate AI innovations. The aim is to identify shared notions or discrepancies to consider when qualifying AI systems. Relevant concepts are integrated into a matrix intended to help define more precisely when and how computing tools (programs or devices) may be qualified as AI, while highlighting critical features to serve a specific technical, ethical and legal assessment of challenges in AI development. Some adaptations of existing notions of AI characteristics are proposed. The matrix is a risk-based conceptual model designed to allow an empirical, flexible and scalable qualification of AI technologies from the perspective of benefit-risk assessment practices, technological monitoring and regulatory compliance: it offers a structured reflection tool for stakeholders in AI development who are engaged in responsible research and innovation.

Author(s):  
Gabrielle Samuel ◽  
Jenn Chubb ◽  
Gemma Derrick

The governance of ethically acceptable research in higher education institutions has been under scrutiny for the past half-century. More recently, decision makers have required researchers to acknowledge the societal impact of their research, as well as to anticipate and respond to the ethical dimensions of this societal impact through responsible research and innovation principles. Using artificial intelligence population health research in the United Kingdom and Canada as a case study, we combine a mapping study of journal publications with 18 interviews with researchers to explore how the ethical dimensions associated with this societal impact are incorporated into research agendas. Researchers separated the ethical responsibility of their research from its societal impact. We discuss the implications for both researchers and actors across the Ethics Ecosystem.


2018 ◽  
Vol 10 (10) ◽  
pp. 3472 ◽  
Author(s):  
Stephen Fox

The introduction of technological innovations is often associated with suboptimal decisions and actions during cycles of inflated expectations, disappointment, and unintended negative consequences. For brevity, these can be referred to as hype cycles. Hitherto, studies have reported hype cycles for many different technologies, and studies have proposed different methods for improving the introduction of technological innovations. Yet hype cycles persist, despite suboptimal outcomes being widely reported and despite methods being available to improve outcomes. In this communication paper, findings from exploratory research are reported, which introduce new directions for addressing hype cycles. Through reference to neuroscience studies, it is explained that the behavior of some adults in hype cycles can be analogous to irresponsible behavior among adolescents: in particular, there is heightened responsiveness to peer presence and potential rewards. Accordingly, it is argued that methods applied successfully to reduce irresponsible behavior among adolescents are relevant to addressing hype cycles, and to facilitating more responsible research and innovation. The unsustainability of hype cycles is considered in relation to hype about artificial intelligence (AI). In particular, human-beneficial AI carries the potential unintended negative consequence of being fatally unbeneficial to everything else in the geosphere other than human beings.


Author(s):  
Alan F. T. Winfield ◽  
Marina Jirotka

This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.


2021 ◽  
Vol 13 (12) ◽  
pp. 6879
Author(s):  
Hassan P. Ebrahimi ◽  
R. Sandra Schillo ◽  
Kelly Bronson

This study provides a model that supports systematic stakeholder inclusion in agricultural technology. Building on the Responsible Research and Innovation (RRI) literature and attempting to add precision to the conversation around inclusion in technology design and governance, this study develops a framework for determining which stakeholder groups to engage in RRI processes. We developed the model using a specific industry case, identifying the relevant stakeholders in the Canadian digital agriculture ecosystem through literature and news article analysis. The study proposes a systematic framework which categorises stakeholders into individual, industrial and societal groups with both direct engagement and supportive roles in digital agriculture. These groups are then plotted against three levels of impact or power in the agri-food system: micro, meso and macro.
