Irresponsible Research and Innovation? Applying Findings from Neuroscience to Analysis of Unsustainable Hype Cycles

2018 · Vol 10 (10) · pp. 3472
Author(s): Stephen Fox

The introduction of technological innovations is often associated with suboptimal decisions and actions during cycles of inflated expectations, disappointment, and unintended negative consequences. For brevity, these can be referred to as hype cycles. Hitherto, studies have reported hype cycles for many different technologies, and studies have proposed different methods for improving the introduction of technological innovations. Yet hype cycles persist, despite suboptimal outcomes being widely reported and despite methods being available to improve outcomes. In this communication paper, findings from exploratory research are reported that introduce new directions for addressing hype cycles. Through reference to neuroscience studies, it is explained that the behavior of some adults in hype cycles can be analogous to irresponsible behavior among adolescents. In particular, there is heightened responsiveness to peer presence and potential rewards. Accordingly, it is argued that methods applied successfully to reduce irresponsible behavior among adolescents are relevant to addressing hype cycles, and to facilitating more responsible research and innovation. The unsustainability of hype cycles is considered in relation to hype about artificial intelligence (AI). In particular, it is argued that human-beneficial AI could have the unintended negative consequence of being fatally unbeneficial to everything in the geosphere other than human beings.

Author(s): Alan F. T. Winfield, Marina Jirotka

This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.


Author(s): Gabrielle Samuel, Jenn Chubb, Gemma Derrick

The governance of ethically acceptable research in higher education institutions has been under scrutiny for the past half century. More recently, decision makers have required researchers to acknowledge the societal impact of their research, as well as to anticipate and respond to the ethical dimensions of this societal impact through responsible research and innovation principles. Using artificial intelligence population health research in the United Kingdom and Canada as a case study, we combine a mapping study of journal publications with 18 interviews with researchers to explore how the ethical dimensions associated with this societal impact are incorporated into research agendas. Researchers separated the ethical responsibility of their research from its societal impact. We discuss the implications for both researchers and actors across the Ethics Ecosystem.


2021 · Vol 27 (1)
Author(s): Nina Klimburg-Witjes, Frederik C. Huettenrauch

Abstract
Current European innovation and security policies are increasingly channeled into efforts to address the assumed challenges that threaten European societies. A field in which this has become particularly salient is digitized EU border management. Here, the framework of responsible research and innovation (RRI) has recently been used to point to the alleged sensitivity of political actors towards the contingent dimensions of emerging security technologies. RRI, in general, is concerned with societal needs and the engagement and inclusion of various stakeholder groups in research and innovation processes, aiming to anticipate undesired consequences of, and identify socially acceptable alternatives for, emerging technologies. However, RRI has also been criticized as an industry-driven attempt to gain societal legitimacy for new technologies. In this article, we argue that while RRI evokes a space where different actors enter co-creative dialogues, it lays bare the specific challenges of governing security innovation in socially responsible ways. Empirically, we draw on the case study of BODEGA, the first EU-funded research project to apply the RRI framework to the field of border security. We show how stakeholders involved in the project represent their work in relation to RRI, and the resulting benefits and challenges they face. The paper argues that applying the framework to the field of (border) security exposes its limitations: RRI itself embodies a political agenda, it conceals the experiences of those upon whom security is enacted, and its key propositions of openness and transparency are hardly met in practice due to confidentiality agreements. Our hope is to contribute to work on RRI and emerging debates about how the concept can (or cannot) be contextualized for the field of security, a field that may need more than any other to consider the ethical dimensions of its activities.

