Ethical challenges regarding artificial intelligence in medicine from the perspective of scientific editing and peer review

2019, Vol 6 (2), pp. 91-98
Author(s):  
Seong Ho Park ◽  
Young-Hak Kim ◽  
Jun Young Lee ◽  
Soyoung Yoo ◽  
Chong Jai Kim
2021, pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specializing in computer science. The technologies used were neural learning techniques in 75 studies (42.1%), traditional learning techniques in 76 (42.7%), or a combination of several technologies in 20 (11.2%). Overall, 7 countries contributed 109 (61.2%) of the studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues related to the use of AI in dentistry were reported in 22 (12.4%) studies, clustered around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The proportion of studies mentioning AI-related ethical issues has remained stable in recent years, indicating no growing interest in this topic within the field of dentistry.
This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could prevent future replications. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


Author(s):  
AJung Moon ◽  
Shalaleh Rismani ◽  
H. F. Machiel Van der Loos

Abstract Purpose of Review: To summarize the set of roboethics issues that uniquely arise due to the corporeality and physical interaction modalities afforded by robots, irrespective of the degree of artificial intelligence present in the system. Recent Findings: One of the recent trends in the discussion of the ethics of emerging technologies has been the treatment of roboethics issues as those of "embodied AI," a subset of AI ethics. In contrast to AI, however, robots leverage humans' natural tendency to be influenced by their physical environment. Recent work in human-robot interaction highlights the impact that a robot's presence, capacity to touch, and ability to move in our physical environment have on people, and helps to articulate the ethical issues particular to the design of interactive robotic systems. Summary: The corporeality of interactive robots poses a unique set of ethical challenges. These issues should be considered in the design irrespective of, and in addition to, the ethics of the artificial intelligence implemented in them.


2021, pp. 146144482110227
Author(s):  
Erik Hermann

Artificial intelligence (AI) is (re)shaping communication and contributes to (commercial and informational) need satisfaction by means of mass personalization. However, the substantial personalization and targeting opportunities do not come without ethical challenges. Following an AI-for-social-good perspective, the authors systematically scrutinize the ethical challenges of deploying AI for mass personalization of communication content from a multi-stakeholder perspective. The conceptual analysis reveals interdependencies and tensions between ethical principles, which underscore the need for a basic understanding of AI inputs, functioning, agency, and outcomes. Through this form of AI literacy, individuals could be empowered to interact with and treat mass-personalized content in a way that promotes individual and social good while preventing harm.


2021, Vol ahead-of-print (ahead-of-print)
Author(s):  
Amit Sood ◽  
Rajendra Kumar Sharma ◽  
Amit Kumar Bhardwaj

Purpose: The purpose of this paper is to provide a comprehensive review of the academic journey of artificial intelligence (AI) in agriculture and to highlight the challenges and opportunities in adopting AI-based advancements in agricultural systems and processes.

Design/methodology/approach: The authors conducted a bibliometric analysis of the extant literature on AI in agriculture to understand the status of development in this domain. Further, the authors proposed a framework based on two popular theories, namely, diffusion of innovation (DOI) and the unified theory of acceptance and use of technology (UTAUT), to identify the factors influencing the adoption of AI in agriculture.

Findings: Four factors were identified, i.e. institutional factors, market factors, technology factors and stakeholder perception, which influence the adoption of AI in agriculture. Further, the authors indicated challenges under environmental, operational, technological, economic and social categories, along with opportunities in this area of research and business.

Research limitations/implications: The proposed conceptual model needs empirical validation across countries or states to understand its effectiveness and relevance.

Practical implications: Practitioners and researchers can use these inputs to develop technology and business solutions with specific design elements to gain the benefit of this technology at a larger scale for increasing agricultural production.

Social implications: This paper brings newly developed methods and practices in agriculture for the betterment of society.

Originality/value: This paper provides a comprehensive review of the extant literature and presents a theoretical framework for researchers to further examine the interaction of independent variables responsible for the adoption of AI in agriculture.

Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-10-2020-0448


2021, Vol 66 (Special Issue), p. 133
Author(s):  
Regina Mueller ◽  
Sebastian Laacke ◽  
Georg Schomerus ◽  
Sabine Salloch ◽  
...  

Artificial intelligence (AI) systems are increasingly being developed, and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens up opportunities, such as quick and inexpensive diagnosis, but also raises ethical challenges, especially regarding users' autonomy. The focus of the presentation is on the autonomy-related ethical implications of AI systems that use social media data to identify users at high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of "Health-Related Digital Autonomy" (HRDA) is presented, and its conceptual aspects and normative criteria are discussed. As a result, HRDA covers the elusive area between social media users and patients.


2020, Vol 13, pp. 175628642093896
Author(s):  
Vida Abedi ◽  
Ayesha Khan ◽  
Durgesh Chaudhary ◽  
Debdipto Misra ◽  
Venkatesh Avula ◽  
...  

Stroke is the fifth leading cause of death in the United States and a major cause of severe disability worldwide. Yet recognizing the signs of stroke in an acute setting is still challenging and, given the narrow therapeutic window, leads to lost opportunities to intervene. A decision support system using artificial intelligence (AI) and clinical data from electronic health records, combined with patients' presenting symptoms, can be designed to support emergency department providers in stroke diagnosis and subsequently reduce treatment delay. In this article, we present a practical framework for developing such a decision support system using AI, reflecting on its various stages, which could eventually improve patient care and outcomes. We also discuss the technical, operational, and ethical challenges of the process.


2019, Vol 32 (5), pp. 272-275
Author(s):  
Eric Racine ◽  
Wren Boehlen ◽  
Matthew Sample

Forms of artificial intelligence (AI), such as deep learning algorithms and neural networks, are being intensely explored for novel healthcare applications in areas such as imaging and diagnosis, risk analysis, lifestyle management and monitoring, health information management, and virtual health assistance. Expected benefits in these areas are wide-ranging and include increased speed in imaging, greater insight into predictive screening, and decreased healthcare costs and inefficiency. However, AI-based clinical tools also create a host of situations wherein commonly held values and ethical principles may be challenged. In this short column, we highlight three potentially problematic aspects of AI use in healthcare: (1) dynamic information and consent, (2) transparency and ownership, and (3) privacy and discrimination. We discuss their impact on patient/client, clinician, and health institution values and suggest ways to tackle this impact. We propose that AI-related ethical challenges may represent an opportunity for growth in organizations.

