A protocol of protocols to explore whether humans will continue in meaningful decision-making roles in an AI-driven future in complex health services (Preprint)

2021 ◽  
Author(s):  
Nandini Doreswamy ◽  
Louise Horstmanshof

BACKGROUND Health care can broadly be divided into two domains: clinical health services and complex health services, i.e., non-clinical health services such as health policy and health regulation. Artificial Intelligence (AI) is transforming both these areas. Currently, humans are leaders, managers, and decision makers in complex health services. However, with the rise of AI, the time has come to ask whether humans will continue to have meaningful decision-making roles in this domain. OBJECTIVE The objective is to establish a protocol of protocols to be used in the proposed research, which aims to explore whether humans will continue in meaningful decision-making roles in complex health services in an AI-driven future. METHODS The proposed research is designed as a four-step project, divided into two phases. In keeping with this design, the overarching protocol encompasses (i) the protocol for a scoping review that aims to identify and map human attributes that influence decision-making in complex health services; (ii) the protocol for a scoping review that aims to identify and map AI attributes that influence decision-making in this context; (iii) the protocol for a comparative analysis of the human and AI attributes identified in the reviews; and (iv) the protocol for a simulation that tests the likelihood of humans competing, cooperating, or converging with AI in order to continue in meaningful decision-making roles in this context. RESULTS The results will be presented in tabular form, as well as in visually intuitive formats. CONCLUSIONS This paper provides a roadmap for the proposed research. It also provides an example of a protocol of protocols for methods used in complex health research. While there are established guidelines for a priori protocols for scoping reviews, there is a paucity of guidance on establishing a protocol of protocols. This paper takes the first step towards building scaffolding for future guidelines in this regard.

2020 ◽  
Author(s):  
Abdulrahman Takiddin ◽  
Jens Schneider ◽  
Yin Yang ◽  
Alaa Abd-Alrazaq ◽  
Mowafa Househ

BACKGROUND Skin cancer is the most common cancer type affecting humans. Traditional skin cancer diagnosis methods are costly, require a professional physician, and take time. Hence, to aid in diagnosing skin cancer, Artificial Intelligence (AI) tools are being used, including shallow and deep machine learning-based techniques that are trained to detect and classify skin cancer using computer algorithms and deep neural networks. OBJECTIVE The aim of this study is to identify and group the different types of AI-based technologies used to detect and classify skin cancer. The study also examines the reliability of the selected papers by studying the correlation between dataset size and number of diagnostic classes and the performance metrics used to evaluate the models. METHODS We conducted a systematic search for articles using the IEEE Xplore, ACM DL, and Ovid MEDLINE databases, following the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies included in this scoping review had to fulfill several selection criteria: be specifically about skin cancer, focus on detecting or classifying skin cancer, and use AI technologies. Study selection and data extraction were conducted by two reviewers independently. Extracted data were synthesized narratively, with studies grouped based on the diagnostic AI techniques used and their evaluation metrics. RESULTS We retrieved 906 papers from the 3 databases, of which 53 studies were eligible for this review. Shallow techniques were used in 14 studies, while deep techniques were utilized in 39 studies. The studies used accuracy (n=43/53), area under the receiver operating characteristic curve (n=5/53), sensitivity (n=3/53), and F1-score (n=2/53) to assess the proposed models. Studies that use smaller datasets and fewer diagnostic classes tend to report higher accuracy scores. CONCLUSIONS The adoption of AI in the medical field facilitates the diagnosis of skin cancer. However, the reliability of most AI tools is questionable, since they are evaluated on small datasets or few diagnostic classes. In addition, direct comparison between methods is hindered by the varied use of different evaluation metrics and image types.
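The reliability check described above, correlating each study's dataset size and number of diagnostic classes with its reported accuracy, can be sketched as a simple correlation computation. The figures below are invented for illustration, not data extracted from the review.

```python
import numpy as np

# Invented per-study figures: dataset size, class count, and reported accuracy.
dataset_sizes = np.array([200, 1000, 5000, 10000, 25000], dtype=float)
num_classes   = np.array([2, 2, 3, 7, 7], dtype=float)
accuracies    = np.array([0.98, 0.95, 0.91, 0.86, 0.84])

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

r_size = pearson(dataset_sizes, accuracies)
r_classes = pearson(num_classes, accuracies)

# Negative correlations are consistent with the review's observation that
# smaller datasets and fewer classes coincide with higher reported accuracy.
print(f"size vs accuracy: r = {r_size:.2f}")
print(f"classes vs accuracy: r = {r_classes:.2f}")
```

On real extracted data, one would also want a rank correlation such as Spearman's, since dataset sizes span several orders of magnitude.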


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes. Design/methodology/approach The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations. Findings The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem poised to continue monopolizing the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future. Research limitations/implications The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning. Practical implications Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions.
By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners. Social implications The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving. Originality/value The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


2002 ◽  
Vol 15 (3) ◽  
pp. 18-24 ◽  
Author(s):  
Kevin Brazil ◽  
Stuart MacLeod ◽  
Brian Guest

Health services research has emerged as a tool for decision makers to make services more effective and efficient. While its value as a basis for decision making is well established, the incorporation of such evidence into decision making remains inconsistent. To this end, strengthening collaborative relationships between researchers and healthcare decision makers has been identified as a significant strategy for putting research evidence into practice.


Author(s):  
Ekaterina Jussupow ◽  
Kai Spohrer ◽  
Armin Heinzl ◽  
Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians both with and without clinical experience, as well as experienced radiologists, made more inaccurate diagnosis decisions when provided with incorrect AI advice than when given no advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians make decisions based on beliefs rather than actual data, or engage in an unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.


2020 ◽  
Vol 14 (4) ◽  
pp. 640-652
Author(s):  
Abraham Gale ◽  
Amélie Marian

Ranking functions are commonly used to assist in decision-making in a wide variety of applications. As the general public realizes the significant societal impacts of the widespread use of algorithms in decision-making, there has been a push towards explainability and transparency in decision processes and results, as well as demands to justify the fairness of the processes. In this paper, we focus on providing metrics towards explainability and transparency of ranking functions, with a focus on making the ranking process understandable a priori, so that decision-makers can make informed choices when designing their ranking selection process. We propose transparent participation metrics to clarify the ranking process by assessing the contribution of each parameter used in the ranking function to the creation of the final ranked outcome, using information about the ranking functions themselves, as well as observations of the underlying distributions of the parameter values involved in the ranking. To evaluate the outcome of the ranking process, we propose diversity and disparity metrics to measure how similar the selected objects are to each other, and to the underlying data distribution. We evaluate the behavior of our metrics on synthetic data, as well as on data and ranking functions in two real-world scenarios: high school admissions and decathlon scoring.
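The participation metrics themselves are defined in the paper; as a rough sketch of the underlying idea, one can estimate each parameter's share of the total score among the top-ranked candidates of a linear ranking function. The weights and scores below are invented, and the metric shown is a simplification, not the authors' exact definition.

```python
import numpy as np

# Invented setup: 100 candidates scored on 3 normalized criteria,
# ranked by a weighted sum.
rng = np.random.default_rng(0)
scores = rng.random((100, 3))
weights = np.array([0.5, 0.3, 0.2])      # invented ranking weights

totals = scores @ weights
top_k = np.argsort(totals)[::-1][:10]    # select the 10 highest-ranked

# Simplified "participation": each criterion's share of the total score
# accumulated by the selected candidates. Shares sum to 1; a criterion
# with a large share dominates the selection outcome.
contributions = scores[top_k] * weights
participation = contributions.sum(axis=0) / contributions.sum()
print("participation per criterion:", np.round(participation, 3))
```

A criterion whose participation far exceeds its nominal weight is one whose value distribution effectively drives the ranking, which is the kind of a priori insight the paper's metrics aim to give decision-makers.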


Author(s):  
Viktor Elliot ◽  
Mari Paananen ◽  
Miroslaw Staron

We propose an exercise with the purpose of providing a basic understanding of key concepts within AI and extending the understanding of AI beyond mathematics. The exercise allows participants to carry out analysis based on accounting data using visualization tools, as well as to develop their own machine learning algorithms that can mimic their decisions. Finally, we also problematize the use of AI in decision-making, considering aspects such as biases in data and ethical concerns.
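The final step of such an exercise, training a model that mimics a participant's decisions, can be sketched with a one-rule decision stump. The accounting ratios, labels, and learner below are invented for illustration and are not the exercise's actual materials.

```python
# Invented training data. Columns: current ratio, debt-to-equity,
# return on assets. Labels: the participant's decision (1 = approve
# credit, 0 = reject).
X = [[2.1, 0.4, 0.12], [0.8, 1.9, -0.03], [1.5, 0.9, 0.07],
     [0.6, 2.4, -0.08], [2.8, 0.3, 0.15], [1.1, 1.4, 0.01]]
y = [1, 0, 1, 0, 1, 0]

def fit_stump(X, y):
    """Find the single feature/threshold split that best reproduces y."""
    best = None
    for f in range(len(X[0])):
        for row in X:
            t = row[f]
            for sign in (1, -1):
                preds = [1 if sign * (r[f] - t) >= 0 else 0 for r in X]
                acc = sum(p == label for p, label in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    return best

acc, feature, threshold, sign = fit_stump(X, y)
# The fitted rule makes the participant's implicit policy explicit,
# which is the natural starting point for discussing biases in the data.
print(f"feature {feature}, threshold {threshold}, accuracy {acc:.2f}")
```

In a classroom setting, a decision tree library would be a more realistic choice; the stump keeps the mechanics visible without any dependencies.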


Author(s):  
Luisa Dall'Acqua

The chapter intends to be a theoretical contribution for developers in the field of artificial intelligence. It also serves as a practical guideline for leaders, as decision-makers, to manage tasks and optimize performance. The proposed approach interprets the fluid nature of the decision-making process, viewing knowledge and knowledge activities as dynamic, adaptive, and self-regulative, based not only on well-known explicit curricular goals but also on unpredictable interactions and relationships between players. The knowledge process emerges in human, biological, social, and cultural environments.


CJEM ◽  
2020 ◽  
Vol 22 (S1) ◽  
pp. S90-S90
Author(s):  
A. Kirubarajan ◽  
A. Taher ◽  
S. Khan ◽  
S. Masood

Introduction: The study of artificial intelligence (AI) in medicine has become increasingly popular over the last decade. The emergency department (ED) is uniquely situated to benefit from AI due to AI's power of diagnostic prediction and its ability to improve continuously over time. However, there is a lack of understanding of the breadth and scope of AI applications in emergency medicine, and of the evidence supporting their use. Methods: Our scoping review was completed according to PRISMA-ScR guidelines and was published a priori on the Open Science Framework. We systematically searched databases (Medline-OVID, EMBASE, CINAHL, and IEEE) for AI interventions relevant to the ED. Study selection and data extraction were performed independently by two investigators. We categorized studies based on the type of AI model used, location of intervention, clinical focus, intervention sub-type, and type of comparator. Results: Of the 1483 original database citations, a total of 181 studies were included in the scoping review. Inter-rater reliability was 89.1% for title and abstract screening and 77.8% for full-text review. Overall, we found that 44 (24.3%) studies utilized supervised learning, 63 (34.8%) studies evaluated unsupervised learning, and 13 (7.2%) studies utilized natural language processing. 17 (9.4%) studies were conducted in the pre-hospital environment, with the remainder occurring either in the ED or the trauma bay. The majority of interventions centered around prediction (n = 73, 40.3%). 48 studies (25.5%) analyzed AI interventions for diagnosis. 23 (12.7%) interventions focused on diagnostic imaging. 89 (49.2%) studies did not have a comparator to their AI intervention. 63 (34.8%) studies used statistical models as a comparator, 19 (10.5%) of which were clinical decision-making tools. 15 (8.3%) studies used humans as comparators, with 12 of the 15 (80%) studies showing superiority in favour of the AI intervention when compared to a human.
Conclusion: AI-related research is rapidly increasing in emergency medicine. AI interventions are heterogeneous in both purpose and design, but primarily focus on predictive modeling. Most studies do not involve a human comparator and lack information on patient-oriented outcomes. While some studies show promising results for AI-based interventions, there remains uncertainty regarding their superiority over standard practice, and further research is needed prior to clinical implementation.
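The inter-rater reliability figures quoted in the review are percent agreement; a minimal sketch of that calculation, together with Cohen's kappa as a chance-corrected alternative, is shown below on invented include/exclude screening decisions.

```python
# Invented screening decisions from two reviewers (1 = include, 0 = exclude).
reviewer_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]

n = len(reviewer_a)

# Percent agreement: the statistic typically reported for screening.
agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Cohen's kappa corrects for the agreement expected by chance,
# estimated from each reviewer's marginal "include" rate.
p_a1 = sum(reviewer_a) / n
p_b1 = sum(reviewer_b) / n
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"percent agreement: {agreement:.1%}, kappa: {kappa:.2f}")
```

Kappa is typically lower than raw agreement, which is why many reviews report both when assessing screening reliability.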


Health Policy ◽  
2019 ◽  
Vol 123 (7) ◽  
pp. 635-645 ◽  
Author(s):  
Nehla Djellouli ◽  
Lorelei Jones ◽  
Helen Barratt ◽  
Angus I.G. Ramsay ◽  
Steven Towndrow ◽  
...  

2020 ◽  
Vol 12 (1) ◽  
pp. 81-106
Author(s):  
Ran Spiegler

This review presents an approach to modeling decision making under misspecified subjective models. The approach is based on the idea that decision makers impose subjective causal interpretations on observed correlations, and it borrows basic concepts and tools from the statistics and artificial intelligence literatures on Bayesian networks. While these background literatures used Bayesian networks as a platform for normative and computational analysis of probabilistic and causal inference, in the framework proposed here graphical models represent causal misperceptions and help analyze their behavioral implications. I show how this approach sheds light on earlier equilibrium models with nonrational expectations and demonstrate the scope of its economic applications.
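The role of misspecified causal models can be illustrated with a toy simulation: an agent imposes the subjective DAG X -> Y on data actually generated by a hidden common cause C, and so reads a spurious correlation as a causal effect. All numbers below are invented and the example is far simpler than the Bayesian-network framework the review develops.

```python
import random

random.seed(0)

def sample():
    """True data-generating process: hidden cause C drives both X and Y."""
    c = random.random() < 0.5
    x = c if random.random() < 0.9 else not c   # X tracks C with 90% fidelity
    y = c if random.random() < 0.9 else not c   # Y tracks C, independently of X
    return x, y

data = [sample() for _ in range(100_000)]

# The agent's subjective model X -> Y treats observed conditionals as causal.
p_y_given_x1 = sum(y for x, y in data if x) / sum(1 for x, y in data if x)
p_y_given_x0 = sum(y for x, y in data if not x) / sum(1 for x, y in data if not x)
perceived_effect = p_y_given_x1 - p_y_given_x0

# Under the true model, intervening on X leaves Y untouched: the causal
# effect is 0, yet the agent perceives a large one (about 0.64 here).
print(f"perceived effect of X on Y: {perceived_effect:.2f}")
```

This is the behavioral bite of the framework: the agent's subjective DAG determines which correlations it mistakes for causal levers, and hence which nonrational-expectations equilibria can arise.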

