The potential application of artificial intelligence for diagnosis and management of glaucoma in adults

2020
Vol 134 (1)
pp. 21-33
Author(s):
Cara G Campbell
Daniel S W Ting
Pearse A Keane
Paul J Foster

Abstract
Background: Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential applications in the detection and management of glaucoma.
Sources of data: This literature review is based on articles published in peer-reviewed journals.
Areas of agreement: There have been significant advances in both AI and the imaging techniques able to identify the early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent, if not superior, to those of human experts.
Areas of controversy: There are concerns that increased reliance on AI may lead to the deskilling of clinicians.
Growing points: AI has potential for use in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the prospect of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable.
Areas timely for developing research: There is a need to determine the external validity of deep learning algorithms and to better understand how 'black box' models reach their results.

2020
Vol 2
pp. 58-61
Author(s):
Syed Junaid
Asad Saeed
Zeili Yang
Thomas Micic
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence (AI) in health care. No radiology conference is complete without a substantial session dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get started with the technology, demystifies key concepts, and aims to pique interest in the field. We have broken down the journey into seven steps: problem, team, data, kit, neural network, validation, and governance.


2021
Vol 10 (2)
pp. 205846012199029
Author(s):  
Rani Ahmad

Background: The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community.
Purpose: To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques.
Material and Methods: Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables.
Results: The specificity of the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. In the direct comparison between radiology professionals and deep learning algorithms, the pooled specificity and sensitivity were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005).
Conclusion: Radiomic information extracted through machine learning programs from images may not be discernible through visual examination and thus may improve the prognostic and diagnostic value of data sets.
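As an illustration of the pooling described above, sensitivity and specificity can be combined across studies by summing the cells of each study's 2x2 contingency table. A minimal sketch with invented counts (not the review's data):

```python
# Pooling sensitivity and specificity across studies from 2x2
# contingency tables, each given as (TP, FN, FP, TN).
# All counts below are invented for illustration.

def pooled_metrics(tables):
    tp = sum(t[0] for t in tables)  # true positives
    fn = sum(t[1] for t in tables)  # false negatives
    fp = sum(t[2] for t in tables)  # false positives
    tn = sum(t[3] for t in tables)  # true negatives
    sensitivity = tp / (tp + fn)    # pooled true-positive rate
    specificity = tn / (tn + fp)    # pooled true-negative rate
    return sensitivity, specificity

# (TP, FN, FP, TN) per hypothetical study
studies = [(90, 10, 15, 85), (80, 20, 5, 95), (70, 5, 10, 90)]
sens, spec = pooled_metrics(studies)
print(f"pooled sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Summing raw cells before dividing weights each study by its size; more sophisticated meta-analytic models (random effects, bivariate) exist but follow the same basic inputs.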


2021
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities their application provides in various decision-making domains.
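The idea of "many layers of processing" can be sketched minimally: each layer applies a weighted sum and a nonlinearity to the previous layer's output. The weights below are toy values chosen by hand; in deep learning they would be learned from data rather than hand-set:

```python
# Minimal two-layer forward pass: input -> hidden layer (with ReLU
# nonlinearity) -> output layer. Weights and inputs are toy values.

def relu(x):
    # Elementwise rectified linear unit: negatives become zero
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # Fully connected layer: output_j = sum_i(x_i * w[j][i]) + b_j
    return [sum(xi * wij for xi, wij in zip(x, row)) + b
            for row, b in zip(weights, bias)]

x = [1.0, -2.0]                                            # input features
h = relu(dense(x, [[0.5, -0.3], [0.2, 0.8]], [0.1, 0.0]))  # hidden layer
y = dense(h, [[1.0, -1.0]], [0.0])                         # output layer
print(y)
```

Stacking more `dense`/`relu` pairs deepens the network; what training changes is only the weight and bias values, not this forward structure.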


Author(s):  
Jay Rodge
Swati Jaiswal

Deep learning and artificial intelligence (AI) have been trending recently owing to the capability and state-of-the-art results they provide. Neural network-powered AI, also known as deep learning, has replaced some highly skilled professionals. Deep learning is built largely on neural networks. This chapter discusses the working of a neuron, the unit component of a neural network. Numerous techniques can be incorporated while designing a neural network, such as the choice of activation functions and training procedures, to improve its performance; these are explained in detail. Neural networks also face challenges, such as overfitting, that are difficult to ignore but can be overcome using the proper techniques and steps discussed here. The chapter will help academicians, researchers, and practitioners to further investigate deep learning and its applications in the autonomous vehicle industry.
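The unit component described above can be sketched directly: a neuron computes a weighted sum of its inputs plus a bias, then applies an activation function (sigmoid here, one of several common choices). All values are illustrative:

```python
import math

# A single artificial neuron: weighted sum of inputs plus bias,
# squashed by a sigmoid activation into the range (0, 1).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Pre-activation: z = sum_i(x_i * w_i) + b
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

out = neuron([0.5, -1.0], [2.0, 0.5], bias=0.1)  # z = 1.0 - 0.5 + 0.1 = 0.6
print(round(out, 4))
```

Swapping `sigmoid` for ReLU or tanh changes only the squashing step; the weighted-sum structure is the same for every activation choice.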


Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument about knowledge as a myth, concluding that the argument is misleading because it ignores a fundamental tenet of science, i.e., that no empirical knowledge is certain and that scientific facts – as well as methods – often change. Instead, control of the technology of AI and its correlates has to be addressed to mitigate unexpected negative consequences.


2021
Vol 13 (21)
pp. 11631
Author(s):
Der-Jang Chi
Chien-Chou Chu

“Going concern” is a professional term in the domain of accounting and auditing. The issuance of appropriate audit opinions by certified public accountants (CPAs) and auditors is critical to companies as going concerns, as misjudgment and/or failure to identify the probability of bankruptcy can cause heavy losses to stakeholders and affect corporate sustainability. In the era of artificial intelligence (AI), deep learning algorithms are widely used by practitioners, and academic research is also gradually embarking on projects in various domains. However, the use of deep learning algorithms in going-concern prediction remains limited. In contrast to prior studies, this study uses long short-term memory (LSTM) and gated recurrent unit (GRU) networks for learning and training, in order to construct effective and highly accurate going-concern prediction models. The sample pool consists of Taiwan Stock Exchange Corporation (TWSE) and Taipei Exchange (TPEx) listed companies in 2004–2019, including 86 companies with going-concern doubt and 172 companies without, i.e., 258 companies in total. There are 20 research variables, comprising 16 financial variables and 4 non-financial variables. The results, based on performance indicators such as accuracy, precision, recall/sensitivity, specificity, F1-score, and Type I and Type II error rates, show that both the LSTM and GRU models perform well. As far as accuracy is concerned, the LSTM model reports 96.15% while the GRU model shows 94.23%.
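The performance indicators listed above all derive from a binary classifier's 2x2 confusion matrix. A minimal sketch with invented counts (not the study's TWSE/TPEx results):

```python
# Standard binary-classification indicators computed from confusion
# matrix cells: true/false positives (tp, fp), false/true negatives
# (fn, tn). Counts are invented for illustration.

def indicators(tp, fp, fn, tn):
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1          = 2 * precision * recall / (precision + recall)
    type_i      = fp / (fp + tn)          # false-positive rate
    type_ii     = fn / (fn + tp)          # false-negative rate
    return accuracy, precision, recall, specificity, f1, type_i, type_ii

acc, *rest = indicators(tp=40, fp=3, fn=2, tn=55)
print(f"accuracy {acc:.2%}")
```

Note that Type I and Type II error rates are simply the complements of specificity and sensitivity, which is why studies typically report them together.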


2020
Vol 73 (4)
pp. 275-284
Author(s):
Dukyong Yoon
Jong-Hwan Jang
Byung Jin Choi
Tae Young Kim
Chang Ho Han

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It was recently discovered that more information can be gathered from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advancements in AI is deep learning. Deep learning-based models can extract important features from raw data without feature engineering by humans, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black box nature of deep learning models is difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application to real-life situations by clinicians in the near future.


2019
Vol 87 (2)
pp. 27-29
Author(s):  
Meagan Wiederman

Artificial intelligence (AI) is the ability of any device to take an input, such as that of its environment, and work to achieve a desired output. Some advancements in AI have focused on replicating the human brain in machinery. This is being made possible by the Human Connectome Project: an initiative to map all the connections between neurons within the brain. A full replication of the thinking brain would inherently create something that could be argued to be a thinking machine. However, it is more interesting to question whether a non-biologically faithful AI could be considered a thinking machine. Under Turing’s definition of ‘thinking’, a machine that can be mistaken for a human when responding in writing from a “black box,” where it cannot be viewed, can be said to pass for thinking. Backpropagation, an error-minimizing algorithm used to train AI for feature detection, has no biological counterpart yet is prevalent in AI. Its recent success demonstrates that biological faithfulness is not required for deep learning or ‘thought’ in a machine. Backpropagation has been used in medical imaging compression algorithms and in pharmacological modelling.
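Backpropagation's role as an error-minimizing algorithm can be sketched for the smallest possible case: a single sigmoid neuron trained by gradient descent on squared error. The data point, learning rate, and step count below are all illustrative:

```python
import math

# Backpropagation in miniature: the chain rule gives the gradient of
# the squared-error loss with respect to the weight and bias of one
# sigmoid neuron, and gradient descent steps downhill on that loss.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(x, target, w, b, lr=0.5, steps=500):
    for _ in range(steps):
        y = sigmoid(w * x + b)             # forward pass
        # Chain rule for L = 0.5*(y - target)^2:
        #   dL/dz = (y - target) * y * (1 - y), then dz/dw = x, dz/db = 1
        grad = (y - target) * y * (1.0 - y)
        w -= lr * grad * x                 # backward pass: update weight
        b -= lr * grad                     # and bias
    return w, b

w, b = train(x=1.0, target=0.9, w=0.0, b=0.0)
print(sigmoid(w * 1.0 + b))  # output moves toward the target 0.9
```

In a multi-layer network the same chain rule is applied layer by layer, which is where the name "backpropagation" comes from; the single-neuron case shows the mechanism without the bookkeeping.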


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that has emerged recently. Many researchers are approaching the subject from different dimensions, and interesting results have come out. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be ones in which the openness of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These methods can yield highly effective results depending on the data set size, data set quality, the methods used in feature extraction, the hyperparameter set used in deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black box models that generalize from the data transmitted to them and learn from that data. As a result, the relational link between input and output is not observable. This is an important open problem in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black box models.

