A Model of Categorization for Use in Automated Failure Analysis

1988 ◽  
Vol 110 (4) ◽  
pp. 559-563
Author(s):  
J. P. Morrill ◽  
D. Wright

Categorization is the procedure of determining set membership based on either necessary or statistically suggestive conditions for membership. This procedure lies at the heart of automated metallurgical failure analysis, controlling the accuracy of the final conclusion. This article examines the tradeoff between the number of questions posed by the computer during data collection and the certainty of the final decision. After a brief overview of failure analysis decision making, a model of categorization derived from Bayes’ theorem is proposed that asks questions in order of relevance and stops when an adequate level of certainty is reached. This eliminates irrelevant questions without significantly compromising the accuracy of the final conclusion. The model has been implemented as part of an artificial intelligence computer program.
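The question-ordering and stopping behaviour described in this abstract can be sketched as sequential Bayesian updating: ask the question whose answer is expected to reduce uncertainty the most, update the posterior with Bayes’ theorem, and stop once one category is certain enough. The failure categories, questions, and probabilities below are invented for illustration; the paper’s actual model and numbers are not reproduced here.

```python
import math

# Hypothetical failure categories with prior probabilities (illustrative only).
PRIORS = {"fatigue": 0.5, "overload": 0.3, "corrosion": 0.2}

# P(answer is "yes" | category) for each diagnostic question (made-up numbers).
LIKELIHOODS = {
    "beach_marks":  {"fatigue": 0.9, "overload": 0.1, "corrosion": 0.2},
    "necking":      {"fatigue": 0.1, "overload": 0.8, "corrosion": 0.1},
    "surface_pits": {"fatigue": 0.2, "overload": 0.1, "corrosion": 0.9},
}

def update(posterior, question, answer):
    """Bayes' theorem: scale each category by P(answer | category), renormalize."""
    scaled = {}
    for cat, p in posterior.items():
        p_yes = LIKELIHOODS[question][cat]
        scaled[cat] = p * (p_yes if answer else 1.0 - p_yes)
    total = sum(scaled.values())
    return {cat: p / total for cat, p in scaled.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy(posterior, question):
    """Expected posterior entropy after asking `question` (lower = more relevant)."""
    p_yes = sum(posterior[c] * LIKELIHOODS[question][c] for c in posterior)
    h = 0.0
    for answer, p_ans in ((True, p_yes), (False, 1.0 - p_yes)):
        if p_ans > 0:
            h += p_ans * entropy(update(posterior, question, answer))
    return h

def diagnose(answers, threshold=0.95):
    """Ask questions in order of relevance; stop once one category is certain enough."""
    posterior = dict(PRIORS)
    remaining = set(LIKELIHOODS)
    asked = []
    while remaining and max(posterior.values()) < threshold:
        question = min(remaining, key=lambda q: expected_entropy(posterior, q))
        remaining.discard(question)
        asked.append(question)
        posterior = update(posterior, question, answers[question])
    return posterior, asked
```

With answers consistent with fatigue (beach marks present, no necking, no pitting), the posterior concentrates on "fatigue" and questioning stops as soon as the threshold is met, which is the question-economy the abstract describes.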

Lex Russica ◽  
2019 ◽  
pp. 79-87
Author(s):  
P. N. Biryukov

The paper deals with the problems of applying artificial intelligence (AI) in the field of justice. The present-day environment facilitates the use of AI in law, and the technology has entered the market. As a result, "predicted justice" has become possible. Once an overview of the possible future process is obtained, it is easier for the professional to complete the task of interpretation and final decision-making (negotiations, litigation). It will take a lot of work to bring AI up to this standard. Legal information should be structured so that it is not only readable but also effective for decision-making. "Predicted justice" can help the parties to a case and the judges in structuring information, as well as students and teachers seeking relevant information. The development of information technology has expanded the opportunities for "predicted justice" programs, which take advantage of new digital tools. The focus is on two advantages of these programs: (a) improving the quality of the services provided and (b) simultaneously monitoring the operational costs of the justice system. "Predicted justice" provides algorithms for analyzing a huge number of situations in a short time, making it possible to predict the outcome of a dispute or at least assess the chances of success. It helps to choose the right line of defense and the most suitable arguments, estimate the expected amount of compensation, etc. Thus, it is not about justice itself, but only about analytical tools that make it possible to predict future decisions in disputes similar to those that have been analyzed.


Author(s):  
Jianhua Qin ◽  
Xueqiong Zhu ◽  
Zhen Wang ◽  
Jingtan Ma ◽  
Shan Gao ◽  
...  

In view of the practical needs of substation maintenance, this paper proposes a substation decision-making platform based on artificial intelligence. The platform formalizes and integrates the basic data, electrical data, and operational data of the equipment; qualitatively triggers maintenance tasks based on the result of a logistic regression model; provides further results of data processing through quantitative analysis; and provides knowledge navigation to the operation guidance of the corresponding equipment. The platform matches the electrical data against the inference rules stored in the knowledge base. If the data satisfy the condition of a rule, the inference is triggered and its action is executed. The result is provided to the relevant staff as a suggestion to assist the final decision. After the task is completed, the cause, effect, and solution of the equipment failure are backfilled into the equipment base as a new instance.
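The matching step described above — compare incoming electrical data against rule conditions, surface the matched rule's action as a suggestion, and backfill completed cases into the equipment base — can be sketched as a minimal rule engine. The rule names, thresholds, and data fields are hypothetical; the platform's actual knowledge base is not described in enough detail to reproduce.

```python
# Hypothetical maintenance rules: a condition on measured data plus a suggestion.
RULES = [
    {
        "name": "transformer_overheat",
        "condition": lambda d: d.get("oil_temp_c", 0) > 85,
        "suggestion": "Schedule transformer cooling-system inspection.",
    },
    {
        "name": "breaker_slow_trip",
        "condition": lambda d: d.get("trip_time_ms", 0) > 40,
        "suggestion": "Test circuit-breaker operating mechanism.",
    },
]

EQUIPMENT_BASE = []  # completed cases are backfilled here as new instances

def match_rules(electrical_data):
    """Match measured data against rule conditions; return triggered suggestions."""
    return [
        {"rule": rule["name"], "suggestion": rule["suggestion"]}
        for rule in RULES
        if rule["condition"](electrical_data)
    ]

def backfill(equipment_id, cause, effect, solution):
    """After the task completes, expand the equipment base with the resolved case."""
    EQUIPMENT_BASE.append(
        {"equipment": equipment_id, "cause": cause,
         "effect": effect, "solution": solution}
    )
```

As in the platform, the matched suggestions assist rather than replace the final decision, and the backfill step is what turns each resolved failure into a new knowledge-base instance.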




Author(s):  
Ivan Izonin

Nowadays, the fast development of hardware for IoT-based systems creates appropriate conditions for the development of services in different application areas. The number of multifunctional devices connected to the Internet is constantly increasing, yet most IoT devices today only collect and transmit data. The huge amount of data produced by these devices requires efficient and fast approaches to its analysis. This task can be solved by combining Artificial Intelligence and IoT tools. Essentially, AI accelerators can be used as a universal sensor in IoT systems; that is, we can create the Artificial Intelligence of Things (AIoT). AIoT can be considered a movement from data collection to knowledge aggregation. AIoT-based systems are being widely implemented in many high-tech industrial and infrastructure systems. Such systems are capable not only of collecting data but also of analysing various aspects of it for identification, planning, diagnostics, evaluation, monitoring, optimization, etc., at the lower levels of the entire system's hierarchy. That is, they are able to work more efficiently and effectively by generating the knowledge needed for real-time analytics and decision-making in a given application area.
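A minimal sketch of the edge-analytics idea: an AIoT node that screens its own readings with a rolling z-score and reports only anomalous events upstream instead of streaming every raw sample. The window size and threshold are illustrative choices, not taken from the text.

```python
from collections import deque
import statistics

class EdgeSensor:
    """Hypothetical AIoT node: analyses readings locally and reports only
    events, turning raw data into knowledge before anything is transmitted."""

    def __init__(self, window=20, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of recent readings
        self.z_threshold = z_threshold

    def ingest(self, value):
        """Return an anomaly event dict if the reading is unusual, else None."""
        event = None
        if len(self.history) >= 5:  # need a few samples before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.z_threshold:
                event = {"value": value, "mean": mean,
                         "deviation": abs(value - mean)}
        self.history.append(value)
        return event
```

Only the rare event dicts would need to be sent to the cloud, which is the shift from data collection to knowledge aggregation the abstract describes.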


Author(s):  
Abigail Christina Fernandez

Data is just data unless it is put to proper, comprehensive use. Information is knowledge, and knowledge is upgraded to wisdom through insight in the relevant field of analysis. Data science has become the key that unlocks many areas of interest across diversified fields of inquiry. It is essential that the solutions artificial intelligence algorithms provide do justice to the intent for which they were built. At times, however, bias is inadvertently introduced, becoming an implicit or explicit part of the algorithms and of the data collection methodologies they incorporate. IT companies deploying this technology need to treat this hidden influence on prediction and decision-making as a top priority in order to realize the promise of machine learning in data analysis.


10.2196/26611 ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. e26611
Author(s):  
Thomas Ploug ◽  
Anna Sundby ◽  
Thomas B Moeslund ◽  
Søren Holm

Background Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public’s interests in such features of AI. Objective This study elicited the public’s preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. Methods We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents’ views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using a fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. Results Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents considered the physician having final responsibility for treatment decisions to be the most important attribute, accounting for 46.8% of the total weight of attributes, followed by the explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents’ trust in health and technology, and respondents’ fears and hopes regarding AI, do not play a significant role in the majority of cases. Conclusions The 3 factors that are most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to these features and ensure that patients are provided with relevant information.
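The survey construction described above — attribute levels combined into profiles, a fraction of the full factorial selected, and profiles grouped into choice tasks of 3 alternatives — can be sketched as follows. The attribute levels here are invented stand-ins for the study's six attributes, and a simple random fraction is drawn for brevity; the study's actual fractional factorial design would use an orthogonal array rather than random sampling.

```python
import itertools
import random

# Illustrative attribute levels loosely modeled on the six attributes described
# in the abstract (the actual levels used in the study may differ).
ATTRIBUTES = {
    "decision_type": ["diagnosis", "treatment planning"],
    "explanation": ["none", "partial", "full"],
    "accuracy": ["equal to physician", "better than physician"],
    "final_responsibility": ["physician", "AI system"],
    "discrimination_tested": ["yes", "no"],
    "disease_severity": ["mild", "severe"],
}

def build_choice_tasks(n_profiles=100, alternatives=3, seed=0):
    """Sample a fraction of the full factorial and group profiles into tasks."""
    names = list(ATTRIBUTES)
    # Full factorial: every combination of attribute levels (2*3*2*2*2*2 = 96).
    full = [dict(zip(names, combo))
            for combo in itertools.product(*ATTRIBUTES.values())]
    rng = random.Random(seed)
    profiles = rng.sample(full, min(n_profiles, len(full)))
    # Group sampled profiles into choice tasks of `alternatives` scenarios each.
    return [profiles[i:i + alternatives]
            for i in range(0, len(profiles) - alternatives + 1, alternatives)]
```

Each task then presents the respondent with 3 complete scenarios differing across all attributes, which is what lets conjoint analysis recover the relative weight of each attribute from the choices made.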


2019 ◽  
Author(s):  
Tayana Soukup ◽  
Ged Murtagh ◽  
Ben W Lamb ◽  
James Green ◽  
Nick Sevdalis

Background Multidisciplinary teams (MDTs) are a standard cancer care policy in many countries worldwide. Despite an increase in research over the past decade on MDTs and their care planning meetings, the implementation of MDT-driven decision-making (fidelity) remains unstudied. We report a feasibility evaluation of a novel method for assessing cancer MDT decision-making fidelity. We used an observational protocol to assess (1) the degree to which MDTs adhere to the stages of group decision-making as per the ‘Orientation-Discussion-Decision-Implementation’ framework, and (2) the degree of multidisciplinarity underpinning individual case reviews in the meetings. Methods This is a prospective observational study. Breast, colorectal and gynaecological cancer MDTs in the Greater London and Derbyshire (United Kingdom) areas were video recorded over 12 weekly meetings encompassing 822 case reviews. Data were coded and analysed using frequency counts. Results Eight interaction formats during case reviews were identified. Case reviews were not always multidisciplinary: only 8% of all reviews involved all five clinical disciplines present, and 38% included four of the five. The majority of case reviews (54%) took place between only two (25%) or three (29%) disciplines. Surgeons (83%) and oncologists (8%) most consistently engaged in all stages of decision-making. While all patients put forward for MDT review were actually reviewed, a small percentage either bypassed the orientation (case presentation) and went straight into discussion of the patient (4%), or the final decision was not articulated to the entire team (8%). Conclusions Assessing the fidelity of MDT decision-making at their weekly meetings is feasible. We found that despite being a set policy, case reviews are not entirely MDT-driven. We discuss the implications in relation to the current eco-political climate and the quality and safety of care.
Our findings are in line with the current national initiatives in the UK on streamlining MDT meetings, and could help decide how to re-organise them to be most efficient.

