assurance cases
Recently Published Documents


TOTAL DOCUMENTS: 101 (five years: 8)

H-INDEX: 9 (five years: 0)

ITNOW ◽  
2021 ◽  
Vol 63 (3) ◽  
pp. 66-66
Author(s):  
Simon Foster ◽  
Yakoub Nemouchi ◽  
Mario Gleirscher ◽  
Ran Wei ◽  
Tim Kelly

Abstract The paper, by Simon Foster, Yakoub Nemouchi, Mario Gleirscher, Ran Wei and Tim Kelly, published in Formal Aspects of Computing: Applicable Formal Methods (June 2021), explores the introduction of Isabelle/SACM into formal methods for assurance.


2021 ◽  
Author(s):  
Adrian Groza ◽  
Liana Toderean ◽  
George Muntean ◽  
Simona Delia Nicoara

Abstract Purpose: Expertise for auditing AI systems in the medical domain is only now being accumulated. Conformity assessment procedures will require AI systems: i) to be transparent, ii) not to base decisions solely on algorithms, and iii) to include safety assurance cases in the documentation to facilitate technical audit. We are interested here in obtaining transparency in the case of machine learning (ML) applied to the classification of retina conditions. Achieving high performance metrics with ML has become common practice. However, in the medical domain, algorithmic decisions need to be sustained by explanations. We aim at building a support tool for ophthalmologists able to: i) explain algorithmic decisions to the human agent by automatically extracting rules from the learned ML models; ii) include the ophthalmologist in the loop by formalising expert rules and including the expert knowledge in the argumentation machinery; iii) build safety cases by creating assurance argument patterns for each diagnosis.

Methods: For the learning task, we used a dataset consisting of 699 OCT images: 126 of the Normal class, 210 with Diabetic Retinopathy (DR) and 363 with Age-Related Macular Degeneration (AMD). The dataset contains patients from the Ophthalmology Department of the County Emergency Hospital of Cluj-Napoca. All ethical norms and procedures, including anonymisation, have been followed. We applied three machine learning algorithms: decision tree (DT), support vector machine (SVM) and artificial neural network (ANN). For each algorithm we automatically extract diagnosis rules. For formalising expert knowledge, we relied on the normative dataset [13]. For arguing between agents, we used the Jason multi-agent platform. We assume different knowledge bases and reasoning capabilities for each agent. The agents have their own Optical Coherence Tomography (OCT) images, on which they apply a distinct machine learning algorithm. The learned model is used to extract diagnosis rules. With distinct learned rules, the agents engage in an argumentative process. The resolution of the debate outputs a diagnosis that is then explained to the ophthalmologist by means of assurance cases.

Results: For diagnosing the retina condition, our AI solution deals with the following three issues. First, the learned models are automatically translated into rules. These rules are then used to build an explanation by tracing the reasoning chain supporting the diagnosis. Hence, the proposed AI solution complies with the requirement that "algorithmic decisions should be explained to the human agent". Second, the decision is not based solely on ML algorithms. The proposed architecture includes expert knowledge, and the diagnosis is reached by exchanging arguments between ML-based algorithms and expert knowledge. The conflict resolution among arguments is verbalised, so that the ophthalmologist can supervise the diagnosis. Third, assurance cases are generated to facilitate technical audit. The assurance cases structure the evidence around various safety goals such as machine learning methodology, transparency, or data quality. For each dimension, the auditor can check the provided evidence against current best practices or safety standards.

Conclusion: We developed a multi-agent system for diagnosing retina conditions in which algorithmic decisions are sustained by explanations. The proposed tool goes beyond most software in the medical domain, which focuses only on performance metrics. Our approach helps the technical auditor to approve software in the medical domain. Interleaving knowledge extracted from ML models with expert knowledge is a step towards balancing the benefits of ML with explainability, aiming at engineering reliable medical applications.
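The rule-extraction step described above, turning a learned decision tree into if-then diagnosis rules that can be traced and explained, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tree, the feature names (`retinal_thickness`, `drusen_count`), and the thresholds are hypothetical placeholders, while the paper extracts rules from models trained on real OCT data.

```python
# A toy decision tree for retina diagnosis; every root-to-leaf path
# becomes one human-readable rule. All features/thresholds are made up.
tree = {
    "feature": "retinal_thickness", "threshold": 0.3,
    "left": {"label": "Normal"},
    "right": {
        "feature": "drusen_count", "threshold": 5,
        "left": {"label": "DR"},
        "right": {"label": "AMD"},
    },
}

def extract_rules(node, conditions=()):
    """Collect (conditions, label) pairs, one per root-to-leaf path."""
    if "label" in node:                          # leaf: emit the finished rule
        return [(list(conditions), node["label"])]
    f, t = node["feature"], node["threshold"]
    return (extract_rules(node["left"], conditions + (f"{f} <= {t}",))
            + extract_rules(node["right"], conditions + (f"{f} > {t}",)))

for conds, label in extract_rules(tree):
    print("IF " + " AND ".join(conds) + f" THEN {label}")
```

Each printed rule is the reasoning chain that an explanation module (or an arguing agent) can present to the ophthalmologist alongside the diagnosis.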


Author(s):  
Simon Foster ◽  
Yakoub Nemouchi ◽  
Mario Gleirscher ◽  
Ran Wei ◽  
Tim Kelly

Abstract Assurance cases are often required to certify critical systems. The use of formal methods in assurance can improve automation, increase confidence, and overcome errant reasoning. However, assurance cases can never be fully formalised, as the use of formal methods is contingent on models that are validated by informal processes. Consequently, assurance techniques should support both formal and informal artifacts, with explicated inferential links between them. In this paper, we contribute a formal machine-checked interactive language, called Isabelle/SACM, supporting the computer-assisted construction of assurance cases compliant with the OMG Structured Assurance Case Meta-Model. The use of Isabelle/SACM guarantees well-formedness, consistency, and traceability of assurance cases, and allows a tight integration of formal and informal evidence of various provenance. In particular, Isabelle brings a diverse range of automated verification techniques that can provide evidence. To validate our approach, we present a substantial case study based on the Tokeneer secure entry system benchmark. We embed its functional specification into Isabelle, verify its security requirements, and form a modular security case in Isabelle/SACM that combines the heterogeneous artifacts. We thus show that Isabelle is a suitable platform for critical systems assurance.


2021 ◽  
Vol 176 ◽  
pp. 110922
Author(s):  
Damir Nešić ◽  
Mattias Nyberg ◽  
Barbara Gallina
Keyword(s):  

2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Mazen Mohamad ◽  
Jan-Philipp Steghöfer ◽  
Riccardo Scandariato

Abstract Security Assurance Cases (SAC) are a form of structured argumentation used to reason about the security properties of a system. After the successful adoption of assurance cases for safety, SAC have gained significant traction in recent years, especially in safety-critical industries (e.g., automotive), where there is increasing pressure to comply with several security standards and regulations. Accordingly, research in the field of SAC has flourished in the past decade, with different approaches being investigated. In an effort to systematize this active field of research, we conducted a systematic literature review (SLR) of the existing academic studies on SAC. Our review resulted in an in-depth analysis and comparison of 51 papers. Our results indicate that, while there are numerous papers discussing the importance of SAC and their usage scenarios, the literature is still immature with respect to concrete support for practitioners on how to build and maintain a SAC. More importantly, even though some methodologies are available, their validation and tool support are still lacking.


Author(s):  
Qiang Zhi ◽  
Zhengshu Zhou ◽  
Shuji Morisaki

An assurance case helps analyze system dependability, but the relationships between system elements and the assurance case are generally not clearly defined. To make system assurance more intuitive and reliable, this paper proposes an approach that clearly defines the relationships between safety issues and system elements and integrates them using ArchiMate. The proposed method also applies model checking to system safety assurance, and the checking results are used as evidence in assurance cases. The method consists of four steps: interaction visualization, process model checking, assurance case creation, and composite safety assurance. The significance of this work is that it provides a formalized procedure for safety-critical system assurance, which can increase confidence in system safety. It is expected to make the safety of a system easier to explain to third parties and to make system assurance more intuitive and effective. A case study on an automatic driving system is also carried out to confirm the effectiveness of this approach.
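The "model checking as assurance evidence" step above amounts to exhaustively exploring a system model's reachable states and confirming that a safety property holds in every one. The sketch below illustrates the idea on a deliberately tiny, hypothetical driving model (two boolean state variables, a hand-written transition relation); the paper's actual models and checker are of course richer.

```python
# Minimal explicit-state safety check: breadth-first exploration of all
# reachable states, verifying a safety predicate on each. The model is
# a hypothetical stand-in, not the paper's automatic driving system.
from collections import deque

def successors(state):
    """Transition relation for a toy vehicle: the controller brakes in
    the same step it senses an obstacle, so it never moves into one."""
    nxt = set()
    for sensed in (True, False):          # environment may present an obstacle
        nxt.add((not sensed, sensed))     # (moving, obstacle_detected)
    return nxt

def check_safety(initial, safe):
    """Return (True, None) if `safe` holds in every reachable state,
    else (False, counterexample_state)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not safe(s):
            return False, s               # counterexample: evidence fails
        for t in successors(s) - seen:
            seen.add(t)
            queue.append(t)
    return True, None

# Safety property: never moving while an obstacle is detected.
ok, cex = check_safety((False, False), lambda s: not (s[0] and s[1]))
print(ok)
```

A passing check (or a concrete counterexample state) is exactly the kind of artifact that can be attached to an assurance-case goal as evidence.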


Author(s):  
Zhengshu Zhou ◽  
Qiang Zhi ◽  
Zilong Liang ◽  
Shuji Morisaki

When deciding and evaluating system security strategies, there is a trade-off between the security assurance effect and the constraint conditions, as many qualitative security assurance methods have revealed. However, the existing methods cannot quantitatively analyze security assurance against constraint conditions to support project managers and system engineers in deciding system development strategies. A quantitative method that considers both security strategies and constraints is therefore necessary. This paper proposes a semi-automatic, quantitative system security assurance approach for developing security requirements and security assurance cases by extending the traditional GSN (Goal Structuring Notation). Two greedy algorithms for quantitative system security assurance are then implemented and evaluated. In addition, a case study and an experiment are carried out to verify the effectiveness and efficiency of the proposed approach and algorithms.
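The greedy trade-off the abstract describes, maximizing assurance effect while staying within constraints, can be sketched as a ratio-greedy selection under a cost budget. This is an illustrative stand-in, not the paper's algorithms: the measure names, effect scores, costs, and budget are all made up, and the paper's algorithms operate on extended GSN structures rather than a flat list.

```python
# Illustrative greedy: pick security measures with the best
# assurance-effect-per-cost ratio until the budget is exhausted.
# All names, scores, and costs below are hypothetical.
def greedy_select(measures, budget):
    """measures: list of (name, effect, cost); returns (chosen names, total cost)."""
    chosen, spent = [], 0
    for name, effect, cost in sorted(measures, key=lambda m: m[1] / m[2], reverse=True):
        if spent + cost <= budget:        # take it only if it still fits
            chosen.append(name)
            spent += cost
    return chosen, spent

measures = [
    ("input-validation", 8, 2),           # (name, assurance effect, cost)
    ("fuzz-testing",     6, 3),
    ("formal-review",    9, 5),
    ("pen-test",         4, 4),
]
chosen, spent = greedy_select(measures, budget=7)
print(chosen, spent)
```

A second greedy variant could instead sort by raw effect; comparing such variants against the constraint budget is the kind of evaluation the paper reports.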


Computer ◽  
2020 ◽  
Vol 53 (12) ◽  
pp. 35-46
Author(s):  
Erfan Asaadi ◽  
Ewen Denney ◽  
Jonathan Menzies ◽  
Ganesh J. Pai ◽  
Dimo Petroff
Keyword(s):  
