Optimal Selection of Model Validation Experiments: Guided by Coverage

Author(s): Robert Hällqvist, Robert Braun, Magnus Eek, Petter Krus

Abstract Modeling and Simulation (M&S) is seen as a means to mitigate the difficulties associated with the increased system complexity, integration, and cross-coupling effects encountered during the development of aircraft sub-systems. Consequently, knowledge of model validity is necessary for making robust and justified design decisions. This paper presents a method for using coverage metrics to formulate an optimal model validation strategy. Three fundamentally different and industrially relevant use cases are presented: the first entails the successive identification of validation settings, the second considers the simultaneous identification of n validation settings, and the third expands the second by incorporating a secondary model-based objective into the optimization problem. The approach is designed to be scalable and generic with respect to models of industrially relevant complexity. As a result, experiments for validation are selected objectively and with little manual effort.
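The abstract does not spell out the coverage metric or the optimization machinery, so the following is only a minimal sketch of how coverage-guided, successive selection of validation settings (the first use case) might look; the coverage function, the 0.15 radius, the candidate grid, and all names are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of coverage-guided selection of validation settings.
# The coverage metric, candidate grid, and distance measure are placeholders.
import numpy as np

def coverage(validated, candidates):
    """Fraction of candidate operating points lying within a fixed radius
    of at least one already-validated setting (a stand-in coverage metric)."""
    if len(validated) == 0:
        return 0.0
    d = np.linalg.norm(candidates[:, None, :] - np.array(validated)[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) < 0.15))

def select_successively(candidates, n):
    """Use case 1: pick n validation settings one at a time, each maximizing
    the coverage gained over the settings chosen so far."""
    chosen = []
    for _ in range(n):
        gains = [coverage(chosen + [c], candidates) for c in candidates]
        chosen.append(candidates[int(np.argmax(gains))])
    return np.array(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = rng.uniform(0.0, 1.0, size=(500, 2))   # normalized operating envelope
    settings = select_successively(grid, n=5)
    print("selected validation settings:\n", settings)
    print("achieved coverage:", coverage(list(settings), grid))
```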

Author(s): Julian Endres, Reinhard C. Bernsteiner, Christian Ploder

This article provides a comprehensive use-case-based comparison framework for selecting the most suitable database for specific requirements and application domains. The concept of a NoSQL Enterprise Readiness Index (NERI), a comparable numeric measure of the fitness of a NoSQL database for enterprise use cases, is proposed. It is calculated from the degree of fulfillment of a set of weighted criteria distilled from extensive research. The goal of NERI is to provide a numeric index value for comparing and benchmarking NoSQL databases in enterprise use cases. The calculation of NERI is based on 43 criteria against which each database product is evaluated. To keep the evaluation reproducible for additional products and consistent across products, a fulfillment matrix is used. For each of the 43 criteria, the matrix describes four levels of fulfillment in a mostly qualitative way and thereby guides the evaluator in choosing the right point value.
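As a purely illustrative sketch of how such an index could be computed, the snippet below forms a weighted sum of per-criterion fulfillment levels and normalizes the result; the 0-3 level scale, the equal weights, and the 0-100 normalization are assumptions, since the abstract only fixes the 43 criteria and the four fulfillment levels.

```python
# Minimal sketch of a NERI-style score: a weighted sum of per-criterion
# fulfillment levels, normalized to 0-100. The 0-3 level scale, weights,
# and normalization are illustrative assumptions, not the published scheme.
def neri_score(fulfillment, weights, max_level=3):
    assert len(fulfillment) == len(weights) == 43
    raw = sum(w * f for w, f in zip(weights, fulfillment))
    return 100.0 * raw / (max_level * sum(weights))

# Example: equal weights, a database meeting level 2 on every criterion.
print(round(neri_score([2] * 43, [1.0] * 43), 1))   # -> 66.7
```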


Electronics, 2021, Vol 10 (5), pp. 592
Author(s): Radek Silhavy, Petr Silhavy, Zdenka Prokopova

Software size estimation is a nontrivial task, based on data analysis or on an algorithmic estimation approach, that is important for software project planning and management. In this paper, a new method called Actors and Use Cases Size Estimation is proposed. The new method is based only on the number of actors and use cases. It relies on stepwise regression and leads to a very significant reduction in errors when estimating the size of software systems compared to Use Case Points-based methods. The proposed method is independent of Use Case Points; because Use Case Points components are not used, the effect of their inaccurate determination is eliminated.
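A minimal sketch of what a size model driven only by actor and use-case counts might look like is given below; the historical data, the plain least-squares fit (standing in for the paper's stepwise regression), and the resulting coefficients are all hypothetical.

```python
# Illustrative sketch of an "actors and use cases only" size model fitted by
# ordinary least squares on historical projects. The data and coefficients are
# invented; the paper's stepwise-regression procedure is not reproduced here.
import numpy as np

# Hypothetical historical projects: (actors, use cases) -> delivered size.
actors    = np.array([ 4,  7, 10,  6, 12,  9])
use_cases = np.array([12, 25, 40, 18, 55, 33])
size      = np.array([210, 420, 700, 300, 950, 560])   # e.g. size in person-hours

X = np.column_stack([np.ones_like(actors), actors, use_cases])
coef, *_ = np.linalg.lstsq(X, size, rcond=None)         # [intercept, b_actors, b_use_cases]

def estimate_size(n_actors, n_use_cases):
    return coef @ np.array([1.0, n_actors, n_use_cases])

print(estimate_size(8, 30))
```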


2021, Vol 24 (2), pp. 1-35
Author(s): Isabel Wagner, Iryna Yevseyeva

The ability to measure privacy accurately and consistently is key in the development of new privacy protections. However, recent studies have uncovered weaknesses in existing privacy metrics, as well as weaknesses caused by the use of only a single privacy metric. Metrics suites, or combinations of privacy metrics, are a promising mechanism to alleviate these weaknesses, if we can solve two open problems: which metrics should be combined and how. In this article, we tackle the first problem, i.e., the selection of metrics for strong metrics suites, by formulating it as a knapsack optimization problem with both single and multiple objectives. Because solving this problem exactly is difficult due to the large number of combinations and many qualities/objectives that need to be evaluated for each metrics suite, we apply 16 existing evolutionary and metaheuristic optimization algorithms. We solve the optimization problem for three privacy application domains: genomic privacy, graph privacy, and vehicular communications privacy. We find that the resulting metrics suites have better properties, i.e., higher monotonicity, diversity, evenness, and shared value range, than previously proposed metrics suites.
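The toy sketch below casts suite selection as a 0/1 knapsack over candidate metrics, with a placeholder quality score and a random-restart search standing in for the 16 evolutionary and metaheuristic algorithms the article applies; every name and number in it is illustrative.

```python
# Toy sketch of metrics-suite selection as a 0/1 knapsack: pick a subset of
# privacy metrics maximizing a quality score subject to a suite-size budget.
# The quality function is a placeholder, not the monotonicity/diversity/
# evenness scoring used in the article, and the solver is plain random search.
import random

random.seed(1)
metrics = [f"metric_{i}" for i in range(12)]
quality = [random.random() for _ in metrics]   # stand-in per-metric quality
budget = 5                                     # max metrics in a suite

def score(mask):
    if sum(mask) > budget:
        return float("-inf")                   # infeasible suite
    return sum(q for q, keep in zip(quality, mask) if keep)

best = max((tuple(random.randint(0, 1) for _ in metrics) for _ in range(5000)), key=score)
print("suite:", [m for m, keep in zip(metrics, best) if keep], "score:", round(score(best), 3))
```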


2015, Vol 1120-1121, pp. 670-674
Author(s): Abdelmadjid Ait Yala, Abderrahmanne Akkouche

The aim of this work is to define a general method for the optimization of composite patch repairs. Fracture mechanics theory shows that the stress intensity factor tends towards an asymptotic limit K∞. This limit is given by Rose's formula and is a function of the thicknesses and mechanical properties of the cracked plate, the composite patch and the adhesive. The proposed approach consists in treating this limit as an objective function to be minimized: lowering this asymptote reduces the values of the stress intensity factor and thus optimizes the repair. However, to be effective, this robust design must also satisfy the stiffness ratio criterion. Solving this two-objective optimization problem in a Matlab program allowed us to determine the geometric and mechanical properties that yield the optimum design, that is, the selection of the adhesive, the patch and their respective thicknesses.
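The sketch below only shows the shape of such a constrained minimization; the k_inf body is a placeholder rather than Rose's formula, and the material data, thickness bounds, and stiffness-ratio limits are assumed values, not those of the study.

```python
# Schematic of the optimization in the abstract: minimize the asymptotic stress
# intensity factor K_inf over patch and adhesive thicknesses subject to a
# stiffness-ratio constraint. k_inf() is a placeholder, NOT Rose's formula;
# substitute the actual expression and material data to match the study.
from scipy.optimize import minimize

E_plate, t_plate = 72e3, 3.0        # assumed aluminium plate modulus (MPa), thickness (mm)
E_patch = 135e3                     # assumed composite patch modulus (MPa)

def k_inf(x):
    t_patch, t_adh = x
    # Placeholder trend: decreases with patch stiffness, grows with adhesive thickness.
    return 1.0 / (1.0 + E_patch * t_patch / (E_plate * t_plate)) * (1.0 + 0.1 * t_adh)

def stiffness_ratio(x):
    t_patch, _ = x
    return E_patch * t_patch / (E_plate * t_plate)

cons = ({"type": "ineq", "fun": lambda x: stiffness_ratio(x) - 1.0},   # S >= 1 (assumed)
        {"type": "ineq", "fun": lambda x: 1.6 - stiffness_ratio(x)})   # S <= 1.6 (assumed)
res = minimize(k_inf, x0=[2.0, 0.2], bounds=[(0.5, 6.0), (0.05, 0.5)], constraints=cons)
print(res.x, res.fun)
```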


2022
Author(s): Thomas A. Ozoroski, Aldo Gargiulo, Julie E. Duetsch-Patel, Vignesh Sundarraj, Christopher J. Roy, ...

2014, Vol 23 (01), pp. 27-35
Author(s): S. de Lusignan, S-T. Liaw, C. Kuziemsky, F. Mold, P. Krause, ...

Summary Background: Generally, the benefits and risks of vaccines can be determined from studies carried out as part of regulatory compliance, followed by surveillance of routine data; however, some rarer and longer-term events require new methods. Big data generated by increasingly affordable personalised computing and by pervasive computing devices is growing rapidly, and low-cost, high-volume cloud computing makes processing these data inexpensive. Objective: To describe how big data and related analytical methods might be applied to assess the benefits and risks of vaccines. Method: We reviewed the literature on the use of big data to improve health, applied to generic vaccine use cases that illustrate the benefits and risks of vaccination. We defined a use case as the interaction between a user and an information system to achieve a goal. We used flu vaccination and pre-school childhood immunisation as exemplars. Results: We reviewed three big data use cases relevant to assessing vaccine benefits and risks: (i) big data processing using crowd-sourcing, distributed big data processing, and predictive analytics; (ii) data integration from heterogeneous big data sources, e.g. the increasing range of devices in the “internet of things”; and (iii) real-time monitoring for the direct monitoring of epidemics as well as vaccine effects via social media and other data sources. Conclusions: Big data raises new ethical dilemmas, though its analysis methods can bring complementary real-time capabilities for monitoring epidemics and assessing vaccine benefit-risk balance.
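As a loose illustration of the third use case (real-time monitoring), the toy snippet below raises an alert when mentions of an adverse event in a sliding window exceed a threshold; the data, window length, and threshold are invented for illustration only.

```python
# Toy illustration of real-time monitoring: flag when adverse-event mentions in
# a sliding window exceed a threshold. Stream, window size, and threshold are
# invented; a real system would ingest social media or routine health-data feeds.
from collections import deque

mentions_per_hour = [2, 1, 3, 2, 8, 12, 15, 4, 3]   # hypothetical hourly counts
window, threshold = deque(maxlen=6), 30

for hour, count in enumerate(mentions_per_hour):
    window.append(count)
    if sum(window) > threshold:
        print(f"hour {hour}: possible signal, {sum(window)} mentions in last {len(window)} h")
```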


2018, Vol 77 (2), pp. 187-195
Author(s): Brian E. McGarry, David C. Grabowski

Given the rising cost of long-term care (LTC) services, the selection of a private long-term care insurance (LTCi) policy with inflation protection has critical implications for the ability of this coverage to protect against potentially catastrophic LTC expenses. This study examines the effect of consumers’ numeric abilities on the decision to add inflation protection to private LTCi policies. Over 40% of current LTCi policies lack inflation protection. Higher scores on a three-question numeracy scale are associated with an increased probability of choosing inflation protection at the time of policy purchase: households answering all three questions correctly are 12 percentage points more likely to have this benefit than those with a numeracy score of 0 (p = .002). Market reforms that simplify the task of evaluating LTCi plans and assessing the value of indexed benefits may be needed to ensure that LTCi policy purchasers select adequate protection against future LTC costs.
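For intuition only, the back-of-the-envelope calculation below contrasts a flat daily benefit with a 3% compound-indexed benefit against care costs assumed to grow 4% per year; every figure is hypothetical and not drawn from the study.

```python
# Back-of-the-envelope illustration (all figures hypothetical) of why inflation
# protection matters: a flat daily benefit versus one indexed at 3% compound,
# against a daily long-term care cost growing 4% per year over 25 years.
daily_benefit, years = 200.0, 25
cost = 200.0 * 1.04 ** years                 # projected daily LTC cost
flat = daily_benefit                         # no inflation protection
indexed = daily_benefit * 1.03 ** years      # 3% compound inflation rider
print(f"projected cost {cost:.0f}/day, flat benefit covers {flat / cost:.0%}, "
      f"indexed benefit covers {indexed / cost:.0%}")
```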


Author(s): Matt Woodburn, Gabriele Droege, Sharon Grant, Quentin Groom, Janeen Jones, ...

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns, depending on the purpose of a particular implementation.

The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centered around the ‘ObjectGroup’ (Fig. 1), a class that may represent any group (of any size) of physical collection objects which have one or more common characteristics. This generic definition of the ‘collection’ in ‘collection descriptions’ is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard is likely to be relevant, and in some cases this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models.

So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that do not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the ‘collection description scheme’, a profile that defines elements such as: which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; and how the collections should be broken down into individual ObjectGroups and interlinked, and how the various classes should be related to each other.

Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard, discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection description use cases.
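To make the ObjectGroup and MeasurementOrFact ideas concrete, here is a purely illustrative serialization of one ObjectGroup; the field names are placeholders and should not be read as the normative terms of the draft standard.

```python
# Purely illustrative sketch of how an ObjectGroup with normalised
# MeasurementOrFact entries might be serialized under a collection description
# scheme. Field names are placeholders, not the draft TDWG CD standard's terms.
object_group = {
    "objectGroupName": "Pinned Lepidoptera, European drawers",
    "discipline": "Entomology",                       # assumed controlled-vocabulary value
    "measurementsOrFacts": [
        {"measurementType": "objectCount", "measurementValue": 18500,
         "measurementUnit": "specimens"},
        {"measurementType": "digitisedFraction", "measurementValue": 0.22},
    ],
    "relatedObjectGroups": ["urn:example:objectgroup:african-drawers"],   # hypothetical link
}

for mf in object_group["measurementsOrFacts"]:
    print(mf["measurementType"], mf["measurementValue"])
```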


Author(s): Mathias Uslar, Fabian Grüning, Sebastian Rohjans

Within this chapter, the authors provide two use cases on semantic interoperability in the electric utility industry based on the IEC TR 62357 seamless integration architecture. The first use case, on semantic integration based on ontologies, deals with the integration of the two heterogeneous standards families IEC 61970 and IEC 61850. Based on a quantitative analysis, the authors outline the need for integration and provide a solution based on their framework, COLIN. The second use case points out the need for better metadata semantics in the utility domain and is based solely on the IEC 61970 standard. The authors provide a solution that uses the CIM as a domain ontology and taxonomy for improving data quality. Finally, this chapter outlines open questions and argues that proper semantics and domain models based on international standards can improve the systems within a utility.
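The snippet below illustrates, in miniature, the kind of concept alignment such an integration requires, mapping a few IEC 61850 logical-node classes to CIM (IEC 61970) classes; the pairs are common textbook examples, not the COLIN framework's actual ontology or its mapping.

```python
# Minimal illustration of concept alignment between the two standards families:
# a few IEC 61850 logical-node classes mapped onto CIM (IEC 61970) classes.
# Illustrative pairs only; not the COLIN framework's ontology or full mapping.
ln_to_cim = {
    "XCBR": "Breaker",            # circuit-breaker logical node -> CIM Breaker
    "YPTR": "PowerTransformer",   # power-transformer logical node -> CIM PowerTransformer
    "ZLIN": "ACLineSegment",      # overhead-line logical node -> CIM AC line segment
}

def cim_class_for(logical_node: str) -> str:
    return ln_to_cim.get(logical_node, "UnmappedAsset")

print(cim_class_for("XCBR"))
```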


Author(s): S R Mani Sekhar, Siddesh G M, Swapnil Kalra, Shaswat Anand

Blockchain is an emerging and rapidly growing technology. A blockchain is a collection of records connected through cryptography, and it plays a vital role in smart contracts: self-executing, trustable contracts that reside on a blockchain. Smart contracts can be integrated across various domains such as healthcare, finance, self-sovereign identity, governance, logistics management, and home care. The purpose of this article is to analyze the various use cases of smart contracts in different domains and to arrive at a model that may be used in the future. Subsequently, a detailed description of smart contracts and blockchain is provided. Next, case studies related to five different domains are discussed with the help of use case diagrams. Finally, a solution for natural disaster management is proposed that integrates smart contracts, digital identity, policies, and blockchain technology, and that can be used effectively to provide relief to victims during natural disasters.
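As a rough, off-chain model of the gating logic such a relief solution could encode (a verified digital identity plus a policy check before any payout), consider the sketch below; it is illustrative only and does not reproduce the article's proposed contract or target any specific blockchain platform.

```python
# Toy, off-chain model of relief-contract gating logic: funds are released only
# to beneficiaries with a verified digital identity who match the declared
# disaster policy. Illustrative sketch, not an actual on-chain contract or API.
from dataclasses import dataclass, field

@dataclass
class ReliefContract:
    policy_region: str
    payout: float
    verified_ids: set = field(default_factory=set)   # stand-in for an identity registry
    paid: set = field(default_factory=set)
    ledger: list = field(default_factory=list)       # append-only record, mimicking a chain

    def register_identity(self, beneficiary: str):
        self.verified_ids.add(beneficiary)

    def claim(self, beneficiary: str, region: str) -> bool:
        ok = (beneficiary in self.verified_ids
              and region == self.policy_region
              and beneficiary not in self.paid)
        if ok:
            self.paid.add(beneficiary)
        self.ledger.append((beneficiary, region, ok))
        return ok

c = ReliefContract(policy_region="flood-zone-7", payout=500.0)
c.register_identity("victim-001")
print(c.claim("victim-001", "flood-zone-7"), c.claim("victim-001", "flood-zone-7"))  # True False
```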

