Impact of reliability data quality on risk-based decision making

Author(s):  
S Isaksen ◽  
C Lundtofte


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377
Author(s):  
Wei Jun Wen

ETL refers to the process of extracting, transforming, and loading data, and is a critical step in ensuring the quality, specification, and standardization of marine environmental data. Because of their complexity, field diversity, and huge volume, marine data remain decentralized, polyphyletic, and heterogeneous, with differing semantics, and are therefore far from providing effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading, and periodic updating of basic marine data. This paper presents research on rules for cleaning, transforming, and integrating marine data, on the basis of which an original ETL system for a marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision-making based on marine environmental data.
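The clean–transform–load pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's actual system: the field names, the unit-conversion rule, and the rejection rule are invented assumptions.

```python
# Minimal ETL sketch: clean, transform, and load heterogeneous marine records
# into a uniform schema. Field names and rules are illustrative assumptions.

def extract(records):
    """Yield raw records from heterogeneous sources (here, plain dicts)."""
    yield from records

def transform(record):
    """Apply cleaning and standardization rules; return None to reject."""
    temp = record.get("temp") or record.get("temperature_f")
    if temp is None:                      # cleaning rule: drop incomplete rows
        return None
    if "temperature_f" in record:         # transformation rule: unify units
        temp = (float(temp) - 32) * 5 / 9
    return {
        "station": str(record.get("station", "")).strip().upper(),
        "temp_c": round(float(temp), 2),
    }

def load(records, warehouse):
    """Append standardized rows to the warehouse (a list stand-in)."""
    for raw in extract(records):
        row = transform(raw)
        if row is not None:
            warehouse.append(row)

warehouse = []
load([{"station": " st01 ", "temp": "12.5"},
      {"station": "ST02", "temperature_f": 59.0},
      {"station": "ST03"}],              # incomplete record -> rejected
     warehouse)
```

A real system would replace the list stand-in with periodic batch loads into the warehouse, as the abstract describes.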


Author(s):  
Suranga C. H. Geekiyanage ◽  
Dan Sui ◽  
Bernt S. Aadnoy

Drilling industry operations depend heavily on digital information. Data analysis is the process of acquiring, transforming, interpreting, modelling, displaying, and storing data with the aim of extracting useful information, so that a system's decision-making, action execution, event detection, and incident management can be handled efficiently and reliably. This paper aims to provide an approach to understanding, cleansing, improving, and interpreting post-well or real-time data to preserve or enhance data features such as accuracy, consistency, reliability, and validity. Data quality management is a process with three major phases. Phase I is a pre-data-quality evaluation that identifies data issues such as missing or incomplete data, non-standard or invalid data, and redundant data. Phase II is the implementation of data quality management practices such as filtering, data assimilation, and data reconciliation to improve data accuracy and discover useful information. Phase III is a post-data-quality evaluation, conducted to assure data quality and enhance system performance. In this study, a laboratory-scale drilling rig with a control system capable of drilling is used for data acquisition and quality improvement. Safe and efficient performance of such a control system relies heavily on the quality and sufficient availability of the data obtained while drilling. Pump pressure, top-drive rotational speed, weight on bit, drill-string torque, and bit depth are the available measurements. The data analysis is challenged by issues such as data corruption due to noise, time delays, missing or incomplete data, and external disturbances. To address these issues, different data quality improvement practices are applied and tested. These techniques help the intelligent system achieve better decision-making and quicker fault detection.
The study on the laboratory-scale drilling rig clearly demonstrates the need for a proper data quality management process and a clear understanding of signal-processing methods to carry out intelligent digitalization in the oil and gas industry.
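As a minimal illustration of the Phase II practices the abstract names (not the authors' implementation), the sketch below fills a missing sample by linear interpolation and suppresses a noise spike with a median filter; the pump-pressure values are invented.

```python
# Sketch of two Phase-II practices: gap filling for missing samples and
# median filtering for noise spikes on a drilling channel (made-up values).
import statistics

def fill_gaps(signal):
    """Linearly interpolate None entries (missing/incomplete data)."""
    out = list(signal)
    for i, v in enumerate(out):
        if v is None:
            j = next(j for j in range(i + 1, len(out)) if out[j] is not None)
            out[i] = out[i - 1] + (out[j] - out[i - 1]) / (j - i + 1)
    return out

def median_filter(signal, k=3):
    """Replace each sample with the median of a k-sample window (spike removal)."""
    half = k // 2
    padded = [signal[0]] * half + signal + [signal[-1]] * half
    return [statistics.median(padded[i:i + k]) for i in range(len(signal))]

pressure = [100.0, 101.0, 250.0, 102.0, None, 104.0]  # spike at idx 2, gap at 4
clean = median_filter(fill_gaps(pressure))
```

More sophisticated channels would use model-based reconciliation, but the filter/fill pattern above captures the basic idea of improving accuracy before the data reaches the control system.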


Author(s):  
Tom August ◽  
J Terry ◽  
David Roy

The rapid rise of Artificial Intelligence (AI) methods has presented new opportunities for those who work with biodiversity data. Computer vision, in particular where computers can be trained to identify species in digital photographs, has significant potential to address a number of existing challenges in citizen science. The Biological Records Centre (www.brc.ac.uk) has been a central focus for terrestrial and freshwater citizen science in the United Kingdom for over 50 years. We will present our research on how computer vision can be embedded in citizen science, addressing three important questions. How can contextual information, such as time of year, be included in computer vision? A naturalist will use a wealth of ecological knowledge about species in combination with information about where and when the image was taken to augment their decision making; we should emulate this in our AI. How can citizen scientists be best supported by computer vision? Our ambition is not to replace identification skills with AI but to use AI to support the learning process. How can computer vision support our limited resource of expert verifiers as data volumes increase? We receive more and more data each year, which puts a greater demand on our expert verifiers, who review all records to ensure data quality. We have been exploring how computer vision can lighten this workload.
We will present work that addresses these questions, including: developing machine learning techniques that incorporate ecological information as well as images to arrive at a species classification; co-designing an identification tool to help farmers identify flowers beneficial to wildlife; and assessing the optimal combination of computer vision and expert verification to improve our verification systems.
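One simple way to emulate the naturalist's use of context, sketched below, is to treat phenology as a prior and combine it with the classifier's image scores via Bayes' rule. This is an illustration, not the BRC's actual method, and all the numbers are invented.

```python
# Illustrative sketch: combine a classifier's image scores with a seasonal
# prior via Bayes' rule, so a species rarely recorded in a given month is
# down-weighted. Species names and probabilities are invented.

def combine(image_scores, month_prior):
    """Posterior over species: classifier likelihood x phenological prior."""
    post = {sp: image_scores[sp] * month_prior[sp] for sp in image_scores}
    total = sum(post.values())
    return {sp: p / total for sp, p in post.items()}

# Two look-alike species: the image alone slightly favours species A, but
# species B is far more likely to be recorded in this month.
scores = {"species_a": 0.55, "species_b": 0.45}
prior  = {"species_a": 0.10, "species_b": 0.90}
posterior = combine(scores, prior)
```

In this toy case the contextual prior flips the decision toward species B, which is the behaviour a naturalist's ecological knowledge would produce.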


2010 ◽  
Vol 108-111 ◽  
pp. 972-978
Author(s):  
Ying Su ◽  
Jing Hua Huang ◽  
Latif Al-Hakim

Purpose – Only limited attention has been paid to the issue of Measurement Data Quality (MDQ) in a metrology context. To address this gap in the literature, a methodology to assure MDQ is proposed. Methodology – The study proposes a four-step methodology: (1) identify the importance of a measurement (identification); (2) determine its accuracy and precision (determination); (3) evaluate the criticality of the measurement with respect to its impact on the final result (evaluation); and (4) record the facts that influenced the decision-making process (documentation). Findings – When followed and properly documented, these four steps help ensure that measurements are valid and worthwhile. Identifying the important measurements, determining the level of accuracy required, and then using the proper tools to make the measurements will yield valid, useful results.
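Step (2) of the methodology, determination of accuracy and precision, can be sketched numerically: accuracy as the systematic offset of repeated readings from a reference value, and precision as their spread. The readings and reference value below are invented for illustration.

```python
# Sketch of step (2), determination: quantify accuracy (closeness to a
# reference value) and precision (repeatability) from repeat measurements.
import statistics

readings = [10.02, 9.98, 10.01, 10.03, 9.99]   # repeated measurements
reference = 10.00                               # known reference standard

accuracy_error = statistics.mean(readings) - reference   # systematic bias
precision = statistics.stdev(readings)                   # spread of repeats
```

Steps (3) and (4) would then compare these figures against the tolerance the final result demands, and record that comparison.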


2020 ◽  
Vol 12 (17) ◽  
pp. 6762
Author(s):  
Young Hyeo Joo

This study investigates the Korean Educational Information Disclosure System (KEIDS) and suggests sustainable development policies for the KEIDS to improve school-level data-based decision-making (DBDM) from the educational administration's perspective. It raises the following questions: What barriers impede effective data use through the KEIDS? How do school teachers, who are directly involved in using data, effectively prepare for DBDM using the KEIDS? How can the KEIDS be improved for DBDM with respect to quality data, school context, and institutional support? To answer these questions, the study reviewed KEIDS-related documents and interviewed 24 school teachers through an interpretive case-study approach, using a research framework of data quality, school contexts, and institutional support. The results highlight important issues with the KEIDS and sustainable DBDM: teachers and administrators are not always conscious of the need to use data; a lack of understanding of data use creates issues with principal leadership and with teachers' involvement and cooperation; the quality of the student data in the Schoolinfo system is questionable; and the central education authority focuses on simply disclosing student data rather than pursuing the goal of the KEIDS. The study suggests facilitating DBDM through the KEIDS in terms of data quality, school context, and institutional support.


2020 ◽  
pp. 1-15
Author(s):  
Chi Wai Yu ◽  
Y. Jane Zhang ◽  
Sharon Xuejing Zuo

A substantial proportion of individuals who complete the widely used multiple price list (MPL) instrument switch back and forth between the safe and the risky choice columns, behavior that is believed to indicate low-quality decision making. We develop a conceptual framework to formally define decision-making quality, test explanations for the nature of low-quality decision making, and introduce a novel "nudge" treatment that reduced multiple switching behavior and increased decision-making quality. We find evidence in support of task-specific miscomprehension of the MPL, and that non-multiple switchers and relatively high-cognitive-ability individuals are not immune to low-quality decision making.
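The multiple-switching behaviour the abstract describes is easy to flag mechanically: a consistent subject should switch from the safe to the risky column at most once as the risky option improves down the list. The sketch below assumes a simple 'S'/'R' encoding of choices, which is an illustrative convention rather than the paper's.

```python
# Sketch: flag multiple switching behaviour (MSB) in an MPL response.
# A consistent subject crosses from safe to risky at most once.

def count_switches(choices):
    """choices: list of 'S' (safe) / 'R' (risky), one per MPL row."""
    return sum(1 for a, b in zip(choices, choices[1:]) if a != b)

def is_multiple_switcher(choices):
    return count_switches(choices) > 1

consistent = list("SSSRRR")     # single switch point
inconsistent = list("SRSRRS")   # switches back and forth
```

The paper's point is subtler than this flag: even responses that pass such a check can still reflect low-quality decision making.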


2020 ◽  
Author(s):  
Christiana Photiadou ◽  
Lorna Little ◽  
Peter Berg ◽  
Rafael Pimentel ◽  
Maria Jose Polo ◽  
...  

AQUACLEW (Advancing Data Quality for European Water Services) is an ERA4CS project with the overall goal of improving the quality of climate services. The project brings together nine European organisations with different experience and expertise in developing climate services, providing data, and collaborating with users. It aims to investigate how to increase user uptake in a broad community, using general information from a web interface as well as tailored, user-specific decision support in seven case studies across Europe. Additionally, we track our 'climate friendliness' throughout the project.

AQUACLEW uses innovative research techniques and integrated co-development with users to advance the quality and usability of climate services for a number of water-related sectors. We pose the following research questions: (1) How do we improve co-development to better incorporate multiple rounds of user feedback along the entire climate-service production chain, from research to production, service use, and decision making? (2) How should data, quality-assurance metrics, and guidance be tailored along the whole data-production chain to better meet user requirements, including resolution and precision?

Firstly, initial results show that the iterative approach between data providers and users demands confidence building through active engagement, with experts involved in exploring different pathways of action for users to interact with climate services and integrate climate projections into their practice. To facilitate this interaction, a number of online activities were designed: a guided tour of the climate service, feedback loops, and game-like activities included in the meetings with focus groups.

Secondly, we investigated how data, quality-assurance metrics, and guidance could be tailored along the whole data-production chain to better meet user requirements, through three experiments following different protocols. Protocols were developed for differentiated split-sample testing in hydrological models, for bias-adjustment methods, and for an expert elicitation. All three protocols were applied across four of the seven case studies that had common factors, to test the improvements in data production. The protocols had a strong impact through improved data quality in impact assessments for climate-change adaptation in water management, so decision-making can be better supported.

Lastly, we found preliminarily that 'climate friendly' efforts have provoked regular discussions within the consortium, suggestions for new ways to be climate friendly, and challenges to travel by train and to find online solutions.
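One method class evaluated under these protocols is bias adjustment of climate-model output. A common example, sketched below, is empirical quantile mapping, which maps a model value to the observed value at the same empirical quantile over a calibration period. This is a generic illustration, not the project's specific implementation, and the series are invented.

```python
# Sketch of empirical quantile mapping: adjust a climate-model value so the
# adjusted series matches the observed distribution over a calibration
# period. Calibration series here are invented toy data.

def quantile_map(model_cal, obs_cal, value):
    """Map `value` to the observed value at the same empirical quantile."""
    m_sorted, o_sorted = sorted(model_cal), sorted(obs_cal)
    rank = sum(1 for m in m_sorted if m <= value)   # rank within model series
    idx = min(rank, len(o_sorted)) - 1
    return o_sorted[max(idx, 0)]

model_cal = [1.0, 2.0, 3.0, 4.0, 5.0]   # model over calibration period
obs_cal   = [2.0, 3.0, 4.0, 5.0, 6.0]   # observations, same period
adjusted = quantile_map(model_cal, obs_cal, 3.0)   # model median -> obs median
```

Split-sample testing then checks whether such an adjustment, calibrated on one period, still improves data quality on an independent period.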


2011 ◽  
Vol 12 (4) ◽  
pp. 323-346 ◽  
Author(s):  
Rosanne Price ◽  
Graeme Shanks