Data Quality, Data Quantity, and Its Effect on an Applied Stock Assessment of Cisco in Thunder Bay, Ontario

2020 ◽  
Vol 40 (2) ◽  
pp. 368-382
Author(s):  
Nicholas C. Fisch ◽  
James R. Bence
2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product Approach (IP Approach) is an information management approach that can be used to manage product information and to analyze data quality. Organizations can use an IP-Map to facilitate the management of knowledge in collecting, storing, maintaining, and using data in an organized way. The data management process for academic activities at X University has not yet used the IP Approach, and the university has paid little attention to the quality of its information. To date, X University has concerned itself only with the system applications used to automate data management in its academic processes. The IP-Map constructed in this paper can serve as a basis for analyzing the quality of data and information. Through the IP-Map, X University is expected to learn which parts of the process need improvement in data and information quality management. Index terms: IP Approach, IP-Map, information quality, data quality.
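As a companion to the abstract above, here is a minimal sketch of how an IP-Map can be modeled as a directed graph of source, process, and sink (information product) blocks so that every source-to-product path can be audited. The block types follow the IP-Map idea in spirit; the academic-data node names are hypothetical, not taken from the paper.

```python
# A minimal IP-Map sketch: blocks form a directed graph from data sources
# to information products. Node names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class IPBlock:
    name: str
    kind: str  # "source", "process", or "sink" (information product)
    outputs: list = field(default_factory=list)

def trace(block, path=None):
    """Enumerate every source-to-product path so each handoff can be audited."""
    path = (path or []) + [block.name]
    if not block.outputs:
        yield path
    for nxt in block.outputs:
        yield from trace(nxt, path)

# Hypothetical academic-data flow: registrar entry -> validation -> transcript
transcript = IPBlock("student transcript", "sink")
validation = IPBlock("grade validation", "process", [transcript])
registrar = IPBlock("registrar data entry", "source", [validation])

for p in trace(registrar):
    print(" -> ".join(p))  # registrar data entry -> grade validation -> student transcript
```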


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377 ◽  
Author(s):  
Wei Jun Wen

ETL refers to the process of extracting, transforming, and loading data, and it is a critical step in ensuring the quality, specification, and standardization of marine environmental data. Marine data, because of their complexity, field diversity, and huge volume, remain decentralized, polyphyletic, and heterogeneous, with differing semantics, and are therefore far from able to provide effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading, and periodic updating of basic marine data. This paper presents research on rules for cleaning, transforming, and integrating marine data, on the basis of which an original ETL system for a marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision-making based on marine environmental data.
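To make the clean-transform-load pipeline concrete, here is a minimal sketch in the spirit of the abstract. The record fields, plausibility ranges, and unit conversion are illustrative assumptions; the paper's actual cleaning and transformation rules are not reproduced here.

```python
# A minimal ETL sketch: range-check cleaning, unit-standardizing transformation,
# and a load step. Field names and thresholds are hypothetical assumptions.
raw_records = [
    {"station": "A1", "sst_f": 68.0, "salinity_psu": 35.1},   # Fahrenheit source
    {"station": "A2", "sst_f": 999.9, "salinity_psu": 34.8},  # sentinel/bad value
]

def clean(rec):
    # Cleaning rule: drop records whose values fall outside plausible ranges.
    sst_c = (rec["sst_f"] - 32) * 5 / 9
    return -4.0 <= sst_c <= 40.0 and 0.0 <= rec["salinity_psu"] <= 42.0

def transform(rec):
    # Transformation rule: standardize temperature to Celsius for the warehouse.
    return {"station": rec["station"],
            "sst_c": round((rec["sst_f"] - 32) * 5 / 9, 2),
            "salinity_psu": rec["salinity_psu"]}

warehouse = [transform(r) for r in raw_records if clean(r)]  # load step
print(warehouse)  # only station A1 survives cleaning
```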


2021 ◽  
Author(s):  
Victoria Leong ◽  
Kausar Raheel ◽  
Sim Jia Yi ◽  
Kriti Kacker ◽  
Vasilis M. Karlaftis ◽  
...  

Background. The global COVID-19 pandemic has triggered a fundamental reexamination of how human psychological research can be conducted both safely and robustly in a new era of digital working and physical distancing. Online web-based testing has risen to the fore as a promising solution for the rapid mass collection of cognitive data without requiring human contact. However, a long-standing debate exists over the data quality and validity of web-based studies. Here, we examine the opportunities and challenges afforded by the societal shift toward web-based testing, highlight an urgent need to establish a standard data quality assurance framework for online studies, and develop and validate a new supervised online testing methodology, remote guided testing (RGT). Methods. A total of 85 healthy young adults were tested on 10 cognitive tasks assessing executive functioning (flexibility, memory, and inhibition) and learning. Tasks were administered either face-to-face in the laboratory (N=41) or online using remote guided testing (N=44), delivered using identical web-based platforms (CANTAB, Inquisit, and i-ABC). Data quality was assessed using detailed trial-level measures (missed trials, outlying and excluded responses, response times) as well as overall task performance measures. Results. Across all measures of data quality and performance, RGT data were statistically equivalent to data collected in person in the lab. Moreover, RGT participants outperformed the lab group on measured verbal intelligence, which could reflect test-environment differences, including possible effects of mask-wearing on communication. Conclusions. These data suggest that the RGT methodology could help to ameliorate concerns regarding online data quality and, particularly for studies involving high-risk or rare cohorts, offer an alternative for collecting high-quality human cognitive data without requiring in-person physical attendance.
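One standard way to demonstrate the kind of statistical equivalence the abstract reports is the two one-sided tests (TOST) procedure. The sketch below is an assumption about method, not the authors' documented analysis, and uses simulated response-time data with an illustrative equivalence margin.

```python
# A TOST equivalence-test sketch on simulated lab vs. RGT data quality metrics.
# The data, metric, and margin are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lab = rng.normal(0.52, 0.08, 41)   # simulated mean response times (s), lab group
rgt = rng.normal(0.53, 0.08, 44)   # simulated mean response times (s), RGT group

def tost(a, b, margin):
    """Two one-sided tests: if both reject, groups are equivalent within +/- margin."""
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    df = len(a) + len(b) - 2
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)

print(f"TOST p = {tost(lab, rgt, margin=0.05):.3f}")  # p < .05 suggests equivalence
```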


Stroke ◽  
2016 ◽  
Vol 47 (suppl_1) ◽  
Author(s):  
Elizabeth Linkewich ◽  
Janine Theben ◽  
Amy Maebrae-Waller ◽  
Shelley Huffman ◽  
Jenn Fearn ◽  
...  

Background and Issues: The collection and reporting of Rehabilitation Intensity (RI) in a national rehabilitation database was mandated on April 1, 2015 for all stroke patients within Ontario, to support evaluation of stroke best practice implementation. RI captures the minutes of direct, task-specific therapy a patient receives per day. This requires a shift in thinking from capturing the clinician's time spent in therapy to capturing the patient perspective. To ensure that high-quality data are collected, it was important to understand clinicians' experiences in collecting RI data. Purpose: To identify enablers of and barriers to RI data collection in order to inform the development of resources to support clinicians. Methods: A 12-item electronic survey was developed by an Ontario Stroke Network (OSN) task group to evaluate the clinician experience of RI data collection (covering demographics, barriers, enablers, education, resources, and practice change). The survey was distributed via SurveyMonkey® to clinicians from 48 hospitals, 3 weeks after implementation of RI data collection. Analyses involved descriptive statistics and thematic analysis. Results: Three hundred and twenty-one clinicians from 47 hospitals responded to the survey. Survey results suggest RI data collection is feasible; seventy-one percent of clinicians report it takes 10 minutes or less to enter RI data. Thematic analysis identified 5 common challenges, the most frequently reported relating to data quality (30%, N=358), and 6 common enablers, the most frequently reported relating to the ease of collecting RI data through workload measurement systems (50%, N=46). Suggestions for educational resources included tools for identifying what is included in RI and the provision of education (e.g., webinars). Conclusions: RI data collection is feasible for clinicians. Education and resources developed should address the key challenges and enablers identified by clinicians, to enhance data quality and the consistency of RI collection. As RI data fields are available through a national rehabilitation database, this work sets the foundation for other provinces interested in the systematic collection and reporting of RI data.
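For illustration, the frequency side of a thematic analysis like the one above reduces to tallying coded responses. This small sketch uses hypothetical theme labels; the study's actual themes and counts are those reported in the abstract.

```python
# Tally coded survey responses by theme and report percentages.
# Theme labels here are illustrative assumptions, not the OSN codebook.
from collections import Counter

coded_responses = ["data quality", "workload system", "data quality",
                   "education", "workload system", "data quality"]

counts = Counter(coded_responses)
total = len(coded_responses)
for theme, n in counts.most_common():
    print(f"{theme}: {n} ({100 * n / total:.0f}%)")
```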


2016 ◽  
Author(s):  
Alfred Enyekwe ◽  
Osahon Urubusi ◽  
Raufu Yekini ◽  
Iorkam Azoom ◽  
Oloruntoba Isehunwa

ABSTRACT Significant emphasis on data quality is placed on real-time drilling data for the optimization of drilling operations, and on logging data for quality lithological and petrophysical description of a field. This is evidenced by the huge sums spent on real-time MWD/LWD tools, broadband services, wireline logging tools, etc. However, much more needs to be done to harness quality data for future workover and/or abandonment operations, where the data being relied on may have been entered decades earlier and where costs and time are critically linked to already known and certified information. In some cases, the data relied on have been migrated across different data management platforms, during which relevant data may have been lost, misinterpreted, or misplaced. Another common cause of bad data is improperly documented well intervention operations performed in so short a time that there was no pressure to document the operation properly. This leads to confusion over simple issues such as the depth at which a plug was set or what junk was left in the hole. The relative lack of emphasis on this type of data quality has led to high costs for workover and abandonment operations; in some cases, well control incidents and process safety incidents have arisen. This paper looks at over 20 workover operations carried out over a span of 10 years. An analysis is done on each well's original timeline of operation. The data management system is analyzed, and the issues experienced during the workover operations are categorized. Bottlenecks in data management are defined, and the solutions currently being implemented to manage these problems are listed as recommended good practices.
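A simple completeness audit is one way to catch the documentation gaps the abstract describes (an unrecorded plug depth, junk left in hole). The sketch below assumes hypothetical record fields; it is not the paper's data management system.

```python
# A completeness check over well intervention records.
# Field names and example records are illustrative assumptions.
REQUIRED_FIELDS = ["well_id", "operation_date", "plug_depth_ft", "junk_left_in_hole"]

interventions = [
    {"well_id": "W-01", "operation_date": "2008-03-14",
     "plug_depth_ft": 9850, "junk_left_in_hole": "none"},
    {"well_id": "W-02", "operation_date": "2011-07-02",
     "plug_depth_ft": None, "junk_left_in_hole": "none"},  # undocumented plug depth
]

def audit(record):
    """Return the required fields that are missing or unset in a record."""
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]

for rec in interventions:
    gaps = audit(rec)
    if gaps:
        print(f"{rec['well_id']}: incomplete record, missing {gaps}")
```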


2021 ◽  
Author(s):  
Adisu Tafari Shama ◽  
Hirbo Shore Roba ◽  
Admas Abera ◽  
Negga Baraki

Abstract Background: Despite improvements in knowledge and understanding of the role of health information in the global health system, the quality of data generated by routine health information systems is still very poor in low- and middle-income countries. There is a paucity of studies on what determines data quality in health facilities in the study area. Therefore, this study aimed to assess the quality of routine health information system data and associated factors in public health facilities of the Harari region, Ethiopia. Methods: A cross-sectional study was conducted in all public health facilities in the Harari region of Ethiopia. Department-level data were collected from the respective department heads through document reviews, interviews, and observation checklists. Descriptive statistics were used to summarize data quality, and multivariate logistic regression was run to identify factors influencing data quality. The level of significance was declared at a P-value <0.05. Results: The study found good quality data in 51.35% (95% CI, 44.6-58.1) of the departments in public health facilities in the Harari region. Departments in health centers were 2.5 times more likely to have good quality data than departments in health posts. The presence of trained staff able to fill in reporting formats (AOR=2.474; 95% CI: 1.124-5.445) and the provision of feedback (AOR=3.083; 95% CI: 1.549-6.135) were also significantly associated with data quality. Conclusions: The level of good data quality in the public health facilities was below the expected national level. Training should be provided to increase the knowledge and skills of health workers.
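Adjusted odds ratios (AOR) with 95% confidence intervals, like those reported above, come from exponentiating logistic regression coefficients. Here is a minimal sketch on simulated data; the predictors and coefficients are illustrative assumptions, not the study's dataset.

```python
# Multivariate logistic regression yielding AORs and 95% CIs on simulated data.
# Variable names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "health_center": rng.integers(0, 2, n),  # 1 = health center, 0 = health post
    "trained_staff": rng.integers(0, 2, n),  # 1 = trained staff can fill in formats
    "feedback": rng.integers(0, 2, n),       # 1 = feedback provided
})
# Simulate the outcome from a known logistic model.
logit = 0.9 * df.health_center + 0.9 * df.trained_staff + 1.1 * df.feedback - 1.5
df["good_quality"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["health_center", "trained_staff", "feedback"]])
model = sm.Logit(df["good_quality"], X).fit(disp=0)
print(np.exp(model.params))      # adjusted odds ratios (AOR)
print(np.exp(model.conf_int()))  # 95% confidence intervals for the AORs
```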


AI Magazine ◽  
2010 ◽  
Vol 31 (1) ◽  
pp. 65 ◽  
Author(s):  
Clint R. Bidlack ◽  
Michael P Wellman

Recent advances in enterprise web-based software have created a need for sophisticated yet user-friendly data quality solutions. A new category of data quality solutions is discussed that fills this need using intelligent matching and retrieval algorithms. The solutions focus on customer and sales data and include real-time inexact search, batch processing, and data migration. Users are empowered to maintain higher-quality data, resulting in more efficient sales and marketing operations. Sales managers spend more time with customers and less time managing data.
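The abstract does not specify the matching algorithms, so as a stand-in, here is a sketch of inexact customer-record search using Python's standard-library difflib; the customer names and similarity threshold are illustrative assumptions.

```python
# Inexact (fuzzy) matching of customer records via sequence similarity.
# Example records and the 0.6 threshold are illustrative assumptions.
from difflib import SequenceMatcher

customers = ["Acme Corporation", "ACME Corp.", "Apex Industries"]

def best_match(query, candidates, threshold=0.6):
    """Return the closest record above a similarity threshold, else None."""
    scored = [(SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return (match, round(score, 2)) if score >= threshold else (None, round(score, 2))

print(best_match("acme corp", customers))  # matches 'ACME Corp.' despite case/punctuation
```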

