Making citizen science newsworthy in the era of big data

2017 ◽  
Vol 16 (02) ◽  
pp. C05
Author(s):  
Stuart Allan ◽  
Joanna Redden

This article examines certain guiding tenets of science journalism in the era of big data by focusing on its engagement with citizen science. Having placed citizen science in historical context, it highlights early interventions intended to help establish the basis for an alternative epistemological ethos recognising the scientist as citizen and the citizen as scientist. Next, the article assesses further implications for science journalism by examining the challenges posed by big data in the realm of citizen science. Pertinent issues include potential risks associated with data quality, access dynamics, the difficulty of investigating algorithms, and concerns about certain constraints impacting transparency and accountability.

AMBIO ◽  
2015 ◽  
Vol 44 (S4) ◽  
pp. 601-611 ◽  
Author(s):  
Steve Kelling ◽  
Daniel Fink ◽  
Frank A. La Sorte ◽  
Alison Johnston ◽  
Nicholas E. Bruns ◽  
...  

Author(s):  
Christopher D O’Connor ◽  
John Ng ◽  
Dallas Hill ◽  
Tyler Frederick

Policing is increasingly being shaped by data collection and analysis. However, we still know little about the quality of the data police services acquire and utilize. Drawing on a survey of analysts from across Canada, this article examines several data collection, analysis, and quality issues. We argue that, as we move towards an era of big data policing, it is imperative that police services pay more attention to the quality of the data they collect. We conclude by discussing the implications of ignoring data quality issues and the need to develop a more robust research culture in policing.


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 875
Author(s):  
Jesus Cerquides ◽  
Mehmet Oğuz Mülâyim ◽  
Jerónimo Hernández-González ◽  
Amudha Ravi Shankar ◽  
Jose Luis Fernandez-Marquez

Over the last decade, hundreds of thousands of volunteers have contributed to science by collecting or analyzing data. This public participation in science, also known as citizen science, has contributed to significant discoveries and led to publications in major scientific journals. However, little attention has been paid to data quality issues. In this work, we argue that being able to determine the accuracy of data obtained by crowdsourcing is a fundamental question, and we point out that, for many real-life scenarios, mathematical tools and processes for the evaluation of data quality are missing. We propose a probabilistic methodology for the evaluation of the accuracy of labeling data obtained by crowdsourcing in citizen science. The methodology builds on an abstract probabilistic graphical model formalism, which is shown to generalize some already existing label aggregation models. We show how to make practical use of the methodology through a comparison of data obtained from different citizen science communities analyzing the 2019 Albania earthquake.
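The abstract above refers to label aggregation models that the authors' probabilistic formalism generalizes. As a point of reference only (this is not the paper's model), the classic Dawid-Skene EM algorithm is one such aggregation baseline: it jointly estimates each volunteer's confusion matrix and a posterior over each item's true label. A minimal sketch, assuming annotations arrive as (item, worker, label) triples:

```python
from collections import defaultdict
import numpy as np

def dawid_skene(annotations, n_labels, n_iter=20):
    """Aggregate crowdsourced labels with Dawid-Skene-style EM.

    annotations: list of (item_id, worker_id, label) triples with
    labels in range(n_labels). Returns {item_id: posterior array}.
    """
    items = sorted({i for i, _, _ in annotations})
    workers = sorted({w for _, w, _ in annotations})
    i_idx = {i: k for k, i in enumerate(items)}
    w_idx = {w: k for k, w in enumerate(workers)}
    n_i, n_w = len(items), len(workers)

    # counts[i, w, l] = how often worker w gave label l to item i
    counts = np.zeros((n_i, n_w, n_labels))
    for i, w, l in annotations:
        counts[i_idx[i], w_idx[w], l] += 1

    # Initialize posteriors with a soft majority vote
    post = counts.sum(axis=1)
    post = post / post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-worker confusion matrices
        prior = post.mean(axis=0)
        conf = np.einsum('it,iwl->wtl', post, counts)  # conf[w, true, observed]
        conf = conf / np.clip(conf.sum(axis=2, keepdims=True), 1e-9, None)

        # E-step: posterior over true labels given all worker responses
        log_post = np.log(prior + 1e-12) + np.einsum(
            'iwl,wtl->it', counts, np.log(conf + 1e-12))
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

    return {item: post[i_idx[item]] for item in items}
```

Compared with plain majority voting, this reweights each volunteer by their estimated reliability, which is the property that graphical-model formalisms like the one described above build on and generalize.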


2021 ◽  
Vol 444 ◽  
pp. 109453
Author(s):  
Camille Van Eupen ◽  
Dirk Maes ◽  
Marc Herremans ◽  
Kristijn R.R. Swinnen ◽  
Ben Somers ◽  
...  

2021 ◽  
Vol 10 (4) ◽  
pp. 207
Author(s):  
Annie Gray ◽  
Colin Robertson ◽  
Rob Feick

Citizen science initiatives span a wide range of topics, designs, and research needs. Despite this heterogeneity, there are several common barriers to the uptake and sustainability of citizen science projects and the information they generate. One key barrier often cited in the citizen science literature is data quality. Open-source tools for the analysis, visualization, and reporting of citizen science data hold promise for addressing the challenge of data quality, while providing other benefits such as technical capacity-building, increased user engagement, and reinforced data sovereignty. We developed an operational citizen science tool called the Community Water Data Analysis Tool (CWDAT), an R/Shiny-based web application designed for community-based water quality monitoring. Surveys and facilitated user engagement were conducted among stakeholders during the development of CWDAT. Targeted recruitment was used to gather feedback on the initial CWDAT prototype's interface, features, and potential to support capacity building in the context of community-based water quality monitoring. Fourteen of thirty-two invited individuals (response rate 44%) contributed feedback via a survey or through facilitated interaction with CWDAT, with eight individuals interacting directly with CWDAT. Overall, CWDAT was received favourably. Participants requested updates and modifications, such as water quality thresholds and indices, that reflected well-known barriers to citizen science initiatives related to data quality assurance and the generation of actionable information. Our findings support calls to engage end-users directly in citizen science tool design and highlight how design can contribute to users' understanding of data quality. Enhanced citizen participation in water resource stewardship facilitated by tools such as CWDAT may foster greater community engagement with, and acceptance of, water resource management and policy-making.


2016 ◽  
Vol 30 (3) ◽  
pp. 447-449 ◽  
Author(s):  
Roman Lukyanenko ◽  
Jeffrey Parsons ◽  
Yolanda F. Wiersma

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Ikbal Taleb ◽  
Mohamed Adel Serhani ◽  
Chafik Bouhaddioui ◽  
Rachida Dssouli

Big Data is an essential research area for governments, institutions, and private agencies seeking to support their analytics decisions. Big Data concerns all aspects of data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences, as confidence in the data and its sources is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a costly and time-consuming process, since excessive computing resources are required. Maintaining quality through the Big Data lifecycle requires quality profiling and verification before any processing decision. We propose a BDQ Management Framework that enhances pre-processing activities while strengthening data control. The framework uses a new concept, the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's Big Data profiling and sampling components, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The exploratory profiling component plays an initial role in quality profiling: it uses a set of predefined quality metrics to evaluate important data quality dimensions, and it generates quality rules by applying various pre-processing activities and their related functions. These rules feed into the Data Quality Profile and yield quality scores for the selected quality attributes.
The paper discusses the framework implementation and dataflow management across the various quality management processes, and concludes with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
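The profiling step described above (predefined metrics scoring quality dimensions on a sample, then emitting pre-processing rules) can be illustrated in miniature. This is a sketch only, not the paper's framework: the dimension names, the 0.9 threshold, the head-of-list sampling, and the rule labels are all illustrative assumptions.

```python
def quality_profile(records, schema, sample_size=1000):
    """Score a sample of records on basic quality dimensions.

    records: list of dicts; schema: {field: validator callable}.
    Dimensions, threshold (0.9), and rule names are illustrative.
    """
    sample = records[:sample_size]  # naive head sample, for the sketch only
    n = len(sample)
    profile = {"sample_size": n, "dimensions": {}, "rules": []}

    for field, is_valid in schema.items():
        present = [r[field] for r in sample if r.get(field) is not None]
        completeness = len(present) / n if n else 0.0
        validity = (sum(1 for v in present if is_valid(v)) / len(present)
                    if present else 0.0)
        uniqueness = len(set(present)) / len(present) if present else 0.0
        profile["dimensions"][field] = {
            "completeness": round(completeness, 3),
            "validity": round(validity, 3),
            "uniqueness": round(uniqueness, 3),
        }
        # Emit a pre-processing rule when a score falls below threshold
        if completeness < 0.9:
            profile["rules"].append(f"impute_or_drop:{field}")
        if validity < 0.9:
            profile["rules"].append(f"cleanse:{field}")
    return profile
```

Running such a profiler both before and after an intermediate cleaning pass, as the abstract describes, lets the pipeline decide whether further pre-processing is worth its computational cost.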

