Assessing data quality in citizen science (preprint)

2016 ◽  
Author(s):  
Margaret Kosmala ◽  
Andrea Wiggins ◽  
Alexandra Swanson ◽  
Brooke Simmons

Abstract. Ecological and environmental citizen science projects have enormous potential to advance science, influence policy, and guide resource management by producing datasets that are otherwise infeasible to generate. This potential can only be realized, though, if the datasets are of high quality. While scientists are often skeptical of the ability of unpaid volunteers to produce accurate datasets, a growing body of publications clearly shows that diverse types of citizen science projects can produce data with accuracy equal to or surpassing that of professionals. Successful projects rely on a suite of methods to boost data accuracy and account for bias, including iterative project development, volunteer training and testing, expert validation, replication across volunteers, and statistical modeling of systematic error. Each citizen science dataset should therefore be judged individually, according to project design and application, rather than assumed to be substandard simply because volunteers generated it.
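One of the accuracy-boosting methods named above, replication across volunteers, is often implemented as consensus aggregation. A minimal illustrative sketch (not the authors' implementation; the function and labels are hypothetical) in Python:

```python
from collections import Counter

def aggregate_labels(classifications):
    """Majority-vote consensus for one record classified by several volunteers.

    Returns the winning label and the agreement fraction, which can serve
    as a simple per-record confidence score for downstream filtering.
    """
    counts = Counter(classifications)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(classifications)

# Hypothetical example: five volunteers classify the same camera-trap image.
label, agreement = aggregate_labels(["zebra", "zebra", "wildebeest", "zebra", "zebra"])
# label is "zebra"; agreement is 0.8
```

Records with low agreement can then be routed to expert validation, combining two of the listed methods.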

Author(s):  
Andrea Wiggins ◽  
Kevin Crowston

Citizen science has seen enormous growth in recent years, in part due to the influence of the Internet, and a corresponding growth in interest. However, the few stand-out examples that have received attention from media and researchers are not representative of the diversity of the field as a whole, and therefore may not be the best models for those seeking to study or start a citizen science project. In this work, we present the results of a survey of citizen science project leaders, identifying sub-groups of project types according to a variety of features related to project design and management, including funding sources, goals, participant activities, data quality processes, and social interaction. These combined features highlight the diversity of citizen science, providing an overview of the breadth of the phenomenon and laying a foundation for comparison among citizen science projects and with other online communities.


2019 ◽  
Author(s):  
Barbara Strobl ◽  
Simon Etter ◽  
H. J. Ilja van Meerveld ◽  
Jan Seibert

Abstract. Some form of training is often necessary in citizen science projects. While in many citizen science projects it is possible to keep tasks simple so that training requirements are minimal, some projects include more challenging tasks and, thus, require more extensive training. Training can be a hurdle to joining a project, and therefore most citizen science projects prefer to keep training requirements low. However, training may be needed to ensure good data quality. In this study, we evaluated whether an online game that was originally developed for data quality control in a citizen science project can be used for training for that project. More specifically, we investigated whether the CrowdWater game can be used to train new participants on how to use the virtual staff gauge in the CrowdWater smartphone app for the collection of water level class data. Within this app, the task of placing a virtual staff gauge to start measurements at a new location has proven to be challenging; however, this is a crucial task for all subsequent measurements at this location. We analysed the performance of 52 participants in the placement of the virtual staff gauge before and after playing the online CrowdWater game as a form of training. After playing the game, the performance improved for most participants. This suggests that players learned project-related tasks intuitively by observing actual gauge placements by other citizen scientists and thus acquired knowledge about how to best use the app instinctively. Interestingly, self-assessment was not a good proxy for the participants' performance or performance increase. These results demonstrate the value of an online game for training, particularly when compared to other information materials, which are often not used extensively by citizen scientists.
These findings are useful for the development of training strategies for other citizen science projects because they indicate that gamified approaches might provide valuable alternative training methods.


Author(s):  
P. Mooney ◽  
L. Morgan

In recent years there has been increasing interest from researchers in investigating and understanding the characteristics and backgrounds of citizens who contribute to Volunteered Geographic Information (VGI) and Citizen Science (CS) projects. Much of the reluctance of stakeholders such as National Mapping Agencies, Environmental Ministries, etc. to use data and information generated and collected by VGI and CS projects stems from the lack of knowledge and understanding about who these contributors are. As they are drawn from "the crowd", there is a sense of the unknown about these citizens. Consequently, there are justifiable concerns about these citizens' ability to collect, generate and manage high-quality and accurate spatial, scientific and environmental data and information. This paper provides a meta-review of some of the key literature in the domain of VGI and CS to assess whether these concerns are well founded and what efforts are ongoing to improve our understanding of "the crowd".


2020 ◽  
Vol 3 (1) ◽  
pp. 109-126 ◽  
Author(s):  
Barbara Strobl ◽  
Simon Etter ◽  
H. J. Ilja van Meerveld ◽  
Jan Seibert

Abstract. Some form of training is often necessary for citizen science projects. While in some citizen science projects, it is possible to keep tasks simple so that training requirements are minimal, other projects include more challenging tasks and, thus, require more extensive training. Training can be a hurdle to joining a project, and therefore most citizen science projects prefer to keep training requirements low. However, training may be needed to ensure good data quality. In this study, we evaluated whether an online game that was originally developed for data quality control in a citizen science project can be used for training for that project. More specifically, we investigated whether the CrowdWater game can be used to train new participants on how to place the virtual staff gauge in the CrowdWater smartphone app for the collection of water level class data. Within this app, the task of placing a virtual staff gauge to start measurements at a new location has proven to be challenging; however, this is a crucial task for all subsequent measurements at this location. We analysed the performance of 52 participants in the placement of the virtual staff gauge before and after playing the online CrowdWater game as a form of training. After playing the game, the performance improved for most participants. This suggests that players learned project-related tasks intuitively by observing actual gauge placements by other citizen scientists in the game and thus acquired knowledge about how to best use the app instinctively. Interestingly, self-assessment was not a good proxy for the participants' performance or the performance increase through the training. These results demonstrate the value of an online game for training. 
These findings are useful for the development of training strategies for other citizen science projects because they indicate that gamified approaches might provide valuable alternative training methods, particularly when other information materials are not used extensively by citizen scientists.
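An evaluation like the one described, comparing each participant's paired before/after scores, can be summarized with two simple statistics: the fraction of participants who improved and the mean score change. An illustrative sketch (the scoring scale and numbers are hypothetical, not the study's data):

```python
def training_effect(before, after):
    """Summarize paired before/after scores.

    Returns (fraction of participants whose score improved,
             mean score change across participants).
    """
    assert len(before) == len(after), "scores must be paired per participant"
    diffs = [a - b for b, a in zip(before, after)]
    improved = sum(d > 0 for d in diffs)
    return improved / len(diffs), sum(diffs) / len(diffs)

# Hypothetical staff-gauge placement scores on a 0-10 scale.
frac_improved, mean_gain = training_effect([4, 6, 5, 3], [7, 6, 8, 5])
# frac_improved is 0.75; mean_gain is 2.0
```

The same paired structure would also let one correlate self-assessment against measured gain, the comparison the abstract reports as weak.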


2012 ◽  
Vol 144 (5) ◽  
pp. 727-731
Author(s):  
Isabelle Létourneau ◽  
Maxim Larrivée ◽  
Antoine Morin

Abstract. Assessing biodiversity is essential in conservation biology, but the resources needed are often limited. Citizen science, by which volunteers gather data at low cost, represents a potential solution to the lack of resources if it produces usable data for scientific means. Scientific inventories for butterflies are often performed with a Pollard transect, a standardised surveying technique that generates high-quality data. General microhabitat surveys (GMSs) are potentially more appealing to amateurs participating in citizen science projects because they are less constrained. We compare estimates of butterfly species richness acquired by Pollard transects to those obtained by GMSs. We demonstrate that GMSs allow surveyors to detect more butterfly species and obtain a more complete portrait of local butterfly assemblages for the same number of individuals captured.
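Comparing richness "for the same number of individuals captured" is the idea behind rarefaction: subsample each method's catch to a common number of individuals and compare expected species counts. A minimal sketch of that idea (an assumption about the general technique, not the authors' exact analysis):

```python
import random

def rarefied_richness(individuals, n, trials=1000, seed=42):
    """Expected species richness in a random subsample of n individuals.

    `individuals` is a list of species identities, one entry per captured
    individual. Subsampling both survey methods to the same n makes their
    richness estimates comparable despite unequal capture effort.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0
    for _ in range(trials):
        total += len(set(rng.sample(individuals, n)))
    return total / trials

# Hypothetical catches: subsampling to n = 4 individuals.
richness = rarefied_richness(["A", "A", "B", "B", "B", "C"], 4)
```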


2021 ◽  
Vol 9 ◽  
Author(s):  
Ågot K. Watne ◽  
Jenny Linden ◽  
Jens Willhelmsson ◽  
Håkan Fridén ◽  
Malin Gustafsson ◽  
...  

Using low-cost air quality sensors (LCS) in citizen science projects opens many possibilities. LCS can provide an opportunity for citizens to collect and contribute their own air quality data. However, low data quality is often an issue when using LCS, and with it comes a risk of unrealistic expectations of a higher degree of empowerment than is actually possible. If the data quality and the intended use of the data are not harmonized, conclusions may be drawn on the wrong basis and data can be rendered unusable. Ensuring high data quality is demanding in terms of labor and resources. The expertise, sensor performance assessment, post-processing, and general workload required will depend strongly on the purpose and intended use of the air quality data. It is therefore a balancing act to ensure that the data quality is high enough for the specific purpose while minimizing the validation effort. The aim of this perspective paper is to increase awareness of data quality issues and to provide strategies for minimizing labor intensity and expenses while maintaining adequate QA/QC for robust applications of LCS in citizen science projects. We believe that air quality measurements performed by citizens can be better utilized with increased awareness of data quality and measurement requirements, in combination with improved metadata collection. Well-documented metadata can not only increase the value and usefulness of the data for the actors collecting it, but is also the foundation for assessing the potential integration of data collected by citizens in a broader perspective.
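One common low-effort QA/QC step for LCS is co-locating a sensor with a reference monitor and fitting a linear correction. A hedged sketch of that step (the function and data are illustrative, not from this paper):

```python
def fit_linear_correction(sensor, reference):
    """Ordinary least-squares fit of reference ~ a * sensor + b.

    Co-located sensor/reference pairs yield a slope and offset that can
    correct subsequent raw LCS readings toward the reference scale.
    """
    n = len(sensor)
    mx = sum(sensor) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in sensor)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sensor, reference))
    a = sxy / sxx          # slope
    b = my - a * mx        # offset
    return a, b

# Hypothetical co-location readings (e.g. PM2.5 in ug/m3).
a, b = fit_linear_correction([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
corrected = a * 2.5 + b  # apply the correction to a new raw reading
```

How often such a fit must be refreshed, and against what grade of reference, is exactly the purpose-dependent balancing act the abstract describes.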


Author(s):  
Brandon Budnicki ◽  
Gregory Newman

CitSci.org is a global citizen science software platform and support organization housed at Colorado State University. The mission of CitSci is to help people do high-quality citizen science by amplifying impacts and outcomes. The platform hosts over one thousand projects and a diverse volunteer base that has amassed over one million observations of the natural world, focused on biodiversity and ecosystem sustainability. It is a custom platform built using open-source components including PostgreSQL, Symfony, and Vue.js, with React Native for the mobile apps. CitSci sets itself apart from other citizen science platforms through the flexibility in the types of projects it supports rather than having a singular focus. This flexibility allows projects to define their own datasheets and methodologies. The diversity of programs we host motivated us to take a founding role in the design of the PPSR Core, a set of global, transdisciplinary data and metadata standards for use in Public Participation in Scientific Research (citizen science) projects. Through an international partnership between the Citizen Science Association, the European Citizen Science Association, and the Australian Citizen Science Association, the PPSR team and associated standards enable interoperability of citizen science projects, datasets, and observations. Here we share our experience over the past 10+ years of supporting biodiversity research both as developers of the CitSci.org platform and as stewards of, and contributors to, the PPSR Core standard. Specifically, we share details about: the origin, development, and informatics infrastructure for CitSci; our support for biodiversity projects such as population and community surveys; our experiences in platform interoperability through PPSR Core; working with the Zooniverse, SciStarter, and CyberTracker; data quality; and data sharing goals and use cases. We conclude by sharing overall successes, limitations, and recommendations as they pertain to trust and rigor in citizen science data sharing and interoperability. As the scientific community moves forward, we show that citizen science is a key tool for enabling a systems-based approach to ecosystem problems.
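Interoperability standards like PPSR Core work by agreeing on a shared, serializable record shape for projects, datasets, and observations. The sketch below shows the general idea with a JSON project record; the field names here are illustrative assumptions, not the normative PPSR Core schema:

```python
import json

# Illustrative project-level metadata record in the spirit of a shared
# project data model. Field names are hypothetical, chosen for readability.
project_record = {
    "projectId": "citsci-0001",
    "name": "Stream Macroinvertebrate Survey",
    "topic": "biodiversity",
    "geographicCoverage": "Colorado, USA",
    "dataAccessPolicy": "open",
}

# Serializing to a common format is what lets platforms such as
# CitSci.org, Zooniverse, and SciStarter exchange project listings.
serialized = json.dumps(project_record, indent=2)
```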


2021 ◽  
Vol 20 (3) ◽  
pp. 1-25
Author(s):  
Elham Shamsa ◽  
Alma Pröbstl ◽  
Nima TaheriNejad ◽  
Anil Kanduri ◽  
Samarjit Chakraborty ◽  
...  

Smartphone users require high Battery Cycle Life (BCL) and high Quality of Experience (QoE) during their usage. These two objectives can be conflicting based on the user preference at run-time. Finding the best trade-off between QoE and BCL requires an intelligent resource management approach that considers and learns user preference at run-time. Current approaches focus on one of these two objectives and neglect the other, limiting their efficiency in meeting users’ needs. In this article, we present UBAR, User- and Battery-aware Resource management, which considers dynamic workload, user preference, and user plug-in/out pattern at run-time to provide a suitable trade-off between BCL and QoE. UBAR personalizes this trade-off by learning the user’s habits and using that to satisfy QoE, while considering battery temperature and State of Charge (SOC) pattern to maximize BCL. The evaluation results show that UBAR achieves 10% to 40% improvement compared to the existing state-of-the-art approaches.
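The QoE/BCL trade-off described above can be pictured as maximizing a preference-weighted score over candidate operating points. A toy stand-in for the run-time decision (the scoring rule, names, and values are assumptions, not UBAR's actual algorithm):

```python
def select_operating_point(points, qoe, battery_cost, w):
    """Pick the operating point maximizing w*QoE - (1-w)*battery cost.

    `w` in [0, 1] encodes the learned user preference: w near 1 favors
    quality of experience, w near 0 favors battery cycle life.
    """
    return max(points, key=lambda p: w * qoe[p] - (1 - w) * battery_cost[p])

# Hypothetical normalized scores for three CPU frequency levels.
qoe = {"low": 0.3, "mid": 0.7, "high": 1.0}
cost = {"low": 0.1, "mid": 0.4, "high": 1.0}
choice = select_operating_point(["low", "mid", "high"], qoe, cost, w=0.5)
# with a balanced preference (w=0.5) this picks "mid"
```

In a real system the weight would be updated online from user behavior (e.g. plug-in/out patterns), which is the learning component the abstract emphasizes.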

