Managing Audio Monitoring Data with SIMON – Concept for Data Administration, Online Repository and Dissemination

Author(s):  
Christian Köhler

Automated observations of natural occurrences play a key role in monitoring biodiversity worldwide. With the development of affordable hardware like the AudioMoth acoustic logger (Hill et al. 2019), large-scale and long-term monitoring has come within reach. However, data management and dissemination of monitoring data remain challenging, as the development of software and infrastructure for managing monitoring data lags behind. We want to fill this gap by providing a complete audio monitoring solution built from affordable audio monitoring hardware, custom data management tools and storage infrastructure based on open-source hardware and software, biodiversity information standards and integrable interfaces. The Scientific Monitoring Data Management and Online Repository (SIMON) consists of a portable data collector and a connected online repository. The data collector, a device for the automated extraction of audio data from the audio loggers in the field, stores the data and metadata in an internal cache. Once connected to the internet via WiFi or a cable connection, the data are automatically uploaded to an online repository for automated analysis, annotation, data management and dissemination. To prevent SIMON from becoming yet another proprietary data store, the FAIR principles (Findable, Accessible, Interoperable, and Re-usable; Wilkinson et al. 2016) are at the very core of data managed in the online repository. We plan to offer an API (application programming interface) to disseminate data to established data infrastructures. A second API will allow the use of external services for data enrichment.
While in the planning phase, we would like to take the opportunity to discuss with domain experts the requirements and implementation of different standards—namely ABCD (Access to Biological Collections Data task group, Biodiversity Information Standards (TDWG) 2007), Darwin Core (Darwin Core Task Group, Biodiversity Information Standards (TDWG) 2009) and Darwin Core Archive (Remsen et al. 2017)—connecting to external services and targeting data infrastructures.
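A dissemination API like the one planned would likely export recording metadata in Darwin Core terms. The sketch below is only an illustration of such a mapping: the internal field names (`timestamp`, `lat`, `lon`, `audio_url`) are assumptions, while the output keys are standard Darwin Core terms.

```python
# Hypothetical sketch: mapping a recording's internal metadata (field
# names assumed) to Darwin Core terms, as a SIMON-style repository
# might do when exporting to an established data infrastructure.

def to_darwin_core(recording: dict) -> dict:
    """Map internal audio-recording metadata to Darwin Core terms."""
    return {
        "basisOfRecord": "MachineObservation",      # standard DwC value for logger data
        "eventDate": recording["timestamp"],         # ISO 8601 date/time of the recording
        "decimalLatitude": recording["lat"],
        "decimalLongitude": recording["lon"],
        "samplingProtocol": "passive acoustic monitoring",
        "associatedMedia": recording["audio_url"],   # link to the archived audio file
    }

record = to_darwin_core({
    "timestamp": "2023-05-04T04:30:00Z",
    "lat": 50.7374, "lon": 7.0982,
    "audio_url": "https://example.org/audio/20230504_0430.wav",
})
```

Keeping the mapping in one place like this makes it straightforward to support a second target standard (e.g. ABCD) alongside Darwin Core.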

Author(s):  
Leif Schulman ◽  
Aino Juslén ◽  
Kari Lahti

The service model of the Global Biodiversity Information Facility (GBIF) is being implemented in an increasing number of national biodiversity (BD) data services. While GBIF already shares >10⁹ data points, national initiatives are an essential component: growth in GBIF-mediated data relies on national data mobilisation, and GBIF is not optimised to support local use. The Finnish Biodiversity Information Facility (FinBIF), initiated in 2012 and operational since late 2016, is one of the more recent examples of national BD research infrastructures (RIs) – and arguably among the most comprehensive. Here, we describe FinBIF’s development and service integration, and provide a model approach for the construction of all-inclusive national BD RIs. FinBIF integrates a wide array of BD RI approaches under the same umbrella. These include large-scale, multi-technology digitisation of natural history collections; building a national DNA barcode reference library and linking it to species occurrence data; citizen science platforms for recording, managing and sharing observation data; management and sharing of restricted data among authorities; community-driven species identification support; an e-learning environment for species identification; and IUCN Red Listing (Fig. 1). FinBIF’s aims are to accelerate the digitisation, mobilisation, and distribution of biodiversity data and to boost their use in research and education, environmental administration, and the private sector. The core functionalities of FinBIF were built in a 3.5-year project (01/2015–06/2018) by a consortium of four university-based natural history collection facilities led by the Finnish Museum of Natural History Luomus. Close to 30% of the total funding was granted through the Finnish Research Infrastructures programme (FIRI), governed by the national research council and awarded on the basis of scientific excellence.
Government funds for productivity enhancement in state administration covered c. 40% of the development, and the rest was self-financed by the implementing consortium of organisations, which have both a research and an education mission. The cross-sectoral scope of FinBIF has led to rapid uptake and a broad user base for its functionalities and services. Not only researchers but also administrative authorities, various enterprises and a large number of private citizens show significant interest in the RI (Table 1). FinBIF is now in its second construction cycle (2019–2022), funded through the FIRI programme and thus focused on researcher services. The work programme includes integration of tools for data management in ecological restoration and e-Lab tools for spatial analyses, morphometric analysis of 3D images, species identification from sound recordings, and metagenomics analyses.


Author(s):  
Peter Desmet ◽  
Jakub Bubnicki ◽  
Ben Norton

Camera trapping is one of the most important technologies in conservation and ecological research and a well-established, non-invasive method of collecting field data on animal abundance, distribution, behaviour, temporal activity, and space use (Wearn and Glover-Kapfer 2019). Collectively, camera trapping projects are generating a massive and continuous flow of data, consisting of images and videos (with and without animal observations) and associated identifications (Scotson et al. 2017, Kays et al. 2020). In recent years, significant progress has been made by the global camera trapping community to resolve the challenges this brings, from the development of specialized data management tools and analytical packages, to the application of cloud computing and artificial intelligence to automate species recognition (Tabak et al. 2018). However, to effectively exchange camera trap data between infrastructures and to (automatically) harmonize data into large-scale wildlife datasets, there is a need for a common data exchange format—one that captures the essential information about a camera trap study, allows expression of different study and identification approaches, and aligns well with existing biodiversity standards such as Darwin Core (Wieczorek et al. 2012). Here we present Camera Trap Data Package (Camtrap DP), a data exchange format for camera trap data. It is managed by the Machine Observations Interest Group of Biodiversity Information Standards (TDWG) and developed publicly, soliciting community feedback for every change. Camtrap DP is built on Frictionless Standards, a set of generic specifications to describe and package (tabular) data and metadata. Camtrap DP extends these with specific requirements and constraints for camera trap data. By building on an existing framework, users can employ existing open source software to read and validate Camtrap DP formatted data. 
Validation is especially useful to automatically check whether provided data meet the requirements set forth by Camtrap DP before analysis or integration. Supported by the major camera trap data management systems (e.g., Agouti, TRAPPER, eMammal, and Wildlife Insights), Camtrap DP is reaching its first stable version. The first Camtrap DP dataset was published on Zenodo (Cartuyvels et al. 2021b). This dataset was also published to the Global Biodiversity Information Facility (GBIF) (Cartuyvels et al. 2021a), demonstrating both the ability and the limitations of transforming the data to the Darwin Core standard.
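In practice, validation of a Camtrap DP package would be done with Frictionless tooling against the full profile. As a much-reduced stand-in, the sketch below checks only that a descriptor declares a profile and the three resource tables a Camtrap DP-style package carries; the exact profile string and error wording here are illustrative assumptions.

```python
import json

# Simplified stand-in for Frictionless-style validation: check that a
# (hypothetical, much-reduced) data package descriptor declares the
# resources a Camtrap DP-like package is expected to carry.
REQUIRED_RESOURCES = {"deployments", "media", "observations"}

def check_descriptor(descriptor: dict) -> list:
    """Return a list of problems; an empty list means the descriptor passes."""
    problems = []
    if "profile" not in descriptor:
        problems.append("missing 'profile'")
    names = {r.get("name") for r in descriptor.get("resources", [])}
    for missing in sorted(REQUIRED_RESOURCES - names):
        problems.append(f"missing resource: {missing}")
    return problems

descriptor = json.loads("""{
  "profile": "camtrap-dp-profile",
  "resources": [
    {"name": "deployments", "path": "deployments.csv"},
    {"name": "media", "path": "media.csv"},
    {"name": "observations", "path": "observations.csv"}
  ]
}""")
problems = check_descriptor(descriptor)   # empty list: descriptor passes
```

The real benefit of building on Frictionless is exactly this kind of machine-checkable contract: a publisher can validate before upload, and a consumer can validate before integration, with no coordination between them.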


2018 ◽  
Vol 2 ◽  
pp. e25642
Author(s):  
Annie Simpson

Biodiversity Information Serving our Nation - BISON (bison.usgs.gov) is the U.S. node of the Global Biodiversity Information Facility (gbif.org), containing more than 375 million documented locations for all species in the U.S. It is hosted by the United States Geological Survey (USGS) and includes a website and an application programming interface (API) that apps and other websites can use for free. With this massive database one can see not only the 15 million records for nearly 10,000 non-native species in the U.S. and its territories, but also their relationship to all of the other species in the country, as well as their full national range. Leveraging this huge resource and its enterprise-level cyberinfrastructure, USGS BISON staff have created a value-added feature by labeling non-native species records, even where contributing datasets have not provided such labels. Based on our ongoing four-year compilation of non-native species scientific names from the literature, specific examples will be shared about the ambiguity and evolution of terms that have been discovered, as they relate to invasiveness, impact, dispersal, and management. The idea of incorporating these terms into an invasive species extension to Darwin Core has been discussed by Biodiversity Information Standards (TDWG) working group participants since at least 2005. One roadblock to the implementation of this standard's extension has been the diverse terminology used to describe the characteristics of biological invasions, terminology that has evolved significantly over the past decade.
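An API like BISON's lets downstream apps request occurrence records per species. The endpoint path and parameter names in this sketch are assumptions for illustration, not the documented interface; only the URL construction is shown, no request is made.

```python
from urllib.parse import urlencode

# Sketch of querying an occurrence API like BISON's. The endpoint and
# parameter names below are assumptions for illustration only.
BASE = "https://bison.usgs.gov/api/search.json"

def occurrence_query(species: str, count: int = 100) -> str:
    """Build a URL requesting occurrence records for one species."""
    return BASE + "?" + urlencode({"species": species, "count": count})

url = occurrence_query("Sus scrofa")   # wild boar, a widely tracked non-native
```

A website embedding BISON data would typically issue such a query per map view and render the returned locations client-side.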


2016 ◽  
Vol 8 (1) ◽  
Author(s):  
Chrysanthos Steiakakis ◽  
Zacharias Agioutantis ◽  
Evangelia Apostolou ◽  
Georgia Papavgeri ◽  
Achilles Tripolitsiotis

The geotechnical challenges for safe slope design in large-scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large-scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to operate such pits safely. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system developed over a number of years, as well as its advantages as demonstrated in a specific application. The presented case study illustrates how the high-production slopes of a mine exceeding depths of 100–120 m were successfully mined at an average displacement rate of 10–20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data from the past four years are included in the database and can be analyzed to produce valuable results. Time-series correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
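A core use of such a monitoring database is turning daily displacement readings into operational alerts. The sketch below is not the authors' system, only an illustrative rule: flag days whose trailing-window mean displacement rate exceeds a threshold. The 20 mm/day figure is taken from the displacement range quoted in the case study; the window length is an assumption.

```python
# Illustrative sketch (not the paper's system): flag days when the mean
# displacement rate over a trailing window exceeds an alert threshold.

def alert_days(rates_mm_per_day, window=3, threshold=20.0):
    """Return indices of days whose trailing-window mean rate exceeds the threshold."""
    alerts = []
    for i in range(window - 1, len(rates_mm_per_day)):
        mean = sum(rates_mm_per_day[i - window + 1 : i + 1]) / window
        if mean > threshold:
            alerts.append(i)
    return alerts

rates = [8, 12, 15, 22, 28, 31, 18, 11]   # daily displacement rates in mm/day
flagged = alert_days(rates)               # days 4, 5 and 6 exceed the threshold
```

Averaging over a window rather than reacting to single readings suppresses sensor noise, at the cost of a short detection delay.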


Author(s):  
Massimiliano Assante ◽  
Leonardo Candela ◽  
Donatella Castelli ◽  
Gianpaolo Coro ◽  
Lucio Lelii ◽  
...  

Science is in continuous evolution, and so are the methodologies and approaches scientists apply, calling for appropriate supporting environments. This is due in part to the limitations of existing practices and in part to the new possibilities offered by technology advances. gCube is a software system promoting elastic and seamless access to research assets (data, services, computing) across the boundaries of institutions, disciplines and providers to favour collaboration-oriented research tasks. Its primary goal is to enable Hybrid Data Infrastructures facilitating the dynamic definition and operation of Virtual Research Environments. To this end, it offers a comprehensive set of data management commodities for various types of data and a rich array of "mediators" to interface with well-established infrastructures and information systems from various domains. Its effectiveness has been proven by operating the D4Science.org infrastructure and serving concrete, multidisciplinary, challenging, and large-scale scenarios. This paper gives an overview of the gCube system.


2012 ◽  
Vol 83 ◽  
pp. 188-197
Author(s):  
Ke Chang Lin ◽  
Yi Qing Ni ◽  
Xiao Wei Ye ◽  
Kai Yuan Wong

The data management system (DMS) is an essential part of long-term structural health monitoring (SHM) systems, storing a pool of monitoring data for various applications. A robust database within a DMS is generally used to archive, manage and update life-cycle information of civil structures. However, many applications, especially those for large-scale structures, provide little support for visualizing long-term monitoring data. This paper presents the development of an efficient visualized DMS that integrates four-dimensional (4D) model technology, a nested relational database, and virtual reality (VR) technology. Spatial data of the 4D model are organized in nested tables, while real-time (temporal) monitoring data are linked to the 4D model. The model is then reconstructed using an OpenSceneGraph 3D engine. A user interface is developed to query the database and display the data via the 4D model. To demonstrate its efficiency, the proposed method has been applied to the Canton Tower, a supertall tower-like structure instrumented with a long-term SHM system.
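The key data-model idea, linking time-stamped sensor readings to elements of the structural model so a viewer can query them, can be sketched with a plain relational schema. This is a simplified stand-in (plain SQLite, not the paper's nested relational database), and the table and element names are invented for illustration.

```python
import sqlite3

# Minimal relational sketch (plain SQLite, not the paper's nested
# database): structural elements of the 4D model in one table,
# time-stamped sensor readings in another, joined for display queries.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE element (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE reading (
    element_id INTEGER REFERENCES element(id),
    ts TEXT,      -- ISO 8601 timestamp
    value REAL    -- sensor measurement
);
""")
con.execute("INSERT INTO element VALUES (1, 'outer-tube segment 12')")
con.executemany(
    "INSERT INTO reading VALUES (1, ?, ?)",
    [("2011-03-01T00:00", 0.42), ("2011-03-01T01:00", 0.47)],
)

# Latest reading per element, as a viewer panel might request it:
row = con.execute("""
    SELECT e.name, r.ts, r.value
    FROM reading r JOIN element e ON e.id = r.element_id
    ORDER BY r.ts DESC LIMIT 1
""").fetchone()
```

Keeping spatial model data and temporal sensor data in separate, joinable tables is what lets the 4D viewer colour structural elements by their most recent measurements.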


2020 ◽  
pp. 68-72
Author(s):  
V.G. Nikitaev ◽  
A.N. Pronichev ◽  
V.V. Dmitrieva ◽  
E.V. Polyakov ◽  
A.D. Samsonova ◽  
...  

The use of information and measurement systems based on the processing of digital images of microscopic preparations for large-scale automation of acute leukemia diagnosis is considered. A high density of leukocyte cells in the preparation (hypercellularity) is a feature of microscopic images of bone marrow preparations. It causes cells to lie close to each other and touch, forming conglomerates. Measuring the characteristics of bone marrow cells under such conditions leads to unacceptable errors (more than 50%). The work is devoted to the segmentation of contiguous cells in images of bone marrow preparations. A method of cell separation during white blood cell segmentation on images of bone marrow preparations under conditions of hypercellularity has been developed. The peculiarity of the proposed method is its approach to segmenting cell images based on the watershed method with markers. The key stages of the method are: formation of initial markers, construction of the watershed lines, threshold binarization, and filling inside the contour. The parameters for the separation of contiguous cells are determined. An experiment confirmed the effectiveness of the proposed method: the relative segmentation error was 5%. The use of the proposed method in information and measurement systems of computer microscopy for automated analysis of bone marrow preparations will help to improve the accuracy of diagnosis of acute leukemia.
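The core mechanism, marker-based watershed, can be illustrated on a toy grid: pixels are flooded from seed markers in order of increasing "elevation", so the ridge between two touching cells becomes the dividing line. This is only a didactic sketch of the general technique (real pipelines operate on distance-transformed images with a proper image library), not the paper's implementation.

```python
import heapq

# Toy marker-based watershed: flood from seed markers in order of
# increasing elevation; each unlabeled pixel inherits the label of the
# flood front that reaches it first.

def watershed(elev, markers):
    """elev: 2-D list of heights; markers: {(row, col): label}. Returns a label grid."""
    rows, cols = len(elev), len(elev[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (elev[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]   # inherit the flooding label
                heapq.heappush(heap, (elev[nr][nc], nr, nc))
    return labels

# Two touching "cells" (low basins) separated by a ridge of high values:
elev = [
    [1, 1, 5, 1, 1],
    [1, 0, 5, 0, 1],
    [1, 1, 5, 1, 1],
]
labels = watershed(elev, {(1, 1): 1, (1, 3): 2})   # one marker per cell
```

In the cell-separation setting, the "elevation" is typically the inverted distance transform of the binarized image, and the markers are its local maxima, one per cell nucleus.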


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5260
Author(s):  
Yi-Bing Lin ◽  
Sheng-Lin Chou

Due to the fast evolution of sensor and Internet of Things (IoT) technologies, several large-scale smart city applications have been commercially developed in recent years. In these developments, contracts are often disputed at acceptance because the contract specification is not clear, resulting in a great deal of discussion of gray areas. Such disputes often occur in the acceptance processes of smart buildings, mainly because most intelligent building systems are expensive and the operations of the sub-systems are very complex. This paper proposes SpecTalk, a platform that automatically generates the code to conform IoT applications to Taiwan Association of Information and Communication Standards (TAICS) specifications. SpecTalk generates a program to accommodate the application programming interface of the IoT devices under test (DUTs). Then, the devices can be tested by SpecTalk following the TAICS data formats. We describe three types of tests: self-tests, mutual-tests, and visual tests. A self-test involves the sensors and the actuators of the same DUT. A mutual-test involves the sensors and actuators of different DUTs. A visual test uses a monitoring camera to investigate the actuators of multiple DUTs. We conducted these types of tests in commercially deployed smart campus applications. Our experiments proved that SpecTalk is feasible and can effectively conform IoT implementations to TAICS specifications. We also propose a simple analytic model to select the frequency of the control signals for the input patterns in a SpecTalk test. Our study indicates that the control signal frequency should be selected such that the inter-arrival time between two control signals is larger than 10 times the activation delay of the DUT.
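The frequency-selection rule stated above translates directly into a small computation; the function name below is invented, but the rule itself (inter-arrival time greater than 10 times the activation delay) is taken from the abstract.

```python
# The abstract's frequency-selection rule: the inter-arrival time
# between two control signals should exceed 10x the DUT's activation
# delay, which bounds the usable control-signal frequency from above.

def max_signal_frequency(activation_delay_s: float, factor: float = 10.0) -> float:
    """Highest control-signal frequency (Hz) satisfying the inter-arrival rule."""
    min_interarrival = factor * activation_delay_s
    return 1.0 / min_interarrival

# A DUT with a 0.2 s activation delay should be driven below
# 1 / (10 * 0.2) = 0.5 Hz:
f = max_signal_frequency(0.2)
```

Driving the DUT faster than this bound would issue a new control signal before the previous actuation has reliably settled, producing spurious test failures.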


SoftwareX ◽  
2021 ◽  
Vol 15 ◽  
pp. 100747
Author(s):  
José Daniel Lara ◽  
Clayton Barrows ◽  
Daniel Thom ◽  
Dheepak Krishnamurthy ◽  
Duncan Callaway
