How the Luxembourg Natural History Museum Has Established and Maintained a National Bio- and Geodiversity Data System

Author(s):  
Tania Walisch ◽  
Claude Pepin ◽  
Paul Braun

Over the past 20 years, the Luxembourg National Museum for Natural History (LMNH) has built a bio- and geodiversity information system to collate, manage and publish natural heritage observation and specimen data at national and international levels. To date, the system holds over 2 million taxon occurrence records and over 100,000 specimen records. The Museum has chosen, whenever available, public or open source software tools complying with international biodiversity data standards for recording, managing and publishing data, in order to increase resilience, stay connected with community initiatives and mutualise development costs. A central component of the Museum’s national data hub is Recorder 6, a client-server database software for wildlife recording developed by the National Biodiversity Network in the UK. Today, the Recorder-Lux database contains a large portion of the natural heritage information in Luxembourg and is synchronised daily into a publication database connected via the Integrated Publishing Toolkit (IPT) to the Global Biodiversity Information Facility (GBIF). Moreover, Recorder-Lux data is accessible via the national species mapping portal mdata.mnhn.lu, which has been developed in-house and is aimed at scientists, professionals and decision makers. The Museum has also developed a set of data entry and upload functionalities on its website data.mnhn.lu using the open source software Indicia, a toolkit that provides a ready-made set of services and tools for online wildlife recording. In 2019, we implemented the Atlas of Living Luxembourg (ALL) website all.mnhn.lu, based on the open source Atlas of Living Australia software. ALL is the most comprehensive data portal about natural heritage in Luxembourg, showing specimen data from the museum’s botany, zoology, paleontology, petrology and mineralogy collections as well as fungi, animal and plant observations collected from national and international organisations (via GBIF). 
Data providers range from individual scientific collaborators to professional regional record centers and private consultancies working for public administrations. They use the different tools offered by the museum to enter, manage and transfer their data to the system. Thus, several regional record centers chose the client-server Recorder 6 software to manage and exchange their data, whereas individual scientific collaborators of the Museum enter or upload their data via the online data entry forms available on data.mnhn.lu. For large-scale, long-term, professional biodiversity monitoring and inventories at the national level, specific data entry forms and functionalities have been configured on the Indicia website. Finally, citizens can record species observations via the iNaturalist smartphone app. Due to the museum’s long history of conducting field inventories alongside collating and managing natural history collections, the data hub holds observation and collection data in one database. In 2003, the Museum developed the Collection Management and Thesaurus extensions for the Recorder 6 software to catalogue, describe and manage specimens in the Museum collections. These allow field-gathered data to be handled alongside specimen-specific data such as storage location, specimen type and conservation status. In recent years this has become an essential tool for the increasing effort directed at the digitisation of the Museum’s diverse natural history collections. Our small database team faces the challenge of integrating an ever-increasing number of records from a variety of datasets, tools and initiatives. To keep the technical and administrative work as simple as possible, we have implemented an open data policy and aim to increase the use of the IPT to connect databases instead of physically importing all data into one central database. To improve data quality, we focus on training experts to work with our Indicia verification tool.
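The daily synchronisation into a GBIF-connected publication database implies mapping internal occurrence records onto Darwin Core terms before the IPT picks them up. A minimal, hypothetical sketch of that mapping step (the internal field names and the sample record are invented; the Darwin Core term names are standard):

```python
import csv
import io

# Hypothetical internal record, loosely modelled on a Recorder-style occurrence.
record = {
    "taxon": "Lacerta agilis",
    "date": "2018-06-14",
    "lat": 49.61,
    "lon": 6.13,
    "recorder": "T. Walisch",
}

# Map internal fields to Darwin Core terms (http://rs.tdwg.org/dwc/terms/),
# the standard vocabulary the IPT expects for occurrence datasets.
dwc_row = {
    "scientificName": record["taxon"],
    "eventDate": record["date"],
    "decimalLatitude": record["lat"],
    "decimalLongitude": record["lon"],
    "recordedBy": record["recorder"],
    "basisOfRecord": "HumanObservation",
}

# Emit a tab-delimited row, the usual serialisation for IPT source files.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=dwc_row.keys(), delimiter="\t")
writer.writeheader()
writer.writerow(dwc_row)
print(buffer.getvalue())
```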

Author(s):  
Falko Glöckler ◽  
James Macklin ◽  
David Shorthouse ◽  
Christian Bölling ◽  
Satpal Bilkhu ◽  
...  

The DINA Consortium (DINA = “DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural history collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. The DINA software will also permit the compilation of biodiversity inventories and will robustly support both observation and molecular data. The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a loosely coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of Web application programming interfaces (APIs), which guarantees the interoperability of all components (https://github.com/DINA-Web). Important to the DINA philosophy is that users (e.g., collection managers, curators) be actively engaged in an agile development process. This ensures that the product is pleasing for everyday use, includes efficient yet flexible workflows, and implements best practices in specimen data capture and management. There are three options for developing a DINA module: create a new module compliant with the specifications (Fig. 1), modify an existing code-base to attain compliance (Fig. 2), or wrap a compliant API around existing code that cannot be or may not be modified (e.g., infeasible, dependencies on other systems, closed code) (Fig. 3). 
All three of these scenarios have been applied in the modules recently developed: a module for molecular data (SeqDB), modules for multimedia, documents and agents data, and a service module for printing labels and reports. The SeqDB collection management and molecular tracking system (Bilkhu et al. 2017) has evolved through two of these scenarios. Originally, the required architectural changes were going to be added into the codebase, but after some time the development team recognised that, given the technical debt inherent in the project, modification and refactoring were not worth the effort. Instead, a new codebase was created, bringing forward the best parts of the system, oriented around the molecular data model for Sanger sequencing and Next Generation Sequencing (NGS) workflows. In the case of the Multimedia and Document Store module and the Agents module, a brand new codebase was established whose technology choices were aligned with the DINA vision. These two modules have been created from fundamental use cases for collection management and digitisation workflows and will continue to evolve as more modules come online and broaden their scope. The DINA Labels & Reporting module is a generic service for transforming data into arbitrary printable layouts based on customizable templates. In order to use the module in combination with data managed in the collection management software Specify (http://specifysoftware.org) for printing labels of collection objects, we wrapped the Specify 7 API with a DINA-compliant API layer called the “DINA Specify Broker”. This allows for using the easy-to-use web-based template engine within the DINA Labels & Reports module without changing Specify’s codebase. In our presentation we will explain the DINA development philosophy and will outline benefits for different stakeholders who directly or indirectly use collections data and related research data in their daily workflows. 
We will also highlight opportunities for joining the DINA Consortium and how to best engage with members of DINA who share their expertise in natural science, biodiversity informatics and geoinformatics.
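The broker pattern described above, wrapping a compliant API layer around code that cannot be modified, amounts to translating a legacy payload into the envelope the module guidelines prescribe. A minimal sketch of such a translation, assuming an invented legacy response shape and a JSON:API-style envelope (the actual DINA API specifications live at https://github.com/DINA-Web):

```python
import json

# Hypothetical response shape from a legacy (non-compliant) label service.
legacy_response = {"id": 42, "label_text": "Carabus auronitens, 1998"}

def wrap_as_compliant(resource_type, legacy):
    """Wrap a legacy payload in a JSON:API-style envelope: the kind of
    translation a broker performs so the wrapped code need not change."""
    return {
        "data": {
            "type": resource_type,
            "id": str(legacy["id"]),
            "attributes": {k: v for k, v in legacy.items() if k != "id"},
        }
    }

print(json.dumps(wrap_as_compliant("label", legacy_response), indent=2))
```

The legacy service keeps serving its own format; only the thin wrapper needs to track the consortium's API guidelines.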


2021 ◽  
Vol 12 ◽  
Author(s):  
Rudolf N. Cardinal ◽  
Martin Burchell

CamCOPS is a free, open-source client–server system for secure data capture in the domain of psychiatry, psychology, and the clinical neurosciences. The client is a cross-platform C++ application, suitable for mobile and offline (disconnected) use. It allows touchscreen data entry by subjects/patients, researchers/clinicians, or both together. It implements a large and extensible range of tasks, from simple questionnaires to complex animated tasks. The client uses encrypted data storage and sends data via an encrypted network connection to a CamCOPS server. Individual institutional users set up and run their own CamCOPS server, so no data is transferred outside the hosting institution's control. The server, written in Python, provides clinically oriented and research-oriented views of tasks, including the tracking of changes over time. It provides an audit trail, export facilities (such as to an institution's primary electronic health record system), and full structured data access subject to authorization. A single CamCOPS server can support multiple research/clinical groups, each having its own identity policy (e.g., fully identifiable for clinical use; de-identified/pseudonymised for research use). Intellectual property rules regarding third-party tasks vary and CamCOPS has several mechanisms to support compliance, including for tasks that may be permitted to some institutions but not others. CamCOPS supports task scheduling and home testing via a simplified user interface. We describe the software, report local information governance approvals within part of the UK National Health Service, and describe illustrative clinical and research uses.
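The per-group identity policies described above (fully identifiable for clinical use, pseudonymised for research use) can be pictured as a policy function applied to records on export. This is an illustrative sketch only, not CamCOPS's actual schema or mechanism; all field names and the keyed-hash pseudonym scheme are assumptions:

```python
import hashlib
import hmac

# Held by the hosting institution; never leaves the server (assumed design).
SECRET_KEY = b"institution-local-secret"

def apply_policy(record, policy):
    """Return a copy of the record transformed per the group's identity policy."""
    if policy == "identifiable":      # clinical group: keep identifiers
        return dict(record)
    if policy == "pseudonymised":     # research group: strip identifiers,
        out = {k: v for k, v in record.items()
               if k not in ("name", "nhs_number")}
        # replace them with a stable keyed pseudonym so records still link up
        digest = hmac.new(SECRET_KEY, record["nhs_number"].encode(),
                          hashlib.sha256).hexdigest()
        out["pseudonym"] = digest[:16]
        return out
    raise ValueError(f"unknown policy: {policy}")

raw = {"name": "A. Patient", "nhs_number": "1234567890", "phq9_total": 14}
clinical = apply_policy(raw, "identifiable")
research = apply_policy(raw, "pseudonymised")
print(research)
```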


Author(s):  
D. Oxoli ◽  
M. Cannata ◽  
V. Terza ◽  
M. A. Brovelli

<p><strong>Abstract.</strong> Nowadays, the sustainable development and preservation of territories pose a number of challenges requiring innovative and robust technological tools to fully address them. To this end, the design of an integrated tourism management system is discussed here. The tourism management system is developed for the Insubria Region within the INSUBRIPARKS project, funded by the Interreg program of the European Union. Insubria is a historical-geographical area stretching between Northern Italy and Southern Switzerland that embeds a valuable historical and natural heritage. Nevertheless, the tourism potential of the region is not fully exploited, due to a fragmented political context within a geographical area that extends across different local and national jurisdictions. The final goal of the project is to increase the tourism attractiveness of the Insubria Region through the provision of physical infrastructure, the development and promotion of new tourism experiences, and the deployment of a standardized Information Technology infrastructure to support cross-border land management and marketing operations. Central to this paper is the preliminary design of this infrastructure, which will provide tools for supporting information generation and consumption among project partners and external stakeholders. The design phase leverages exclusively Free and Open Source Software. Alongside the preliminary architecture, use cases and user requirements are discussed, together with the expected benefits deriving from the co-creation of best tourism management practices by means of open and shared software platforms.</p>


ZooKeys ◽  
2012 ◽  
Vol 209 ◽  
pp. 75-86 ◽  
Author(s):  
Riitta Tegelberg ◽  
Jaana Haapala ◽  
Tero Mononen ◽  
Mika Pajari ◽  
Hannu Saarenmaa

Digitarium is a joint initiative of the Finnish Museum of Natural History and the University of Eastern Finland. It was established in 2010 as a dedicated shop for the large-scale digitisation of natural history collections. Digitarium offers service packages based on the digitisation process, including tagging, imaging, data entry, georeferencing, filtering, and validation. During the process, all specimens are imaged, and distance workers take care of the data entry from the images. The customer receives the data in Darwin Core Archive format, as well as images of the specimens and their labels. Digitarium also offers the option of publishing images through Morphbank, sharing data through GBIF, and archiving data for long-term storage. Service packages can also be designed on demand to respond to the specific needs of the customer. The paper also discusses logistics, costs, and intellectual property rights (IPR) issues related to the work that Digitarium undertakes.
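The Darwin Core Archive delivered to the customer is a zip file bundling a descriptor (meta.xml) with tab-delimited data files. A minimal sketch of unpacking such a delivery and reading its core occurrence file; the file name follows the usual DwC-A text conventions and the sample record is invented:

```python
import csv
import io
import zipfile

# Build a tiny stand-in archive in memory so the sketch is self-contained;
# a real delivery would also contain meta.xml and label images.
archive = io.BytesIO()
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("occurrence.txt",
                "occurrenceID\tscientificName\tcatalogNumber\n"
                "dig-0001\tParnassius apollo\tMZH-12345\n")

# Consume the archive: open the core file and parse its tab-delimited rows.
with zipfile.ZipFile(archive) as zf:
    with zf.open("occurrence.txt") as fh:
        reader = csv.DictReader(io.TextIOWrapper(fh, "utf-8"), delimiter="\t")
        rows = list(reader)

print(rows[0]["scientificName"])
```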


2018 ◽  
Vol 9 (3) ◽  
pp. 36-47
Author(s):  
Pushpa Singh ◽  
Narendra Singh

Free and open source software (FOSS) differs from proprietary software. FOSS facilitates the design of various applications according to the user's requirements, and web applications are no exception. Web-based applications are mostly based on client-server architecture. This article is an analytical study of FOSS products used in web-based client-server architecture. It provides information about FOSS products such as Firefox (web browser), Apache (web server) and MySQL (RDBMS). The study reveals that these products lead their respective markets: the Apache server covers 65% of the web server market share, while MySQL covers 58.7% of the RDBMS market share and holds the top rank.


2012 ◽  
Vol 3 (6) ◽  
pp. 40
Author(s):  
Jose Luján Valderrama ◽  
Gustau Aguilella Arzo

<p>We present a virtual reconstruction of the megalithic tomb found near the Tossal del Mortórum (Cabanes, Castellón), dated to the second millennium BC. The structure, discovered in 2005, was plundered at an undetermined moment, and its conservation status is very precarious. Given the undoubted interest of the tomb, located in an area of the peninsula with little evidence of megalithism, we decided to attempt its virtual reconstruction. The basic software used for modelling and rendering is Blender 2.56, so this paper also shows the capabilities of open source software for such projects.</p>


2020 ◽  
Vol 16 (12) ◽  
pp. e1008475
Author(s):  
Marko Vendelin ◽  
Martin Laasmaa ◽  
Mari Kalda ◽  
Jelena Branovets ◽  
Niina Karro ◽  
...  

Biological measurements frequently involve measuring parameters as a function of time, space, or frequency. Later, during the analysis phase of the study, the researcher splits the recorded data trace into smaller sections, analyzes each section separately by finding a mean or fitting against a specified function, and uses the analysis results in the study. Here, we present software that allows these data traces to be analyzed in a manner that ensures repeatability of the analysis and simplifies the application of FAIR (findability, accessibility, interoperability, and reusability) principles in such studies. At the same time, it simplifies the routine data analysis pipeline and gives fast access to an overview of the analysis results. For that, the software supports reading the raw data, processing the data as specified in the protocol, and storing all intermediate results in the laboratory database. The software can be extended by study- or hardware-specific modules to provide the required data import and analysis facilities. To simplify the development of the data entry web interfaces that can be used to enter data describing the experiments, we released a web framework with an example implementation of such a site. The software is covered by an open-source license and is available through several online channels.
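The split-and-summarise step described above can be sketched in a few lines: divide a recorded trace into fixed-length sections and reduce each one to a summary statistic. The function and parameter names here are illustrative assumptions, not the published software's API:

```python
from statistics import mean

def analyze_trace(trace, section_len, reduce=mean):
    """Split a trace into consecutive sections and reduce each section.

    `reduce` could equally be a curve-fitting routine; the mean stands in
    for the simplest per-section analysis mentioned in the abstract.
    """
    sections = [trace[i:i + section_len]
                for i in range(0, len(trace), section_len)]
    return [reduce(s) for s in sections]

trace = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. a signal sampled over time
print(analyze_trace(trace, 2))            # per-section means: [0.5, 2.5, 4.5]
```

Recording the `section_len` and `reduce` choices alongside the results is what makes such an analysis repeatable.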


Author(s):  
Marielle Adam ◽  
Franck Theeten ◽  
Jean-Marc Herpers ◽  
Thomas Vandenberghe ◽  
Patrick Semal ◽  
...  

DaRWIN (Data Research Warehouse Information Network) is an in-house solution developed by the Royal Belgian Institute of Natural Sciences (RBINS) as a Natural History collections management system for biological and geological samples in collections. In 2014, the Royal Museum for Central Africa (RMCA) adopted this system for its collections and started to take part in new developments. The DaRWIN database currently manages information on more than 600,000 records (about 4 million specimens) housed at the RBINS and more than 650,000 records (more than 1 million specimens) at the RMCA. DaRWIN is an open source system, consisting of a PostgreSQL database and a customizable web interface based on the Symfony framework (https://symfony.com). DaRWIN is divided into two parts: one public section that gives “read-only” access to digitised specimens, and one section for registered users, with different levels of access rights (user, encoder, conservator and administrator), customizable for each collection and allowing update of specimens and collections, daily management of collections, and the potential for dealing with sensitive information. DaRWIN stores sample data and related information such as place and date of collection, missions and collectors, identifiers, technicians involved, taxonomy, identification information (type, stage, state, etc.), bibliography, related files, storage, etc. Other features that deal with day-to-day curation operations are available: loans, printing of labels for storage, statistics and reporting. 
DaRWIN features its own JSON (JavaScript Object Notation) webservice for specimens and scientific names and can export data in tab-delimited, Excel, PDF and GeoJSON formats. More recently, a procedure for importing batches of data has been developed, based on tab-delimited files, making integration of data from (old/historical) databases faster and more controlled. Additional improvements of the user interface and database model have been made. For example, parallel taxonomical hierarchies can be created, allowing users to work with temporary taxonomies, old scientific names (basionyms and synonyms) and document the history of type specimens. Finally, quality control and data cleaning on several tables have been implemented, e.g. mapping of locality names with vocabularies like Geonames, adding ISO 3166 two-letter country codes (https://www.iso.org/iso-3166-country-codes.html), cleaning duplicates from people/institutions and taxonomy catalogues. A tool for checking taxonomical names on GBIF (Global Biodiversity Information Facility), WoRMS (World Register of Marine Species) and DaRWIN itself, based on webservices and tab-delimited files, has been developed. Last year, RBINS, RMCA and Meise Botanic Garden (MBG) defined a new framework of collaboration in the NaturalHeritage project (http://www.naturalheritage.be), in order to foster interoperability among their collection data sources. This new framework presents itself as one common research portal for data on natural history collections (from DaRWIN and other existing collection databases) of the three partnered institutions and makes data compliant to a standard agreed by the partners. See Poster "NaturalHeritage: Bridging Belgian Natural History Collections" for more information. DaRWIN is accessible online (http://darwin.naturalsciences.be). A Github repository is also available (https://github.com/naturalsciences/natural_heritage_darwin).
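The tab-delimited batch import with its quality-control pass can be pictured as a parse-then-validate loop that accepts clean rows and sets aside the rest for review. This is a hedged sketch only; the column names and the country-code lookup are illustrative, not DaRWIN's actual schema (a real check would use the full ISO 3166 list and the vocabulary mappings mentioned above):

```python
import csv
import io

# Tiny sample of the full ISO 3166 alpha-2 code list, for illustration.
ISO_3166_ALPHA2 = {"BE", "CD", "LU"}

def import_batch(tsv_text):
    """Parse a tab-delimited batch; route each row to accepted or rejected."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    accepted, rejected = [], []
    for row in reader:
        if row["country_code"] in ISO_3166_ALPHA2 and row["scientific_name"]:
            accepted.append(row)
        else:
            rejected.append(row)   # kept for manual cleaning, not discarded
    return accepted, rejected

batch = ("scientific_name\tcountry_code\n"
         "Pan paniscus\tCD\n"
         "Unknown sp.\tXX\n")
accepted, rejected = import_batch(batch)
print(len(accepted), len(rejected))
```

Keeping rejected rows visible, rather than silently dropping them, is what makes such an import "more controlled" than a raw bulk load.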



F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 914 ◽  
Author(s):  
Lennart C. Karssen ◽  
Cornelia M. van Duijn ◽  
Yurii S. Aulchenko

Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination.

