Specimen Management
Recently Published Documents

Total documents: 38 (five years: 3)
H-index: 6 (five years: 0)

AORN Journal ◽ 2021 ◽ Vol 114 (5) ◽ pp. 443-455
Author(s): Terri Link

AORN Journal ◽ 2021 ◽ Vol 113 (5) ◽ pp. 505-513
Author(s): Lisa Spruce

AORN Journal ◽ 2020 ◽ Vol 112 (5)
Author(s): Lisa Croke

Author(s): Edward Gilbert ◽ Nico Franz ◽ Beckett Sterner

Symbiota (Gries et al. 2014) is an open-source software platform designed to function as a biodiversity content management system (CMS) for specimen-based datasets. Over the past ten years, Symbiota has risen to prominence, primarily in North America but increasingly on other continents as well, as one of the most heavily accessed mid-level aggregation tools for assembling, managing, and distributing datasets associated with biological collections. More than 50 public Symbiota portals are managed and promoted by various biodiversity projects and communities. Together, these portals assist in the distribution and mobilization of more than 55 million specimen records and 20 million image records associated with hundreds of institutions. The central premise of a standard Symbiota installation is to function as a mini-aggregator capable of integrating multiple occurrence datasets that collectively represent a community-based research data perspective. Datasets are typically limited to the geographic and taxonomic scopes that best represent the community of researchers leading the project. Symbiota portals often publish "snapshot records" that originate from external management systems but otherwise align with the portal's community of practice and data focus. Specimen management tools integrated into the platform also support managing occurrence data directly within the portal as "live datasets". The software has become widely adopted as a data management platform: approximately 550 specimen datasets, comprising more than 14 million specimen records, are managed directly within a portal instance. The appeal of Symbiota as an occurrence management tool is also exemplified by the fact that 18 of the 30 federally funded Thematic Collections Networks (https://www.idigbio.org/content/thematic-collections-networks) have elected to use Symbiota as their central data management system.
Symbiota's well-developed data ingestion tools, coupled with the ability to store import profile definitions, allow data snapshots to be partially coordinated with source data managed within a variety of remote systems, such as Specify (https://specifysoftware.org), EMu (https://emu.axiell.com), Integrated Publishing Toolkit (IPT, https://gbif.org/ipt) publishers, and other Symbiota instances. As with the Global Biodiversity Information Facility (GBIF) and Integrated Digitized Biocollections (iDigBio) publishing models, data snapshots are periodically refreshed based on transfer protocols compliant with Darwin Core (DwC) data exchange standards. The Symbiota data management tools provide the means for the community of experts running a portal to annotate and augment snapshot datasets, with the goal of improving the overall fitness-for-use of the aggregated dataset. Although a data refresh from the source dataset would otherwise replace those improvements with the original flawed data, the system's versioning of all annotations made within the portal allows data improvements to be reapplied after each refresh. However, inadequate support for bi-directional data flow between the portal and the source collection effectively isolates the annotations within the portal. On one hand, the mini-aggregator model of Symbiota can be viewed as compounding the fragmentation of occurrence data: rather than conforming to the vision of pushing data from the source to the global aggregators and ultimately the research community, specimen data are being pushed from source collections to a growing array of mini-aggregators. On the other hand, community portals can incentivize experts and enthusiasts to publish high-quality, "data-intelligent" biodiversity data products with the potential of channeling data improvements back to the source.
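The refresh-plus-reapply cycle described above can be sketched as follows. This is a minimal illustration, not Symbiota's actual code: the record structure, annotation log format, and function name are all hypothetical, with `occurrenceID` borrowed from Darwin Core as the record key.

```python
# Hypothetical sketch: rebuild a snapshot from fresh source data, then
# replay the portal's versioned annotations on top, so that local data
# improvements survive the refresh.

def refresh_snapshot(source_records, annotation_log):
    """Rebuild the snapshot from source data, then reapply annotations."""
    # Index fresh source data by occurrenceID (a Darwin Core term).
    refreshed = {rec["occurrenceID"]: dict(rec) for rec in source_records}

    # Replay annotations in version order; later edits win.
    for ann in sorted(annotation_log, key=lambda a: a["version"]):
        rec = refreshed.get(ann["occurrenceID"])
        if rec is not None:
            rec[ann["field"]] = ann["newValue"]
    return list(refreshed.values())

source = [{"occurrenceID": "A1", "country": "USA", "recordedBy": "unknown"}]
annotations = [{"occurrenceID": "A1", "version": 1,
                "field": "recordedBy", "newValue": "T. H. Nash"}]
snapshot = refresh_snapshot(source, annotations)
```

Because annotations are kept in a separate versioned log rather than merged into the records, the same improvements can be replayed after every refresh; pushing that log back to the source collection is exactly the bi-directional flow the abstract notes is missing.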
This presentation will begin with a historical review of the development of the Symbiota model, including major shifts in the evolution of its development goals. We will discuss the benefits and shortcomings of the data model and describe schema modifications currently in development. We will also discuss the successes and challenges associated with building data commons directly associated with communities of researchers. We will address the software's role in mobilizing occurrence data within North America and the efficacy of adhering to the FAIR principles of making datasets findable, accessible, interoperable, and reusable (Wilkinson et al. 2016). Finally, we will discuss interoperability developments that we hope will improve the flow of data annotations between decentralized networks of data portals and the original data providers at the source.


Micromachines ◽ 2020 ◽ Vol 11 (8) ◽ pp. 755
Author(s): Yen-Hung Chen ◽ Yen-An Chen ◽ Shu-Rong Huang

Hospitals are continuously working to reduce delayed analyses and specimen errors during transfers from testing stations to clinical laboratories. Radio-frequency identification (RFID) tags, which provide automated specimen labeling and tracking, have been proposed as a solution to specimen management that reduces human resource costs and analytic delays. Conventional RFID solutions, however, confront traffic jams and bottlenecks on the conveyor belts that connect testing stations with clinical laboratories. This mainly results from methods that assume the arrival rate of specimens at laboratory RFID readers is fixed and stable, which is unsuitable and impractical in the real world. Previous RFID algorithms have attempted to minimize the time required for tag identification without taking the dynamic arrival rates of specimens into account. We therefore propose a novel RFID anti-collision algorithm, the Mobility Aware Binary Tree Algorithm (MABT), to improve the identification of dynamic tags within the reader's coverage area and limited dwell time.
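For readers unfamiliar with the family of algorithms MABT builds on, the classic binary (query) tree anti-collision scheme works as follows: the reader broadcasts a bit prefix, tags whose IDs match the prefix respond, and a collision splits the query into two longer prefixes. The sketch below illustrates that textbook baseline only, not the MABT variant itself; tag IDs are short hypothetical bit strings.

```python
# Illustrative sketch of the textbook binary (query) tree anti-collision
# scheme: collisions split a query prefix into two longer prefixes until
# each tag answers alone.

def identify_tags(tag_ids):
    """Return the identified tag IDs and the number of reader queries."""
    identified, queries = [], 0
    stack = [""]                       # pending query prefixes
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:       # exactly one reply: tag identified
            identified.append(responders[0])
        elif len(responders) > 1:      # collision: split the prefix
            stack.append(prefix + "1")
            stack.append(prefix + "0")
        # zero responders: idle slot, nothing to do
    return identified, queries

found, n_queries = identify_tags(["0010", "0111", "1100"])
```

The abstract's point is that this baseline fixes its cost model around a static tag population; a mobility-aware variant must additionally cope with tags entering and leaving the reader field mid-inventory.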


2020 ◽ Vol 9 (3) ◽ pp. e000926
Author(s): Olivia Barratt ◽ Melanie Simms ◽ Miriam John ◽ Michael Lewis ◽ Phil Atkin

Histological, haematological and microbiological investigations are essential in the field of oral medicine and are a crucial adjunct to clinical findings, often being relied on to obtain a definitive diagnosis. Importantly, in some cases these investigations can help exclude or confirm the presence of malignancy. This project highlighted problems with the labelling and recording of specimens in an oral medicine department and a lack of clear specimen management processes. It aimed to improve specimen management by reducing reported incidents surrounding diagnostic tests. Quality improvement methods such as process mapping were key to understanding the journey of specimens and the departments involved at each stage of the system. Initiatives included a recording log book, staff training, information signage around the clinic and delegation of responsibilities, all of which were implemented over multiple plan, do, study, act (PDSA) cycles. The project was extremely successful, and since implementation there has been a clear and sustained reduction in reported incidents. The small number of incidents that did occur all involved transportation of specimens; none involved labelling or recording. One can conclude that the change in test management systems, in terms of recording and labelling of specimens in the department, has been sustained. Ongoing engagement with stakeholders and senior leaders is the priority to ensure further reduction in incidents in the future and that the improvements are maintained. This project demonstrates how simple, realistic, cost-effective quality improvement initiatives can have a significant positive impact on patient care and hospital management systems.


Author(s): Edward Gilbert ◽ Corinna Gries ◽ Nico Franz ◽ Landrum Leslie R. ◽ Thomas H. Nash III

The SEINet Portal Network has a complex social and development history spanning nearly two decades. Initially established as a basic online search engine for a select handful of biological collections curated within the southwestern United States, SEINet has since matured into a biodiversity data network incorporating more than 330 institutions and 1,900 individual data contributors. Participating institutions manage and publish over 14 million specimen records, 215,000 observations, and 8 million images. Approximately 70% of the collections use the data portal as their primary "live" specimen management platform. The SEINet interface now supports 13 regional data portals distributed across the United States and northern Mexico (http://symbiota.org/docs/seinet/). Through many collaborative efforts, it has matured into a tool for biodiversity data exploration that includes species inventories, interactive identification keys, specimen and field images, taxonomic information, species distribution maps, and taxonomic descriptions. SEINet's initial development goal was to construct a read-only interface that integrated specimen records harvested from a handful of distributed natural history databases. Intermittent network connectivity and inconsistent data exchange protocols frequently restricted data persistence. National funding opportunities supported a complete redesign toward a centralized data cache model with periodic "snapshot" updates from the original data sources. A service-based management infrastructure was integrated into the interface to mobilize small- to medium-sized collections (<1 million specimen records) that commonly lack the consistent infrastructure and technical expertise to maintain a standards-compliant specimen database. These developments were the precursors to the Symbiota software project (Gries et al. 2014).
Through further development of Symbiota, SEINet was transformed into a robust specimen management system geared specifically toward specimen digitization, with features including data entry from label images, harvesting data from specimen duplicates, batch georeferencing, data validation and cleaning, progress reporting, and additional tools to improve the efficiency of the digitization process. The central development paradigm focused on data mobilization through the production of: a versatile import module capable of ingesting a diverse range of data structures; a robust toolkit to assist in digitizing and managing specimen data and images; and a Darwin Core Archive (DwC-A) compliant data publishing and export toolkit to facilitate data distribution to global aggregators such as the Global Biodiversity Information Facility (GBIF) and iDigBio. User interfaces consist of a decentralized network of regional data portals, all connecting to a centralized shared data source. Each of the 13 data portals is configured to present a regional perspective specifically tailored to the needs of the local research community. This infrastructure has supported the formation of regional consortia, which provide network support to aid local institutions in digitizing and publishing their collections within the network. The community-based infrastructure creates a sense of ownership – perhaps even good-natured competition – among the data providers and provides extra incentive to improve data quality and expand the network. Certain areas of development remain challenging in spite of the project's overall success.
For instance, data managers continuously struggle to maintain a current local taxonomic thesaurus used for name validation, data cleaning, and resolving the taxonomic discrepancies commonly encountered when integrating collection datasets. We will discuss the successes and challenges associated with the long-term sustainability model and explore potential future paths for SEINet that support the long-term goal of maintaining a data provider in full compliance with the FAIR principles of making datasets findable, accessible, interoperable, and reusable (Wilkinson et al. 2016).
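The thesaurus-based name validation described above can be sketched as follows. This is a minimal illustration under assumed data structures: the thesaurus entries, the flat synonym-to-accepted-name mapping, and the function names are all hypothetical, not SEINet's actual implementation.

```python
# Hypothetical sketch: resolve incoming determinations against a local
# taxonomic thesaurus, mapping known synonyms to their accepted names and
# flagging unmatched names for manual review.

THESAURUS = {
    # normalized name -> accepted name (example entries, not SEINet data)
    "ambrosia deltoidea": "Ambrosia deltoidea",
    "franseria deltoidea": "Ambrosia deltoidea",   # synonym
    "larrea tridentata": "Larrea tridentata",
}

def normalize(name):
    """Lowercase and collapse whitespace so trivial variants still match."""
    return " ".join(name.strip().lower().split())

def resolve(name):
    """Return (accepted_name, status) for a submitted determination."""
    accepted = THESAURUS.get(normalize(name))
    if accepted is None:
        return name, "unresolved"      # flag for manual review
    status = "accepted" if normalize(accepted) == normalize(name) else "synonym"
    return accepted, status

result = resolve("Franseria  deltoidea")   # resolves a synonym
```

The hard part the abstract alludes to is not this lookup but keeping the thesaurus itself current as taxonomic concepts change, which is why the "unresolved" path back to a human reviewer matters.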

