Natural Selection: Finding Specimens in a Natural History Collection

Author(s): Marieke van, Antal van den Bosch, Steve Hunt, Marian van der Meij, Rene Dekker, ...

Author(s): Leonor Venceslau, Luis Lopes

Major efforts are being made to digitize natural history collections so that these data become available online for retrieval and analysis (Beaman and Cellinese 2012). Georeferencing, an important part of the digitization process, consists of obtaining geographic coordinates from a locality description. For many natural history specimens, the coordinates of the sampling location were not recorded; instead, the record contains a description of the site. Inaccurate georeferencing of sampling locations degrades data quality and the accuracy of any geographic analysis based on those data. In addition to latitude and longitude, it is important to define a degree of uncertainty for the coordinates, since in most cases it is impossible to pinpoint the exact location retrospectively. This is usually done by defining an uncertainty value represented as a radius around the center of the locality where the sampling took place. Georeferencing is a time-consuming process requiring manual validation; as such, a significant share of the natural history collection data available online is not georeferenced. Of the 161 million records of preserved specimens currently available in the Global Biodiversity Information Facility (GBIF), only 86 million (53.4%) include coordinates. It is therefore important to develop and optimize automatic tools that allow fast and accurate georeferencing. The objective of this work was to test existing automatic georeferencing services and evaluate their potential to accelerate georeferencing of large collection datasets. To this end, several open-source georeferencing services are currently available that provide an application programming interface (API) for batch georeferencing. We evaluated five services: Google Maps, MapQuest, GeoNames, OpenStreetMap, and GEOLocate.
A test dataset of 100 records (reference dataset), which had previously been individually georeferenced following Chapman and Wieczorek 2006, was randomly selected from the insect collection catalogue of the Museu Nacional de História Natural e da Ciência, Universidade de Lisboa (Lopes et al. 2016). An R (R Core Team 2018) script was used to georeference these records using the five services. In cases where multiple results were returned, only the first was considered and compared with the manually obtained coordinates of the reference dataset. Two factors were considered in evaluating accuracy: the total number of results obtained, and the distance to the original location in the reference dataset. Of the five services tested, Google Maps yielded the most results (99) and was the most accurate, with 57 results < 1000 m from the reference location and 79 within the uncertainty radius. GEOLocate provided results for 87 locations, of which 47 were within 1000 m of the correct location and 57 were within the uncertainty radius. The other three services each returned fewer than 35 results within 1000 m of the reference location and fewer than 50 results within the uncertainty radius. Google Maps and OpenStreetMap had the lowest average distance from the reference location, both around 5500 m. Google Maps has a usage limit of around 40000 free georeferencing requests per month, beyond which the service is paid, while GEOLocate is free with no usage limit. For large collections, this may be a factor to take into account. In the future, we hope to optimize these methods and test them with larger datasets.
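The batch workflow described above (query a service with a locality string, take the first hit, and measure its distance to the manually georeferenced reference point) can be sketched in Python against one of the five tested services, OpenStreetMap's public Nominatim API. This is an illustrative sketch, not the authors' actual R script: the function names and the example locality are assumptions, and only the Nominatim search endpoint with `format=json` is used.

```python
import json
import math
import urllib.parse
import urllib.request

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def georeference_nominatim(locality):
    """Return (lat, lon) of the first Nominatim hit for a locality
    description, or None if the service returns no result."""
    url = ("https://nominatim.openstreetmap.org/search?format=json&limit=1&q="
           + urllib.parse.quote(locality))
    # Nominatim's usage policy requires an identifying User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "georef-eval-sketch"})
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)
    if not hits:
        return None
    return float(hits[0]["lat"]), float(hits[0]["lon"])
```

A record would then be scored by comparing the returned point with the reference coordinates, e.g. `haversine_km(ref_lat, ref_lon, *georeference_nominatim("Sintra, Portugal"))`, and counted as accurate if the distance falls within the record's uncertainty radius.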


2019, Vol 374 (1777), pp. 20180248
Author(s): Sangeet Lamichhaney, Daren C. Card, Phil Grayson, João F. R. Tonini, Gustavo A. Bravo, ...

Evolutionary convergence has long been considered primary evidence of adaptation driven by natural selection, and it provides opportunities to explore evolutionary repeatability and predictability. In recent years, there has been increased interest in exploring the genetic mechanisms underlying convergent evolution, in part owing to the advent of genomic techniques. However, the current ‘genomics gold rush’ in studies of convergence has overshadowed the reality that most trait classifications are quite broadly defined, resulting in incomplete or potentially biased interpretations of results. Genomic studies of convergence would be greatly improved by integrating deep ‘vertical’ natural history knowledge with ‘horizontal’ knowledge focusing on the breadth of taxonomic diversity. Natural history collections have been, and continue to be, best positioned for increasing our comprehensive understanding of phenotypic diversity, with modern practices of digitization and databasing of morphological traits providing exciting improvements in our ability to evaluate the degree of morphological convergence. Combining more detailed phenotypic data with the well-established field of genomics will enable scientists to make progress on an important goal in biology: to understand the degree to which genetic or molecular convergence is associated with phenotypic convergence. Although comparative biology and comparative genomics can each separately reveal important insights into convergent evolution, here we suggest that the synergistic and complementary roles of natural history collection-derived phenomic data and comparative genomics methods can be particularly powerful in together elucidating the genomic basis of convergent evolution among higher taxa. This article is part of the theme issue ‘Convergent evolution in the genomics era: new insights and directions’.


2016, Vol 283 (1831), pp. 20160499
Author(s): Rebecca H. Chisholm, Mark M. Tanaka

Mycobacterium tuberculosis has an unusual natural history in that the vast majority of its human hosts enter a latent state that is both non-infectious and devoid of any symptoms of disease. From the pathogen perspective, it seems counterproductive to relinquish reproductive opportunities to achieve a détente with the host immune response. However, a small fraction of latent infections reactivate to the disease state. Thus, latency has been argued to provide a safe harbour for future infections which optimizes the persistence of M. tuberculosis in human populations. Yet, if a pathogen begins interactions with humans as an active disease without latency, how could it begin to evolve latency properties without incurring an immediate reproductive disadvantage? We address this question with a mathematical model. Results suggest that the emergence of tuberculosis latency may have been enabled by a mechanism akin to cryptic genetic variation in that detrimental latency properties were hidden from natural selection until their expression became evolutionarily favoured.


Nuncius, 2016, Vol 31 (2), pp. 439-483
Author(s): Elena Canadelli

The historical catalogs of museum collections contain a wealth of information for historians seeking to reconstruct their contents, how they were displayed, and the ways in which they were used. This paper presents the complete transcription of a draft catalog that was prepared in 1797 for the Museum of Natural History and Antiquities of the University of Padua. Conserved in the university’s Museum of Geology and Paleontology, the catalog was the first to be compiled of the museum, which was established in 1733 thanks to the donation by Antonio Vallisneri Jr. of his father Antonio Vallisneri Sr.’s collection of antiquities and natural history. The catalog was compiled by the custodian of the museum, the herbalist and amateur naturalist Bartolomeo Fabris. It is of great interest because it provides a record of the number and nature of the pieces conserved in the museum at a time when natural history and archeology collections were still undivided. It also provides indications as to how such collections were arranged for display in the public halls of a university at the end of the eighteenth century. Based on this catalog, with additional information drawn from other manuscript and published sources and from museum catalogs of the 1830s conserved in various institutes at the University of Padua, it is possible to reconstruct the contents and layout of a significant late eighteenth-century natural history collection.


2015, Vol 29 (1-2), pp. 61-66
Author(s): Amanda N. Lawrence, Jennifer Strotman

Abstract A case study involving a comprehensive inspection to discriminate between old and active pest infestations is described. Integrated pest management (IPM) processes within the National Museum of Natural History (NMNH), Smithsonian Institution, Division of Mammals (DOM) are challenging because of the size and composition of the collection, the age of the storage equipment, and a low staff-to-specimen ratio. Each specimen cabinet was inspected by IPM technicians during a 6-week period in late 2012. Following that inspection, two members of the NMNH collections program technician team began a 9-week project to clean 5,925 incidents in the affected cabinets in DOM storage areas in the Natural History Building downtown. The results of this project show that cleaning up a pest infestation in any natural history collection can be done in a reasonable amount of time and will help ensure the preservation of collections in the future. Knowing that the collections have been fully inspected and cleaned will allow DOM staff to address future IPM issues easily, rapidly, and in a structured way. Such efforts facilitate future IPM inspections because evidence of any new pest activity is no longer at risk of being overlooked amid debris from past infestations.

