A three-dimensional approach to visualize pairwise morphological variation and its application to fragmentary palaeontological specimens

PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e10545
Author(s):  
Matt A. White ◽  
Nicolás E. Campione

Classifying isolated vertebrate bones to a high level of taxonomic precision can be difficult. Many of Australia’s Cretaceous terrestrial vertebrate fossil-bearing deposits, for example, produce large numbers of isolated bones and very few associated or articulated skeletons. Identifying these often fragmentary remains beyond high-level taxonomic ranks, such as Ornithopoda or Theropoda, is difficult, and those classified to lower taxonomic levels are often debated. The ever-increasing accessibility of 3D-based comparative techniques has allowed palaeontologists to undertake a variety of shape analyses, such as geometric morphometrics, that, although powerful and often ideal, require the recognition of diagnostic landmarks and the generation of sufficiently large data sets to detect clusters and accurately describe major components of morphological variation. As a result, such approaches are often outside the scope of basic palaeontological research that aims simply to identify fragmentary specimens. Herein we present a workflow in which pairwise comparisons between fragmentary fossils and better-known exemplars are digitally achieved through three-dimensional mapping of their surface profiles and the iterative closest point (ICP) algorithm. To showcase this methodology, we compared a fragmentary theropod ungual (NMV P186153) from Victoria, Australia, identified as a neovenatorid, with the manual unguals of the megaraptoran Australovenator wintonensis (AODF604). We discovered that NMV P186153 is a near-identical match to AODF604 manual ungual II-3, differing only in size, which, given their 10–15 Ma age difference, suggests stasis in megaraptoran ungual morphology throughout this interval. Although useful, our approach is not free of subjectivity; care must be taken to eliminate the effects of broken and incomplete surfaces and to identify the human errors incurred during scaling, such as through replication. Nevertheless, this approach will help to evaluate and identify fragmentary remains, adding a quantitative perspective to an otherwise qualitative endeavour.
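As a minimal sketch of the pairwise comparison at the heart of this workflow, the fragment below rigidly aligns one scanned point cloud to another with a basic ICP loop (closest-point matching plus the Kabsch SVD solution) and reports a simple mismatch score. It assumes NumPy/SciPy and point clouds already sampled from the surface meshes; it illustrates the general technique, not the authors' published implementation, and size differences between specimens would be handled by rescaling each cloud (e.g., to unit centroid size) before alignment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iters=50, tol=1e-6):
    """Rigidly align `source` (N,3) to `target` (M,3); return the aligned
    points and the mean closest-point distance (a simple mismatch score)."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    err = np.inf
    for _ in range(n_iters):
        dists, idx = tree.query(src)            # closest-point correspondences
        matched = target[idx]
        # Best-fit rotation/translation via the Kabsch (SVD) solution
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = (src - mu_s) @ R.T + mu_t
        err = dists.mean()
        if abs(prev_err - err) < tol:           # converged
            break
        prev_err = err
    return src, err
```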

Some steps are taken towards a parametric statistical model for the velocity and velocity-derivative fields in stationary turbulence, building on the background of existing theoretical and empirical knowledge of such fields. While the ultimate goal is a model for the three-dimensional velocity components, and hence for the corresponding velocity derivatives, we concentrate here on the streamwise velocity component. Discrete- and continuous-time stochastic processes of the first-order autoregressive type, with one-dimensional marginals having log-linear tails, are constructed and compared with two large data sets. It turns out that a first-order autoregression that fits the local correlation structure well is not capable of describing the correlations over longer ranges. A good fit locally as well as at longer ranges is achieved by using a process that is the sum of two independent autoregressions. We study this type of model in some detail. We also consider a model derived from the above-mentioned autoregressions, with a dependence structure on the borderline of long-range dependence. This model is obtained by means of a general method for constructing processes with long-range dependence. Some suggestions for future empirical and theoretical work are given.
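As an illustration of the construction described above, the following sketch simulates the sum of two independent discrete-time AR(1) processes, one component tuned to the local correlation and one to the longer ranges, and checks the empirical autocorrelation at several lags. The parameter values are illustrative only and are not fitted to any turbulence data set.

```python
import numpy as np

def ar1(n, phi, sigma, rng):
    """Simulate a stationary AR(1) process x_t = phi * x_{t-1} + eps_t."""
    x = np.empty(n)
    x[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))  # stationary start
    eps = rng.normal(0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

rng = np.random.default_rng(0)
n = 100_000
# Fast component captures local correlation; slow component the long ranges.
u = ar1(n, phi=0.7, sigma=1.0, rng=rng) + ar1(n, phi=0.99, sigma=0.3, rng=rng)

# Empirical autocorrelation of the summed process at a few lags
lags = [1, 10, 100, 1000]
acf = [np.corrcoef(u[:-k], u[k:])[0, 1] for k in lags]
print(dict(zip(lags, np.round(acf, 3))))
```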


2019 ◽  
Vol 8 (3) ◽  
pp. 103
Author(s):  
Zhigang Han ◽  
Fen Qin ◽  
Caihui Cui ◽  
Yannan Liu ◽  
Lingling Wang ◽  
...  

Soil erosion models are used to evaluate soil erosion conditions and to guide agricultural production. Recently, high-spatial-resolution data have been collected in new ways, such as three-dimensional laser scanning, providing the foundation for refined soil erosion modelling. However, serial computing cannot fully meet the computational requirements of such massive data sets. Therefore, it is necessary to perform soil erosion modelling under a parallel computing framework. This paper focuses on a parallel computing framework for soil erosion modelling based on the Hadoop platform. The framework includes three layers: the methodology, algorithm, and application layers. In the methodology layer, two types of parallel data-splitting strategies are defined: row-oriented and sub-basin-oriented methods. The algorithms for six parallel calculation operators covering local, focal and zonal computing tasks are designed in detail. These operators can be called to calculate the model factors and perform model calculations. We defined a key-value data structure in GeoCSV format for vector data and for row-based and cell-based rasters as the input for the algorithms. A geoprocessing toolbox is developed and integrated with the geographic information system (GIS) platform in the application layer. The performance of the framework is examined by taking the Gushanchuan basin as an example. The results show that the framework can perform calculations involving large data sets with high computational efficiency and GIS integration. This approach is easy to extend and use, and it provides essential support for applying high-precision data to refined soil erosion modelling.
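The local operators lend themselves to a simple key-value formulation. The sketch below mimics the row-oriented splitting strategy outside Hadoop: each record is a (row index, cell values) pair, and a local operator applies a per-cell factor computation independently to each row, so rows can be mapped in parallel. The record layout and the placeholder factor formula are illustrative, not the paper's GeoCSV specification.

```python
from multiprocessing import Pool

def k_factor(cell):
    """Per-cell model-factor computation (placeholder formula)."""
    return None if cell is None else 0.1317 * cell  # e.g. a unit conversion

def map_row(record):
    """Local operator: each output cell depends only on the same input cell,
    so every (row_id, cells) record can be processed independently."""
    row_id, cells = record
    return row_id, [k_factor(c) for c in cells]

if __name__ == "__main__":
    # Two toy raster rows; None marks a NoData cell.
    raster_rows = [(0, [1.0, 2.0, None]), (1, [0.5, 0.7, 0.9])]
    with Pool() as pool:
        result = dict(pool.map(map_row, raster_rows))
    print(result)
```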


Author(s):  
Gina Brander ◽  
Colleen Pawliuk

Program objective: To advance the methodology and improve the data management of the scoping review through the integration of two health librarians onto the clinical research team. Participants and setting: Two librarians were embedded on a multidisciplinary, geographically dispersed pediatric palliative and end-of-life research team conducting a scoping review headquartered at the British Columbia Children’s Hospital Research Institute. Program: The team’s embedded librarians guided and facilitated all stages of a scoping review of 180 conditions and 10 symptoms. Outcomes: The scoping review was enhanced in quality and efficiency through the integration of librarians onto the team. Conclusions: Health librarians embedded on clinical research teams can help guide and facilitate the scoping review process to improve workflow management and overall methodology. Librarians are particularly well equipped to solve challenges arising from large data sets, broad research questions with a high level of specificity, and geographically dispersed team members. Knowledge of emerging and established citation-screening and bibliographic software and review tools can help librarians to address these challenges and provide efficient workflow management.


Author(s):  
James N. Turner ◽  
Donald H. Szarowski ◽  
Karen L. Smith ◽  
John W. Swann

Thick slices of brain tissue are studied in vitro because neurons deep in the slice maintain physiologic contact with large numbers of other neurons and are thought to function in a manner similar to that of intact brain. The three-dimensional (3-D) morphology and electrophysiology of these cells can be studied and correlated. The confocal light microscope, with its z-direction discrimination, forms optical sections through the entire thickness of the slice, and stereo pairs or full 3-D reconstructions can be displayed using the optical sections as data sets. Individual neurons injected with fluorescent dyes or peroxidase-based stains are imaged in either fluorescence or reflection mode.
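As a small illustration of how a set of optical sections can serve as a 3-D data set, the sketch below builds a maximum-intensity projection and a crude stereo pair from a z-stack by shearing successive slices before projecting. The synthetic array stands in for real confocal data, and the shear-based projection is a simplification of a proper 3-D reconstruction.

```python
import numpy as np

stack = np.random.rand(64, 256, 256)   # (z, y, x) stand-in for optical sections

# Maximum-intensity projection along the optical (z) axis
mip = stack.max(axis=0)

def sheared_projection(stack, shift_per_slice):
    """Project after shifting each slice in x, approximating a tilted view."""
    z, y, x = stack.shape
    out = np.zeros((y, x))
    for k in range(z):
        s = int(round(k * shift_per_slice))
        shifted = np.roll(stack[k], s, axis=1)  # shear proportional to depth
        out = np.maximum(out, shifted)
    return out

# Opposite shears give the left/right views of a stereo pair
left = sheared_projection(stack, -0.15)
right = sheared_projection(stack, 0.15)
```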


2001 ◽  
Vol 34 (1) ◽  
pp. 76-79 ◽  
Author(s):  
Lynn Ribaud ◽  
Guang Wu ◽  
Yuegang Zhang ◽  
Philip Coppens

As the combination of high-intensity synchrotron sources and area detectors allows collection of large data sets in a much shorter time span than previously possible, the use of open helium gas-flow systems is much facilitated. A flow system installed at the SUNY X3 synchrotron beamline at the National Synchrotron Light Source has been used for collection of a number of large data sets at a temperature of ∼16 K. Instability problems encountered when using a helium cryostat for three-dimensional data collection are eliminated. Details of the equipment, its temperature calibration and a typical result are described.


2016 ◽  
Vol 44 (2) ◽  
pp. 227-255 ◽  
Author(s):  
Stephen Evans ◽  
Rob Liddiard ◽  
Philip Steadman

This article describes the development of a new three-dimensional model of the British building stock, called ‘3DStock’. The model differs from other 3D urban and stock models in that it represents explicitly and in detail the spatial relationships between ‘premises’ and ‘buildings’. It also represents the pattern of activities on different floors within buildings. The geometrical/geographical structure of the model is assembled automatically from two existing national data sets. Additional data from other sources, including figures for electricity and gas consumption, are then attached. Some sample results are given for energy use intensities. The first purpose of the model is the analysis of energy use in the building stock. With actual energy data for very large numbers of premises, it is possible to take a completely new type of statistical approach, in which consumption can be related to a range of characteristics including activity, built form, construction and materials. Models have been built to date of the London Borough of Camden and the cities of Leicester, Tamworth and Swindon. Work is in progress to extend the modelling to other parts of Britain; because of the coverage of the data, however, this will be limited to England and Wales.
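As a sketch of the kind of statistical analysis such a model enables, the fragment below attaches metered energy to premises records and summarizes energy use intensity by activity type. The file and column names are hypothetical, not those of the 3DStock data sets.

```python
import pandas as pd

# Hypothetical extracts: premises with their building links and floor areas,
# plus metered consumption per premise.
premises = pd.read_csv("premises.csv")  # premise_id, building_id, activity, floor_area_m2
energy = pd.read_csv("energy.csv")      # premise_id, elec_kwh, gas_kwh

df = premises.merge(energy, on="premise_id", how="inner")
df["eui_kwh_per_m2"] = (df.elec_kwh + df.gas_kwh) / df.floor_area_m2

# Energy use intensity summarized by activity type
print(df.groupby("activity")["eui_kwh_per_m2"].describe())
```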


2020 ◽  
Vol 1 (1) ◽  
pp. 31-40
Author(s):  
Hina Afzal ◽  
Arisha Kamran ◽  
Asifa Noreen

Because of the rapid changes happening in technology, today's market requires a high level of interaction between educators and the fresh graduates entering it. The demand for IT-related jobs in the market is higher than in all other fields. In this paper, we discuss a survival analysis of the two programming languages Python and R in this market. Data sets are growing large, and traditional methods are no longer capable of handling them; we therefore applied recent data mining techniques through the Python and R programming languages. It took several months of effort to gather this amount of data and to process it with these techniques, but the results showed that both languages have had the same rate of growth over the past years.
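The abstract does not specify the exact survival model used; as one plausible sketch, the fragment below fits Kaplan-Meier survival curves to job-posting lifetimes for each language with the lifelines package. The data file and its columns are invented for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: one row per job posting, with how long it stayed open
# and whether it was eventually filled (the observed event).
jobs = pd.read_csv("job_postings.csv")  # columns: language, days_open, filled

kmf = KaplanMeierFitter()
for lang in ("Python", "R"):
    sub = jobs[jobs.language == lang]
    kmf.fit(sub.days_open, event_observed=sub.filled, label=lang)
    print(lang, "median days open:", kmf.median_survival_time_)
```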


2021 ◽  
Author(s):  
Stephen Taylor

Molecular biology experiments are generating an unprecedented amount of information from a variety of different experimental modalities. DNA sequencing machines, proteomics mass cytometry and microscopes generate huge amounts of data every day. Not only are the data large, but they are also multidimensional. Understanding trends and getting actionable insights from these data requires techniques that allow comprehension at a high level but also insight into what underlies those trends. Many small errors or poor summarization can lead to false results and reproducibility issues in large data sets. Hence it is essential that we do not cherry-pick results to suit a hypothesis, but instead examine all the data and publish accurate insights in a data-driven way. This article gives an overview of some of the problems researchers face when using visualization methods to understand epigenetic changes (which relate to changes in the physical structure of DNA) from raw analysis results. We also discuss the new challenges posed by machine learning, which visualization can help address.
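As a small example of the "examine all the data" principle, the sketch below plots the full distribution of per-site effect sizes with the chosen cutoff overlaid, rather than reporting only the sites that pass it. The input array is synthetic, standing in for a real analysis result.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
diffs = rng.normal(0, 0.05, 50_000)  # per-site methylation change (mostly noise)
diffs[:200] += 0.3                   # a small truly-changed subset

plt.hist(diffs, bins=200)
plt.axvline(0.2, linestyle="--", label="effect-size cutoff")
plt.xlabel("methylation difference")
plt.ylabel("number of sites")
plt.legend()
plt.show()
```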


2013 ◽  
Vol 6 (4) ◽  
pp. 1261-1273 ◽  
Author(s):  
T. Heus ◽  
A. Seifert

Abstract. This paper presents a method for feature tracking of fields of shallow cumulus convection in large eddy simulations (LES) by connecting the projected cloud cover in space and time, and by accounting for splitting and merging of cloud objects. Existing methods tend to be either imprecise or, when using the full three-dimensional (3-D) spatial field, prohibitively expensive for large data sets. Compared to those 3-D methods, the current method reduces the memory footprint by up to a factor of 100, while retaining most of the precision by correcting for splitting and merging events between different clouds. The precision of the algorithm is further enhanced by taking the vertical extent of the cloud into account. Furthermore, rain and subcloud thermals are also tracked, and links between clouds, their rain, and their subcloud thermals are made. The method compares well with results from the literature. Resolution and domain dependencies are also discussed. For the current simulations, the cloud size distribution converges for clouds larger than an effective resolution of 6 times the horizontal grid spacing, and smaller than about 20% of the horizontal domain size.
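A minimal sketch of the projected-cover tracking idea: label cloudy pixels in each 2-D snapshot, then link labels between consecutive snapshots through pixel overlap, so that splits and merges fall out of the link structure. This illustrates the principle only, not the full algorithm (which also corrects for split/merge events and uses the clouds' vertical extent).

```python
import numpy as np
from scipy import ndimage

def track(mask_t0, mask_t1):
    """Link cloud objects between two boolean cloud-cover masks.

    Returns {label_t0: set of overlapping label_t1 values}; more than one
    target label marks a split, and one target label shared by several
    source labels marks a merge."""
    lab0, _ = ndimage.label(mask_t0)   # connected components at time t0
    lab1, _ = ndimage.label(mask_t1)   # connected components at time t1
    links = {}
    overlap = (lab0 > 0) & (lab1 > 0)  # pixels cloudy in both snapshots
    for a, b in zip(lab0[overlap], lab1[overlap]):
        links.setdefault(int(a), set()).add(int(b))
    return links
```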


Geophysics ◽  
1990 ◽  
Vol 55 (10) ◽  
pp. 1321-1326 ◽  
Author(s):  
X. Wang ◽  
R. O. Hansen

Two-dimensional (profile) inversion techniques for magnetic anomalies are widely used in exploration geophysics; but, until now, the three-dimensional (3-D) methods available have been restricted in their geologic applicability, dependent upon good initial values, or limited by the capabilities of existing computers. We have developed a fully 3-D inversion algorithm intended for routine application to large data sets. The algorithm, based on a Fourier transform expression for the magnetic field of homogeneous polyhedral bodies (Hansen and Wang, 1988), is a 3-D generalization of CompuDepth (O’Brien, 1972). Like CompuDepth, the new inversion algorithm employs the spatial equivalent of frequency-domain autoregression to determine a series of coefficients from which the depths and locations of polyhedral vertices are calculated by solving complex polynomials. These vertices are used to build a 3-D geologic model. Application to the Medicine Lake Volcano aeromagnetic anomaly resulted in a geologically reasonable model of the source.
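As a loose, single-source illustration of the autoregression-and-polynomial-roots principle (not the published 3-D algorithm), the toy example below fits a one-term autoregression to Fourier coefficients that decay as e^(-kz) with source depth z, and recovers the depth from the single root of the associated polynomial. Everything here is synthetic.

```python
import numpy as np

dz = 2.5                                  # true source depth (synthetic)
k = np.arange(1, 21, dtype=float)         # wavenumbers, spacing dk = 1
coeffs = np.exp(-k * dz)                  # spectrum decays as e^(-k*z)

# One-term autoregression: coeffs[n+1] ~= c * coeffs[n]; c is the root of
# the (degree-one) polynomial z - c = 0.
c = np.linalg.lstsq(coeffs[:-1, None], coeffs[1:], rcond=None)[0][0]

depth = -np.log(c) / (k[1] - k[0])        # invert c = e^(-dk*z)
print(depth)                              # ~= 2.5
```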

