Processing of Eye/Head-Tracking Data in Large-Scale Naturalistic Driving Data Sets

2012 ◽  
Vol 13 (2) ◽  
pp. 553-564 ◽  
Author(s):  
Christer Ahlstrom ◽  
Trent Victor ◽  
Claudia Wege ◽  
Erik Steinmetz


Author(s):  
Aaron Dean ◽  
Pasi Lautala ◽  
David Nelson

Highway-rail grade crossing (crossing) collisions and fatalities have been in decline, but a recent ‘plateau’ has caused the Federal Railroad Administration (FRA) to concentrate on further reducing casualties. The Michigan Tech Rail Transportation Program has been selected to perform a large-scale study that will utilize the SHRP2 Naturalistic Driving Study (NDS) data to analyze how various crossing warning devices affect driver behavior and whether there are clear differences between the effectiveness of the warning devices. The main results of this study are the development of a coding scheme for a visual narrative, used to validate machine vision head tracking data, and an improved baseline for the head tracking data using a bivariate probability density. Head tracking data from the NDS, and their correlation with the coded narratives, are vital for analyzing driver behavior as drivers traverse crossings. This paper also presents preliminary results from the comparative analysis of the head tracking data for an initial test sample. Future work will extend the analysis to a larger data set and confirm that the head tracking data are a viable tool for the ongoing behavior analysis work. Based on preliminary results from the first data set, it is expected that future samples will show a significant positive correlation and that the machine vision head tracking will prove consistent enough for use in the large-scale behavioral study.
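To illustrate the bivariate-density baseline idea, here is a minimal sketch assuming the machine vision output reduces to per-frame (yaw, pitch) head-pose angles. The variable names, distributions, and SciPy-based estimator are illustrative assumptions, not the study's implementation.

```python
# Minimal sketch: a bivariate kernel-density baseline for head-tracking
# data, assuming head pose reduces to per-frame (yaw, pitch) angles in
# degrees. All values below are synthetic stand-ins.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Stand-in for machine-vision head-pose samples on a crossing approach.
yaw = rng.normal(loc=0.0, scale=15.0, size=2000)    # left/right scanning
pitch = rng.normal(loc=-5.0, scale=5.0, size=2000)  # slight downward gaze

# Fit a bivariate probability density over (yaw, pitch).
baseline = gaussian_kde(np.vstack([yaw, pitch]))

# Score a new observation against the baseline: low density flags head
# poses that deviate from typical scanning behaviour.
density = baseline(np.array([[45.0], [0.0]]))
print(f"baseline density at (45, 0) degrees: {density[0]:.2e}")
```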


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to interact gravitationally. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each was annotated using a different method. Both data sets show a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence yields a dipole axis with significance of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$, well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
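The cosine-dependence fit can be sketched as follows: for each candidate axis on a sky grid, the spin asymmetry is regressed on the cosine of the angular distance between each galaxy and the axis, and the axis maximizing the fitted amplitude relative to its uncertainty is reported. The data, grid resolution, and unit-variance error model below are synthetic assumptions, not the paper's pipeline.

```python
# Hedged sketch of a dipole-axis fit over a grid of candidate axes.
import numpy as np

def angular_cos(ra, dec, ra0, dec0):
    """Cosine of the angle between sky positions (radians)."""
    return (np.sin(dec) * np.sin(dec0)
            + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0))

rng = np.random.default_rng(1)
ra = rng.uniform(0, 2 * np.pi, 5000)
dec = np.arcsin(rng.uniform(-1, 1, 5000))
spin = rng.choice([-1.0, 1.0], size=5000)  # +1 clockwise, -1 counterclockwise

best = (-np.inf, None)
for ra0 in np.linspace(0, 2 * np.pi, 72, endpoint=False):
    for dec0 in np.linspace(-np.pi / 2, np.pi / 2, 37):
        x = angular_cos(ra, dec, ra0, dec0)
        # Least-squares amplitude of spin ~ A * cos(theta); a large |A|
        # relative to its uncertainty indicates a stronger dipole.
        A = np.dot(spin, x) / np.dot(x, x)
        sigma = 1.0 / np.sqrt(np.dot(x, x))  # unit-variance noise assumed
        score = abs(A) / sigma
        if score > best[0]:
            best = (score, (np.degrees(ra0), np.degrees(dec0)))

print(f"most likely dipole axis: {best[1]}, significance ~{best[0]:.1f} sigma")
```

With random spins, the reported significance should stay near the noise floor; a genuine dipole in the data would push the score at the true axis well above it.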


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and the data decompression time was sped up by 2× compared to using a single compression method uniformly.
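A toy sketch of the importance-driven, per-region compression idea follows: each block of a field is scored by a stand-in importance metric (mean gradient magnitude) and assigned a compression setting accordingly. The metric, the median threshold, and the zlib levels are illustrative assumptions, not the paper's actual metrics or compressors.

```python
# Illustrative sketch: score blocks of a field by a stand-in importance
# metric and compress each block with a level chosen from its score.
import zlib
import numpy as np

def importance(region: np.ndarray) -> float:
    # Stand-in metric: mean gradient magnitude within the region.
    gy, gx = np.gradient(region)
    return float(np.mean(np.hypot(gx, gy)))

x = np.linspace(0.0, 8.0, 256, dtype=np.float32)
field = np.add.outer(np.sin(x), np.cos(x)) + 0.01 * np.outer(x, x)
blocks = [field[i:i + 64, j:j + 64]
          for i in range(0, 256, 64) for j in range(0, 256, 64)]

scores = [importance(b) for b in blocks]
cutoff = float(np.median(scores))

compressed = []
for block, score in zip(blocks, scores):
    # Important regions get fast, light compression so they reach the
    # co-processing pipeline quickly; the rest get aggressive settings.
    level = 1 if score > cutoff else 9
    compressed.append(zlib.compress(block.tobytes(), level))

ratio = sum(map(len, compressed)) / field.nbytes
print(f"overall compressed-to-raw ratio: {ratio:.2f}")
```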


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Moritz Mercker ◽  
Philipp Schwemmer ◽  
Verena Peschko ◽  
Leonie Enners ◽  
Stefan Garthe

Abstract Background New wildlife telemetry and tracking technologies have become available in the last decade, leading to a large increase in the volume and resolution of animal tracking data. These technical developments have been accompanied by various statistical tools aimed at analysing the data obtained by these methods. Methods We used simulated habitat and tracking data to compare some of the statistical methods frequently used to infer local resource selection and large-scale attraction/avoidance from tracking data. Notably, we compared spatial logistic regression models (SLRMs), spatio-temporal point process models (ST-PPMs), step selection models (SSMs), and integrated step selection models (iSSMs) and their interplay with habitat and animal movement properties in terms of statistical hypothesis testing. Results We demonstrated that only iSSMs and ST-PPMs showed nominal type I error rates in all studied cases, whereas SSMs may slightly, and SLRMs may frequently and strongly, exceed these levels. On average, iSSMs also showed more robust and higher statistical power than ST-PPMs. Conclusions Based on our results, we recommend the use of iSSMs to infer habitat selection or large-scale attraction/avoidance from animal tracking data. Further advantages over other approaches include short computation times, predictive capacity, and the possibility of deriving mechanistic movement models.
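A minimal sketch of the step-selection idea: each used step is paired with control steps drawn as alternatives, and the habitat-selection coefficient is recovered by maximizing the conditional (case-control) likelihood. The synthetic covariates and single-parameter model are assumptions for illustration; real iSSMs additionally estimate movement-kernel parameters alongside the selection coefficients.

```python
# Minimal step-selection sketch: used vs. control steps, conditional
# logit likelihood, one habitat coefficient. Data are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n_steps, n_controls = 200, 10
beta_true = 1.5

# Habitat covariate at candidate endpoints: column 0 will hold the used
# step, columns 1: the available (control) steps.
habitat = rng.normal(size=(n_steps, n_controls + 1))
# Simulate choices so that the used step tends to have higher habitat value.
for i in range(n_steps):
    p = np.exp(beta_true * habitat[i])
    choice = rng.choice(n_controls + 1, p=p / p.sum())
    habitat[i, [0, choice]] = habitat[i, [choice, 0]]  # used step -> col 0

def neg_log_lik(beta):
    u = beta * habitat
    # Conditional logit: each used step is compared to its own control set.
    return float(np.sum(np.logaddexp.reduce(u, axis=1) - u[:, 0]))

fit = minimize_scalar(neg_log_lik, bounds=(-5, 5), method="bounded")
print(f"estimated habitat selection coefficient: {fit.x:.2f}")
```

The estimate should land near the simulated coefficient of 1.5, which is the property the hypothesis tests compared in the paper rely on.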


GigaScience ◽  
2020 ◽  
Vol 9 (1) ◽  
Author(s):  
T Cameron Waller ◽  
Jordan A Berg ◽  
Alexander Lex ◽  
Brian E Chapman ◽  
Jared Rutter

Abstract Background Metabolic networks represent all chemical reactions that occur between molecular metabolites in an organism’s cells. They offer biological context in which to integrate, analyze, and interpret omic measurements, but their large scale and extensive connectivity present unique challenges. While it is practical to simplify these networks by placing constraints on compartments and hubs, it is unclear how these simplifications alter the structure of metabolic networks and the interpretation of metabolomic experiments. Results We curated and adapted the latest systemic model of human metabolism and developed customizable tools to define metabolic networks with and without compartmentalization in subcellular organelles and with or without inclusion of prolific metabolite hubs. Compartmentalization made networks larger, less dense, and more modular, whereas hubs made networks larger, more dense, and less modular. When present, these hubs also dominated shortest paths in the network, yet their exclusion exposed the subtler prominence of other metabolites that are typically more relevant to metabolomic experiments. We applied the non-compartmental network without metabolite hubs in a retrospective, exploratory analysis of metabolomic measurements from 5 studies on human tissues. Network clusters identified individual reactions that might experience differential regulation between experimental conditions, several of which were not apparent in the original publications. Conclusions Exclusion of specific metabolite hubs exposes modularity in both compartmental and non-compartmental metabolic networks, improving detection of relevant clusters in omic measurements. Better computational detection of metabolic network clusters in large data sets has potential to identify differential regulation of individual genes, transcripts, and proteins.
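The hub-exclusion effect can be sketched on a toy metabolite graph: remove prolific cofactors and compare density and modularity before and after. The graph, the hub list, and the networkx-based community detection are illustrative assumptions, not the curated human network or the paper's tools.

```python
# Toy sketch: excluding cofactor hubs exposes modular structure.
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

G = nx.Graph()
G.add_edges_from([
    # A small glycolysis-like chain...
    ("glucose", "g6p"), ("g6p", "f6p"), ("f6p", "fbp"),
    ("fbp", "pyruvate"), ("pyruvate", "citrate"),
    # ...plus prolific cofactor hubs touching many metabolites.
    ("glucose", "atp"), ("g6p", "atp"), ("fbp", "atp"),
    ("pyruvate", "atp"), ("citrate", "atp"),
    ("g6p", "nadh"), ("pyruvate", "nadh"), ("citrate", "nadh"),
])

hubs = ["atp", "nadh"]  # prolific cofactor hubs to exclude
H = G.copy()
H.remove_nodes_from(hubs)

for name, graph in [("with hubs", G), ("without hubs", H)]:
    comms = greedy_modularity_communities(graph)
    print(f"{name}: density={nx.density(graph):.2f}, "
          f"modularity={modularity(graph, comms):.2f}")
```

Even at this scale, dropping the hubs lowers density and raises modularity, mirroring the direction of the effects reported for the full network.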


2013 ◽  
Vol 12 (6) ◽  
pp. 2858-2868 ◽  
Author(s):  
Nadin Neuhauser ◽  
Nagarjuna Nagaraj ◽  
Peter McHardy ◽  
Sara Zanivan ◽  
Richard Scheltema ◽  
...  

2012 ◽  
Vol 38 (2) ◽  
pp. 57-69 ◽  
Author(s):  
Abdulghani Hasan ◽  
Petter Pilesjö ◽  
Andreas Persson

Global change and greenhouse gas (GHG) emission modelling depend on accurate wetness estimations for predictions of, e.g., methane emissions. This study aims to quantify how the slope, drainage area and topographic wetness index (TWI) vary with the resolution of digital elevation models (DEMs) for a flat peatland area. Six DEMs with spatial resolutions from 0.5 to 90 m were interpolated with four different search radii. The relationship between the accuracy of the DEM and the slope was tested. The LiDAR elevation data were divided into two data sets; the number of data points was sufficient to build an evaluation data set whose points lay no more than 10 mm from the cell centre points of the interpolation data set. The DEMs were evaluated using a quantile-quantile test and the normalized median absolute deviation, and accuracy proved independent of resolution when the same search radius was used. The accuracy of the estimated elevation for different slopes was tested using the 0.5 m DEM and showed a higher deviation from the evaluation data in steep areas. Slope estimates differed between resolutions by values that exceeded 50%. Drainage areas were tested for three resolutions with coinciding evaluation points. The ability of the model to generate drainage areas at each resolution was tested by pairwise comparison of three data subsets and showed differences of more than 50% in 25% of the evaluated points. The results show that considering DEM resolution is a necessity when using slope, drainage area and TWI data in large-scale modelling.
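For context, the topographic wetness index is TWI = ln(a / tan β), with specific catchment area a and local slope β. The sketch below recomputes slope and TWI from the same synthetic surface at two cell sizes to show the resolution dependence the study quantifies; the surface, the noise level, and the catchment-area proxy are assumptions, and the proxy is a crude stand-in for proper D8 or D-infinity flow routing.

```python
# Sketch: slope and TWI = ln(a / tan(beta)) at two DEM resolutions.
import numpy as np

def twi(dem: np.ndarray, cell: float) -> np.ndarray:
    gy, gx = np.gradient(dem, cell)
    slope = np.hypot(gx, gy)  # tan(beta)
    # Crude stand-in for specific catchment area: rank cells by
    # elevation and let upslope area grow as elevation falls. Real
    # analyses use D8 or D-infinity flow routing instead.
    rank = dem.ravel().argsort()[::-1].argsort().reshape(dem.shape)
    area = (rank + 1) * cell
    return np.log(area / np.maximum(slope, 1e-6))

rng = np.random.default_rng(5)
x = np.linspace(0.0, 500.0, 1000)
dem_fine = (np.add.outer(np.sin(x / 80.0), 0.002 * x)
            + rng.normal(0.0, 0.05, (1000, 1000)))               # 0.5 m cells
dem_coarse = dem_fine.reshape(20, 50, 20, 50).mean(axis=(1, 3))  # 25 m cells

for name, dem, cell in [("0.5 m", dem_fine, 0.5), ("25 m", dem_coarse, 25.0)]:
    gy, gx = np.gradient(dem, cell)
    print(name,
          "median tan(beta):", round(float(np.median(np.hypot(gx, gy))), 4),
          "median TWI:", round(float(np.median(twi(dem, cell))), 2))
```

Because micro-topographic noise dominates gradients at 0.5 m but averages out at 25 m, the median slope drops sharply with coarser cells and the TWI shifts accordingly, the same direction of effect the study reports.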


2015 ◽  
Vol 8 (1) ◽  
pp. 421-434 ◽  
Author(s):  
M. P. Jensen ◽  
T. Toto ◽  
D. Troyan ◽  
P. E. Ciesielski ◽  
D. Holdridge ◽  
...  

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
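One of the algorithm assumptions examined, the definition of the surface convective parcel, can be illustrated with a short sketch comparing a surface-based parcel against a parcel mixed over the lowest 50 hPa. The sounding values are synthetic stand-ins, not MC3E data.

```python
# Sketch: two common definitions of the "surface" convective parcel.
import numpy as np

pressure = np.array([970.0, 960.0, 950.0, 940.0, 930.0, 920.0])  # hPa
theta = np.array([302.1, 302.4, 302.9, 303.3, 303.8, 304.2])     # potential T, K
mixing_ratio = np.array([14.8, 14.1, 13.5, 12.8, 12.2, 11.7])    # g/kg

# Option 1: surface-based parcel (lowest level only).
sb_theta, sb_r = theta[0], mixing_ratio[0]

# Option 2: mixed-layer parcel averaged over the lowest 50 hPa.
layer = pressure >= pressure[0] - 50.0
ml_theta, ml_r = theta[layer].mean(), mixing_ratio[layer].mean()

print(f"surface-based parcel: theta={sb_theta:.1f} K, r={sb_r:.1f} g/kg")
print(f"mixed-layer parcel:   theta={ml_theta:.1f} K, r={ml_r:.1f} g/kg")
# The mixed-layer parcel is drier with higher potential temperature, so
# the derived LCL, LFC, and CAPE differ from the surface-based values,
# which is the kind of sensitivity the paper quantifies.
```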


2018 ◽  
Vol 22 (6) ◽  
pp. 3105-3124 ◽  
Author(s):  
Zilefac Elvis Asong ◽  
Howard Simon Wheater ◽  
Barrie Bonsal ◽  
Saman Razavi ◽  
Sopan Kurkute

Abstract. Drought is a recurring extreme climate event and among the most costly natural disasters in the world. This is particularly true over Canada, where drought is both a frequent and damaging phenomenon with impacts on regional water resources, agriculture, industry, aquatic ecosystems, and health. However, nationwide drought assessments are currently lacking and impacted by limited ground-based observations. This study provides a comprehensive analysis of historical droughts over the whole of Canada, including the role of large-scale teleconnections. Drought events are characterized by the Standardized Precipitation Evapotranspiration Index (SPEI) over various temporal scales (1, 3, 6, and 12 consecutive months, 6 months from April to September, and 12 months from October to September) applied to different gridded monthly data sets for the period 1950–2013. The Mann–Kendall test, rotated empirical orthogonal function, continuous wavelet transform, and wavelet coherence analyses are used, respectively, to investigate the trend, spatio-temporal patterns, periodicity, and teleconnectivity of drought events. Results indicate that southern (northern) parts of the country experienced significant trends towards drier (wetter) conditions, although substantial variability exists. Two spatially well-defined regions with different temporal evolution of droughts were identified – the Canadian Prairies and northern central Canada. The analyses also revealed the presence of a dominant periodicity of between 8 and 32 months in the Prairie region and between 8 and 40 months in the northern central region. These cycles of low-frequency variability are found to be associated principally with the Pacific–North American (PNA) pattern and the Multivariate El Niño/Southern Oscillation Index (MEI) relative to other considered large-scale climate indices. This study is the first of its kind to identify dominant periodicities in drought variability over the whole of Canada in terms of when drought events occur, how long they last, and how often they recur.
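The trend component of the analysis rests on the Mann–Kendall test, which is standard enough to sketch directly. The SPEI series below is synthetic, and the normal approximation ignores tie corrections for brevity.

```python
# Minimal Mann-Kendall trend test on a synthetic SPEI-like series.
import numpy as np
from scipy.stats import norm

def mann_kendall(series: np.ndarray) -> tuple:
    n = len(series)
    # S statistic: sum of signs over all ordered pairs.
    s = sum(np.sign(series[j] - series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))      # two-sided p value

rng = np.random.default_rng(4)
spei = rng.normal(size=64) - 0.02 * np.arange(64)  # imposed drying trend
z, p = mann_kendall(spei)
print(f"Z = {z:.2f}, p = {p:.3f}  (negative Z: trend towards drier)")
```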

