Estimation of Average Payloads from Weigh-in-Motion Data

Author(s):  
Sarah Hernandez

Average payloads define the ton-to-truck conversion factors that are critical inputs to commodity-based freight forecasting models. However, average payloads are derived primarily from outdated, unrepresentative truck surveys. With increasing technological and methodological means of concurrently measuring truck configurations, commodity types, and weights, there are now viable alternatives to truck surveys. In this paper, a method to derive average payloads by truck body type from weight data collected by weigh-in-motion (WIM) sensors is presented. Average payloads by truck body type are found by subtracting an estimated average empty weight from an estimated average loaded weight. Empty and loaded weights are derived from a Gaussian mixture model fit to a gross vehicle weight distribution. An analysis of truck body type distributions, loaded weights, empty weights, and resulting payloads of five-axle tractor-trailer (FHWA Class 9 or 3-S2) trucks is presented to compare national and state-level Vehicle Inventory and Use Survey (VIUS) data and the WIM-based approach. Results show statistically significant differences between the three data sets in each of the comparison categories. A challenge in this analysis is defining a correct set of payloads, because the WIM and survey data are each subject to their own inherent misrepresentations. WIM data, however, provide a continuous source of measured weight data that overcomes the drawback of using out-of-date surveys. Overall, average payloads from measured weights are lower than the national or California VIUS estimates.
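The two-component mixture idea in the abstract can be illustrated with a small sketch: fit a two-Gaussian mixture to a gross vehicle weight sample and take the payload as the difference between the loaded and empty component means. The EM routine and the synthetic Class 9 weight sample below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def em_two_gaussians(x, iters=60):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (weights, means, stds) with the components sorted by mean."""
    xs = sorted(x)
    n = len(xs)
    mu = [xs[n // 4], xs[3 * n // 4]]                      # crude quartile initialization
    sd = [(xs[-1] - xs[0]) / 4.0, (xs[-1] - xs[0]) / 4.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for xi in x:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: reweighted mixture weights, means, and standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    order = sorted((0, 1), key=lambda k: mu[k])
    return [w[k] for k in order], [mu[k] for k in order], [sd[k] for k in order]

# Synthetic Class 9 GVW sample (kips): empty trucks near 32, loaded near 72
random.seed(0)
gvw = [random.gauss(32, 3) for _ in range(400)] + [random.gauss(72, 6) for _ in range(600)]
weights, means, stds = em_two_gaussians(gvw)
payload = means[1] - means[0]   # loaded mean minus empty mean
```

On this well-separated synthetic sample, the lower component recovers the empty-weight mode and the upper component the loaded-weight mode, so the difference of the means is the average payload estimate.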

2018
Author(s):  
Jesse A. Pfammatter
Rachel A. Bergstrom
Eli P. Wallace
Rama K. Maganti
Mathew V. Jones

Quantification of interictal spikes in EEG may provide insight on epilepsy disease burden, but manual quantification of spikes is time-consuming and subject to bias. We present a probability-based, automated method for the classification and quantification of interictal events, using EEG data from kainate- and saline-injected mice (C57BL/6J background) several weeks post-treatment. We first detected high-amplitude events, then projected event waveforms into Principal Components space and identified clusters of spike morphologies using a Gaussian Mixture Model. We calculated the odds ratio of events from kainate- versus saline-treated mice within each cluster, converted these values to probability scores, P(kainate), and calculated an Hourly Epilepsy Index for each animal by summing the probabilities for events where the cluster P(kainate) > 0.5 and dividing the resulting sum by the record duration. This Index is predictive of whether an animal received an epileptogenic treatment (i.e., kainate), even if a seizure was never observed. We applied this method to an out-of-sample dataset to assess epileptiform spike morphologies in five kainate mice monitored for ~1 month. The magnitude of the Index increased over time in a subset of animals and revealed changes in the prevalence of epileptiform (P(kainate) > 0.5) spike morphologies. Importantly, in both data sets, animals that had electrographic seizures also had a high Index. This analysis is fast, unbiased, and provides information regarding the salience of spike morphologies for disease progression. Future refinement will allow interictal spikes to be defined in quantitative and unambiguous terms.
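The index construction described above reduces to two small formulas: an odds ratio OR converts to a probability as P = OR / (1 + OR), and the Hourly Epilepsy Index sums P(kainate) over events in clusters with P(kainate) > 0.5 and divides by the record duration. A minimal sketch with made-up cluster odds ratios, not data from the study:

```python
def odds_to_probability(odds_ratio):
    """Convert the kainate-vs-saline odds ratio of a cluster to P(kainate)."""
    return odds_ratio / (1.0 + odds_ratio)

def hourly_epilepsy_index(event_cluster_odds, record_hours):
    """Sum P(kainate) over events whose cluster has P(kainate) > 0.5,
    normalized by the record duration in hours."""
    total = 0.0
    for odds in event_cluster_odds:
        p = odds_to_probability(odds)
        if p > 0.5:
            total += p
    return total / record_hours

# Toy 24-h record: each entry is the odds ratio of the cluster an event fell in
events = [3.0] * 40 + [0.5] * 100 + [9.0] * 10
index = hourly_epilepsy_index(events, 24.0)   # (40*0.75 + 10*0.9) / 24 = 1.625
```

Events from the low-odds cluster (P = 1/3) contribute nothing, so the index reflects only morphologies over-represented in treated animals.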


Sensors
2021
Vol 21 (3)
pp. 1020
Author(s):  
Mohamed Chiheb Ben Nasr
Sofia Ben Jebara
Samuel Otis
Bessam Abdulrazak
Neila Mezghani

This paper has two objectives: the first is to generate two binary flags to indicate useful frames permitting the measurement of cardiac and respiratory rates from Ballistocardiogram (BCG) signals; in fact, human body activities during measurements can disturb the BCG signal content, leading to difficulties in vital sign measurement. The second objective is to achieve refined BCG signal segmentation according to these activities. The proposed framework makes use of two approaches: an unsupervised classification based on the Gaussian Mixture Model (GMM) and a supervised classification based on K-Nearest Neighbors (KNN). Both of these approaches consider two spectral features, namely the Spectral Flatness Measure (SFM) and Spectral Centroid (SC), determined during the feature extraction step. Unsupervised classification is used to explore the content of the BCG signals, justifying the existence of different classes and permitting the definition of useful hyper-parameters for effective segmentation. In contrast, the considered supervised classification approach aims to determine whether or not the BCG signal content allows the measurement of the heart rate (HR) and the respiratory rate (RR). Furthermore, two levels of supervised classification are used to classify human-body activities into many realistic classes from the BCG signal (e.g., coughing, holding breath, air expiration, movement, etc.). The first one considers frame-by-frame classification, while the second one, aiming to boost the segmentation performance, transforms the frame-by-frame SFM and SC features into temporal series which track the temporal variation of these measures of the BCG signal. The proposed approach constitutes a novelty in this field and represents a powerful method to segment BCG signals according to human body activities, resulting in an accuracy of 94.6%.
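The two spectral features can be computed directly from a frame's power spectrum: SFM is the ratio of the geometric mean to the arithmetic mean (close to 1 for noise-like frames, close to 0 for tonal ones), and SC is the power-weighted mean frequency. A small stdlib-only sketch, not the paper's code:

```python
import math

def spectral_flatness(power_spectrum):
    """SFM: geometric mean / arithmetic mean of the power spectrum
    (close to 1 for noise-like frames, close to 0 for tonal ones)."""
    eps = 1e-12
    log_gm = sum(math.log(p + eps) for p in power_spectrum) / len(power_spectrum)
    am = sum(power_spectrum) / len(power_spectrum)
    return math.exp(log_gm) / (am + eps)

def spectral_centroid(power_spectrum, sample_rate):
    """SC: power-weighted mean frequency (Hz) over bins from DC to Nyquist."""
    n = len(power_spectrum)
    freqs = [k * sample_rate / (2.0 * (n - 1)) for k in range(n)]
    total = sum(power_spectrum)
    return sum(f * p for f, p in zip(freqs, power_spectrum)) / total

flat = [1.0] * 64                                      # noise-like frame
peaky = [1.0 if k == 10 else 1e-6 for k in range(64)]  # tonal frame
```

A perfectly flat spectrum gives SFM close to 1, while a single dominant bin drives SFM toward 0; the centroid of the flat spectrum sits at the midpoint of the frequency axis.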


2014
Vol 2014
pp. 1-7
Author(s):  
Itziar Irigoien
Basilio Sierra
Concepción Arenas

In the problem of one-class classification (OCC), one of the classes, the target class, has to be distinguished from all other possible objects, considered as nontargets. This situation arises in many biomedical problems, for example, in diagnosis, image-based tumor recognition, or the analysis of electrocardiogram data. In this paper, an approach to OCC based on a typicality test is experimentally compared with reference state-of-the-art OCC techniques (Gaussian, mixture of Gaussians, naive Parzen, Parzen, and support vector data description) using biomedical data sets. We evaluate the ability of the procedures using twelve experimental data sets with not necessarily continuous data. As there are few benchmark data sets for one-class classification, all data sets considered in the evaluation have multiple classes. Each class in turn is considered as the target class, and the units in the other classes are considered as new units to be classified. The results of the comparison show the good performance of the typicality approach, which remains applicable for high-dimensional data; it is worth mentioning that it can be used for any kind of data (continuous, discrete, or nominal), whereas applying the state-of-the-art approaches is not straightforward when nominal variables are present.
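The abstract does not spell out the typicality test itself, so the sketch below uses a deliberately simplified distance-to-centroid proxy: the empirical share of target objects lying at least as far from the target centroid as the new unit, read as a typicality p-value. This is an illustrative stand-in for the idea of testing typicality, not the authors' procedure.

```python
def euclid(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def typicality_p_value(target, unit):
    """Empirical typicality of a new unit w.r.t. the target class: the share
    of target objects lying at least as far from the class centroid as the
    unit does (with a +1 continuity correction)."""
    dim = len(target[0])
    centroid = [sum(x[j] for x in target) / len(target) for j in range(dim)]
    d_new = euclid(unit, centroid)
    at_least_as_far = sum(1 for x in target if euclid(x, centroid) >= d_new)
    return (at_least_as_far + 1) / (len(target) + 1)

def is_target(target, unit, alpha=0.05):
    """Accept the unit as a member of the target class when its typicality
    p-value exceeds the significance level alpha."""
    return typicality_p_value(target, unit) > alpha

# Toy target class: 20 points along a line segment in the plane
target = [(i * 0.1, 1.0 - i * 0.1) for i in range(20)]
```

As in the paper's protocol, each class would in turn play the role of the target, and units from the remaining classes would be presented as new units to be accepted or rejected.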


2018
Vol 18 (2)
pp. 610-620
Author(s):  
Longwei Zhang
Hua Zhao
Eugene J OBrien
Xudong Shao

This article outlines a Virtual Monitoring approach for fatigue life assessment of orthotropic steel deck bridges. Bridge weigh-in-motion was used to calculate traffic loads, which were then used to calculate "virtual" strains. Some of these strains were checked through long-term monitoring of dynamic strain data. Field tests, incorporating calibration with pre-weighed trucks and monitoring of the response to regular traffic, were conducted at Fochen Bridge, which has an orthotropic steel deck and is located in Foshan City, China. In the calibration tests, a 45-t 3-axle truck ran repeatedly across Lane 2, the middle lane in a 3-lane carriageway. The results show that using an influence surface to weigh vehicles can improve the accuracy of the weights and, by inference, of the remaining service life calculations. The most fatigue-prone position was found to be at the cutout in the diaphragms. Results show that many vehicles are overweight: the maximum gross vehicle weight recorded was 148 t, nearly 3.6 times the weight of the fatigue design truck.
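Bridge weigh-in-motion commonly estimates axle weights by least-squares fitting of the measured strain record to the bridge influence line (Moses' algorithm); the influence-surface variant mentioned above extends the same idea across lanes. The 1-D, two-axle sketch below uses a synthetic triangular influence line, not data from Fochen Bridge:

```python
def estimate_axle_weights(strain, infl, axle_offsets):
    """Least-squares axle weights from a strain record and an influence line
    (a 1-D, two-axle sketch of Moses' bridge weigh-in-motion algorithm).
    strain[i]    -- measured strain when axle 1 is at sample position i
    infl[p]      -- influence ordinate at position p (strain per unit weight)
    axle_offsets -- axle positions behind axle 1, in samples (two axles here)
    """
    def ordinate(i, j):
        # column j of the design matrix: influence line shifted by axle j's offset
        p = i - axle_offsets[j]
        return infl[p] if 0 <= p < len(infl) else 0.0
    m = len(strain)
    # normal equations A^T A w = A^T y, solved explicitly for the 2x2 case
    ata = [[sum(ordinate(i, j) * ordinate(i, k) for i in range(m)) for k in (0, 1)]
           for j in (0, 1)]
    aty = [sum(ordinate(i, j) * strain[i] for i in range(m)) for j in (0, 1)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(aty[0] * ata[1][1] - aty[1] * ata[0][1]) / det,
            (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det]

# Synthetic check: triangular influence line, axles of 5 t and 10 t, 4 samples apart
infl = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
offsets = [0, 4]
true_w = [5.0, 10.0]
strain = [sum(true_w[j] * (infl[i - offsets[j]] if 0 <= i - offsets[j] < len(infl) else 0.0)
              for j in (0, 1)) for i in range(len(infl) + offsets[1])]
weights = estimate_axle_weights(strain, infl, offsets)  # should recover ~[5, 10]
```

With noiseless synthetic strains, the least-squares solve recovers the axle weights exactly; calibration runs with a pre-weighed truck, as at Fochen Bridge, serve to identify the influence ordinates in practice.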


2011
Vol 32 (1)
pp. 70-80
Author(s):  
Federico E Turkheimer
Sudhakar Selvaraj
Rainer Hinz
Venkatesha Murthy
Zubin Bhagwagar
...  

This paper aims to build novel methodology for the use of a reference region with specific binding for the quantification of brain studies with radioligands and positron emission tomography (PET). In particular: (1) we introduce a definition of binding potential, BPD = DVR − 1, where DVR is the volume of distribution relative to a reference tissue that contains ligand in specifically bound form; (2) we validate a numerical methodology, rank-shaping regularization of exponential spectral analysis (RS-ESA), for the calculation of BPD that can cope with a reference region containing specifically bound ligand; and (3) we demonstrate the use of RS-ESA for the accurate estimation of drug occupancies with the use of correction factors to account for the specific binding in the reference. [11C]-DASB with cerebellum as a reference was chosen as an example to validate the methodology. Two data sets were used: four normal subjects scanned after infusion of citalopram or placebo, and a further six test-retest data sets. In the drug occupancy study, the use of RS-ESA with cerebellar input plus corrections produced estimates of occupancy very close to those obtained with plasma input. Test-retest results demonstrated a tight linear relationship between BPD calculated either with plasma or with a reference input, and high reproducibility.
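The definitions reduce to simple arithmetic: BPD = DVR − 1, and fractional occupancy follows from baseline and post-drug binding potentials as 1 − BP_drug / BP_baseline. A sketch with hypothetical DVR values (not figures from the study), and without the reference-region correction factors the paper introduces:

```python
def binding_potential(dvr):
    """BP_D = DVR - 1, with DVR the volume of distribution relative to the
    reference tissue."""
    return dvr - 1.0

def occupancy(bp_baseline, bp_drug):
    """Fractional receptor occupancy from baseline and post-drug BP_D."""
    return 1.0 - bp_drug / bp_baseline

# Hypothetical DVR values, not figures from the study
bp0 = binding_potential(2.2)   # baseline
bp1 = binding_potential(1.4)   # after drug
occ = occupancy(bp0, bp1)      # about 0.67
```

When the reference region itself carries specific binding, the uncorrected DVR is biased toward 1, which is exactly why the paper's correction factors are needed for accurate occupancy estimates.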


Author(s):  
А. Mukasheva

The purpose of this article is to study one of the methods of social network analysis – text sentiment analysis. Today, social media constitutes a large base of data, and social network analysis is applied for various purposes – from setting up targeted advertising for a cosmetics store to preventing riots at the state level. There are various methods for analyzing social networks, such as the graph method, text sentiment analysis, and audio and video object analysis. Among them, sentiment analysis is widely used for political, social, and consumer research, and also for cybersecurity. Since sentiment analysis involves analyzing the emotional opinions expressed in a text, the first step is to define the term opinion. An opinion can be simple, that is, a positive, negative, or neutral emotion towards a particular object or one of its aspects. A comparison is also an opinion, but one devoid of emotional connotation. To work with simple opinions, the first task of text sentiment analysis is to classify the text. There are three levels of classification: the text level, the sentence level, and the aspect level of the object. After classifying the text at the desired level, the next task is to extract structured data from the unstructured information. This problem can be solved using the five-tuple method. One of the key elements of the tuple is the aspect, with respect to which an opinion is usually expressed. Next, aspect-based sentiment analysis is applied, which involves identifying the aspects of the object of interest and assessing the polarity of sentiment for each aspect. This task is divided into two sub-tasks: aspect extraction and aspect classification. Sentiment analysis has limitations, such as detecting sarcasm and handling abbreviated words.
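The five-tuple representation and the two aspect-level sub-tasks can be sketched as follows. The aspect keywords, the sentiment lexicon, and the whole-sentence scoring rule are toy assumptions for illustration; real aspect-based sentiment analysis scores each aspect from its local context:

```python
from collections import namedtuple

# The opinion five-tuple: (entity, aspect, sentiment, opinion holder, time)
Opinion = namedtuple("Opinion", ["entity", "aspect", "sentiment", "holder", "time"])

# Hypothetical aspect keywords and sentiment lexicon for a toy phone review
ASPECTS = {"battery": {"battery", "charge"}, "screen": {"screen", "display"}}
LEXICON = {"great": 1, "long": 1, "dim": -1, "poor": -1}

def aspect_sentiments(sentence, holder, time):
    """Toy aspect-based sentiment: extract aspects by keyword match, then
    classify each one with the summed lexicon score of the whole sentence
    (a deliberate simplification of per-aspect scoring)."""
    words = set(sentence.lower().replace(".", "").split())
    score = sum(LEXICON.get(w, 0) for w in words)
    polarity = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return [Opinion("phone", aspect, polarity, holder, time)
            for aspect, keys in ASPECTS.items() if keys & words]

ops = aspect_sentiments("The battery charge lasts long", "user42", "2021-05-01")
```

Keyword matching handles aspect extraction and the lexicon score handles aspect classification, mirroring the two sub-tasks named in the abstract; sarcasm and abbreviations defeat exactly this kind of lexicon lookup.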


Author(s):  
Vladislav Andreyevich Shcherbakov
Svetlana Aleksandrovna Chevereva

The definition of the term Big Data is given. Particular attention is paid to how, in practice, Big Data technology is being introduced into people's lives at the state level, and how it can be used for total control, using the example of the People's Republic of China.


2002
Vol 2
pp. 169-189
Author(s):  
Lawrence W. Barnthouse
Douglas G. Heimbuch
Vaughn C. Anthony
Ray W. Hilborn
Ransom A. Myers

We evaluated the impacts of entrainment and impingement at the Salem Generating Station on fish populations and communities in the Delaware Estuary. In the absence of an agreed-upon regulatory definition of "adverse environmental impact" (AEI), we developed three independent benchmarks of AEI based on observed or predicted changes that could threaten the sustainability of a population or the integrity of a community.

Our benchmarks of AEI included: (1) disruption of the balanced indigenous community of fish in the vicinity of Salem (the "BIC" analysis); (2) a continued downward trend in the abundance of one or more susceptible fish species (the "Trends" analysis); and (3) occurrence of entrainment/impingement mortality sufficient, in combination with fishing mortality, to jeopardize the future sustainability of one or more populations (the "Stock Jeopardy" analysis).

The BIC analysis utilized nearly 30 years of species presence/absence data collected in the immediate vicinity of Salem. The Trends analysis examined three independent data sets that document trends in the abundance of juvenile fish throughout the estuary over the past 20 years. The Stock Jeopardy analysis used two different assessment models to quantify potential long-term impacts of entrainment and impingement on susceptible fish populations. For one of these models, the compensatory capacities of the modeled species were quantified through meta-analysis of spawner-recruit data available for several hundred fish stocks.

All three analyses indicated that the fish populations and communities of the Delaware Estuary are healthy and show no evidence of an adverse impact due to Salem. Although the specific models and analyses used at Salem are not applicable to every facility, we believe that a weight of evidence approach that evaluates multiple benchmarks of AEI using both retrospective and predictive methods is the best approach for assessing entrainment and impingement impacts at existing facilities.
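A Stock Jeopardy-style calculation can be sketched with a Beverton-Holt spawner-recruit model: compensation is captured by the curve's slope at the origin, and the population persists only if recruits surviving the combined fishing plus entrainment/impingement mortality can replace the spawners. The parameter values below are hypothetical, not estimates from the Delaware Estuary assessment:

```python
def equilibrium_spawners(alpha, beta, total_mortality, years=500, s0=1000.0):
    """Iterate a Beverton-Holt spawner-recruit model R = alpha*S / (1 + beta*S),
    where recruits survive a combined fishing + entrainment/impingement
    mortality before spawning; returns long-run spawner abundance (0 = collapse)."""
    s = s0
    for _ in range(years):
        recruits = alpha * s / (1.0 + beta * s)
        s = recruits * (1.0 - total_mortality)
        if s < 1e-9:
            return 0.0
    return s

# Hypothetical compensatory capacity: alpha = 4 recruits per spawner at low density
base = equilibrium_spawners(alpha=4.0, beta=0.001, total_mortality=0.5)       # persists
stressed = equilibrium_spawners(alpha=4.0, beta=0.001, total_mortality=0.8)   # collapses
```

In this toy model the population persists whenever (1 − mortality) × alpha > 1, which is why the meta-analytic estimate of compensatory capacity (alpha) is central to judging whether added entrainment/impingement mortality jeopardizes a stock.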

