Assessment of Long-Term Pavement Performance Program Wall Projection-Based Distress Data Variability

1998 ◽  
Vol 1643 (1) ◽  
pp. 95-109 ◽  
Author(s):  
A. Raja Shekharan ◽  
Gonzalo R. Rada ◽  
Gary E. Elkins ◽  
William Y. Bellinger

In the Long-Term Pavement Performance (LTPP) program, 35-mm, black-and-white, continuous-strip photographs are used as a permanent archival record of pavement distress development and to quantify distress severity and extent for pavement performance analysis. The traditional method of interpreting distress from LTPP film uses a relatively small image projected onto a digitizing tablet. Quality control checks performed on the interpreted data found that some low-severity distress types, identified from larger magnified images projected onto a wall or projection screen, could not be seen in the smaller image used for distress interpretation. The variability in distresses interpreted directly off the large-format wall-image projection was assessed through analysis of interpretations performed on six asphalt concrete and six portland cement concrete pavement sections used in the LTPP distress rater accreditation workshops. The data set included distress ratings from eight individuals, four two-person rater teams, and an experienced rater team. Also available were distress ratings performed in the field by the experienced rater team, which serve as reference values representing the best estimate of ground truth. Statistical tests show that the film-interpreted distresses from individual raters exhibit much larger variability than those from the rating teams. The most significant contributor to this finding is outlier observations in which one of the individual raters produced significantly different ratings than the rest of the group. The spread among the rating teams was much lower. The film-interpreted distresses from the experienced group correlated very well with the field-derived reference values.
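The outlier pattern described above, in which one rater departs sharply from the rest of the group, can be illustrated with a minimal sketch. The rule, the rater labels, and the distress quantities below are all invented for illustration; they are not the study's data or its statistical test.

```python
from statistics import mean, stdev

def flag_outlier_raters(ratings, k=2.0):
    """Flag raters whose distress quantity deviates from the group
    mean by more than k standard deviations (a hypothetical rule,
    not the screening procedure used in the study)."""
    values = list(ratings.values())
    m, s = mean(values), stdev(values)
    if s == 0:
        return []
    return [r for r, q in ratings.items() if abs(q - m) > k * s]

# Hypothetical fatigue-cracking quantities (m^2) from eight raters:
ratings = {"R1": 12.0, "R2": 11.5, "R3": 12.4, "R4": 11.8,
           "R5": 12.1, "R6": 30.0, "R7": 11.9, "R8": 12.2}
print(flag_outlier_raters(ratings))  # R6 stands well apart from the group
```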

1997 ◽  
Vol 1592 (1) ◽  
pp. 151-168 ◽  
Author(s):  
Gonzalo R. Rada ◽  
Rajesh K. Bhandari ◽  
Gary E. Elkins ◽  
William Y. Bellinger

The use of manual survey methods within the Long-Term Pavement Performance (LTPP) program for the collection of distress data has increased dramatically, both in intensity and in coverage, over the past few years. Because these surveys are conducted by individual raters whose biases can lead to variability between raters, it was hypothesized that distress data variability existed and could potentially be quite large. The purpose of this study was therefore to quantify manual distress data variability, with special emphasis on the bias and precision of the data. Results from seven LTPP program distress rater accreditation workshops conducted from 1992 to 1995 were used as the sole source of data. On the basis of analyses of these data, both the apparent bias and the precision for the common distress type-severity level combinations were quantified. It was also concluded that individual rater variability for any given distress type-severity level combination is typically large and increases as the distress quantity increases; however, when all distress type-severity level combinations are viewed in terms of a single composite number, such as the pavement condition index, there is excellent agreement between the individual raters, the group mean, and the ground-truth value, and individual rater variability is quite small. Because LTPP program distress data are to be used in the development of pavement performance prediction models, improvements in variability are highly desirable to ensure that the data serve their intended purpose. Because the LTPP program distress raters are experienced individuals, such improvements are not expected to come through additional training. It is the authors' contention that the only way to achieve the desired improvement is through group consensus surveys.
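The abstract's observation that raters disagree combo-by-combo yet agree on a composite number can be illustrated with a toy deduct-based index. This is a deliberate simplification: the actual pavement condition index procedure (ASTM D6433) also applies deduct-value curves and a correction for multiple distresses, all omitted here.

```python
def pavement_condition_index(deducts):
    """Toy composite index: start at 100 and subtract one deduct value
    per distress type-severity combination, floored at 0. The real PCI
    computation is more involved; this only shows the aggregation idea."""
    return max(0.0, 100.0 - sum(deducts))

# Two hypothetical raters who disagree on every individual combination...
rater_a = [12.0, 5.0, 8.0, 3.0]
rater_b = [9.0, 8.0, 6.0, 5.0]
# ...yet produce the same composite value, because the disagreements cancel.
print(pavement_condition_index(rater_a))
print(pavement_condition_index(rater_b))
```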


2014 ◽  
Vol 112 (11) ◽  
pp. 2729-2744 ◽  
Author(s):  
Carlo J. De Luca ◽  
Joshua C. Kline

Over the past four decades, various methods have been implemented to measure the synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and of the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles, a data set one order of magnitude greater than those reported in previous studies. Only firing data obtained from surface electromyographic signal decomposition with >95% accuracy were used in the study, and the data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to apply statistical tests rigorous enough to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization in 100% of the motor-unit pairs studied, whereas SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization.
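For readers unfamiliar with the z-score method the abstract critiques, a minimal sketch is given below: bins of a cross-correlogram of firing-time differences are compared against the histogram's own mean and standard deviation. This is an illustration of the classical test, not SigMax and not the authors' implementation, and the histogram counts are invented.

```python
from statistics import mean, stdev

def zscore_peaks(counts, z_thresh=1.96):
    """Flag cross-correlogram bins whose count exceeds the histogram
    mean by z_thresh standard deviations (the classical z-score test;
    the abstract argues this criterion is too permissive in practice)."""
    m, s = mean(counts), stdev(counts)
    if s == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - m) / s > z_thresh]

# Hypothetical correlogram: flat baseline with one peak at bin 4
counts = [10, 11, 9, 10, 30, 10, 9, 11, 10]
print(zscore_peaks(counts))  # the peak bin
```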


2018 ◽  
Vol 15 (6) ◽  
Article 172988141881470
Author(s):  
Nezih Ergin Özkucur ◽  
H Levent Akın

Self-localization is one of the fundamental problems in the development of intelligent autonomous robots, and processing raw sensory information into useful features is an integral part of it. In a typical scenario, there are several choices of feature extraction algorithm, each with its own weaknesses and strengths depending on the characteristics of the environment. In this work, we introduce a localization algorithm that captures the quality of a feature type in the local environment and makes a soft selection of feature types across different regions. A batch expectation-maximization algorithm is developed for both discrete and Monte Carlo localization models, exploiting the probabilistic pose estimations of the robot without requiring ground-truth poses and treating the different observation types as black-box algorithms. We tested our method in simulations, on data collected from an indoor environment with a custom robot platform, and on a public data set. The results are compared with those of the individual feature types as well as a naive fusion strategy.
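The soft selection of feature types can be pictured as a weighted mixture of per-feature observation likelihoods in the particle-reweighting step of Monte Carlo localization. Everything below is illustrative rather than the paper's actual formulation: the names, numbers, and Gaussian likelihoods are invented, and the region-dependent responsibilities that the paper learns with batch EM are simply assumed given.

```python
import math

def reweight(particles, z, likelihoods, responsibilities):
    """One MCL correction step with soft feature-type selection:
    each feature type's likelihood is blended by a responsibility
    weight for the current region, then weights are normalized."""
    weights = []
    for p in particles:
        w = sum(responsibilities[f] * like(p, z)
                for f, like in likelihoods.items())
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Toy 1-D example: two extractors with Gaussian likelihoods of different
# sharpness; the "corner" type is trusted more in this hypothetical region.
likelihoods = {
    "corner": lambda p, z: math.exp(-((p - z) ** 2) / (2 * 0.1 ** 2)),
    "line":   lambda p, z: math.exp(-((p - z) ** 2) / (2 * 0.5 ** 2)),
}
responsibilities = {"corner": 0.8, "line": 0.2}
particles = [0.8, 1.0, 1.2]
w = reweight(particles, z=1.0, likelihoods=likelihoods,
             responsibilities=responsibilities)
print([round(x, 3) for x in w])  # the particle at 1.0 dominates
```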


1994 ◽  
Vol 21 (6) ◽  
pp. 954-965 ◽  
Author(s):  
N. Ali ◽  
Shaher Zahran ◽  
Jim Trogdon ◽  
Art Bergan

The main purpose of this study was to facilitate decisions concerning the effectiveness of modifiers in mitigating pavement distress and improving long-term overall pavement performance in actual field conditions, by utilizing short-term laboratory results and a mathematical prediction model. The modifiers investigated were carbon black, neoprene latex, and polymer modified asphalt (STYRELF). The statistical general linear model (GLM) and the Fisher least significant difference (LSD) were used for the analysis of data. The results of the study indicate that the effect of the modifier on the paving mixture properties was insignificant at low temperatures (down to −17 °C), but significant at high temperatures (up to 60 °C) where the synergistic effect of the modifier on the paving mixture was pronounced. The VESYS IIIA pavement performance prediction model was utilized to assess the effects, if any, of the modifier on the pavement's overall performance. All the modifiers improve, to some degree, the overall pavement performance. Key words: modifiers, asphalt, paving mixtures, pavements, polymer asphalt.
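The Fisher LSD criterion used in the analysis compares each pair of treatment means against a single threshold. A minimal sketch for equal group sizes follows; the mean square error, group size, critical t value, and mixture means are wholly hypothetical, and the critical t is looked up from a t-table rather than computed.

```python
import math

def fisher_lsd(mse, n_per_group, t_crit):
    """Fisher's least significant difference for equal group sizes:
    two treatment means are declared different if their absolute
    difference exceeds this value. t_crit is the two-sided critical t
    for the error degrees of freedom (from a t-table; not computed)."""
    return t_crit * math.sqrt(2.0 * mse / n_per_group)

# Hypothetical: MSE = 4.0 from the GLM ANOVA, n = 8 specimens per mix,
# t_crit = 2.06 (alpha = 0.05, ~25 error degrees of freedom)
lsd = fisher_lsd(4.0, 8, 2.06)
means = {"control": 10.0, "carbon_black": 12.5, "STYRELF": 13.9}
print(abs(means["control"] - means["STYRELF"]) > lsd)  # exceeds the LSD
```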


1993 ◽  
Vol 11 (3) ◽  
pp. 400-407 ◽  
Author(s):  
H I Scher ◽  
N L Geller ◽  
T Curley ◽  
Y Tao

PURPOSE: To evaluate the received dose-intensity in a mature data set of patients with advanced urothelial cancer who received at least one cycle of the methotrexate (M), vinblastine (V), Adriamycin ([A], doxorubicin; Adria Laboratories, Columbus, OH), and cisplatin (C) regimen (M-VAC).

PATIENTS AND METHODS: Received dose-intensity was evaluated over time by summing doses over cycles for each patient, cumulating treatment times, and assuming that four cycles of chemotherapy were planned. Relative cumulative dose-intensity was then calculated for individual patients at the end of each cycle. To assess a relationship with survival, relative cumulative dose-intensity was used as a time-dependent covariate in Cox regression.

RESULTS: The median follow-up was 6 years and the median survival 13.3 months, with 20 patients alive at the time of analysis. Out of a maximum of 1.0, the median relative dose-intensity for the M-VAC combination decreased from 0.69 to 0.59 from cycle 1 to cycle 4. Similarly, decreases from 0.68 to 0.62 and from 0.80 to 0.72 were observed for A and C, respectively. The median received dose-intensity was 6.0 mg/m2/wk for A and 14 mg/m2/wk for C. Neither the four-cycle relative cumulative dose-intensity for the M-VAC combination nor the relative cumulative dose-intensities for A or C were found to be significant prognostic factors.

CONCLUSION: The absence of an effect of received dose-intensity on survival may reflect the low dose-intensities of the regimen's components actually delivered in this study. The results question whether the individual agents can be escalated sufficiently, with growth factor support, to significantly improve complete response proportions, a prerequisite for increasing the proportion of long-term survivors.
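The relative cumulative dose-intensity construction described in the methods (cumulative delivered dose over cumulative treatment time, divided by the planned dose-intensity) can be sketched directly. The schedule numbers below are hypothetical, not the study's data.

```python
def relative_cumulative_dose_intensity(doses, weeks, planned_dose, planned_weeks):
    """Relative cumulative dose-intensity at the end of each cycle:
    (cumulative dose / cumulative weeks) divided by the planned
    dose-intensity (planned dose per planned cycle length).
    Doses in mg/m2, durations in weeks."""
    planned_di = planned_dose / planned_weeks
    out, cum_dose, cum_weeks = [], 0.0, 0.0
    for d, w in zip(doses, weeks):
        cum_dose += d
        cum_weeks += w
        out.append((cum_dose / cum_weeks) / planned_di)
    return out

# Hypothetical doxorubicin schedule: 30 mg/m2 planned every 4 weeks,
# with dose reductions and treatment delays in later cycles.
rdi = relative_cumulative_dose_intensity(
    doses=[30.0, 22.5, 22.5, 15.0], weeks=[4, 4, 5, 5],
    planned_dose=30.0, planned_weeks=4)
print([round(x, 2) for x in rdi])  # declines cycle by cycle
```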


2011 ◽  
Vol 4 (6) ◽  
pp. 1147-1159 ◽  
Author(s):  
A. Richter ◽  
M. Begoin ◽  
A. Hilboll ◽  
J. P. Burrows

Abstract. Satellite observations of nitrogen dioxide (NO2) provide valuable information on both stratospheric and tropospheric composition. Nadir measurements from GOME, SCIAMACHY, OMI, and GOME-2 have been used in many studies of tropospheric NO2 burdens, the importance of different NOx emission sources, and their change over time. The observations made by the three GOME-2 instruments will extend the existing data set by more than a decade, so a high quality of the data, as well as their good consistency with existing time series, is of particular importance. In this paper, an improved GOME-2 NO2 retrieval is described which reduces the scatter of the individual NO2 columns globally, and in particular in the region of the South Atlantic Anomaly. This is achieved by using a larger fitting window including more spectral points and by applying a two-step spike removal algorithm in the fit. The new GOME-2 data set is shown to have good consistency with SCIAMACHY NO2 columns. Remaining small differences are shown to be linked to changes in the daily solar irradiance measurements used in both the GOME-2 and SCIAMACHY retrievals. In the large retrieval window, a previously unidentified spectral signature was found which is linked to deserts and other regions with bare soil. Inclusion of this empirically derived pseudo cross-section significantly improves the retrievals and potentially provides information on surface properties and desert aerosols. Using the new GOME-2 NO2 data set, a long-term average of tropospheric columns was computed and high-pass filtered. The resulting map shows evidence of pollution from several additional shipping lanes not previously identified in satellite observations, illustrating the excellent signal-to-noise ratio achievable with the improved GOME-2 retrievals.
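A toy illustration of two-step spike handling on a 1-D spectrum follows: a flagging pass against a running median, then a replacement pass using clean neighbours. The actual algorithm operates inside the DOAS spectral fit and differs in detail; the window size, threshold, and data below are invented.

```python
from statistics import median

def despike(spectrum, threshold):
    """Toy two-step spike removal. Step 1 flags points far from a
    5-point running median; step 2 replaces flagged points with the
    median of their unflagged neighbours."""
    n = len(spectrum)
    flagged = []
    for i, v in enumerate(spectrum):
        window = spectrum[max(0, i - 2):i + 3]
        if abs(v - median(window)) > threshold:
            flagged.append(i)
    clean = list(spectrum)
    for i in flagged:
        neighbours = [spectrum[j]
                      for j in range(max(0, i - 2), min(n, i + 3))
                      if j not in flagged]
        clean[i] = median(neighbours)
    return clean, flagged

spec = [1.0, 1.1, 0.9, 9.0, 1.0, 1.1, 1.0]
clean, flagged = despike(spec, threshold=1.0)
print(flagged)  # the spike at index 3
```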


Author(s):  
Jerome F. Daleiden ◽  
Amy L. Simpson

Variability of pavement surface distress data collection has always been an area of significant concern. When distress data are evaluated manually (with raters observing the pavements in question, interpreting what they see, and recording it on paper), the process is subject to human error. To minimize the impact of such errors on these important pavement performance data, sophisticated equipment has been developed to eliminate as much human intervention as possible. Such technology is not without its own limitations of precision and bias. With both methodologies being used to collect surface distress data for the Long-Term Pavement Performance (LTPP) program, questions regarding precision and bias have been raised. In attempting to define the variability of the data for incorporation in stochastic analyses, it has become apparent how diverse and complex these distress data truly are. To adequately quantify precision and bias, a detailed experiment was designed to evaluate the errors inherent in the different distress data collection methodologies. The facet of the experiment reported here targets the variability of human distress surveyors and the biases associated with conducting surveys from film using a slightly different projection system. Specifically, a collection of surveyors was assembled to establish the variability associated with experienced versus novice raters, engineers versus engineering technicians, and teams versus individuals.


2018 ◽  
Vol 31 (4) ◽  
pp. 153-159 ◽  
Author(s):  
Darly Dash ◽  
George A. Heckman ◽  
Veronique M. Boscart ◽  
Andrew P. Costa ◽  
Jaimie Killingbeck ◽  
...  

interRAI is a non-profit international consortium of clinicians and scientists who have developed the Minimum Data Set (MDS) 2.0 assessment to systematically identify the health status and care plan of residents in Long-Term Care (LTC). However, LTC staff often fail to realize the clinical utility of this information, viewing it as “data collection for funding purposes” and an administrative task adding to the daily workload. This article reports how one research institute and senior living organization work together to use MDS 2.0 and other information to support better care for residents, plan resource allocation and staffing models, and conduct applied research for older Canadians. A multi-level approach is described on how MDS 2.0 provides a robust infrastructure at the individual, team, organizational, and system levels. Long-term care stakeholders can do much more to unleash the full potential of this powerful tool, and other healthcare sectors can take advantage of this approach.


2020 ◽  
Vol 4 (2) ◽  
Author(s):  
Nicole Williams ◽  
Natalie A Phillips ◽  
Walter Wittich ◽  
Jennifer L Campos ◽  
Paul Mick ◽  
...  

Background and Objectives: The objective of the study was to understand how sensory impairments, alone or in combination with cognitive impairment (CI), relate to long-term care (LTC) admissions.

Research Design and Methods: This retrospective cohort study used existing information from two interRAI assessments, the Resident Assessment Instrument for Home Care (RAI-HC) and the Minimum Data Set 2.0 (MDS 2.0), which were linked at the individual level for 371,696 unique individuals aged 65+ years. The exposure variables of interest included hearing impairment (HI), vision impairment (VI), and dual sensory impairment (DSI), ascertained at participants' most recent RAI-HC assessment. The main outcome was admission to LTC. Survival analysis, using Cox proportional hazards regression models and Kaplan-Meier curves, was used to identify risk factors associated with LTC admissions. Observations were censored if individuals remained in home care, died, or were discharged somewhere other than to LTC.

Results: In this sample, 12.7% of clients were admitted to LTC, with a mean time to admission of 49.6 months (SE = 0.20). The main risk factor for LTC admission was a diagnosis of Alzheimer's dementia (HR = 1.87; CI: 1.83, 1.90). A significant interaction between HI and CI was found, whereby individuals with HI but no CI had a slightly faster time to admission (40.5 months; HR = 1.14) than clients with both HI and CI (44.9 months; HR = 2.11).

Discussion and Implications: Although CI increases the risk of LTC admission, HI is also important, making it imperative to continue to screen for sensory issues among older home care clients.
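The Kaplan-Meier estimate underlying the survival analysis can be sketched directly: at each event time, the survival probability is multiplied by the fraction of at-risk individuals who were not admitted. The follow-up data below are invented for illustration; they are not the study's cohort.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times are follow-up months;
    events is 1 for LTC admission and 0 for censoring (still in home
    care, died, or discharged elsewhere, per the abstract's censoring
    rule). Returns (time, S(t)) pairs at each event time."""
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        n = sum(1 for ti in times if ti >= t)  # number still at risk
        if d > 0:
            s *= 1.0 - d / n
            curve.append((t, s))
    return curve

# Hypothetical follow-up data: (months, admitted-to-LTC flag)
times = [3, 5, 5, 8, 10, 12]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
print(curve)
```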


2011 ◽  
Vol 4 (1) ◽  
pp. 213-246 ◽  
Author(s):  
A. Richter ◽  
M. Begoin ◽  
A. Hilboll ◽  
J. P. Burrows

Abstract. Satellite observations of nitrogen dioxide (NO2) provide valuable information on both stratospheric and tropospheric composition. Nadir measurements from GOME, SCIAMACHY, OMI, and GOME-2 have been used in many studies of tropospheric NO2 burdens, the importance of different NOx emission sources, and their change over time. The observations made by the three GOME-2 instruments will extend the existing data set by more than a decade, so a high quality of the data, as well as their good consistency with existing time series, is of high importance. In this paper, an improved GOME-2 NO2 retrieval is described which reduces the scatter of the individual NO2 columns globally, and in particular in the region of the South Atlantic Anomaly. This is achieved by using a larger fitting window including more spectral points and by applying a two-step spike removal algorithm in the fit. The new GOME-2 data set is shown to have good consistency with SCIAMACHY NO2 columns. Remaining small differences are shown to be linked to changes in the daily solar irradiance measurements used in both the GOME-2 and SCIAMACHY retrievals. In the large retrieval window, a previously unidentified spectral signature was found which is linked to deserts and other regions with bare soil. Inclusion of this empirically derived pseudo cross-section significantly improves the retrievals and potentially provides information on surface properties and desert aerosols. Using the new GOME-2 NO2 data set, a long-term average of tropospheric columns was computed and high-pass filtered. The resulting map shows evidence of pollution from several additional shipping lanes not previously identified in satellite observations, illustrating the excellent signal-to-noise ratio achievable with the improved GOME-2 retrievals.

