Sensing Architecture for Terrestrial Crop Monitoring: Harvesting Data as an Asset

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3114
Author(s):  
Francisco Rovira-Más ◽  
Verónica Saiz-Rubio ◽  
Andrés Cuenca-Cuenca

Very often, the root of problems in producing food sustainably, as well as the origin of many environmental issues, lies in making decisions with unreliable or nonexistent data. Data-driven agriculture has emerged as a way to palliate the lack of meaningful information when taking critical steps in the field. However, many decisive parameters still require manual measurements and proximity to the target, which results in the typical undersampling that impedes statistical significance and the application of AI techniques that rely on massive data. To invert this trend, and simultaneously combine crop proximity with massive sampling, a sensing architecture for automating crop scouting from ground vehicles is proposed. At present, there are no clear guidelines on how monitoring vehicles must be configured to optimally track crop parameters at high resolution. This paper structures the architecture for such vehicles in four subsystems, examines the most common components for each subsystem, and delves into their interactions for an efficient delivery of high-density field data from initial acquisition to final recommendation. Its main advantage rests on the real-time generation of crop maps that blend the global position of each canopy, some of its agronomic traits, and precise monitoring of the ambient conditions surrounding it. As a use case, the envisioned architecture was embodied in an autonomous robot that automatically sorted two harvesting zones of a commercial vineyard to produce two wines of dissimilar characteristics. The information contained in the maps delivered by the robot may help growers systematically apply differential harvesting, evidencing the suitability of the proposed architecture for massive monitoring and subsequent data-driven actuation. While many crop parameters still cannot be measured non-invasively, the availability of novel sensors is continually growing; to benefit from them, an efficient and trustworthy sensing architecture becomes indispensable.
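As a rough illustration of the kind of map-building such an architecture performs, the sketch below (Python; the record type `Observation` and field names such as `ndvi` are hypothetical, not from the paper) aggregates geotagged canopy readings into grid cells that blend global position, an agronomic trait, and an ambient condition:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical record: one geotagged canopy observation from the vehicle.
@dataclass
class Observation:
    lat: float
    lon: float
    ndvi: float        # example agronomic trait
    air_temp_c: float  # ambient condition around the canopy

def grid_cell(lat, lon, cell_deg=0.0001):
    """Snap a GPS fix to a coarse grid cell for map aggregation."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def build_crop_map(observations):
    """Average each trait per grid cell, yielding a simple crop map."""
    acc = defaultdict(lambda: [0.0, 0.0, 0])
    for ob in observations:
        cell = grid_cell(ob.lat, ob.lon)
        acc[cell][0] += ob.ndvi
        acc[cell][1] += ob.air_temp_c
        acc[cell][2] += 1
    return {cell: (s_ndvi / n, s_temp / n)
            for cell, (s_ndvi, s_temp, n) in acc.items()}
```

A real pipeline would of course fuse many more traits and run incrementally on the vehicle, but the cell-wise blending of position, trait, and ambient data is the core idea.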

2020 ◽  
Vol 12 (1) ◽  
pp. 10
Author(s):  
W Glenn Bond ◽  
Haley Dozier ◽  
Thomas L Arnold ◽  
Michael Y Lam ◽  
Quyen T Dong ◽  
...  

Attempts to leverage operational time-series data in Condition-Based Maintenance (CBM) approaches to optimize the life cycle management and Reliability, Availability, and Maintainability (RAM) of military vehicles have encountered several obstacles over decades of data collection. These obstacles have beset similar approaches on civilian ground vehicles, as well as on aircraft and other complex systems. Analysis of operational data is critical because it represents a continuous recording of the state of the system. Applying rudimentary data analytics to operational data can provide insights like fuel usage patterns or the observed reliability of one vehicle or even a fleet. Monitoring trends and analyzing patterns in this data over time, however, can provide insight into the health of a vehicle, a complex system, or a fleet, predicting mean time to failure or compiling logistic or life cycle needs. Such High-Performance Data Analytics (HPDA) on operational time-series datasets has historically been difficult due to the large amount of data gathered from vehicle sensors, the lack of association between clusters observed in the data and failures or unscheduled maintenance events, and the deficiency of unsupervised learning techniques for time-series data. We present an HPDA environment and a method of discovering patterns in vehicle operational data that yields models for predicting the likelihood of imminent failure, referred to as Parameter-Based Indicators (PBIs). Our method is a data-driven approach that uses both time-series and relational maintenance data. This hybrid approach combines supervised and unsupervised machine learning and data analytic techniques to correlate labeled, relational maintenance event data with unlabeled operational time-series data, utilizing the DoD High Performance Computing (HPC) capabilities at the U.S. Army Engineer Research and Development Center. In leveraging both time-series and relational data, we demonstrate a means of fast, purely data-driven model creation that is more broadly applicable and requires less a priori information than physics-informed, data-driven models. By blending these approaches, the system can tie lifecycle management goals into the workflow to generate specific PBIs that predict failures or highlight areas of concern in individual or collective vehicle histories.
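The hybrid workflow described above can be caricatured in a few lines. The sketch below (Python; all names, such as `fit_threshold_pbi`, are illustrative rather than the authors' method) labels sensor-data windows using relational maintenance events and derives a trivial threshold-based indicator from them:

```python
import statistics

def window_features(series, size):
    """Split a sensor time series into non-overlapping windows,
    summarized by (mean, population std dev) features."""
    windows = [series[i:i + size]
               for i in range(0, len(series) - size + 1, size)]
    return [(statistics.mean(w), statistics.pstdev(w)) for w in windows]

def fit_threshold_pbi(features, labels):
    """Pick the mean-feature threshold best separating failure windows.
    labels[i] is True when window i preceded a maintenance event."""
    best_thr, best_acc = None, -1.0
    for thr, _ in features:  # candidate thresholds from the data itself
        preds = [m > thr for m, _ in features]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc
```

The paper's actual PBIs come from clustering time-series data at HPC scale; this toy only shows the labeling-then-fitting shape of the hybrid idea.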


2021 ◽  
Vol 10 (3) ◽  
pp. e001342
Author(s):  
Stijn Schretlen ◽  
Paulien Hoefsmit ◽  
Suzanne Kats ◽  
Geofridus van Merode ◽  
Jos Maessen ◽  
...  

Objective: The COVID-19 pandemic emphasises the need to use healthcare resources efficiently and effectively to guarantee access to high-quality healthcare in an affordable manner. Surgical cancellations have a negative impact on these goals. We used the Lean Six Sigma (LSS) methodology to reduce cardiac surgical cancellations in a University Medical Center in the Netherlands, where approximately 20% of cardiac surgeries were being cancelled. Method: A multifunctional project team used the data-driven LSS process improvement methodology and followed the 'DMAIC' improvement cycle (Define, Measure, Analyse, Improve, Control). Through all DMAIC phases, real-world data from the hospital information system supported the team during biweekly problem-solving sessions. This quality improvement study used an 'interrupted time series' study design. Data were collected between January 2014 and December 2016, covering 20 months prior to and 16 months after implementation. Outcomes were the number of last-minute coronary artery bypass graft cancellations, the number of repeated diagnostics, referral-to-treatment time and patient satisfaction. Statistical process control charts visualised the change and impact over time. Student's two-sample t-test was used to test statistical significance; p<0.05 was considered statistically significant. Results: Last-minute cancellations were reduced by 50% (p=0.010), repeated preoperative diagnostics (X-ray) declined by 67% (p=0.021), referral-to-treatment time fell by 35% (p=0.000) and the patient Net Promoter Score increased by 14% (p=0.005). Conclusion: This study shows that LSS is an effective quality improvement approach that helps healthcare organisations deliver more safe, timely, effective, efficient, equitable and patient-centred care. Crucial success factors were the use of a structured data-driven problem-solving approach, a focus on patient value and process flow, leadership support, and engagement of the healthcare professionals involved across the entire care pathway. Ongoing monitoring of key performance indicators helps engage the organisation in maintaining continuous process improvement and sustaining long-term impact.
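For reference, the test statistic used in the study is the standard Student's two-sample t-test with pooled variance; a minimal sketch (with made-up example data, not the study's) is:

```python
import math
import statistics

def two_sample_t(before, after):
    """Pooled-variance Student's t statistic and degrees of freedom
    for comparing a metric before vs after an intervention."""
    n1, n2 = len(before), len(after)
    m1, m2 = statistics.mean(before), statistics.mean(after)
    v1, v2 = statistics.variance(before), statistics.variance(after)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

The resulting t is compared against the t distribution with the returned degrees of freedom to obtain the p-values the abstract reports.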


2021 ◽  
Author(s):  
Mirka Henninger ◽  
Rudolf Debelak ◽  
Carolin Strobl

To detect differential item functioning (DIF), Rasch trees search for optimal splitpoints in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF effects as significant in larger samples. This leads to larger trees, which split the sample into more subgroups. More desirable would be an approach driven by effect size rather than sample size. To achieve this, we suggest implementing an additional stopping criterion: the popular ETS classification scheme based on the Mantel-Haenszel odds ratio. This criterion helps us evaluate whether a split in a Rasch tree is based on a substantial or an ignorable difference in item parameters, and it allows the Rasch tree to stop growing when DIF between the identified subgroups is small. Furthermore, it supports identifying DIF items and quantifying DIF effect sizes in each split. Based on simulation results, we conclude that the Mantel-Haenszel effect size further reduces unnecessary splits in Rasch trees under the null hypothesis, or when the sample size is large but DIF effects are negligible. To make the stopping criterion easy to use for applied researchers, we have implemented the procedure in the statistical software R. Finally, we discuss how DIF effects between different nodes in a Rasch tree can be interpreted, and emphasize the importance of purification strategies for the Mantel-Haenszel procedure in tree stopping and DIF item classification.
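The proposed stopping criterion can be sketched as follows. The code below (Python; a simplification that classifies by the magnitude of the Mantel-Haenszel delta only, omitting the significance test the full ETS scheme also requires) computes the common odds ratio across score groups and maps it to the A/B/C categories:

```python
import math

def mh_delta(tables):
    """Mantel-Haenszel delta from per-score-group 2x2 counts
    (a, b, c, d) = (ref correct, ref incorrect,
                    focal correct, focal incorrect)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    or_mh = num / den                  # common odds ratio
    return -2.35 * math.log(or_mh)    # ETS delta metric

def ets_class(delta):
    """ETS DIF categories by |delta| magnitude (significance omitted)."""
    if abs(delta) < 1.0:
        return "A"   # negligible DIF
    if abs(delta) >= 1.5:
        return "C"   # large DIF
    return "B"       # moderate DIF
```

In the proposed Rasch tree, a split whose items all fall in category A would be treated as ignorable, stopping further growth.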


2018 ◽  
Vol 9 (1) ◽  
pp. 95 ◽  
Author(s):  
Xudong Teng ◽  
Xin Zhang ◽  
Yuantao Fan ◽  
Dong Zhang

Non-linear acoustic techniques are an attractive approach for evaluating early fatigue as well as cracks in materials. However, their accuracy is greatly restricted by external non-linearities of ultrasonic measurement systems. In this work, an acoustical data-driven deviation detection method, the consensus self-organizing models (COSMO) method based on statistical probability models, was introduced to study the evolution of localized crack growth. Using the pitch-catch technique, frequency spectra of acoustic echoes collected from different locations on a specimen were compared, resulting in a Hellinger distance matrix from which statistical parameters such as the z-score, p-value and T-value were constructed. It is shown that the statistical-significance p-value of the COSMO method has a strong relationship with crack growth. In particular, the T-value, a logarithm-transformed p-value, increases proportionally with the growth of cracks and can thus be applied to locate the position of cracks and monitor the deterioration of materials.
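The core of a COSMO-style deviation test can be sketched as follows (Python; function names are illustrative): compute pairwise Hellinger distances between normalized spectra, then score each measurement location by how far its mean distance deviates from the consensus of its peers:

```python
import math
import statistics

def hellinger(p, q):
    """Hellinger distance between two discrete spectra
    (each normalized to sum to 1 first)."""
    sp, sq = sum(p), sum(q)
    return math.sqrt(0.5 * sum((math.sqrt(a / sp) - math.sqrt(b / sq)) ** 2
                               for a, b in zip(p, q)))

def deviation_z_scores(spectra):
    """z-score of each spectrum's mean Hellinger distance to all
    other spectra; a large positive z flags a deviating location."""
    means = [statistics.mean(hellinger(s, t)
                             for j, t in enumerate(spectra) if j != i)
             for i, s in enumerate(spectra)]
    mu, sd = statistics.mean(means), statistics.pstdev(means)
    return [(m - mu) / sd if sd else 0.0 for m in means]
```

In the paper these z-scores feed p-values and the logarithm-transformed T-value; the sketch stops at the distance-and-deviation step that those statistics build on.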


2021 ◽  
Author(s):  
Indu Shukla ◽  
Antoinette Silas ◽  
Haley Dozier ◽  
Brandon Hansen ◽  
W. Bond

Author(s):  
Bingyu Wang ◽  
Sivakumar Rathinam ◽  
Rajnikant Sharma ◽  
Kaarthik Sundar

A majority of the routing algorithms for unmanned aerial or ground vehicles rely on Global Positioning System (GPS) information for localization. However, disruption of GPS signals, by intention or otherwise, can render these algorithms ineffective. This article addresses this issue by utilizing landmarks to aid localization in GPS-denied environments. Specifically, given a number of vehicles and a set of targets, we formulate a joint routing and landmark placement problem as a combinatorial optimization problem: compute paths for the vehicles that traverse every target at least once, and place landmarks to aid the vehicles in localization as each traverses its route, such that the sum of the traveling cost and the landmark placement cost is minimized. A mixed-integer linear program is presented, and a set of algorithms and heuristics is proposed to address issues not covered by the linear program. The performance of each proposed algorithm is evaluated and compared through extensive computational and simulation results.
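The objective being minimized can be illustrated with a toy evaluator (Python; a hypothetical helper, not the paper's MILP formulation): total Euclidean travel cost over all vehicle tours plus a per-landmark placement cost:

```python
import math

def route_cost(route):
    """Euclidean travel cost of a closed tour over target waypoints."""
    return sum(math.dist(route[i], route[(i + 1) % len(route)])
               for i in range(len(route)))

def joint_objective(routes, landmarks, landmark_cost=1.0):
    """Sum of travel cost across all vehicles plus landmark placement cost."""
    return (sum(route_cost(r) for r in routes)
            + landmark_cost * len(landmarks))
```

The hard part the paper tackles is searching over routes and placements jointly, under the constraint that every vehicle stays localizable along its route; this evaluator only shows the cost trade-off being optimized.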


Author(s):  
Vanessa Boudewyns ◽  
Pamela A. Williams

Purpose: The purpose of this study is to describe the trends and practices of comparative prescription drug advertising by examining the types of comparative claims made in direct-to-consumer (DTC) and direct-to-physician (DTP) print advertisements. Design/methodology/approach: The authors conducted a content analysis of 54 DTC and DTP print prescription drug advertisements (published between 1997 and 2014) with comparative claims. Findings: Efficacy-based comparisons appeared in 64 per cent of advertisements, and attribute-based comparisons appeared in 37 per cent of advertisements. Most advertisements made direct (vs indirect) references to competitors (85 per cent), compared the advertised drug to a single (vs multiple) competitor (78 per cent), focused exclusively on one type of comparison claim (i.e. efficacy-, risk- or attribute-based) (70 per cent) and did not contain data-driven visual aids (82 per cent). Some differences between DTC and DTP advertisements emerged. More DTP than DTC advertisements included data-driven visual aids (82 per cent vs 0 per cent, respectively), included numerical data (88 per cent vs 53 per cent) and conveyed statistical significance (52 per cent vs 12 per cent). Research limitations/implications: The study used a convenience sample rather than a random sample of advertisements; thus, the findings might not be generalizable to all pharmaceutical DTC and DTP advertisements. Examining the tactics that advertisers use to educate and influence consumers and physicians sets the foundation for future studies that examine the effects of their exposure to comparative claims. Suggestions for future research are discussed. Originality/value: This study is the first to examine and statistically compare the comparative advertising tactics used in both consumer and physician prescription drug advertisements.

