L*a*b*Fruits: A Rapid and Robust Outdoor Fruit Detection System Combining Bio-Inspired Features with One-Stage Deep Learning Networks

Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 275 ◽  
Author(s):  
Raymond Kirk ◽  
Grzegorz Cielniak ◽  
Michael Mangan

Automation of agricultural processes requires systems that can accurately detect and classify produce in real industrial environments, including variation in fruit appearance due to illumination, occlusion, season, weather conditions, etc. In this paper we combine a visual processing approach inspired by colour-opponent theory in humans with recent advances in one-stage deep learning networks to accurately, rapidly and robustly detect ripe soft fruits (strawberries) in real industrial settings using standard (RGB) camera input. The resultant system was tested on an existing data set captured in controlled conditions, as well as our new real-world data set captured on a real strawberry farm over two months. We use the F1 score, the harmonic mean of precision and recall, to show that our system matches state-of-the-art detection accuracy (F1: 0.793 vs. 0.799) in controlled conditions; has greater generalisation and robustness to variation of spatial parameters (camera viewpoint) in the real-world data set (F1: 0.744); and runs at a fraction of the computational cost, allowing classification at almost 30 fps. We propose that the L*a*b*Fruits system addresses some of the most pressing limitations of current fruit detection systems and is well suited to applications such as yield forecasting and harvesting. Beyond the target application in agriculture, this work also provides a proof of principle whereby increased performance is achieved through analysis of the domain data, capturing features at the input level rather than simply increasing model complexity.
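The colour-opponent input step can be illustrated with a minimal, library-free sketch of the standard sRGB to CIE L*a*b* conversion. This is the generic textbook conversion, not the authors' code; the paper's full pipeline feeds such features into a one-stage detector.

```python
# Convert an 8-bit sRGB pixel to CIE L*a*b* (D65 reference white). The a*
# (green-red) and b* (blue-yellow) axes mirror human colour-opponent channels.

def srgb_to_lab(r, g, b):
    """Standard sRGB -> linear RGB -> XYZ (D65) -> CIE L*a*b*."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> XYZ, sRGB/D65 matrix
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # Normalise by the D65 white point
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(xn), f(yn), f(zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# A ripe-strawberry red sits far along the positive a* (red) opponent axis,
# which separates it from green foliage on that single channel.
print(srgb_to_lab(255, 255, 255))  # white -> roughly (100, 0, 0)
print(srgb_to_lab(200, 30, 40))    # strawberry red -> large positive a*
```

A detector can consume these three channels in place of (or alongside) raw RGB, which is the "features at the input level" idea the abstract closes on.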

Circulation ◽  
2020 ◽  
Vol 141 (Suppl_1) ◽  
Author(s):  
Åke Olsson ◽  
Magnus Samuelsson

Background: Automatic ECG algorithms using only RR-variability in the ECG to detect AF have shown high false positive rates. Research has shown that including P-wave presence in the algorithm can increase detection accuracy for AF. Methods: A novel RR- and P-wave based automatic detection algorithm implemented in the Coala Heart Monitor ("Coala", Coala Life AB, Sweden) was evaluated for detection accuracy by comparison to blinded manual ECG interpretation based on real-world data. Evaluation was conducted on 100 consecutive anonymous printouts of chest- and thumb-ECG waveforms in which the algorithm had detected both irregular RR-rhythms and strong P-waves in either the chest or thumb recording (non-AF episodes classified by the algorithm as Category 12). The recordings, without exclusions, were generated from 5,512 real-world recordings from actual Coala users in Sweden (both OTC and Rx users) during the period of March 5 to March 22, 2019, with no control or influence by the researchers or any other organization or individual. The prevalence of cardiac conditions in the user population was unknown. The blinded recordings were each manually interpreted by a trained cardiologist. The manual interpretation was compared with the automatic analysis performed by the detection algorithm to determine the number of additional false negative indications for AF as presented to the user. Results: The trained cardiologist manually interpreted 0 of the 100 recordings as AF. Manual interpretation showed that the novel automatic AF algorithm yielded a 0% false negative error rate and 100% negative predictive value (NPV) for detection of AF. Irregular RR-rhythms were detected in 569 recordings (10% of the total of 5,512 recordings). The 100 non-AF recordings containing both irregular RR-rhythms and strong P-waves constituted 18% of all recordings with irregular RR-rhythms. Respiratory sinus arrhythmia was the single most prevalent condition and was found in 47% of irregular RR-rhythms with strong P-waves. Conclusion: The novel, P-wave based automatic ECG algorithm used in the Coala showed a zero percent false negative error rate for AF detection in ECG recordings with RR-variability but presence of P-waves, as compared to manual interpretation by a cardiologist.
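The reported accuracy figures reduce to simple confusion-matrix arithmetic. A minimal sketch, taking the cardiologist's reading as ground truth, with counts mirroring the study (100 algorithm-negative recordings, 0 confirmed AF):

```python
# Confusion-count arithmetic behind a 0% false negative error rate and
# 100% NPV among recordings the algorithm classified as non-AF.

def negative_predictive_value(tn, fn):
    """NPV = TN / (TN + FN) among negative calls."""
    return tn / (tn + fn)

def false_negative_error(fn, total_negative_calls):
    """Share of algorithm-negative calls that were actually positive."""
    return fn / total_negative_calls

tn, fn = 100, 0  # all 100 Category 12 recordings confirmed non-AF
print(negative_predictive_value(tn, fn))  # 1.0 -> 100% NPV
print(false_negative_error(fn, tn + fn))  # 0.0 -> 0% false negative error
```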


2019 ◽  
Vol 10 (03) ◽  
pp. 409-420 ◽  
Author(s):  
Steven Horng ◽  
Nathaniel R. Greenbaum ◽  
Larry A. Nathanson ◽  
James C. McClay ◽  
Foster R. Goss ◽  
...  

Objective Numerous attempts have been made to create a standardized “presenting problem” or “chief complaint” list to characterize the nature of an emergency department visit. Previous attempts failed to gain widespread adoption because they were not freely shareable or did not contain the right level of specificity, structure, and clinical relevance to gain acceptance by the larger emergency medicine community. Using real-world data, we constructed a presenting problem list that addresses these challenges. Materials and Methods We prospectively captured the presenting problems for 180,424 consecutive emergency department patient visits at an urban, academic, Level I trauma center in the Boston metro area. No patients were excluded. We used a consensus process to iteratively derive our system using real-world data. We used the first 70% of consecutive visits to derive our ontology, followed by a 6-month washout period, and the remaining 30% for validation. All concepts were mapped to the Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT). Results Our system consists of a polyhierarchical ontology containing 692 unique concepts, 2,118 synonyms, and 30,613 nonvisible descriptions to correct misspellings and nonstandard terminology. Our ontology successfully captured structured data for 95.9% of visits in our validation data set. Discussion and Conclusion We present the HierArchical Presenting Problem ontologY (HaPPy). This ontology was empirically derived and then iteratively validated by an expert consensus panel. HaPPy contains 692 presenting problem concepts, each mapped to SNOMED CT. This freely shareable ontology can help facilitate presenting problem-based quality metrics, research, and patient care.
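The resolution behaviour described above, where synonyms and nonvisible misspelling entries all funnel into one SNOMED CT-mapped concept, can be sketched as a layered lookup. The concept names, synonyms, misspellings, and codes below are illustrative examples, not content taken from HaPPy.

```python
# Illustrative-only lookup in the spirit of HaPPy: visible concepts,
# synonyms, and "nonvisible descriptions" (misspellings) all resolve to one
# concept mapped to a SNOMED CT code. Verify codes against a SNOMED release.

CONCEPT_TO_SNOMED = {
    "chest pain": "29857009",   # illustrative SNOMED CT code
    "dyspnea": "267036007",     # illustrative SNOMED CT code
}
SYNONYMS = {
    "shortness of breath": "dyspnea",
    "sob": "dyspnea",
}
MISSPELLINGS = {  # analogous to HaPPy's nonvisible descriptions
    "chest pian": "chest pain",
    "dispnea": "dyspnea",
}

def resolve(text):
    """Normalise free text to (concept, SNOMED CT code), or None if unmapped."""
    t = text.strip().lower()
    t = MISSPELLINGS.get(t, t)  # correct misspellings first
    t = SYNONYMS.get(t, t)      # then fold synonyms into the visible concept
    if t in CONCEPT_TO_SNOMED:
        return t, CONCEPT_TO_SNOMED[t]
    return None

print(resolve("Chest Pian"))  # misspelling -> ("chest pain", "29857009")
print(resolve("SOB"))         # synonym -> ("dyspnea", "267036007")
```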


2019 ◽  
Vol 37 (7_suppl) ◽  
pp. 180-180 ◽  
Author(s):  
A. Oliver Sartor ◽  
Sreevalsa Appukkuttan ◽  
Ronald E. Aubert ◽  
Jeffrey Weiss ◽  
Joy Wang ◽  
...  

180 Background: Radium-223 (Ra-223) is the first FDA-approved targeted alpha therapy that significantly improves overall survival (OS) in patients (pts) with metastatic castration-resistant prostate cancer (mCRPC) and symptomatic bone metastases. There are limited real-world data describing current Ra-223 use. Methods: A retrospective patient chart review was done of men who received at least 1 cycle of Ra-223 for mCRPC in 10 centers throughout the US (4 academic, 6 private practices). All pts had a minimum follow-up of 4 months, or were followed until hospice placement or death. Descriptive analyses for clinical characteristics and treatment outcomes were performed. Results: Among the 200 pts (mean age 73.6 years, mean Charlson comorbidity index 6.9), Ra-223 was initiated on average 1.6 years after mCRPC diagnosis (first-line use (1L) = 38.5%, 2L = 31.5%, ≥3L = 30%). 78% completed 5-6 cycles of Ra-223, with a mean therapy duration of 4.2 months. Among all pts, 43% received Ra-223 as monotherapy (no overlap with other mCRPC therapies) while 57% had combination therapy with either abiraterone or enzalutamide. Median OS following Ra-223 initiation was 21.2 months (95% CI 19.6-29.2). The table provides Ra-223 utilization by type of clinical practice. Conclusions: Utilization of Ra-223 in this real-world data set was distinct from clinical trial data. Most patients received Ra-223 in combination with abiraterone or enzalutamide, therapies that were unavailable when the pilot trial was conducted. Median survival was 21.2 months. Real-world use of Ra-223 has evolved as newer agents have become FDA approved in bone-metastatic CRPC. Academic and community patterns of practice were more similar than distinct. [Table: see text]


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e18725-e18725
Author(s):  
Ravit Geva ◽  
Barliz Waissengrin ◽  
Dan Mirelman ◽  
Felix Bokstein ◽  
Deborah T. Blumenthal ◽  
...  

e18725 Background: Healthcare data sharing is important for the creation of diverse and large data sets, supporting clinical decision making, and accelerating efficient research to improve patient outcomes. This is especially vital for real-world data analysis. However, stakeholders are reluctant to share their data without ensuring patients’ privacy and proper protection of their data sets and the ways they are used. Homomorphic encryption is a cryptographic capability that can address these issues by enabling computation on encrypted data without ever decrypting it, so analytics results are obtained without revealing the raw data. The aim of this study is to prove the accuracy of the analytics results and the practical efficiency of the technology. Methods: A real-world data set of colorectal cancer patients’ survival data following two different treatment interventions, comprising 623 patients and 24 variables (14,952 items of data), was encrypted using leveled homomorphic encryption implemented in the PALISADE software library. Statistical analysis of key oncological endpoints was blindly performed on both the raw data and the homomorphically encrypted data using descriptive statistics and survival analysis with Kaplan-Meier curves. Results were then compared with an accuracy goal of two decimal places. Results: For all variables analyzed, the difference between the raw-data results and the homomorphically encrypted results was within the predetermined accuracy goal; the practical efficiency of the encrypted computation, measured by run time, is presented in the table. Conclusions: This study demonstrates that data encrypted with homomorphic encryption can be statistically analyzed with a precision of at least two decimal places, allowing clinical conclusions to be drawn safely while preserving patients’ privacy and protecting data owners’ data assets. Homomorphic encryption allows efficient computation on encrypted data non-interactively and without requiring decryption during computation. Utilizing the technology will empower large-scale cross-institution and cross-stakeholder collaboration, allowing safe international collaborations. Clinical trial information: 0048-19-TLV. [Table: see text]
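The survival statistic being compared on raw versus encrypted data can be illustrated with a plain-Python Kaplan-Meier estimator, S(t) = Π (1 - d_i / n_i) over distinct event times. The homomorphic layer (PALISADE) is out of scope here, and the data below are toys, not the study's.

```python
# Minimal Kaplan-Meier estimator, rounded to two decimals to echo the
# study's two-decimal accuracy goal when comparing raw vs encrypted results.

def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = event observed, 0 = censored.
    Returns [(t, S(t))] at each distinct event time."""
    pairs = sorted(zip(times, events))
    s = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        n_at_risk = sum(1 for tt, _ in pairs if tt >= t)
        if deaths:
            s *= 1 - deaths / n_at_risk
            curve.append((t, round(s, 2)))
        while i < len(pairs) and pairs[i][0] == t:  # consume ties at time t
            i += 1
    return curve

# Toy cohort: events at t = 1, 2, 3; censoring at t = 2 and 4.
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

Running the same estimator on decrypted-equivalent ciphertext results and checking agreement to two decimals is the shape of the accuracy comparison the abstract reports.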


1985 ◽  
Vol 22 (4) ◽  
pp. 462-467 ◽  
Author(s):  
Dennis H. Gensch

All disaggregate multiattribute choice models contain the assumption that the population is reasonably homogeneous with respect to the aggregate parameters estimated by the model. The author points out that one particular choice model, logit, has a structure that makes it particularly suited to test a data set for possible segments. A real-world data set is used to illustrate a simple procedure for testing the homogeneity assumption. The analysis provides a warning that managers may easily derive suboptimal or counterproductive strategies if they fail to test this assumption.
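One common way to operationalise such a homogeneity check, sketched below on toy data, is to fit the logit model once pooled and once per candidate segment and compare log-likelihoods in a likelihood-ratio style statistic. This is a generic illustration; Gensch's exact procedure may differ.

```python
import math

def fit_logit(xs, ys, steps=3000, lr=0.5):
    """One-feature logistic regression fit by gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def log_lik(xs, ys, b0, b1):
    """Bernoulli log-likelihood of the fitted logit."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# Two toy segments whose preferences point in opposite directions: pooling
# them hides all structure, so aggregate parameters mislead the manager.
xa, ya = [-2, -1, -0.5, 0.5, 1, 2], [0, 0, 0, 1, 1, 1]
xb, yb = [-2, -1, -0.5, 0.5, 1, 2], [1, 1, 1, 0, 0, 0]

ll_pooled = log_lik(xa + xb, ya + yb, *fit_logit(xa + xb, ya + yb))
ll_split = log_lik(xa, ya, *fit_logit(xa, ya)) + log_lik(xb, yb, *fit_logit(xb, yb))
lr_stat = 2 * (ll_split - ll_pooled)  # large value -> reject homogeneity
print(lr_stat)
```

Here the pooled fit has essentially zero slope while each segment fits nearly perfectly, so the statistic is large, which is exactly the warning sign the article describes.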


2018 ◽  
Vol 210 ◽  
pp. 04019 ◽  
Author(s):  
Hyontai SUG

Recent Go matches between humans and the artificial intelligence AlphaGo demonstrated major advances in machine learning technologies. While AlphaGo was trained using real-world data, AlphaGo Zero was trained using massive amounts of random data, and the fact that AlphaGo Zero beat AlphaGo decisively revealed that diversity and size in training data are important for better performance of machine learning algorithms, especially deep learning algorithms for neural networks. On the other hand, artificial neural networks and decision trees are widely accepted machine learning algorithms because of their robustness to errors and their comprehensibility, respectively. In this paper, in order to show empirically that diversity and size in data are important factors for better performance of machine learning algorithms, these two representative algorithms are used for experiments. A real-world data set called breast tissue was chosen because the data set consists of real numbers, a property well suited to artificial random data generation. The result of the experiment confirmed that the diversity and size of data are very important factors for better performance.
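The point that real-valued attributes make artificial random data generation easy can be sketched simply: sample synthetic rows uniformly within each attribute's observed range. The rows below are toy values, not the actual breast tissue data set.

```python
import random

def attribute_ranges(rows):
    """Per-column (min, max) over a list of numeric feature rows."""
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols]

def generate_random_rows(rows, n, seed=0):
    """Draw n synthetic rows uniformly within each attribute's observed range."""
    rng = random.Random(seed)
    ranges = attribute_ranges(rows)
    return [[rng.uniform(lo, hi) for lo, hi in ranges] for _ in range(n)]

real = [[0.1, 5.0], [0.3, 7.5], [0.2, 6.0]]  # toy "real" rows (2 attributes)
synthetic = generate_random_rows(real, 100)   # diversity/size augmentation
print(len(synthetic), attribute_ranges(real))
```

Synthetic rows like these can then be labeled (e.g. by a trained model) and added to the training set, which is the diversity-and-size lever the experiment tests.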

