Follow-Up Control and Image Recognition of Neck Level for Standard Metal Gauge

2020 ◽  
Vol 10 (18) ◽  
pp. 6624
Author(s):  
Chenquan Hua ◽  
Chengjin Xie ◽  
Xuan Xu

An image recognition technique is proposed for determining optimal neck levels for standard metal gauges in the process of validating pipe provers. A camera-level follow-up control system was designed to automatically track the fluid level with a camera, thereby preventing errors from inclined viewing angles. An orange background plate was placed behind the tube to reduce background interference and to highlight the scale numbers, scale lines, and concave meniscus. A segmentation algorithm based on edge detection and K-means clustering was used to segment the indicator tubes and scales in the acquired images. A concave meniscus reconstruction algorithm and a curve-fitting algorithm were proposed to better identify the lowest point of the meniscus. A characteristic edge detection model was used to identify the centimeter-scale lines corresponding to the meniscus. A binary tree multiclass support vector machine (MCSVM) classifier was then used to identify the scale numbers corresponding to the scale lines and determine the optimal neck level for standard metal gauges. Experimental results showed that measurement errors were within ±0.1 mm compared to a ground truth acquired manually with Vernier calipers. The recognition time, including follow-up control, was less than 10 s, which is much shorter than the switching time required between measuring individual tanks. This automated measurement approach for gauge neck levels can effectively reduce measurement times, decrease human error in liquid level readings, and improve the efficiency of pipe prover validation.
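As an illustration of the curve-fitting step (a minimal sketch, not the authors' implementation), the lowest point of the reconstructed meniscus can be located by fitting a parabola to the detected edge pixels and taking its vertex; the point arrays below are hypothetical:

```python
import numpy as np

def meniscus_lowest_point(xs, ys):
    """Fit a parabola to meniscus edge pixels and return the (x, y)
    coordinates of its vertex, i.e., the lowest point of the curve."""
    a, b, c = np.polyfit(xs, ys, 2)        # y = a*x^2 + b*x + c
    x_min = -b / (2.0 * a)                 # vertex x-coordinate
    y_min = np.polyval([a, b, c], x_min)
    return x_min, y_min

# Hypothetical meniscus edge points: a concave curve with slight noise
xs = np.linspace(-10.0, 10.0, 21)
ys = 0.05 * xs**2 + 42.0 + np.random.default_rng(0).normal(0.0, 0.02, xs.size)
x_low, y_low = meniscus_lowest_point(xs, ys)
```

In practice the fit would run on the subpixel edge coordinates produced by the meniscus reconstruction step, and the vertex ordinate would be converted to a physical level via the identified scale lines.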

2020 ◽  
Vol 77 (4) ◽  
pp. 1609-1622
Author(s):  
Franziska Mathies ◽  
Catharina Lange ◽  
Anja Mäurer ◽  
Ivayla Apostolova ◽  
Susanne Klutmann ◽  
...  

Background: Positron emission tomography (PET) of the brain with 2-[F-18]-fluoro-2-deoxy-D-glucose (FDG) is widely used for the etiological diagnosis of clinically uncertain cognitive impairment (CUCI). Acute full-blown delirium can cause reversible alterations of FDG uptake that mimic neurodegenerative disease. Objective: This study tested whether delirium in remission affects the performance of FDG PET for differentiation between neurodegenerative and non-neurodegenerative etiology of CUCI. Methods: The study included 88 patients (82.0±5.7 y) with newly detected CUCI during hospitalization in a geriatric unit. Twenty-seven (31%) of the patients were diagnosed with delirium during their current hospital stay; the delirium was in remission at the time of enrollment, so it was not considered the primary cause of the CUCI. Cases were categorized as neurodegenerative or non-neurodegenerative etiology based on visual inspection of FDG PET. The diagnosis at clinical follow-up after ≥12 months served as ground truth to evaluate the diagnostic performance of FDG PET. Results: FDG PET was categorized as neurodegenerative in 51 (58%) of the patients. Follow-up after 16±3 months was obtained in 68 (77%) of the patients. The clinical follow-up diagnosis confirmed the FDG PET-based categorization in 60 patients (88%; 4 false negative and 4 false positive cases with respect to detection of neurodegeneration). The fraction of correct PET-based categorizations did not differ between patients with delirium in remission and patients without delirium (86% versus 89%, p = 0.666). Conclusion: Brain FDG PET is useful for the etiological diagnosis of CUCI in hospitalized geriatric patients, as well as in patients with delirium in remission.
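The reported diagnostic performance follows directly from the confusion counts given in the abstract (a trivial sketch; the variable names are ours):

```python
# Confusion counts reported in the abstract (n = 68 patients with follow-up)
n_followed = 68
false_negatives = 4    # PET missed neurodegeneration
false_positives = 4    # PET wrongly suggested neurodegeneration
correct = n_followed - false_negatives - false_positives  # confirmed cases

accuracy = correct / n_followed   # fraction of correct PET categorizations
```

This reproduces the 60/68 (88%) agreement quoted in the Results section.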


2021 ◽  
Vol 13 (15) ◽  
pp. 3024
Author(s):  
Huiqin Ma ◽  
Wenjiang Huang ◽  
Yingying Dong ◽  
Linyi Liu ◽  
Anting Guo

Fusarium head blight (FHB) is a major winter wheat disease in China. The accurate and timely detection of wheat FHB is vital to scientific field management. By combining three types of spectral features, namely, spectral bands (SBs), vegetation indices (VIs), and wavelet features (WFs), in this study we explore the potential of using hyperspectral imagery obtained from an unmanned aerial vehicle (UAV) to detect wheat FHB. First, during the wheat filling period, two UAV-based hyperspectral images were acquired. SBs, VIs, and WFs that were sensitive to wheat FHB were extracted and optimized from the two images. Subsequently, a field-scale wheat FHB detection model was formulated, based on the optimal spectral feature combination of SBs, VIs, and WFs (SBs + VIs + WFs), using a support vector machine. Two commonly used data normalization algorithms were applied before the construction of the model. Models based on WFs alone, and on the spectral feature combination of optimal SBs and VIs (SBs + VIs), were formulated for comparison and testing. The results showed that the detection model based on the normalized SBs + VIs + WFs, using the min–max normalization algorithm, achieved the highest R2 of 0.88 and the lowest RMSE of 2.68% among the three models. Our results suggest that UAV-based hyperspectral imaging technology is promising for the field-scale detection of wheat FHB. Combining traditional SBs and VIs with WFs can effectively improve the detection accuracy of wheat FHB.
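As a reminder of the preprocessing step (a generic sketch, not the authors' code), min–max normalization rescales each spectral feature to [0, 1] before model training; the feature values below are hypothetical:

```python
import numpy as np

def min_max_normalize(X):
    """Rescale each feature (column) of X to the [0, 1] range."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    return (X - col_min) / (col_max - col_min)

# Hypothetical rows of (spectral band, vegetation index, wavelet feature)
features = np.array([[450.0, 0.32, 1.1],
                     [530.0, 0.41, 0.9],
                     [670.0, 0.25, 1.4]])
X_norm = min_max_normalize(features)
```

Normalizing per feature keeps the SBs, VIs, and WFs on a common scale so no single feature type dominates the support vector machine's kernel distances.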


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4237
Author(s):  
Hoon Ko ◽  
Kwangcheol Rim ◽  
Isabel Praça

The biggest problem with conventional feature-based anomaly signal detection is that it is difficult to use in real time and requires processing of network signals. Furthermore, analyzing network signals in real time requires a vast amount of processing for each signal, as each protocol contains various pieces of information. This paper proposes anomaly detection that analyzes the relationship of each feature to the anomaly detection model. The model analyzes the anomaly of network signals based on anomaly feature detection. The selected features for anomaly detection do not require constant network signal updates or real-time processing of these signals. When the selected features are found in a received signal, the signal is registered as a potential anomaly signal and is then steadily monitored until it is determined to be either an anomaly or a normal signal. In terms of the results, the model determined anomalies with 99.7% (0.997) accuracy for f(4)(S0); for f(4)(REJ), 11,233 signals (171 of them anomalies) were received and judged as normal or anomaly with an accuracy of 98.7% (0.987).
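The triage logic described above can be sketched as follows (an illustrative simplification, not the authors' implementation; the flag names echo the S0/REJ connection states mentioned in the results):

```python
# Pre-selected anomaly features (example TCP connection-state flags;
# the set and its contents are illustrative assumptions)
SELECTED_FEATURES = {"S0", "REJ"}

def triage(signal):
    """Register a signal for monitoring if it carries a selected
    anomaly feature; otherwise pass it through as normal without
    further per-signal processing."""
    if SELECTED_FEATURES & set(signal.get("flags", [])):
        return "monitor"
    return "normal"

signals = [
    {"id": 1, "flags": ["S0"]},
    {"id": 2, "flags": ["SF"]},          # completed connection: normal
    {"id": 3, "flags": ["REJ", "S0"]},
]
watchlist = [s for s in signals if triage(s) == "monitor"]
```

Only signals on the watchlist are monitored further, which is what lets the approach avoid real-time deep processing of every signal.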


2021 ◽  
pp. 1-7
Author(s):  
Orit Kliuk-Ben Bassat ◽  
Doron Schwartz ◽  
Alexander Zubkov ◽  
Amir Gal-Oz ◽  
Alexander Gorevoy ◽  
...  

Introduction: Decannulation of the arteriovenous fistula (AVF) after each hemodialysis session requires precise compression of the needle puncture site. The objective of our study was to evaluate the bleeding time (BT) needed to achieve hemostasis using WoundClot, an innovative hemostatic gauze, and to assess whether its long-term use can improve AVF preservation. Methods: This is a prospective single-center study. Initially, the time to hemostasis after AVF decannulation was compared between WoundClot and cotton gauze in 24 prevalent hemodialysis patients. Thereafter, the patients continued to use WoundClot for 12 months and were compared to a control group of 25 patients using regular cotton gauze. Follow-up data included parameters of dialysis adequacy, AVF interventions, and thrombotic events. Results: WoundClot use significantly shortened the time needed for hemostasis. Mean venous BT decreased by 3.99 (±4.6) min and mean arterial BT by 6.38 (±4.8) min when using WoundClot compared to cotton gauze (p < 0.001). At the end of the study, dialysis adequacy expressed by spKt/V was higher in the WoundClot group than in the control group (1.73 vs. 1.53, respectively; p = 0.047). Although patients in the WoundClot group had a higher baseline BT, arterial and venous pressures did not differ between the groups after a median follow-up of 10.8 months. The AVF thrombosis rate was similar between the groups. Conclusions: WoundClot hemostatic gauze significantly reduced the time required for hemostasis after AVF decannulation and may be associated with better AVF preservation. We suggest using WoundClot for arterial BT longer than 15 min and for venous BT longer than 12.5 min.


Author(s):  
J. R. Barnes ◽  
C. A. Haswell

Abstract: Ariel’s ambitious goal to survey a quarter of known exoplanets will transform our knowledge of planetary atmospheres. Masses measured directly with the radial velocity technique are essential for well-determined planetary bulk properties. Radial velocity masses will provide important checks of masses derived from atmospheric fits, or alternatively can be treated as a fixed input parameter to reduce possible degeneracies in atmospheric retrievals. We quantify the impact of stellar activity on planet mass recovery for the Ariel mission sample using Sun-like spot models scaled for active stars, combined with other noise sources. Planets with necessarily well-determined ephemerides will be selected for characterisation with Ariel. With this prior requirement, we simulate the derived planet mass precision as a function of the number of observations for a prospective sample of Ariel targets. We find that quadrature sampling can significantly reduce the time commitment required for follow-up RVs, and is most effective when the planetary RV signature is larger than the RV noise. For a typical radial velocity instrument operating on a 4 m class telescope and achieving 1 m s⁻¹ precision, between ~17% and ~37% of the time commitment is spent on the 7% of planets with mass Mp < 10 M⊕. In many low-activity cases, the time required is limited by asteroseismic and photon noise. For low-mass or faint systems, we can recover masses with the same precision up to ~3 times more quickly with an instrumental precision of ~10 cm s⁻¹.
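The planetary RV signature that gets compared against the noise floor is the standard semi-amplitude of the stellar reflex motion (a textbook formula, not specific to this paper); a quick evaluation shows why ~1 m s⁻¹ precision makes Mp < 10 M⊕ planets expensive to follow up:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def rv_semi_amplitude(m_planet, m_star, period_s, incl=math.pi / 2, ecc=0.0):
    """Semi-amplitude K (m/s) of the stellar reflex radial velocity:
    K = (2*pi*G/P)^(1/3) * Mp*sin(i) / ((M* + Mp)^(2/3) * sqrt(1 - e^2))."""
    return ((2.0 * math.pi * G / period_s) ** (1.0 / 3.0)
            * m_planet * math.sin(incl)
            / ((m_star + m_planet) ** (2.0 / 3.0) * math.sqrt(1.0 - ecc ** 2)))

# A 10 Earth-mass planet on a 10-day orbit around a solar-mass star
K = rv_semi_amplitude(10 * M_EARTH, M_SUN, 10 * 86400)   # roughly 3 m/s
```

With K only a few times the 1 m s⁻¹ instrumental floor, many observations are needed to average the noise down, which is consistent with the large fraction of telescope time the abstract attributes to these low-mass planets.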


Author(s):  
Xuhai Xu ◽  
Ebrahim Nemati ◽  
Korosh Vatanparvar ◽  
Viswam Nathan ◽  
Tousif Ahmed ◽  
...  

The prevalence of ubiquitous computing enables new opportunities for lung health monitoring and assessment. In the past few years, there have been extensive studies on cough detection using passively sensed audio signals. However, the generalizability of a cough detection model when applied to external datasets, especially in real-world deployments, is questionable and has not been explored adequately. Beyond detecting coughs, researchers have looked into how cough sounds can be used to assess lung health. However, due to the challenges of collecting both cough sounds and lung health condition ground truth, previous studies have been hindered by limited datasets. In this paper, we propose Listen2Cough to address these gaps. We first build an end-to-end deep learning architecture using public cough sound datasets to detect coughs within raw audio recordings. We employ a pre-trained MobileNet and integrate a number of augmentation techniques to improve the generalizability of our model. Without additional fine-tuning, our model achieves an F1-score of 0.948 when tested against a new clean dataset, and 0.884 on another in-the-wild noisy dataset, an average advantage of 5.8% and 8.4% over the best baseline model, respectively. Then, to mitigate the issue of limited lung health data, we propose to transform the cough detection task into lung health assessment tasks so that the rich cough data can be leveraged. Our hypothesis is that these tasks extract and utilize similar effective representations from cough sounds. We embed the cough detection model into a multi-instance learning framework with an attention mechanism and further tune the model for lung health assessment tasks. Our final model achieves an F1-score of 0.912 on healthy vs. unhealthy, 0.870 on obstructive vs. non-obstructive, and 0.813 on COPD vs. asthma classification, outperforming the baseline by 10.7%, 6.3%, and 3.7%, respectively. Moreover, the weight values in the attention layer can be used to identify important coughs highly correlated with lung health, which can potentially provide interpretability for expert diagnosis in the future.
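The attention-based multi-instance pooling step can be sketched generically (toy dimensions and random features; not the authors' architecture):

```python
import numpy as np

def attention_pool(instance_feats, w):
    """Score each instance (here, a cough embedding), softmax the scores
    into attention weights, and return the weighted bag representation
    together with the weights themselves."""
    scores = instance_feats @ w              # one scalar score per instance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over instances
    bag = weights @ instance_feats           # attention-weighted average
    return bag, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))   # 5 cough embeddings, 8-dim each (toy)
w = rng.normal(size=8)            # attention parameter vector (toy)
bag, weights = attention_pool(feats, w)
```

The returned weights are exactly the quantities the abstract proposes for interpretability: coughs that receive high attention are the ones most informative about the bag-level lung health label.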


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Xiaoya Guo ◽  
Akiko Maehara ◽  
Mitsuaki Matsumura ◽  
Liang Wang ◽  
Jie Zheng ◽  
...  

Abstract: Background: Coronary plaque vulnerability prediction is difficult because plaque vulnerability is non-trivial to quantify, clinically available medical image modalities cannot adequately quantify thin cap thickness, prediction methods with high accuracy still need to be developed, and gold-standard data to validate vulnerability predictions are often not available. Patient follow-up intravascular ultrasound (IVUS), optical coherence tomography (OCT), and angiography data were acquired to construct 3D fluid–structure interaction (FSI) coronary models, and four machine-learning methods were compared to identify the optimal method for predicting future plaque vulnerability. Methods: Baseline and 10-month follow-up in vivo IVUS and OCT coronary plaque data were acquired from two arteries of one patient using IRB-approved protocols with informed consent obtained. IVUS- and OCT-based FSI models were constructed to obtain plaque wall stress/strain and wall shear stress. Forty-five slices were selected as the machine-learning sample database for the vulnerability prediction study. Thirteen key morphological factors from the IVUS and OCT images and biomechanical factors from the FSI model were extracted from the 45 slices at baseline for analysis. A lipid percentage index (LPI), cap thickness index (CTI), and morphological plaque vulnerability index (MPVI) were quantified to measure plaque vulnerability. Four machine-learning methods (least-squares support vector machine, discriminant analysis, random forest, and ensemble learning) were employed to predict the changes of the three indices using all combinations of the 13 factors. A standard fivefold cross-validation procedure was used to evaluate prediction results. Results: For LPI change prediction using the support vector machine, wall thickness was the optimal single-factor predictor with an area under the curve (AUC) of 0.883, and the optimal combinational-factor predictor achieved an AUC of 0.963. For CTI change prediction using discriminant analysis, minimum cap thickness was the optimal single-factor predictor with an AUC of 0.818, while the optimal combinational-factor predictor achieved an AUC of 0.836. Using random forest to predict MPVI change, minimum cap thickness was the optimal single-factor predictor with an AUC of 0.785, and the optimal combinational-factor predictor achieved an AUC of 0.847. Conclusion: This feasibility study demonstrated that machine-learning methods could be used to accurately predict plaque vulnerability change based on morphological and biomechanical factors from multi-modality image-based FSI models. Large-scale studies are needed to verify our findings.
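The fivefold cross-validation over the 45 slices can be sketched as follows (a generic split, not the authors' code):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and yield (train, test) index pairs
    for k-fold cross-validation; each sample is tested exactly once."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

splits = list(kfold_indices(45, k=5))   # 45 slices, as in the study
```

Each of the four classifiers would be trained on the 36-slice training portion and scored (e.g., via AUC) on the held-out 9 slices, with the five test-fold scores averaged to produce the reported figures.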

