Are All Deep Learning Architectures Alike for Point‐of‐Care Ultrasound?: Evidence From a Cardiac Image Classification Model Suggests Otherwise

2019 ◽  
Vol 39 (6) ◽  
pp. 1187-1194 ◽  
Author(s):  
Michael Blaivas ◽  
Laura Blaivas


Sensors ◽
2021 ◽  
Vol 21 (8) ◽  
pp. 2629
Author(s):  
Kunkyu Lee ◽  
Min Kim ◽  
Changhyun Lim ◽  
Tai-Kyong Song

Point-of-care ultrasound (POCUS), made practical by recent developments in portable ultrasound imaging systems, has become a major tool for prompt diagnosis and treatment in accident and emergency settings. At the same time, the number of staff unfamiliar with operating an ultrasound system for diagnosis is increasing. By providing an imaging guide that assists clinical decisions and supports diagnosis, the risk posed by inexperienced users can be managed. Recently, deep learning has been employed to guide users in ultrasound scanning and diagnosis. However, in a cloud-based ultrasound artificial intelligence system, the use of POCUS is limited by information security, network integrity, and significant energy consumption. To address this, we propose (1) an architecture that simultaneously provides ultrasound imaging and a mobile-device-based ultrasound image guide using deep learning, and (2) a reverse scan conversion (RSC) method for building ultrasound training datasets that increases the accuracy of the deep learning model. Experimental results show that the proposed architecture achieves ultrasound imaging and deep learning simultaneously at up to 42.9 frames per second, and that the RSC method improves image classification accuracy by more than 3%.
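The abstract does not spell out the paper's RSC implementation; the sketch below only illustrates the general idea of reverse scan conversion — resampling a scan-converted (Cartesian) sector image back onto the depth-by-angle grid the transducer actually acquired — under an assumed apex-at-top-centre sector geometry. The function name and parameters are hypothetical.

```python
import numpy as np

def reverse_scan_convert(cart_img, n_depth, n_angle, max_angle):
    """Resample a scan-converted (Cartesian) sector image onto a
    depth-by-angle polar grid via nearest-neighbour lookup.

    Assumes the transducer apex sits at the top-centre of the image
    and angles are measured from the vertical axis.
    """
    h, w = cart_img.shape
    apex_x = w / 2.0
    radii = np.linspace(0, h - 1, n_depth)              # sample depths (pixels)
    angles = np.linspace(-max_angle, max_angle, n_angle)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    # Polar (r, theta) -> Cartesian (x, y), clipped to the image bounds.
    xs = np.clip(np.round(apex_x + rr * np.sin(aa)).astype(int), 0, w - 1)
    ys = np.clip(np.round(rr * np.cos(aa)).astype(int), 0, h - 1)
    return cart_img[ys, xs]                             # shape (n_depth, n_angle)
```

In practice an interpolating resampler (e.g. bilinear) would be preferred over this nearest-neighbour lookup.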


Author(s):  
Koyel Datta Gupta ◽  
Deepak Kumar Sharma ◽  
Shakib Ahmed ◽  
Harsh Gupta ◽  
Deepak Gupta ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yong Liang ◽  
Qi Cui ◽  
Xing Luo ◽  
Zhisong Xie

Rock classification is a significant branch of geology that helps in understanding the formation and evolution of the planet, prospecting for mineral resources, and more. Traditionally, rocks are classified based on the experience of a professional, a method that suffers from low efficiency and susceptibility to subjective factors. It is therefore of great significance to establish a simple, fast, and accurate rock classification model. This paper proposes a fine-grained image classification network that combines an image cutting method with the SBV algorithm to improve classification performance on a small number of fine-grained rock samples. The method uses image cutting to achieve data augmentation without adding external datasets, and uses image-block voting and scoring to obtain richer complementary information, thereby improving image classification accuracy. On a test set of 32 images, the classification accuracies are 75%, 68.75%, and 75%, which are 34.375%, 18.75%, and 43.75% higher than those of the original algorithm. The results show that the proposed method significantly improves image classification accuracy, verifying its effectiveness and demonstrating that deep learning has great application value in the field of geology.
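The abstract does not detail the SBV (block-voting) procedure, so the following is only a minimal sketch of the cut-and-vote idea under assumed details: the image is cut into an n-by-n grid of blocks, each block is scored by a classifier (not shown), and the per-block class scores are summed so the highest-scoring class wins. All names are hypothetical.

```python
import numpy as np

def cut_blocks(img, n):
    """Cut an image into an n-by-n grid of equal-sized blocks
    (a simple form of data augmentation: one image yields n*n samples)."""
    h, w = img.shape[:2]
    bh, bw = h // n, w // n
    return [img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(n) for j in range(n)]

def vote(block_scores):
    """Combine per-block class scores by summing; the class with the
    highest total score is the image-level prediction."""
    return int(np.asarray(block_scores).sum(axis=0).argmax())
```

Summing scores rather than taking a hard majority lets confident blocks outweigh uncertain ones, which is the "richer complementary information" idea in the abstract.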


2018 ◽  
Vol 38 (7) ◽  
pp. 1887-1897 ◽  
Author(s):  
Hamid Shokoohi ◽  
Maxine A. LeSaux ◽  
Yusuf H. Roohani ◽  
Andrew Liteplo ◽  
Calvin Huang ◽  
...  

Water ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 298
Author(s):  
Jiwen Tang ◽  
Damien Arvor ◽  
Thomas Corpetti ◽  
Ping Tang

Irrigation systems play an important role in agriculture. Center pivot irrigation systems are popular in many countries because they are labor-saving and water-efficient. Monitoring the distribution of center pivot irrigation systems can provide important information on agricultural production, water consumption, and land use. Deep learning has become an effective method for image classification and object detection. In this paper, a new method to detect the precise shape of center pivot irrigation systems is proposed. It combines a lightweight real-time object detection network (PVANET) based on deep learning, an image classification model (GoogLeNet), and an accurate shape detector (Hough transform) to detect and delineate center pivot irrigation systems and their characteristic circular shape. PVANET is lightweight and fast, GoogLeNet reduces PVANET's false detections, and the Hough transform accurately detects the circular shape of the systems. Experiments with Sentinel-2 images in Mato Grosso achieved a precision of 95% and a recall of 95.5%, demonstrating the effectiveness of the proposed method. Finally, with the accurate shapes detected, the irrigated area in the region was estimated.
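The abstract does not specify how the Hough step is implemented (in practice one might use OpenCV's `cv2.HoughCircles`); the sketch below is a self-contained, fixed-radius circle Hough transform showing how circular pivot outlines can be located in an edge map. Names and parameters are illustrative only.

```python
import numpy as np

def hough_circle_accumulator(edge_img, radius, n_theta=100):
    """Vote for circle centres at a fixed radius: every edge pixel votes
    for all centres that would place it on a circle of that radius.
    Peaks in the returned accumulator mark likely circle centres."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for y, x in zip(*np.nonzero(edge_img)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered: counts duplicate votes
    return acc
```

Real imagery would call for a multi-radius search and non-maximum suppression over the accumulator, which is what library implementations such as `cv2.HoughCircles` provide.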


2022 ◽  
Vol 14 (2) ◽  
pp. 286
Author(s):  
Shawn D. Taylor ◽  
Dawn M. Browning

Near-surface cameras, such as those in the PhenoCam network, are a common source of ground-truth data in modelling and remote sensing studies. Despite having locations across numerous agricultural sites, few studies have used near-surface cameras to track the unique phenology of croplands. Due to management activities, crops do not follow the natural vegetation cycle on which many phenological extraction methods are based. For example, a field may experience abrupt changes throughout the year due to harvesting and tillage. A single camera can also record several different plants due to crop rotations, fallow fields, and cover crops. Current methods for estimating phenology metrics from image time series compress all image information into a relative greenness metric, discarding a large amount of contextual information: the type of crop present, whether snow or water is present on the field, the crop's phenological stage, or whether a field lacking green plants consists of bare soil, fully senesced plants, or plant residue. Here, we developed a modelling workflow to create a daily time series of crop type and phenology, while also accounting for other factors such as obstructed images and snow-covered fields. We used a mainstream deep learning image classification model, VGG16. Because deep learning classification models have no temporal component, our workflow incorporates a hidden Markov model in post-processing to account for temporal correlation among images. The initial image classification model had out-of-sample F1 scores of 0.83–0.85, which improved to 0.86–0.91 after all post-processing steps. The resulting time series show the progression of crops from emergence to harvest, and can serve as a daily, local-scale dataset of field states and phenological stages for agricultural research.
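The abstract does not give the HMM's states or transition probabilities; the sketch below shows the standard Viterbi decoding one might use to smooth per-image class probabilities with a sticky transition matrix, so that a single noisy frame cannot flip the inferred field state. All probabilities here are made-up illustrations, not the paper's values.

```python
import numpy as np

def viterbi_smooth(frame_probs, trans, init):
    """Most likely state sequence given per-image class probabilities
    (emissions), a state transition matrix, and an initial distribution."""
    T, S = frame_probs.shape
    log_e = np.log(frame_probs + 1e-12)
    log_t = np.log(trans + 1e-12)
    delta = np.log(init + 1e-12) + log_e[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_t       # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack the best sequence
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a sticky transition matrix (e.g. 0.95 on the diagonal), a frame whose classifier output mildly favours another class is overridden by its temporal context, which is the effect the post-processing step exploits.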


2020 ◽  
Vol 2020 (12) ◽  
pp. 172-1-172-7 ◽  
Author(s):  
Tejaswini Ananthanarayana ◽  
Raymond Ptucha ◽  
Sean C. Kelly

CMOS image sensors play a vital role in the exponentially growing field of artificial intelligence (AI). Applications such as image classification, object detection, and tracking are just some of the many problems now solved with the help of AI, and specifically deep learning. In this work, we target image classification to discern between six categories of fruits: fresh/rotten apples, fresh/rotten oranges, and fresh/rotten bananas. Using images captured from high-speed CMOS sensors along with lightweight CNN architectures, we show results on various edge platforms. Specifically, we show results using ON Semiconductor's global-shutter-based, 12 MP, 90 frames-per-second image sensor (XGS-12) and ON Semiconductor's 13 MP AR1335 image sensor feeding into MobileNetV2, implemented on NVIDIA Jetson platforms. In addition to the data captured with these sensors, we use an open-source fruits dataset to increase the number of training images. For image classification, we train our model on approximately 30,000 RGB images from the six categories of fruits. The model achieves an accuracy of 97% on edge platforms using ON Semiconductor's 13 MP camera with the AR1335 sensor. Beyond the image classification model, work is currently in progress to improve the accuracy of object detection using SSD and SSDLite with MobileNetV2 as the feature extractor, and in this paper we show preliminary results on the object detection model for the same six categories of fruits.

