Real-time dynamic scheduling based adaptive ultrasound sequence programming for research and rapid prototyping

Author(s):  
Richard J. Tobias ◽  
Bicheng William Wu ◽  
Ashish Parikh
CICTP 2020 ◽  
2020 ◽  
Author(s):  
Lina Mao ◽  
Wenquan Li ◽  
Pengsen Hu ◽  
Guiliang Zhou ◽  
Huiting Zhang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1104
Author(s):  
Shin-Yan Chiou ◽  
Kun-Ju Lin ◽  
Ya-Xin Dong

Positron emission tomography (PET) is one of the most commonly used scanning techniques. Medical staff manually calculate an estimated scan time for each PET device; however, PET scanners are few, patients are many, and frequent changes such as rescanning requests make manual scheduling error-prone, stressful for staff, and troublesome for patients and their families. Although previous studies have proposed algorithms for specific examinations, no prior work has addressed improving the PET workflow. This paper proposes a real-time automatic scheduling and control system for PET patients equipped with wearable sensors. The system automatically schedules, estimates, and instantly updates the times of the various tasks, automatically allocates beds, and announces schedule information in real time. We implemented the system, collected time data from 200 actual patients, and fed these data into the implementation for simulation and comparison. The average time difference between manual and automatic scheduling was 7.32 min, and the system could reduce the average examination time of 82% of patients by 6.14 ± 4.61 min. These results indicate that the system works correctly and improves time efficiency, while avoiding human error, reducing staff pressure, and sparing patients and their families unnecessary trouble.
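The core scheduling idea described above can be sketched as a greedy assignment: each patient goes to the scanner that frees up earliest, and the scanner's estimated free time is updated after every assignment. This is a minimal illustrative sketch, not the paper's implementation; names such as `Scanner` and the duration-based estimates are assumptions.

```python
# Illustrative sketch of real-time scheduling (assumed design, not the
# paper's code): greedily assign each PET patient to the scanner that
# becomes free earliest, updating estimated free times as we go.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Scanner:
    free_at: float                      # minutes from now until the scanner is free
    name: str = field(compare=False)    # label, not used for ordering

def schedule(patients, scanners):
    """patients: list of (patient_id, est_scan_minutes).
    Returns [(patient_id, scanner_name, est_start_minutes)]."""
    heap = list(scanners)
    heapq.heapify(heap)                 # min-heap keyed on free_at
    plan = []
    for pid, dur in patients:
        s = heapq.heappop(heap)         # earliest-free scanner
        plan.append((pid, s.name, s.free_at))
        s.free_at += dur                # updated estimate after this scan
        heapq.heappush(heap, s)
    return plan

plan = schedule([("P1", 20), ("P2", 30), ("P3", 15)],
                [Scanner(0, "PET-A"), Scanner(10, "PET-B")])
# → [("P1", "PET-A", 0), ("P2", "PET-B", 10), ("P3", "PET-A", 20)]
```

In a live system, `free_at` would be revised from wearable-sensor events (e.g., an actual scan finishing early or late) and the remaining patients re-scheduled, which is what lets estimates stay current in real time.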


2021 ◽  
Vol 11 (4) ◽  
pp. 1933
Author(s):  
Hiroomi Hikawa ◽  
Yuta Ichikawa ◽  
Hidetaka Ito ◽  
Yutaka Maeda

In this paper, a real-time dynamic hand gesture recognition system with a gesture-spotting function is proposed. Input video frames are converted to feature vectors, which are combined into a posture sequence vector representing the input gesture. Gesture identification and gesture spotting are then carried out by a self-organizing map (SOM)-Hebb classifier. The gesture-spotting function detects the end of a gesture from the vector distance between the posture sequence vector and the winner neuron's weight vector. The proposed method was tested in simulation and in a real-time gesture recognition experiment. Results show that the system recognizes nine types of gestures with an accuracy of 96.6% and successfully outputs the recognition result at the end of each gesture using the spotting result.
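The spotting mechanism described above reduces to a distance test: find the SOM neuron whose weight vector best matches the current posture sequence vector, and declare the gesture complete when that distance drops below a threshold. The sketch below is an assumed minimal rendering of that idea; the function names and the threshold value are hypothetical, not taken from the paper.

```python
# Illustrative sketch (assumed, not the authors' code): SOM winner lookup
# plus distance-based gesture spotting.
import math

def find_winner(som_weights, x):
    """Return (index, distance) of the SOM neuron closest to vector x."""
    best_i, best_d = 0, math.inf
    for i, w in enumerate(som_weights):
        d = math.dist(w, x)             # Euclidean distance
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

def spot_gesture_end(som_weights, posture_seq_vec, spot_threshold=0.5):
    """Gesture end is 'spotted' when the posture sequence vector lies
    within spot_threshold of the winner neuron's weight vector."""
    winner, dist = find_winner(som_weights, posture_seq_vec)
    return winner, dist <= spot_threshold

weights = [[0.0, 0.0], [1.0, 1.0]]      # two toy neurons
winner, spotted = spot_gesture_end(weights, [0.9, 1.1])
# → winner 1, spotted True (distance ≈ 0.14)
```

Run per frame, this check is what lets the classifier emit its result exactly at the end of a gesture rather than waiting for a fixed-length window.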

