A Study of Voice-Recognition Software as a Tool for Teacher Response

2008 ◽  
Vol 25 (2) ◽  
pp. 165-181 ◽  
Author(s):  
Thomas Batt ◽  
Sandip Wilson

2003 ◽
Vol 127 (6) ◽  
pp. 721-725
Author(s):  
Maamoun M. Al-Aynati ◽  
Katherine A. Chorneyko

Abstract Context.—Software that can convert spoken words into written text has been available since the early 1980s. Early continuous speech systems were developed in 1994, and the latest commercially available editions claim recognition accuracy of up to 98% at natural speech rates. Objectives.—To evaluate the efficacy of one commercially available voice-recognition software system with a pathology vocabulary in generating pathology reports, to compare it with human transcription, and to draw cost conclusions regarding human versus computer-based transcription. Design.—Two hundred six routine pathology reports from the surgical pathology material handled at St Joseph's Healthcare, Hamilton, Ontario, were generated simultaneously using computer-based transcription and human transcription. The following hardware and software were used: a desktop 450-MHz Intel Pentium III processor with 192 MB of RAM, a speech-quality sound card (Sound Blaster), a noise-canceling headset microphone, and IBM ViaVoice Pro version 8 with pathology vocabulary support (Voice Automated, Huntington Beach, Calif). The hardware and software cost approximately Can $2250. Results.—A total of 23 458 words were transcribed using both methods, with a mean of 114 words per report. The mean accuracy rate was 93.6% (range, 87.4%–96%) for the computer software, compared with 99.6% (range, 99.4%–99.8%) for human transcription (P < .001). The time the primary evaluator (M.A.) needed to edit the computer-generated documents was on average twice that needed to edit documents produced by human transcriptionists (range, 1.4–3.5 times), an extra 67 minutes per week (13 minutes per day). Conclusions.—Computer-based continuous speech-recognition systems can be used successfully in pathology practice, even during the handling of gross specimens.
The relatively low accuracy rate of this voice-recognition software, with its resultant increased editing burden on pathologists, may discourage wide-scale adoption in pathology departments with sufficient human transcription services, despite significant potential financial savings. However, computer-based transcription is an attractive and relatively inexpensive alternative in departments where transcription services are in short supply, and it will no doubt become more common in pathology departments in the future.
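The editing burden the abstract reports follows directly from the accuracy figures. A short back-of-envelope check (using only the numbers quoted above) shows why a 6-percentage-point accuracy gap translates into roughly 16 times as many words to correct:

```python
# Back-of-envelope check of the figures reported in the abstract.
total_words = 23_458          # words transcribed by both methods
reports = 206                 # routine pathology reports

mean_words = total_words / reports                       # ~113.9, matches the reported 114

asr_accuracy = 0.936          # computer (voice-recognition) transcription
human_accuracy = 0.996        # human transcription

asr_errors = round(total_words * (1 - asr_accuracy))     # words needing correction, computer
human_errors = round(total_words * (1 - human_accuracy)) # words needing correction, human

print(f"mean words per report: {mean_words:.1f}")
print(f"words to correct: computer={asr_errors}, human={human_errors}")
print(f"error ratio (computer/human): {asr_errors / human_errors:.0f}x")
```

The ~1500 misrecognized words per ~200 reports are what drive the extra 67 minutes of editing per week reported in the Results.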


2013 ◽  
pp. 1005-1011
Author(s):  
Andrew Kitchenham ◽  
Doug Bowes

In this chapter, the authors discuss the promise of speech (voice) recognition software and offer practical suggestions for teachers and other stakeholders working with a disabled child. The chapter opens with a brief overview of the legislation mandating the accommodation of special-needs students in the classroom and discusses the implications of assistive technology, then examines the promise of the software itself. It closes with practical ideas for implementation, should the caregiver believe that voice recognition software will assist the disabled child in the learning process.


2013 ◽  
Vol 10 (7) ◽  
pp. 538-543 ◽  
Author(s):  
Marianne T. Luetmer ◽  
Christopher H. Hunt ◽  
Robert J. McDonald ◽  
Brian J. Bartholmai ◽  
David F. Kallmes

2013 ◽  
Vol 309 ◽  
pp. 280-285
Author(s):  
Judit Maria Pinter ◽  
Attila Trohák

In our paper we introduce part of our research: the development of a voice-commanded operator interface that can help improve the efficiency of operators' work. We want to integrate a voice-recognition software module into the control system of a monorail. In the paper we examine the speech-recognizer module and the options for integrating it with the monorail's PLC system; integration can be done via serial communication, via Ethernet, or via OPC using a SCADA system. Our aim is to create a reliable and cost-effective system.
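The paper does not publish its command protocol, but the serial-link option it mentions can be illustrated with a minimal sketch. Everything below is hypothetical: the command names, the frame layout (STX, command byte, XOR checksum, ETX), and the port settings are invented for illustration, not taken from the monorail system.

```python
# Hypothetical sketch: mapping recognized voice commands to fixed-size
# frames suitable for a serial link to a PLC. All values are invented.
STX, ETX = 0x02, 0x03

COMMANDS = {
    "start": 0x01,
    "stop": 0x02,
    "forward": 0x03,
    "reverse": 0x04,
}

def build_frame(command: str) -> bytes:
    """Build a 4-byte frame: STX, command byte, XOR checksum, ETX.

    Raises KeyError for phrases the recognizer produced that have no
    mapped PLC command, so unknown speech is never sent to the machine.
    """
    code = COMMANDS[command]
    checksum = STX ^ code          # XOR over the bytes preceding it
    return bytes([STX, code, checksum, ETX])

# A real deployment would hand the frame to the serial port, e.g. with
# pySerial:  serial.Serial("COM3", 9600).write(build_frame("start"))
```

Rejecting unmapped phrases before anything reaches the PLC is one simple way to pursue the reliability goal stated above; an Ethernet or OPC/SCADA route would replace only the transport, not this mapping step.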


2003 ◽  
Vol 12 (1) ◽  
pp. 17-22 ◽  
Author(s):  
Roderick T. Hinman ◽  
E. C. Lupton ◽  
Steven B. Leeb ◽  
Al-Thaddeus Avestruz ◽  
Robert Gilmore ◽  
...  

This article details a new method that has been developed to transmit auditory and visual information to people who are deaf or hard of hearing. In this method, ordinary fluorescent lighting is modulated to carry an assistive data signal throughout a room while causing no flicker or other distracting visual problems. In limited trials with participants who are deaf or hard of hearing, this assistive system, combined with commercial voice recognition software, showed statistically significant improvement in sentence recognition compared to recognition of audio-only or audio-plus-speech-reading stimuli.


2018 ◽  
Vol 7 (7-8) ◽  
pp. 205846011879472 ◽  
Author(s):  
Subba Rao Digumarthy ◽  
Rachel Vining ◽  
Azadeh Tabari ◽  
Sireesha Nandimandalam ◽  
Alexi Otrakji ◽  
...  

Background Laterality errors in radiology reports can lead to serious errors in management. Purpose To reduce errors related to side discrepancies in radiology reports from thoracic imaging by 50% over a six-month period with education and voice recognition software tools. Material and Methods All radiology reports at the Thoracic Imaging Division from the fourth quarter of 2016 were reviewed manually for the presence of side discrepancies (baseline data). Side discrepancies were defined as a lack of consistency in side labeling of any abnormality between the "Findings" and "Impression" sections of the reports. A process map and an Ishikawa fishbone diagram (Microsoft Visio) were created. All thoracic radiologists were educated on side-related errors in radiology reports for plan–do–study–act cycle 1 (PDSA #1). Two weeks later, the voice recognition software was configured to capitalize sides (RIGHT and LEFT) in reports during dictation (PDSA #2). Radiology reports were analyzed to determine side-discrepancy errors following each PDSA cycle (post-interventional data). Statistical run charts were created using QI Macros statistical software. Results Baseline data revealed 33 side-discrepancy errors in 47,876 reports, an average of 2.5 errors per week (range = 1–8 errors). Following PDSA #1, there were seven errors pertaining to side discrepancies over a two-week period. Errors declined following implementation of PDSA #2 to meet the target of 0.85 side-discrepancy errors per week over seven weeks. Conclusion Automated processes (such as capitalization of sides) help reduce left/right errors substantially without affecting reporting turnaround time.
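The PDSA #2 intervention was configured inside the dictation software itself, whose settings are not described further; the same effect can be sketched as a post-processing step over the report text. This is an illustrative reimplementation under that assumption, not the authors' configuration.

```python
import re

# Sketch of the side-capitalization idea: render every whole-word
# "left"/"right" in uppercase so laterality mentions stand out when
# the radiologist proofreads Findings against Impression.
SIDE = re.compile(r"\b(left|right)\b", re.IGNORECASE)

def capitalize_sides(report: str) -> str:
    """Uppercase side mentions; leaves words like 'leftover' untouched."""
    return SIDE.sub(lambda m: m.group(0).upper(), report)

print(capitalize_sides("Findings: 8 mm nodule, left upper lobe."))
# The nodule's side now reads LEFT, making a mismatched side in the
# Impression section easier to catch by eye.
```

Because the transformation is a single regex pass, it adds no meaningful latency, consistent with the conclusion that turnaround time was unaffected.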

