Designing search engine user interfaces for the visually impaired

Author(s):  
Barbara Leporini ◽  
Patrizia Andronico ◽  
Marina Buzzi
2013 ◽  
Vol 8 (1) ◽  
pp. 90 ◽  
Author(s):  
R. Laval Hunsucker

A Review of: Sahib, N. G., Tombros, A., & Stockman, T. (2012). A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers. Journal of the American Society for Information Science and Technology, 63(2), 377–391. doi: 10.1002/asi.21696

Objective – To determine how the behaviour of visually impaired persons significantly differs from that of sighted persons in the carrying out of complex search tasks on the internet.

Design – A comparative observational user study, plus semi-structured interviews.

Setting – Not specified.

Subjects – 15 sighted and 15 visually impaired persons, all of them experienced and frequent Internet search engine users, of both sexes and varying in age from early twenties to mid-fifties.

Methods – The subjects carried out self-selected complex search tasks on their own equipment and in their own familiar environments. The investigators observed this activity to some extent directly, but for the most part via video camera, through use of a screen-sharing facility, or with screen-capture software. They distinguished four stages of search task activity: query formulation, search results exploration, query reformulation, and search results management. The visually impaired participants, of whom 13 were totally blind and two had only marginal vision, were all working with text-to-speech screen readers and depended exclusively for all their observed activity on those applications’ auditory output. For data analysis, the investigators devised a grounded-theory-based coding scheme. They employed a search log format for deriving further quantitative data, which they later controlled for statistical significance (two-tailed unpaired t-test; p < 0.05). The interviews allowed them to document, in particular, how the visually impaired subjects themselves subsequently accounted for, interpreted, and vindicated various observed aspects of their searching behaviour.
Main Results – The investigators found significant differences between the sighted participants’ search behaviour and that of the visually impaired searchers. The latter displayed a clearly less “orienteering” (O'Day & Jeffries, 1993) disposition and style, more often starting out with already relatively long and comprehensive combinations of relatively precise search terms; “their queries were more expressive” (p. 386). They submitted fewer follow-up queries, and were considerably less inclined to attempt query reformulation. They were aiming to achieve a satisfactory search outcome in a single step. Nevertheless, they rarely employed advanced operators, and made far less use (in only 4 instances) of their search engine’s query-support features than did the sighted searchers (37 instances). Fewer of them (13%) ventured beyond the first page of the results returned for their query by the search engine than was the case among the sighted searchers (43%). They viewed fewer (a mean of 4.27, as opposed to 13.40) retrieved pages, and they visited fewer external links (6 visits by 4 visually impaired searchers, compared with 34 visits by 11 sighted searchers). The visually impaired participants more frequently engaged in note taking than did the sighted participants. The visually impaired searchers were in some cases, the investigators discovered, unaware of search engine facilities or searching tactics which might have improved their search outcomes. Yet even when they were aware of these, they very often chose not to employ them because doing so via their screen readers would have cost them more time and effort than they were willing to expend. In general, they were more diffident and less resourceful than the sighted searchers, and had more trust in the innate capacity and reliability of their search engine to return in an efficient manner the best available results. 
Conclusion – Despite certain inherent limitations of the present study (the relatively small sample sizes and the non-randomness of the purposive sighted-searcher sample, the possible presence of extraneous variables, the impossibility of entirely ruling out familiarity bias), its findings strongly support the conclusion that working with today’s search engine user interfaces through the intermediation of currently available assistive technologies necessarily imposes severe limits on the degree to which visually impaired persons can efficiently search the web for information relevant to their needs. The findings furthermore suggest that various measures could be taken toward alleviating the situation, in the form of further improvements to retrieval systems, to search interfaces, and to text-to-speech screen readers. Such improvements would include:
• more accessible system hints to support better, and less cognitively intensive, query formulation;
• web page layouts which are more suitable to screen-reader intermediation;
• a results presentation which more readily facilitates browsing and exploratory behaviour, preferably including auditory previews and overviews;
• presentation formats which allow for quicker and more accurate relevance judgments;
• mechanisms for (better) monitoring of search progress.
In any event, further information behaviour studies ought now to be conducted, with the specific aim of more closely informing the development of user interfaces that offer the kind of support visually impaired Internet searchers most need. Success in this undertaking will ultimately contribute to the further empowerment of visually disabled persons and thereby facilitate efforts to combat social exclusion.
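The review reports that the quantitative search-log measures were checked with a two-tailed unpaired t-test at p < 0.05. As a rough illustration of that kind of check, the sketch below runs one on hypothetical per-participant counts of viewed result pages; the participant-level data are not published in the review, so the lists below are invented, chosen only to echo the reported group means (4.27 vs. 13.40).

```python
# Hedged sketch: a two-tailed unpaired (Student's) t-test, as named in the
# study's methods. The per-participant counts below are HYPOTHETICAL.
from statistics import mean, variance

vi = [3, 5, 4, 2, 6, 4, 5, 3, 4, 6, 5, 4, 3, 5, 5]                  # visually impaired, n = 15
si = [12, 15, 11, 14, 13, 16, 12, 14, 13, 15, 12, 14, 13, 15, 12]   # sighted, n = 15

n1, n2 = len(vi), len(si)
# Pooled variance, as used by the equal-variance unpaired t-test
sp2 = ((n1 - 1) * variance(vi) + (n2 - 1) * variance(si)) / (n1 + n2 - 2)
t = (mean(vi) - mean(si)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Two-tailed critical value for df = 28 at alpha = 0.05 is about 2.048,
# so |t| above that threshold means p < 0.05.
print(f"t = {t:.2f}; significant at p < 0.05: {abs(t) > 2.048}")
```

With group means this far apart and small within-group spread, the test comes out significant, mirroring the qualitative pattern the review describes.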


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 45061-45070 ◽  
Author(s):  
Aboubakr Aqle ◽  
Kamran Khowaja ◽  
Dena Al-Thani

Author(s):  
Aboubakr Aqle ◽  
Dena Al-Thani ◽  
Ali Jaoua

There are few studies addressing the challenges that visually impaired (VI) users face when viewing search results on a search engine interface using a screen reader. This study investigates the effect of providing an overview of search results to VI users. We present a novel interactive search engine interface called InteractSE to support VI users during the results exploration stage, in order to improve their interactive experience and web search efficiency. An overview of the search results is generated using an unsupervised machine learning approach that presents the discovered concepts via formal concept analysis and is domain-independent. These concepts are arranged in a multi-level tree following a hierarchical order and covering all retrieved documents that share maximal features. The InteractSE interface was evaluated by 16 legally blind users and compared with the Google search engine interface on complex search tasks. The evaluation results were obtained from both quantitative (e.g., task completion time) and qualitative (e.g., participants’ feedback) measures. These results are promising and indicate that InteractSE enhances search efficiency and consequently advances the user experience. Our observations and analysis of the user interactions and feedback yielded design suggestions to support VI users when exploring and interacting with search results.
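The abstract names formal concept analysis (FCA) as the mechanism behind the results overview: documents are grouped by the maximal feature sets they share, and those groups are ordered into a hierarchy. The sketch below illustrates that core idea on hypothetical documents and term features; InteractSE's actual pipeline is not described here, so treat this as a minimal, assumption-laden model, not the published implementation.

```python
# Minimal FCA-style sketch: a formal concept is a pair (extent, intent) where
# the intent is a feature set and the extent is exactly the documents that
# contain it. Documents and features below are HYPOTHETICAL.
from itertools import combinations

docs = {
    "d1": {"search", "interface", "accessibility"},
    "d2": {"search", "interface"},
    "d3": {"search", "accessibility", "screen-reader"},
    "d4": {"search", "screen-reader"},
}

concepts = set()
for r in range(1, len(docs) + 1):
    for group in combinations(docs, r):
        # Intent: features shared by every document in the group
        intent = frozenset.intersection(*(frozenset(docs[d]) for d in group))
        # Extent: ALL documents containing that intent (makes the pair maximal)
        extent = frozenset(d for d in docs if intent <= docs[d])
        concepts.add((extent, intent))

# Ordering concepts by intent size approximates the multi-level tree:
# fewer shared features = broader concept, nearer the root.
for extent, intent in sorted(concepts, key=lambda c: (len(c[1]), sorted(c[1]))):
    print(sorted(extent), "->", sorted(intent))
```

Here the root-level concept is all four documents sharing {"search"}, with narrower concepts (e.g., the screen-reader documents) nested beneath it, which is the hierarchical overview structure the abstract describes.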


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1069
Author(s):  
Bi-Min Hsu

Assistive braille technology has existed for many years with the purpose of aiding the blind in performing common tasks such as reading, writing, and communicating with others. Such technologies are aimed towards helping those who are visually impaired to better adapt to the visual world. However, an obvious gap exists in current technology when it comes to symmetric two-way communication between the blind and non-blind, as little technology allows non-blind individuals to understand the braille system. This research presents a novel approach to convert images of braille into English text by employing a convolutional neural network (CNN) model and a ratio character segmentation algorithm (RCSA). Further, a new dataset was constructed, containing a total of 26,724 labeled braille images, which consists of 37 braille symbols that correspond to 71 different English characters, including the alphabet, punctuation, and numbers. The performance of the CNN model yielded a prediction accuracy of 98.73% on the test set. The functionality performance of this artificial intelligence (AI) based recognition system could be tested through accessible user interfaces in the future.
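The abstract names a "ratio character segmentation algorithm" (RCSA) for cutting a braille line image into per-character cells before CNN classification, but gives no details. The sketch below shows only the generic ratio idea: slicing a line of known width into equal-width cells. The function name and interface are hypothetical, not the paper's.

```python
# Hedged sketch of ratio-based segmentation: assuming braille cells occupy
# equal-width slots across a line image, compute the pixel-column span of
# each cell. Each span would then be cropped and fed to the CNN classifier.
def segment_by_ratio(line_width: int, n_chars: int) -> list[tuple[int, int]]:
    """Return (start, end) pixel columns for each of n_chars equal-width cells."""
    cell_w = line_width / n_chars
    return [(round(i * cell_w), round((i + 1) * cell_w)) for i in range(n_chars)]

# Example: a 370-pixel-wide line image holding 10 braille cells.
print(segment_by_ratio(370, 10))
```

A real segmenter would also need to estimate `n_chars` (e.g., from the cell width/height ratio or inter-cell gaps), which is presumably where the published algorithm does its actual work.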


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 61-70 ◽  
Author(s):  
Daniel M. Russell

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think of search programs in ways that are very different from traditional information sources. Rapid and ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front-end to the Internet. Like all sophisticated tools, people still need to learn how to use them. Although search engines are superb at finding and presenting information—up to and including extracting complex relations and making simple inferences—knowing how to frame questions and evaluate their results for accuracy and credibility remains an ongoing challenge. Some questions are still deep and complex, and still require knowledge on the part of the search user to work through to a successful answer. And the fact that the underlying information content, user interfaces, and capabilities are all in a continual state of change means that searchers need to continually update their knowledge of what these programs can (and cannot) do.


Author(s):  
Benoît Encelle ◽  
Nadine Baptiste-Jessel ◽  
Florence Sèdes

Personalization of user interfaces for browsing content is a key concept for ensuring content accessibility. This personalization is especially needed for people with disabilities (e.g., the visually impaired), for highly mobile individuals (driving, off-screen environments), and for people with limited devices (PDAs, mobile phones, etc.). In this direction, we introduce mechanisms, based on a user requirements study, that result in the generation of personalized user interfaces for browsing particular XML content types. These on-the-fly generated user interfaces can use several modalities to increase communication possibilities: in this way, interactions between the user and the system can take place in a more natural manner.
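The abstract's core mechanism is generating a browsing interface on the fly from an XML content document plus a per-user profile. As an illustrative sketch only (the paper's actual generation rules are not given here, and the profile keys and rendering choices below are invented), one can walk the XML tree and render each element according to the profile's preferred modality:

```python
# Hypothetical sketch: render XML content differently per user profile,
# e.g. an explicit, speech-oriented rendering for screen-reader users.
import xml.etree.ElementTree as ET

profile = {"modality": "speech"}  # invented user-requirements profile

def render(elem: ET.Element, profile: dict) -> str:
    text = (elem.text or "").strip()
    if profile["modality"] == "speech":
        # Announce structure explicitly, as auditory-only browsing requires.
        parts = [f"{elem.tag}: {text}" if text else f"section {elem.tag}"]
    else:
        # Terse visual rendering: content only, no structural announcements.
        parts = [text] if text else []
    parts += [render(child, profile) for child in elem]
    return ". ".join(p for p in parts if p)

doc = ET.fromstring("<article><title>News</title><body>Hello</body></article>")
print(render(doc, profile))
```

The design point is that the same XML content yields different interfaces from different profiles, which is the personalization the abstract argues for.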


Author(s):  
A. A. Azeta ◽  
C. K. Ayo ◽  
N. A. Ikhu-Omoregbe

With the proliferation of learning resources on the Web, finding suitable content (using the telephone) has become an arduous task for voice-based online learners seeking better performance. The problem of finding content suitability (FCS) with voice e-learning applications is more complex when the sight-impaired learner is involved. Existing voice-enabled applications in the domain of e-learning lack the adaptive and reusable learning objects needed to address the FCS problem. This study provides a Voice-enabled Framework for Recommender and Adaptation (VeFRA) Systems in e-learning and an implementation of a system based on the framework with dual user interfaces – voice and Web. A usability study was carried out in a school serving both visually impaired and non-visually impaired learners, using the International Organization for Standardization’s ISO 9241-11 specification to determine the level of effectiveness, efficiency, and user satisfaction. The usability evaluation reveals that the prototype application developed for the school has a “Good Usability” rating of 4.13 on a 5-point scale. This shows that the application will not only complement existing mobile and Web-based learning systems, but will be of immense benefit to users, given the system’s capacity for taking autonomous decisions that adapt to the needs of both visually impaired and non-visually impaired learners.


2009 ◽  
pp. 1234-1250
Author(s):  
Francesco Bellotti ◽  
Riccardo Berta ◽  
Alessandro De Gloria ◽  
Massimiliano Margarone

Diffusion of radio frequency identification (RFID) promises to boost the added value of assistive technologies for mobile users. Visually impaired people may benefit from RFID-based applications that support users in maintaining “spatial orientation” (Mann, 2004) through provision of information on where they are and a description of what lies in their surroundings. To investigate this issue, we have integrated our development tool for mobile devices (namely MADE; Bellotti, Berta, De Gloria, & Margarone, 2003) with complete support for RFID tag detection, and implemented an RFID-enabled location-aware tour guide. We have evaluated the guide in an ecological context (fully operational application, real users, real context of use (Abowd & Mynatt, 2000)) during the EuroFlora 2006 international exhibition. In this chapter, we describe the MADE enhancements to support RFID-based applications, present the main concepts of the interaction modalities we have designed to support visually impaired users, and discuss results from our field experience.

