Online On Ramps: A Pilot Study Evaluation of the Accessibility of Canadian Public Library Web Sites to Visually and Hearing Challenged Users

Author(s):  
Sarah Forgrave ◽  
Lynne (E.F) McKechnie

A pilot study was conducted to assess the accessibility of Canadian public library web pages to visually and hearing challenged individuals using adaptive technologies. A random sample of Canadian public library home pages was evaluated using Bobby, a software program created by the Centre for Applied Special Technology based on the World Wide Web Consortium's Web Accessibility Guidelines. Results suggest that Canadian public...

2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World-Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of thematic topics and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World-Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows: first, each web page is equipped with a topic concerning its contents; second, links between web pages are established according to common topics; next, new web pages may be randomly generated, equipped with a topic, and assigned to web sites. By repeated iteration of these rules, our simulation exhibits the observed structure of the World-Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
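The growth rules described in this abstract (pages carry topics, links form between pages sharing a topic, new pages appear at random) can be sketched in a few lines. This is a loose toy reading, not the authors' model: the topic list, the cap of three links per new page, and the uniform random choices are all assumptions made here for illustration.

```python
import random
from collections import Counter

TOPICS = ["news", "science", "sports", "music"]

def simulate(steps, seed=0):
    """Grow a toy web graph: each new page is assigned a random topic
    and links to up to three existing pages that share that topic."""
    rng = random.Random(seed)
    pages = [{"topic": rng.choice(TOPICS), "links": []}]
    for _ in range(steps):
        topic = rng.choice(TOPICS)
        # candidate link targets: existing pages with the same topic
        peers = [i for i, p in enumerate(pages) if p["topic"] == topic]
        links = rng.sample(peers, min(3, len(peers)))
        pages.append({"topic": topic, "links": links})
    return pages

web = simulate(200)
# in-degree distribution, where a skew toward a few highly linked
# pages is the kind of effect the paper's power-law claim refers to
indegree = Counter(target for p in web for target in p["links"])
```

Inspecting `indegree.most_common()` on larger runs shows the link counts concentrating on early pages, a qualitative (not quantitative) echo of the growth pattern the paper reports.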


2011 ◽  
pp. 178-184
Author(s):  
David Parry

The World Wide Web (WWW) is a critical source of information for healthcare. Because of this, systems that increase the efficiency and effectiveness of information retrieval and discovery are critical. Increased intelligence in web pages will allow information sharing and discovery to become vastly more efficient. The Semantic Web is an umbrella term for a series of standards and technologies that will support this development.


2020 ◽  
pp. 143-158
Author(s):  
Chris Bleakley

Chapter 8 explores the arrival of the World Wide Web, Amazon, and Google. The web allows users to display “pages” of information retrieved from remote computers by means of the Internet. Inventor Tim Berners-Lee released the first web software for free, setting in motion an explosion in Internet usage. Seeing the opportunity of a lifetime, Jeff Bezos set up Amazon as an online bookstore. Amazon’s success was accelerated by a product recommender algorithm that selectively targets advertising at users. By the mid-1990s there were so many web sites that users often couldn’t find what they were looking for. Stanford PhD student Larry Page invented an algorithm for ranking search results based on the importance and relevance of web pages. Page and fellow student Sergey Brin established a company to bring their search algorithm to the world. Page and Brin, the founders of Google, are now worth US$35–40 billion each.
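The ranking algorithm mentioned here is PageRank, which scores a page by the ranks of the pages linking to it. A minimal power-iteration sketch follows; the three-page graph, damping factor, and iteration count are illustrative choices, not anything from the chapter.

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank by power iteration.
    links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# hypothetical three-page link graph
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Here page "c" outranks "b" because it is linked from both "a" and "b", which is the "importance and relevance" intuition the abstract describes.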


Author(s):  
Alison Harcourt ◽  
George Christou ◽  
Seamus Simpson

This chapter explains one of the most important components of the web: the development and standardization of Hypertext Markup Language (HTML) and the Document Object Model (DOM), which are used for creating web pages and applications. In 1994, Tim Berners-Lee established the World Wide Web Consortium (W3C) to work on HTML development. The W3C later decided to introduce a new standard, XHTML 2.0. However, it was incompatible with the older HTML/XHTML versions. This led to the establishment of the Web Hypertext Application Technology Working Group (WHATWG), which worked externally to the W3C. WHATWG developed HTML5, which was adopted by the major browser developers Google, Opera, Mozilla, IBM, Microsoft, and Apple. For this reason, the W3C decided to work on HTML5, leading to a joint WHATWG/W3C working group. This chapter explains the development of HTML and WHATWG’s Living Standard, with explanation of ongoing splits and agreements between the two fora. It explains how this division of labour led the W3C to focus on the main areas of web architecture: the semantic web, the web of devices, payments applications, and web and television (TV) standards. This has led to the spillover of work to the W3C from the national sphere, notably in the development of copyright protection for TV streaming.


Author(s):  
Kevin Curran ◽  
Gary Gumbleton

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), states that, “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” (Berners-Lee, 2001). The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents, roaming from page to page, can readily carry out sophisticated tasks for users. The Semantic Web (SW) is a vision of the Web in which information is linked up in such a way that machines can more easily process it. It is generating interest not just because Tim Berners-Lee is advocating it, but because it aims to solve the problem of information being locked away in HTML documents, which are easy for humans to read but difficult for machines to process. We discuss the Semantic Web here.
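The Semantic Web's core data model expresses knowledge as (subject, predicate, object) triples that machines can query directly. A stdlib-only toy triple store gives the flavour; the statements and the `query` helper are invented for illustration and are not a real RDF library such as rdflib.

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple,
# the machine-readable form of statements that the Semantic Web envisions.
triples = {
    ("TimBL", "directs", "W3C"),
    ("W3C", "publishes", "RDF"),
    ("RDF", "encodes", "triples"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}
```

A software agent can then ask, e.g., `query(p="publishes")` and get structured answers instead of scraping prose out of HTML.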


2002 ◽  
pp. 145-152 ◽  
Author(s):  
Fiona Fui-Hoon Nah

The explosive expansion of the World Wide Web (WWW) is the biggest event in the history of the Internet. Since its public introduction in 1991, the WWW has become an important channel for electronic commerce, information access, and publication. However, the long waiting time for accessing web pages has become a critical issue, especially with the popularity of multimedia technology and the exponential increase in the number of Web users. Although various technologies and techniques have been implemented to alleviate the situation and to placate impatient users, there is still a need for fundamental research into what constitutes an acceptable waiting time for a typical WWW user. This research not only evaluates Nielsen’s hypothesis of 15 seconds as the maximum waiting time of WWW users, but also provides approximate distributions of the waiting time of WWW users.
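Checking a page load against a tolerance threshold such as Nielsen's hypothesized 15 seconds is straightforward to instrument. The sketch below times an arbitrary fetch callable; the helper name, the stand-in fetch, and the threshold constant are assumptions made here, not anything from the study.

```python
import time

MAX_WAIT_SECONDS = 15.0  # Nielsen's hypothesized maximum waiting time

def within_tolerance(fetch, limit=MAX_WAIT_SECONDS):
    """Time a page-fetch callable and report whether it finished
    within the given waiting-time limit."""
    start = time.monotonic()
    result = fetch()
    elapsed = time.monotonic() - start
    return result, elapsed, elapsed <= limit

# Stand-in for a real fetch (e.g. urllib.request.urlopen on a URL):
page, elapsed, ok = within_tolerance(lambda: "<html>...</html>")
```

Collecting `elapsed` over many requests is also how one would build the empirical waiting-time distributions the abstract mentions.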


Author(s):  
Xiaoying Gao ◽  
Leon Sterling

The World Wide Web is known as the “universe of network-accessible information, the embodiment of human knowledge” (W3C, 1999). Internet-based knowledge management aims to use the Internet as the worldwide environment for knowledge publishing, searching, sharing, reusing, and integration, and to support collaboration and decision making. However, knowledge on the Internet is buried in documents, most of which are written in natural language for human readers. The knowledge contained therein cannot be easily accessed by computer programs such as knowledge management systems. In order to make the Internet “machine readable,” information extraction from Web pages becomes a crucial research problem.
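A minimal instance of the information-extraction problem described here is pulling one structured field out of a human-readable page. The sketch uses Python's standard-library `html.parser`; the class name and the choice of the `<title>` element as the target are illustrative only.

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Extract the <title> text from an HTML page: a toy case of
    turning a human-readable document into machine-usable data."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def extract_title(html):
    parser = TitleExtractor()
    parser.feed(html)
    return parser.title.strip()
```

Real extraction systems generalize this idea to arbitrary fields and noisy markup, which is what makes the research problem hard.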


Robotica ◽  
1997 ◽  
Vol 15 (2) ◽  
pp. 239-239

Two more IFR member associations have established home pages on the World Wide Web. BRA (formerly the British Robot Association) can be found at http://www.bra-automation.co.uk, and the Danish Industrial Robot Association (DIRA) has its home page at http://inet.uni-c.dk/~i29876.


2020 ◽  
Vol 18 (06) ◽  
pp. 1119-1125 ◽  
Author(s):  
Kessia Nepomuceno ◽  
Thyago Nepomuceno ◽  
Djamel Sadok
