World Wide Database – integrating the Web, CORBA and databases

Author(s):  
Athman Bouguettaya ◽  
Boualem Benatallah ◽  
Lily Hendra ◽  
James Beard ◽  
Kevin Smith ◽  
...  
Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites will be provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to use the Web successfully in various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2016 ◽  
Vol 28 (2) ◽  
pp. 241-251 ◽  
Author(s):  
Luciane Lena Pessanha Monteiro ◽  
Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way the machine is able to understand and process, filtering content and assisting us in searching for such documents when a decision-making process is underway. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the new Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and to combine them, without the user making any effort or perceiving the complexity of the whole process.


Author(s):  
Saikou Y Diallo ◽  
Ross Gore ◽  
Jose J Padilla ◽  
Hamdi Kavak ◽  
Christopher J Lynch

The process of developing and running simulations needs to become simple and accessible to audiences ranging from middle school students in a learning environment to subject matter experts in order to make the benefits of modeling and simulation commonly available. However, current simulations are for the most part developed and run on platforms that are: (1) demanding in terms of computational resources, (2) difficult for general audiences to use owing to unintuitive interfaces mired in mathematical syntax, (3) expensive to acquire and maintain, and (4) hard to interoperate and compose. The result is a four-dimensional expense that makes simulation inaccessible to the general public. In this paper we show that by embracing the Web and its standards, the use and development of simulations can become democratized and be part of a Web of Simulation where people of all skill levels are able to build, upload, retrieve, rate, and connect simulations. We show how the Web of Simulation can be built using the three basic principles of service orientation, platform independence, and interoperability. Finally, we present strategies for implementing the Web of Simulation and discuss challenges and possible approaches.
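A minimal sketch of the service-orientation principle mentioned above, assuming Flask as the web framework and a toy growth model as the simulation (neither is prescribed by the paper): the simulation is exposed over HTTP with a plain JSON interface, so any client on any platform can run it or compose it with other services.

```python
# Minimal sketch: a simulation step exposed as a web service. The framework
# (Flask) and the toy model are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def simulate(population: float, rate: float, steps: int) -> list[float]:
    """Toy exponential-growth model standing in for any simulation."""
    trajectory = [population]
    for _ in range(steps):
        population *= (1.0 + rate)
        trajectory.append(population)
    return trajectory

@app.route("/simulate", methods=["POST"])
def run_simulation():
    # Parameters arrive as plain JSON, keeping the interface platform-independent.
    params = request.get_json()
    result = simulate(params["population"], params["rate"], params["steps"])
    return jsonify({"trajectory": result})

if __name__ == "__main__":
    app.run(port=5000)
```

A client could then run the simulation with a single request, e.g. `curl -X POST -H "Content-Type: application/json" -d '{"population": 100, "rate": 0.05, "steps": 10}' http://localhost:5000/simulate`.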


2005 ◽  
Vol 11 (3) ◽  
pp. 278-281 ◽  

Following is a list of microscopy-related meetings and courses. The editors would greatly appreciate input to this list via the electronic submission form found on the MSA World Wide Web page at http://www.msa.microscopy.com. We will gladly add hypertext links to the notice on the web and insert a listing of the meeting in the next issue of the Journal. Send comments and questions to JoAn Hudson, [email protected] or Nestor Zaluzec, [email protected]. Please furnish the following information (any additional information provided will be edited as required and printed on a space-available basis):


2018 ◽  
Vol 31 (5) ◽  
pp. 154-182
Author(s):  
Cadence Kinsey

This article analyses Camille Henrot’s 2013 film Grosse Fatigue in relation to the histories of hypermedia and modes of interaction with the World Wide Web. It considers the development of non-hierarchical systems for the organisation of information, and uses Grosse Fatigue to draw comparisons between the Web, the natural history museum and the archive. At stake in focusing on the way in which information is organised through hypermedia is the question of subjectivity, and this article argues that such systems are made ‘user-friendly’ by appearing to accommodate intuitive processes of information retrieval, reflecting the subject back to itself as autonomous. This produces an ideology of individualism which belies the forms of heteronomy that in fact shape and structure access to information online in significant ways. At the heart of this argument is an attention to the visual, and the significance of art as an immanent mode of analysis. Through the themes of transparency and opacity, and order and chaos, the article thus proposes a defining dynamic between autonomy and automation as a model for understanding the contemporary subject.


2017 ◽  
Vol 4 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Deepika Punj ◽  
Ashutosh Dixit

In order to manage the vast information available on the Web, the crawler plays a significant role. The working of the crawler should be optimized to get maximum and unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed which is based on URL ordering, URL scheduling, and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the web efficiently. Scheduling ensures that each URL goes to the optimum agent for downloading. To ensure this, the characteristics of both agents and URLs are taken into consideration for scheduling. Duplicate documents are also removed to keep the database free of duplicates. To reduce matching time, documents are matched on the basis of their metadata only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
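The following is a minimal sketch, in Python, of the three mechanisms the abstract names: ordering URLs by their structure (shallower paths first), scheduling URLs to agents, and eliminating duplicates by matching on metadata alone. The priority rule and the hash-based agent assignment are illustrative stand-ins for the paper's actual algorithms.

```python
import hashlib
import heapq
from urllib.parse import urlparse

def url_priority(url: str) -> int:
    """Order URLs by structure: fewer path segments means higher priority."""
    path = urlparse(url).path
    return len([seg for seg in path.split("/") if seg])

class MigratingCrawler:
    def __init__(self, agents: list[str]) -> None:
        self.frontier: list[tuple[int, str]] = []  # min-heap ordered by URL structure
        self.agents = agents
        self.seen_meta: set[str] = set()           # hashes of document metadata seen so far

    def add_url(self, url: str) -> None:
        heapq.heappush(self.frontier, (url_priority(url), url))

    def schedule(self):
        """Dispatch the best-ordered URL to an agent (hash-based stand-in
        for matching URL characteristics to agent characteristics)."""
        if not self.frontier:
            return None
        _, url = heapq.heappop(self.frontier)
        agent = self.agents[hash(url) % len(self.agents)]
        return agent, url

    def is_duplicate(self, metadata: str) -> bool:
        """Match documents on their metadata only, to reduce matching time."""
        digest = hashlib.sha256(metadata.encode()).hexdigest()
        if digest in self.seen_meta:
            return True
        self.seen_meta.add(digest)
        return False

crawler = MigratingCrawler(agents=["agent-1", "agent-2"])
crawler.add_url("http://example.org/a/b/page.html")
crawler.add_url("http://example.org/index.html")
print(crawler.schedule())  # index.html is dispatched first (shallower URL)
```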


Author(s):  
Jane E. Klobas ◽  
Stefano Renzi

While virtual universities and remote classrooms have captured the headlines, there has been a quiet revolution in university education. Around the globe, the information and communications technology (ICT) infrastructure needed to support Web-enhanced learning (WEL) is well established, and the Internet and the World Wide Web (the Web) are being used by teachers and students in traditional universities in ways that complement and enhance traditional classroom-based learning (Observatory of Borderless Education, 2002). The Web is most frequently used by traditional universities to provide access to resources—as a substitute for, or complement to, notice boards, distribution of handouts, and use of the library (Collis & Van der Wende, 2002). Therefore, most of the change has been incremental rather than transformational. Adoption of WEL has yet to meet its potential—some would say the imperative (Bates, 2000; Rudestam & Schoenholtz-Read, 2002)—to change the nature of learning at university and to transform the university itself.


2021 ◽  
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. By focusing on the context of the technology itself, it is anticipated that this paper will contribute to the broader debate concerning the future of the Web and its need to move beyond a determinant "modernization paradigm" or over-arching ontology, as well as advance the potential connections that can be cultivated with cognate disciplines.


Author(s):  
Axel Polleres ◽  
Simon Steyskal

The World Wide Web Consortium (W3C), as the main standardization body for Web standards, has set a particular focus on publishing and integrating Open Data. In this chapter, the authors explain various standards from the W3C's Semantic Web activity and the—potential—role they play in the context of Open Data: RDF, as a standard data format for publishing and consuming structured information on the Web; the Linked Data principles for interlinking RDF data published across the Web and leveraging a Web of Data; and RDFS and OWL for describing the vocabularies used in RDF and the mappings between such vocabularies. The authors conclude with a review of current deployments of these standards on the Web, particularly within public Open Data initiatives, and discuss potential risks and challenges.
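To make the interplay of these standards concrete, here is a minimal sketch in Python using rdflib: RDF triples describe data from one hypothetical publisher, RDFS supplies the vocabulary, and a single OWL statement maps that vocabulary to another publisher's terms so the two datasets can be integrated as Linked Data. All namespaces and terms are invented for illustration.

```python
# Minimal sketch: RDF data, an RDFS vocabulary, and an OWL mapping between
# two hypothetical Open Data publishers' vocabularies.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CITY = Namespace("http://example.org/cityA/")  # hypothetical publisher A
GOV = Namespace("http://example.org/govB/")    # hypothetical publisher B

g = Graph()

# Publisher A describes its vocabulary and data with RDF and RDFS.
g.add((CITY.Street, RDF.type, RDFS.Class))
g.add((CITY.mainStreet, RDF.type, CITY.Street))
g.add((CITY.mainStreet, RDFS.label, Literal("Main Street")))

# An OWL mapping declares the two vocabularies equivalent, turning two
# independently published datasets into interlinked Linked Data.
g.add((CITY.Street, OWL.equivalentClass, GOV.Road))

print(g.serialize(format="turtle"))
```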


Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed as a result of users' search queries. These contents, which can be easily retrieved using Web browsers and search engines, comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.

