Web Page Characteristics of Educational Adaptive Web Sites

Author(s):  
Željko Eremić ◽  
Dragica Radosav

Abstract: Educational information about a single topic may be found on many different web pages. Those pages may have different roles, such as displaying information related to teaching, presenting teaching content, or routing visitors to other pages. Educational material can be placed on adaptive websites, which customize their presentation and structure based on previously recorded user behavior. Documents on which visitors often end their navigation are called target documents, and documents that users often visit on the way to target documents are called waypost documents. This paper investigates the characteristics of these different document types and provides guidelines for the design of such educational web sites.
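The target/waypost distinction described above can be estimated from navigation logs. The sketch below is illustrative, not the paper's method: it assumes sessions are given as ordered lists of page identifiers, and the 0.5 exit-rate threshold and `min_visits` parameter are arbitrary choices.

```python
from collections import Counter

def classify_pages(sessions, min_visits=2):
    """Label pages as 'target' (sessions often end there) or
    'waypost' (often visited on the way to targets)."""
    visits = Counter()
    exits = Counter()  # counts sessions that ended on this page
    for session in sessions:
        visits.update(session)
        if session:
            exits[session[-1]] += 1
    labels = {}
    for page, n in visits.items():
        if n < min_visits:
            continue  # not enough data to classify this page
        exit_rate = exits[page] / n
        labels[page] = "target" if exit_rate > 0.5 else "waypost"
    return labels

sessions = [
    ["index", "lesson1", "quiz"],
    ["index", "lesson1", "quiz"],
    ["index", "lesson2"],
]
print(classify_pages(sessions))
```

Here "quiz" ends both sessions that reach it, so it is labeled a target, while "index" and "lesson1" are wayposts.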

2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the Internet World-Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World-Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows: first, each web page is equipped with a topic concerning its contents; second, links between web pages are established according to common topics; next, new web pages may be randomly generated and subsequently equipped with a topic and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World-Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
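The growth rules the abstract describes (topic assignment, linking by common topic, iterated page creation) can be sketched as a toy model. The topic count, step count, and 0.5 linking probability below are illustrative assumptions, not the authors' parameters:

```python
import random

def simulate_web(steps=200, n_topics=5, seed=42):
    """Grow a toy web: each new page receives a random topic and
    may link to each existing page sharing that topic."""
    rng = random.Random(seed)
    topics = []  # topics[i] = topic of page i
    links = []   # (newer_page, older_page) edges
    for page in range(steps):
        topic = rng.randrange(n_topics)
        same_topic = [p for p, t in enumerate(topics) if t == topic]
        for other in same_topic:
            if rng.random() < 0.5:  # link to a same-topic page
                links.append((page, other))
        topics.append(topic)
    return topics, links

topics, links = simulate_web()
degree = {}
for a, b in links:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print(len(topics), len(links), max(degree.values()))
```

Because pages in popular topics accumulate links over many iterations, the degree distribution becomes heavily skewed, which is the qualitative effect behind the power-law growth the paper reports.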


2011 ◽  
pp. 404-413
Author(s):  
Jane Moon

There has been an explosion in the number of different types of portals in the last decade, and at the same time considerable confusion about them, especially given the enormous number of portals and how they differ from Web sites or Web pages. This coincides with increased use of the Internet by consumers seeking medical information, and with the important role played by medical portals in evidence-based medicine. This article explores current portal technology through an evaluation of market leaders in the industry and identifies important functional components that are necessary in building an intelligent portal to assist users seeking information on the Internet. The emphasis is on government-to-consumer (G2C) portals, and two reputable government portals, Betterhealth and Healthinsite, are used as examples to discuss the issues involved.


Mobile-specific web sites differ drastically from their desktop equivalents in content, layout and functionality. As a result, existing techniques for detecting malicious web sites are unlikely to work for such webpages. In this paper, we design and implement a mechanism that distinguishes between malicious and benign mobile web pages. The mechanism makes this determination based on static features of a web page, ranging from the number of iframes to the presence of known fraudulent mobile phone numbers. First, we experimentally demonstrate the need for mobile-specific detection techniques and then identify a range of new static features that strongly correlate with malicious mobile pages. We then apply the mechanism to a dataset of over 350,000 known benign and malicious mobile web pages and demonstrate 90% classification accuracy. In addition, we discover, characterize and report a number of web pages missed by Google Safe Browsing and VirusTotal but detected by our system. Lastly, we build a browser extension based on this mechanism to protect users from malicious mobile web sites in real time. In doing so, we provide the first static analysis technique for detecting malicious mobile web pages.
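The pipeline the abstract describes — extract static features, then score a page as malicious or benign — can be illustrated with a minimal sketch. The feature set, regexes, and linear scoring below are simplifications I am assuming for illustration; the paper's actual features and classifier are more extensive:

```python
import re

def extract_features(html):
    """A tiny, illustrative set of static mobile-page features:
    iframe/script counts, phone-number patterns, tel: links."""
    return {
        "n_iframes": len(re.findall(r"<iframe\b", html, re.I)),
        "n_scripts": len(re.findall(r"<script\b", html, re.I)),
        "has_phone": bool(
            re.search(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", html)
        ),
        "has_tel_link": 'href="tel:' in html.lower(),
    }

def score(features, weights):
    """Linear score over the features; above a threshold the page
    is flagged as suspicious."""
    return sum(weights.get(k, 0.0) * float(v) for k, v in features.items())

weights = {"n_iframes": 1.0, "n_scripts": 0.2,
           "has_phone": 2.0, "has_tel_link": 1.5}
page = ('<html><iframe src="x"></iframe>'
        '<a href="tel:5551234">call 555-123-4567</a></html>')
f = extract_features(page)
print(f, score(f, weights) > 1.0)
```

In practice the weights would be learned from a labeled corpus rather than set by hand as here.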


2002 ◽  
Vol 63 (4) ◽  
pp. 354-365 ◽  
Author(s):  
Susan Augustine ◽  
Courtney Greene

Have Internet search engines influenced the way students search library Web pages? The results of this usability study reveal that students consistently and frequently use the library Web site's internal search engine to find information rather than navigating through pages. If students are searching rather than navigating, library Web page designers must make metadata and powerful search engines priorities. The study also shows that students have difficulty interpreting library terminology, experience confusion discerning differences among library resources, and prefer to seek human assistance when encountering problems online. These findings imply that library Web sites have not alleviated some of the basic and long-standing problems that have challenged librarians in the past.


2011 ◽  
pp. 1558-1566
Author(s):  
Jane Moon

There has been an explosion in the number of different types of portals in the last decade, and at the same time considerable confusion about them, especially given the enormous number of portals and how they differ from Web sites or Web pages. This coincides with increased use of the Internet by consumers seeking medical information, and with the important role played by medical portals in evidence-based medicine. This article explores current portal technology through an evaluation of market leaders in the industry and identifies important functional components that are necessary in building an intelligent portal to assist users seeking information on the Internet. The emphasis is on government-to-consumer (G2C) portals, and two reputable government portals, Betterhealth and Healthinsite, are used as examples to discuss the issues involved.


Author(s):  
Kai-Hsiang Yang

This chapter addresses Uniform Resource Locator (URL) correction techniques in proxy servers. Proxy servers are increasingly important in the World Wide Web (WWW): they cache Web pages so that pages can be browsed quickly, and they reduce unnecessary network traffic. Traditional proxy servers use the URL to identify cached content, and a request is a cache miss when the requested URL is not present in the cache. However, general users tend to browse the Web with some regularity and within a limited scope. It would be very convenient if users did not need to enter a whole long URL, or could still see the Web content even when they have forgotten part of the URL, especially for personal favorite Web sites. We introduce a URL correction mechanism into a personal proxy server to achieve this goal.
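One plausible realization of such a correction step — shown here as an assumption, since the chapter's actual algorithm is not reproduced — is to match a mistyped request against the URLs already in the cache by string similarity:

```python
from difflib import SequenceMatcher

def correct_url(request, cached_urls, threshold=0.8):
    """Return the cached URL most similar to the requested one,
    or None if nothing is close enough to count as a correction."""
    best, best_ratio = None, threshold
    for url in cached_urls:
        ratio = SequenceMatcher(None, request, url).ratio()
        if ratio >= best_ratio:
            best, best_ratio = url, ratio
    return best

cache = ["http://example.com/index.html", "http://example.com/news.html"]
print(correct_url("http://example.com/indx.html", cache))
```

The threshold trades convenience against false corrections: too low and unrelated cached pages get returned, too high and small typos go unfixed.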


Author(s):  
Evelin Carvalho Freire de Amorim

Search engines manage several types of challenges daily. One of those challenges is locating relevant content in a Web page. However, the concept of relevance in information retrieval depends on the problem to be solved. For instance, the menu of a website does not affect the results of an algorithm for detecting duplicate Web pages. An HTML segmentation algorithm visually partitions a Web page in such a way that parts in the same partition are semantically related. This chapter presents two strategies for segmenting different types of Web pages.
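A minimal illustration of DOM-based segmentation: split the page's text wherever a new block-level element starts, so each segment roughly corresponds to one visual partition. This is a deliberate simplification of the strategies the chapter presents, and the tag set chosen is an assumption:

```python
from html.parser import HTMLParser

class BlockSegmenter(HTMLParser):
    """Collect text into segments, starting a new segment at each
    block-level boundary (a crude proxy for visual partitions)."""
    BLOCK_TAGS = {"div", "p", "section", "ul", "table", "nav"}

    def __init__(self):
        super().__init__()
        self.segments = [[]]

    def handle_starttag(self, tag, attrs):
        # Only start a new segment if the current one has content.
        if tag in self.BLOCK_TAGS and self.segments[-1]:
            self.segments.append([])

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.segments[-1].append(text)

html = ("<nav>Home About</nav>"
        "<div><p>Main article text.</p></div>"
        "<div>Footer</div>")
seg = BlockSegmenter()
seg.feed(html)
print([" ".join(s) for s in seg.segments if s])
```

With this split, a duplicate-detection algorithm could operate only on the segment holding the main article text and ignore the navigation and footer segments.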


2004 ◽  
Vol 4 (1) ◽  
Author(s):  
David Carabantes Alarcón ◽  
Carmen García Carrión ◽  
Juan Vicente Beneit Montesinos

Quality on the Internet is of great value, and even more so for a health-related web page such as a resource on drug dependence. This article reviews the most prominent estimators and systems of web quality in order to develop a specific system for assessing the quality of web resources on drug dependence. A feasibility test was carried out by analyzing the main web pages on this subject (n=60), gathering assessments of resource quality from the user's point of view. Aspects needing improvement were detected regarding the accuracy and reliability of the information, authorship, and the development of descriptions and assessments of external links.


2010 ◽  
Vol 108-111 ◽  
pp. 222-227
Author(s):  
Shu Dong Zhang ◽  
Y. Qin ◽  
N.M. Yao

Web information is the main data source for the agricultural product quantity security system, which provides comprehensive analysis and early warning for national agriculture based on large amounts of basic data. In this paper, a Web information extraction architecture and a novel approach to wrapper construction are presented. The wrapper is intelligent in that both intensive and sparse data in web pages can be distinguished and extracted at one time. During wrapper construction, hierarchical clustering is used to determine key information nodes, and DOM techniques and heuristic rules are applied to generate extraction expressions for different types of data. Experiments on a large number of Web pages from different Web sites indicate that the extraction method is feasible and efficient, with a high rate of recall and precision.
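The idea of grouping structurally similar DOM nodes before extraction can be illustrated with a toy path-clustering step. The grouping key (the node's tag path) and the sample HTML are my own illustrative assumptions; the paper itself uses hierarchical clustering over richer DOM features:

```python
from collections import defaultdict
from html.parser import HTMLParser

class PathCollector(HTMLParser):
    """Record each text node under its tag path, so nodes with the
    same path (structurally similar record fields) group together."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.clusters = defaultdict(list)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.clusters["/".join(self.stack)].append(text)

html = ("<table><tr><td>wheat</td><td>120</td></tr>"
        "<tr><td>corn</td><td>95</td></tr></table>")
c = PathCollector()
c.feed(html)
print(dict(c.clusters))
```

Repeated record fields (here, the table cells) end up in one cluster, which is the kind of structural regularity an extraction expression can then be generated against.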


Author(s):  
Nataliia Kotenko ◽  
Tetiana Zhyrova ◽  
Vitalii Chybaievskyi ◽  
Alona Desiatko

The article contains the following sections: introduction, research results, and conclusions with prospects for further research. The introduction considers a problem regarding current trends in the development of web pages, analyzes recent research and publications, and formulates the purpose of the article. The second section reflects the main steps that should be followed in developing a web page, namely: collecting materials for the development of the web page (the technical task); dividing the technical task into components; designing the web page; developing the web page components (front end and back end); testing the web page component by component; and deploying the web page. The main components of front-end development are described. A detailed review of the Sublime Text editor is carried out, as one of the most popular text editors with a wide range of convenient tools for selecting, marking and editing fragments of code. Since plugins are an integral part of a modern developer's toolkit, the article discusses this concept and describes the most popular plugins for Sublime Text: Package Control, JavaScript & NodeJS Snippets, Emmet, Advanced New File, Git, GitGutter, Sidebar Enhancements, ColorPicker, Placeholders, DocBlockr, SublimeCodeIntel, Minify, SublimeLinter, and Color Highlighter. To demonstrate the use of the described plugins, an example of developing an elementary web page is given, consisting of the following sections: header, home page, about us, contacts, and footer. The use of the carousel interactive component is demonstrated. The nuances of using frameworks and their components, such as CSS frameworks and Bootstrap, are considered. As a result of the research, a clear algorithm for developing an elementary web page has been formed, and the methods and means that can be used for this are described.
The conclusions discuss the prospects for the development of technologies for creating high-quality web pages.

