clickstream analysis
Recently Published Documents


TOTAL DOCUMENTS: 17 (FIVE YEARS: 0)
H-INDEX: 5 (FIVE YEARS: 0)

2020 ◽  
Vol 7 (1) ◽  
pp. 92-106
Author(s):  
Tünde Lengyel Molnár

Abstract The era of Web 1.0 implied the connection of web-based documents via links, which enabled search engines to scan for information and guaranteed the searchability and availability of webpages. Web 2.0 represented the next evolutionary stage. Known as the collaborative web, its emphasis was on the creation of services and content by the community. Search options were complemented with labelling and with frequently undesirable clickstream analysis coupled with push-technology-supported information provision. The semantic web is a revolutionary development which, in addition to the processing of information by humans, assures the readability of datasets by machines and facilitates communication between devices. In order to promote data and information processing by machines, the semantic web relies on a special ontology allocating the respective meaning to the given data, along with the global indexing and naming schemes of the web. Several ontologies have emerged with differing basic guidelines while remaining compatible with the RDF standard, ranging from the semantic description of bibliographical data in libraries to the description of information gained from social networks and human conversations. While Web 3.0 is often used interchangeably with the semantic web, the former, with its intelligent server functions, exceeds the semantic web. We have to ask ourselves, however, whether we can rely on the accuracy of the obtained data, and we must explore what progress libraries – which are expected to increase reliability – have made regarding the implementation of semantic data storage.


2020 ◽  
Vol 8 (1) ◽  
pp. 256-261 ◽  
Author(s):  
Andoni Eguiluz ◽  
Mariluz Guenaga ◽  
Pablo Garaizar ◽  
Cristian Olivares-Rodriguez

Author(s):  
Naima Belarbi ◽  
Nadia Chafiq ◽  
Mohammed Talbi ◽  
Abdelwahed Namir ◽  
Elhabib Benlahmar

In the present paper, we aim to construct a structured user profile in a Small Private Online Course (SPOC) based on analysis of the user's video clickstream. We adopt an implicit approach that infers a user's preferences and perceived difficulty from click-level analysis of video viewing sequences (events such as Play, Pause, Move forward…). A Bayesian method is used to infer the user's interests implicitly. Learners with similar clickstream behaviour are then segmented into clusters using the unsupervised K-Means clustering algorithm. Videos that match an individual learner's interests, offering a better and more personalized learning experience, can therefore be recommended to a learner enrolling in a SPOC, based on the learner's video interactions and on similar learners' profiles.
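The clustering step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the event vocabulary, the proportion-based feature vectors, the toy logs, and the use of a plain Lloyd's-algorithm K-Means are all assumptions made for the example.

```python
import random
from collections import Counter

# Hypothetical click-level event logs: (learner_id, event) pairs.
logs = [
    ("a", "play"), ("a", "pause"), ("a", "pause"), ("a", "seek_fwd"),
    ("b", "play"), ("b", "play"), ("b", "seek_fwd"), ("b", "seek_fwd"),
    ("c", "play"), ("c", "pause"), ("c", "pause"), ("c", "pause"),
]

EVENTS = ["play", "pause", "seek_fwd"]

def feature_vector(events):
    """Proportion of each event type in a learner's clickstream."""
    c = Counter(events)
    total = sum(c.values()) or 1
    return [c[e] / total for e in EVENTS]

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster index per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])),
            )
        # Move each center to the mean of its members.
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centers[j] = [sum(col) / len(col) for col in zip(*members)]
    return assign

learners = sorted({lid for lid, _ in logs})
vectors = [feature_vector([e for lid, e in logs if lid == l]) for l in learners]
clusters = kmeans(vectors, k=2)
print(dict(zip(learners, clusters)))
```

Learners whose viewing behaviour is dominated by the same event mix (e.g. pause-heavy vs. seek-heavy) end up in the same cluster, which is the grouping the recommendation step would then exploit.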


2018 ◽  
Vol 40 (12) ◽  
pp. 2814-2826 ◽  
Author(s):  
Eric Heim ◽  
Alexander Seitel ◽  
Jonas Andrulis ◽  
Fabian Isensee ◽  
Christian Stock ◽  
...  

Author(s):  
Harshit Makhecha ◽  
Dharmendra Singh ◽  
Bhagirath Prajapati ◽  
Priyanka Puvar

This chapter analyzes the technologies that underpin organizational processes for customization and personalization. It discusses enterprise systems, such as customer relationship management (CRM) systems, that can assist service customization, as well as components of such systems, such as recommender tools. The chapter also introduces technologies for web adaptation, i.e. approaches that automatically or semi-automatically create different versions of a web site for each user, or for each user group, by customizing web content, navigation, and presentation. Recommender systems help people identify interesting products or services when the complexity and quantity of the choices are too vast for users to consider all the possibilities. The chapter covers technologies such as content-based filtering, collaborative filtering, fuzzy workflow process management systems, eCRM, tracking and clickstream analysis, and others.


Author(s):  
Paolo Baldini ◽  
Paolo Giudici

Every time a user connects to a web site, the server keeps track of all the transactions accomplished in a log file. What is captured is the "click flow" (clickstream) of the mouse and the keys used by the user during navigation inside the site. Usually every click of the mouse corresponds to the viewing of a web page. The objective of this chapter is to show how web clickstream data can be used to understand the most likely paths of navigation in a web site, with the aim of predicting, possibly on-line, which pages will be seen, given a specific path of pages seen before. Such analysis can be very useful for understanding, for instance, the probability of reaching a page of interest (such as the buying page in an e-commerce site) from another page, or the probability of entering (or exiting) the web site from any particular page. From a methodological viewpoint, we present two main research contributions. On the one hand, we show how to improve the efficiency of the Apriori algorithm; on the other hand, we show how Markov chain models can be usefully developed and implemented for web usage mining. In both cases we compare the results obtained with classical association rule algorithms and models.
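A first-order Markov chain of the kind this abstract describes can be estimated directly from session paths by counting page-to-page moves. A minimal sketch, with made-up sessions (the page names and data are assumptions, not from the chapter):

```python
from collections import defaultdict

# Hypothetical sessions reconstructed from a server log; each is the
# ordered list of pages one visitor viewed.
sessions = [
    ["home", "catalog", "product", "buy"],
    ["home", "product", "catalog"],
    ["home", "product", "buy"],
    ["catalog", "product", "buy"],
]

# Count observed page-to-page transitions.
counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# Normalize each row to get P(next page | current page).
P = {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
     for a, nxt in counts.items()}

print(P["product"]["buy"])  # → 0.75 (3 of 4 product views lead to "buy")
```

Row-normalized transition counts like these answer exactly the questions the abstract poses, e.g. the probability of reaching the buying page from a product page, while the entry and exit probabilities come from the first and last pages of each session.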

