Personalized Standard Deviations Improve the Baseline Estimation of Collaborative Filtering Recommendation

2020 ◽  
Vol 10 (14) ◽  
pp. 4756
Author(s):  
Zhenhua Tan ◽  
Liangliang He ◽  
Danke Wu ◽  
Qiuyun Chang ◽  
Bin Zhang

Baseline estimation is a critical component of latent factor-based collaborative filtering (CF) recommendation: it obtains baseline predictions by evaluating global deviations of both users and items from personalized ratings. Classical baseline estimation presupposes that a user's actual rating range is the same as the system's given rating range. However, from observations on real datasets of movie recommender systems, we found that different users have different actual rating ranges and can be classified into four kinds according to their personalized rating criteria: normal, strict, lenient, and middle. We analyzed the rating distributions and found that the proportion of a user's local standard deviation to the system's global standard deviation equals that of the user's actual rating range to the system's rating range. We propose an improved and unified baseline estimation model based on this standard-deviation proportion to alleviate the limitation of classical baseline estimation. We also apply the proposed baseline estimation model to existing latent factor-based CF recommendations and propose two instances. We performed cross-validation experiments on the full ratings of five datasets: Flixster, Movielens (10 M), Movielens (latest small), FilmTrust, and MiniFilm. The results show that the proposed baseline estimation model has better predictive accuracy than the classical model and is effective in improving prediction performance for existing latent factor-based CF recommendations.
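The personalized scaling idea can be sketched as follows. This is a minimal illustration, not the paper's exact model: it assumes the classical baseline b_ui = mu + b_u + b_i and rescales the item deviation by the ratio of the user's local standard deviation to the system's global one.

```python
import numpy as np

def baseline_estimates(ratings):
    """ratings: dict mapping (user, item) -> rating.
    Returns a predictor implementing a personalized-SD-scaled baseline."""
    values = np.array(list(ratings.values()), dtype=float)
    mu = values.mean()        # global mean rating
    global_sd = values.std()  # system-wide standard deviation

    # Collect per-user and per-item rating lists
    users, items = {}, {}
    for (u, i), r in ratings.items():
        users.setdefault(u, []).append(r)
        items.setdefault(i, []).append(r)
    b_u = {u: np.mean(rs) - mu for u, rs in users.items()}
    b_i = {i: np.mean(rs) - mu for i, rs in items.items()}

    # Personalized proportion: user's local SD over the global SD,
    # standing in for the paper's rating-range proportion
    w_u = {u: (np.std(rs) / global_sd if global_sd > 0 else 1.0)
           for u, rs in users.items()}

    def predict(u, i):
        # classical baseline mu + b_u + b_i, with the item deviation
        # rescaled by the user's personalized proportion w_u
        return mu + b_u.get(u, 0.0) + w_u.get(u, 1.0) * b_i.get(i, 0.0)

    return predict
```

Unknown users or items simply fall back to the global mean, as in the classical baseline.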

2021 ◽  
Vol 13 (11) ◽  
pp. 2126
Author(s):  
Yuliang Wang ◽  
Mingshi Li

Vegetation measures are crucial for assessing changes in the ecological environment. Fractional vegetation cover (FVC) provides information on the growth status, distribution characteristics, and structural changes of vegetation. An in-depth understanding of the dynamic changes in urban FVC contributes to the sustainable development of ecological civilization in the urbanization process. However, dynamic change detection of urban FVC using multi-temporal remote sensing images is a complex and challenging process. This paper proposes an improved FVC estimation model that fuses the optimized dynamic range vegetation index (ODRVI) model. The ODRVI model improves robustness to soil water content, surface roughness, and soil type by minimizing the influence of bare soil in areas of sparse vegetation cover, and it enhances the stability of FVC estimation in the near-infrared (NIR) band across both dense and sparse vegetation cover by introducing the vegetation canopy vertical porosity (VCVP) model. The verification results confirmed that the proposed model outperformed typical vegetation index (VI) models for multi-temporal Landsat images. The coefficient of determination (R2) between the ODRVI model and the FVC was 0.9572, which was 7.4% higher than the average R2 of the other typical VI models. Moreover, annual urban FVC dynamics were mapped using the proposed improved FVC estimation model in Hefei, China (1999–2018). The total area of all FVC grades decreased by 33.08% over the past 20 years in Hefei, China. The areas of the extremely low, low, and medium FVC grades exhibited apparent inter-annual fluctuations. The maximum standard deviation of the area change of the medium FVC grade was 13.35%. For the other FVC grades, the order of the standard deviation of the change ratio was extremely low FVC > low FVC > medium-high FVC > high FVC.
The dynamic mapping of FVC revealed the influence intensity and direction of the urban sprawl on vegetation coverage, which contributes to the strategic development of sustainable urban management plans.
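The ODRVI formulation is specific to this paper, but the general step from a vegetation index to FVC is commonly taken with the dimidiate pixel model. A minimal sketch, with an NDVI-like index and assumed soil and full-vegetation endmember values:

```python
import numpy as np

def fvc_dimidiate(vi, vi_soil, vi_veg):
    """Fractional vegetation cover via the dimidiate pixel model:
    FVC = (VI - VI_soil) / (VI_veg - VI_soil), clipped to [0, 1]."""
    fvc = (vi - vi_soil) / (vi_veg - vi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Example: an NDVI-like index from red and NIR reflectance (illustrative values)
red = np.array([0.10, 0.05, 0.03])
nir = np.array([0.15, 0.30, 0.45])
ndvi = (nir - red) / (nir + red)
fvc = fvc_dimidiate(ndvi, vi_soil=0.05, vi_veg=0.90)
```

The endmember values (0.05 for bare soil, 0.90 for full vegetation) are assumptions for the sketch; in practice they are estimated per scene.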


Author(s):  
H. Inbarani ◽  
K. Thangavel

The technology behind personalization and Web page recommendation has undergone tremendous changes, and several Web-based personalization systems have been proposed in recent years. The main goal of Web personalization is to dynamically recommend Web pages based on the online behavior of users. Although personalization can be accomplished in numerous ways, most Web personalization techniques fall into four major categories: decision rule-based filtering, content-based filtering, collaborative filtering, and Web usage mining. Decision rule-based filtering surveys users to obtain demographics or static profiles, and then lets Web sites manually specify rules based on them; it delivers the appropriate content to a particular user based on those rules. However, it is not particularly useful because it depends on users knowing in advance the content that interests them. Content-based filtering relies on items being similar to what a user has liked previously. Collaborative filtering, also called social or group filtering, is the most successful personalization technology to date. Most successful recommender systems on the Web typically use explicit user ratings of products or preferences to sort user profile information into peer groups; they then tell users what products they might want to buy by combining their personal preferences with those of like-minded individuals. However, collaborative filtering has limited use for a new product that no one has seen or rated, and content-based filtering of user profiles might miss novel or surprising information. Additionally, traditional Web personalization techniques, including collaborative and content-based filtering, have other problems, such as reliance on subjective user ratings and static profiles, or the inability to capture richer semantic relationships among Web objects.
To overcome these shortcomings, a newer, nonintrusive approach to Web personalization increasingly incorporates Web usage mining techniques. Web usage mining can help improve the scalability, accuracy, and flexibility of recommender systems, and can reduce the need for subjective user ratings or registration-based personal preferences. This chapter provides a survey of Web usage mining approaches.
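As a concrete illustration of the collaborative filtering idea described above (peer groups of like-minded users), here is a minimal user-based sketch with cosine similarity; the matrix layout and the choice of k are assumptions for the example, not from the chapter.

```python
import numpy as np

def user_based_scores(R, target, k=2):
    """R: user-item rating matrix (0 = unrated); scores unseen items for
    user `target` from the k most similar users (cosine similarity)."""
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[target] / (norms * norms[target] + 1e-12)
    sims[target] = -1.0                 # exclude the user themselves
    peers = np.argsort(sims)[::-1][:k]  # top-k like-minded users
    weights = sims[peers]
    # similarity-weighted average of peer ratings
    scores = weights @ R[peers] / (weights.sum() + 1e-12)
    scores[R[target] > 0] = 0.0         # only score items the user hasn't rated
    return scores
```

This also exposes the cold-start limitation mentioned above: an item no peer has rated can only ever receive a score of zero.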


2013 ◽  
Vol 475-476 ◽  
pp. 1084-1089
Author(s):  
Hui Yuan Chang ◽  
Ding Xia Li ◽  
Qi Dong Liu ◽  
Rong Jing Hu ◽  
Rui Sheng Zhang

Recommender systems are widely employed in many fields to recommend products, services, and information to potential customers. As the most successful approach to recommender systems, collaborative filtering (CF) predicts user preferences in item selection based on known user ratings of items. It can be divided into two main branches: the neighbourhood-based approach (NB) and latent factor models. Some of the most successful realizations of latent factor models are based on matrix factorization (MF). Accuracy is one of the most important measurement criteria for recommender systems. In this paper, to improve accuracy, we propose an improved MF model that not only considers the latent factors describing users and items but also incorporates content information directly into MF. Experiments were performed on the Movielens dataset to compare the present approach with other methods. The results indicate that the proposed approach can remarkably improve recommendation quality.
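A plain MF baseline (without the paper's content-information term) can be sketched with stochastic gradient descent; all hyperparameters here are illustrative assumptions.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=200):
    """Plain matrix factorization by SGD: r_ui is approximated by p_u . q_i.
    ratings: iterable of (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    P = rng.normal(0, 0.1, (n_users, k))   # user latent factors
    Q = rng.normal(0, 0.1, (n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            p, q = P[u].copy(), Q[i].copy()
            err = r - p @ q                 # prediction error on this rating
            P[u] += lr * (err * q - reg * p)  # regularized gradient steps
            Q[i] += lr * (err * p - reg * q)
        # (a real implementation would also shuffle and monitor convergence)
    return P, Q
```

The paper's contribution would add a content-based term to the prediction; the details of that term are not reproduced here.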


Author(s):  
Norhashimah Mohd Saad ◽  
Nor Nabilah Syazana Abdul Rahma ◽  
Abdul Rahim Abdullah ◽  
Mohd Juzaila Abd Latif

This paper presents shape analysis using the Local Standard Deviation (LSD) technique to detect shape defects of bottles for product quality inspection. The proposed analysis framework includes segmentation, feature extraction, and classification. The shape of the bottle was segmented using the LSD technique in order to obtain higher enhancement in low-contrast areas and lower enhancement in high-contrast areas. The contrast gain applied in the Adaptive Contrast Enhancement (ACE) algorithm was set inversely proportional to the LSD in order to detect and eliminate background noise at the bottle edge. After the segmentation process, parameters of the bottle shape such as height, width, area, and extent were extracted and used in the classification stage. A rule-based classifier was used to classify the shape of each bottle as either good or defective. The offline experimental results exhibit superior segmentation performance, with 100% accuracy for 100 sample images. This shows that the LSD could be an effective technique for monitoring product quality.
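A minimal sketch of the LSD computation and an ACE-style gain inversely proportional to it; the window size, gain cap, and epsilon are assumptions for the example, not the paper's settings.

```python
import numpy as np

def _windows(img, w):
    """All w x w neighbourhoods of img, edge-padded to keep the shape."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    return np.lib.stride_tricks.sliding_window_view(p, (w, w))

def local_std(img, w=3):
    """Local standard deviation over a w x w neighbourhood."""
    return _windows(img, w).std(axis=(-2, -1))

def ace_gain(img, w=3, alpha=1.0, eps=1e-6, cap=5.0):
    """ACE-style enhancement with gain inversely proportional to the LSD:
    flat regions (low LSD) get amplified; busy edges (high LSD) do not."""
    lsd = local_std(img, w)
    local_mean = _windows(img, w).mean(axis=(-2, -1))
    gain = np.clip(alpha / (lsd + eps), 0.0, cap)  # cap gain in flat areas
    return local_mean + gain * (img - local_mean)
```

On a perfectly flat region the LSD is zero and the (capped) gain has nothing to amplify, so the output equals the input, which is the intended behaviour.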


2017 ◽  
Vol 51 (03) ◽  
pp. 82-88 ◽  
Author(s):  
Kazunari Yoshida ◽  
Hiroyuki Uchida ◽  
Takefumi Suzuki ◽  
Masahiro Watanabe ◽  
Nariyasu Yoshino ◽  
...  

Abstract Introduction Therapeutic drug monitoring is necessary for lithium, but clinical application of several prediction strategies is still limited because of insufficient predictive accuracy. We herein proposed a suitable model, using creatinine clearance (CLcr)-based lithium clearance (Li-CL). Methods Patients receiving lithium provided the following information: serum lithium and creatinine concentrations, time of blood draw, dosing regimen, concomitant medications, and demographics. Li-CL was calculated as a daily dose per trough concentration for each subject, and the mean of Li-CL/CLcr was used to estimate Li-CL for another 30 subjects. Serum lithium concentrations at the time of sampling were estimated by 1-compartment model with Li-CL, fixed distribution volume (0.79 L/kg), and absorption rate (1.5/hour) in the 30 subjects. Results One hundred thirty-one samples from 82 subjects (44 men; mean±standard deviation age: 51.4±16.0 years; body weight: 64.6±13.8 kg; serum creatinine: 0.78±0.20 mg/dL; dose of lithium: 680.2±289.1 mg/day) were used to develop the pharmacokinetic model. The mean±standard deviation (95% confidence interval) of absolute error was 0.13±0.09 (0.10–0.16) mEq/L. Discussion Serum concentrations of lithium can be predicted from oral dosage with high precision, using our prediction model.
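The described prediction can be illustrated with the standard steady-state one-compartment oral-absorption formula, using the paper's fixed distribution volume (0.79 L/kg) and absorption rate (1.5/hour); the dosing interval, the example Li-CL value, and the unit handling (dose expressed in mEq of lithium) are assumptions for the sketch.

```python
import math

KA = 1.5          # absorption rate constant (1/h), fixed in the paper
VD_PER_KG = 0.79  # distribution volume (L/kg), fixed in the paper

def predict_conc(dose_mEq, tau_h, t_h, li_cl_L_h, weight_kg):
    """Steady-state serum lithium concentration (mEq/L) of a one-compartment
    oral model, t_h hours after the last dose of a regular regimen."""
    vd = VD_PER_KG * weight_kg
    ke = li_cl_L_h / vd            # elimination rate constant from Li-CL
    f = KA / (KA - ke)
    # superposition of repeated doses at interval tau_h (steady state)
    return (dose_mEq / vd) * f * (
        math.exp(-ke * t_h) / (1 - math.exp(-ke * tau_h))
        - math.exp(-KA * t_h) / (1 - math.exp(-KA * tau_h))
    )
```

In the paper, Li-CL itself is estimated from creatinine clearance via the mean Li-CL/CLcr ratio; here it is passed in directly.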


2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Li Xiong ◽  
Huiqi Li ◽  
Liang Xu

Cataract is one of the leading causes of blindness in the world’s population. A method to evaluate blurriness for cataract diagnosis in retinal images with vitreous opacity is proposed in this paper. Three types of features are extracted: the pixel count of visible structures, the mean contrast between vessels and background, and the local standard deviation. To avoid wrongly detecting vitreous opacity as retinal structures, a morphological method is proposed to detect and remove such lesions from the retinal visible-structure segmentation. Based on the extracted features, a decision tree is trained to classify retinal images into five grades of blurriness. The proposed approach was tested on 1355 clinical retinal images; compared with manual grading, the accuracies of two-class classification and five-grade grading are 92.8% and 81.1%, respectively. The kappa value between automatic and manual grading is 0.74 in five-grade grading, with both variance and P value less than 0.001. Experimental results show that the difference between automatic and manual grading is always within one grade, a marked improvement over other available methods. The proposed grading method provides a universal measure of cataract severity and can facilitate the decision for cataract surgery.
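The three feature types listed above can be sketched as follows; the tile size for the local-SD summary and the use of a precomputed vessel mask are assumptions for the example, not the paper's pipeline.

```python
import numpy as np

def blurriness_features(img, vessel_mask):
    """Three features from a grayscale retinal image, following the paper's
    list: visible-structure pixel count, vessel/background contrast, local SD."""
    img = img.astype(float)
    n_visible = int(vessel_mask.sum())             # pixels of visible structures
    vessel_mean = img[vessel_mask].mean() if n_visible else 0.0
    bg_mean = img[~vessel_mask].mean()
    contrast = abs(bg_mean - vessel_mean)          # mean vessel/background contrast
    # summary of local SD: standard deviation over 8x8 tiles, averaged
    h, w = img.shape
    tiles = img[:h - h % 8, :w - w % 8].reshape(h // 8, 8, w // 8, 8)
    local_sd = tiles.std(axis=(1, 3)).mean()
    return np.array([n_visible, contrast, local_sd])
```

A blurrier image would segment fewer visible-structure pixels and show lower contrast and local SD, which is what lets a decision tree separate the grades.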

