Article
PDF available

Abstract

Recommendation systems are an important part of electronic commerce, where appropriate items are recommended to potential users. The most common algorithms used for constructing recommender systems in commercial applications are collaborative filtering (CF) methods and their variants, mainly because of their simple implementation. These methods use structural features of the bipartite network of users and items, and recommend items to users based on a similarity measure that captures how similarly the users behave. Indeed, the performance of memory-based CF algorithms heavily depends on the quality of the similarities obtained among users/items: the more reliable the similarities, the better the expected performance of the recommender system. In this paper, we propose three models to extract the reliability of the similarities estimated in classic recommenders, and we incorporate the obtained reliabilities to improve the performance of the recommender systems. The proposed reliability-extraction algorithms take a number of elements into account, including the structure of the user-item bipartite network, the individual profiles of the users (how many items they have rated), and those of the items (how many users have rated them). Among the proposed methods, the one based on resource allocation provides the highest performance. Our numerical results on two benchmark datasets (Movielens and Netflix) show that employing resource allocation in classical recommenders significantly improves their performance. These results are of great importance since including resource allocation in the systems does not increase their computational complexity.
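The idea of weighting user similarities by a reliability score can be sketched as follows. This is a minimal illustration, not the paper's three models: the `reliability` function here is a hypothetical stand-in that simply trusts similarities between users with many co-rated items more.

```python
import numpy as np

def cosine_sim(r):
    """Cosine similarity between users from a ratings matrix (0 = unrated)."""
    norms = np.linalg.norm(r, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    rn = r / norms
    return rn @ rn.T

def reliability(r):
    """Hypothetical reliability weight: pairs of users with many co-rated
    items get a weight closer to 1 (saturating in the co-rating count)."""
    rated = (r > 0).astype(float)
    co_rated = rated @ rated.T          # number of co-rated items per pair
    return co_rated / (co_rated + 1.0)

def predict(r, user, item):
    """Predict a rating as a reliability-weighted similarity average."""
    sim = cosine_sim(r) * reliability(r)
    weights = sim[user] * (r[:, item] > 0)   # only users who rated the item
    weights[user] = 0.0
    if weights.sum() == 0:
        return 0.0
    return float(weights @ r[:, item] / weights.sum())

ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4.]])
print(round(predict(ratings, user=1, item=1), 2))
```

The reliability factor down-weights similarities computed from little shared evidence, which is the general intuition the paper's models formalize in different ways.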
... Based on hybrid lattice factorization methods, Zhang et al. [5] created a model for medical care; it works like a recommendation system, but for a medical purpose. It differs from earlier studies in the following ways: (i) client preferences and expert features can be extracted by regular lattice factorization with latent Dirichlet allocation; (ii) client preferences and expert features can be extracted from latent Dirichlet allocation and joined to lattice factorization. ...
... 3. Resource allocation collaborative filtering (RA) [54,55] uses the concept of link prediction in the recommendation system to improve performance. Popular items should have less impact when determining user similarity: because most users like popular items, such items carry little information for distinguishing users and are poorly suited as a basis for recommendations. ...
Article
Full-text available
The Slope One algorithm and its descendants measure user-score distance and use the statistical score distance between users to predict unknown ratings, as opposed to typical collaborative filtering algorithms that use similarity for neighbor selection and prediction. Compared to collaborative filtering systems that select only similar neighbors, algorithms based on user-score distance typically include all possibly related users in the process, which needs more computation time and memory. To improve the scalability and accuracy of distance-based recommendation algorithms, we provide a user-item link prediction approach that combines user distance measurement with similarity-based user selection. The algorithm predicts unknown ratings over a filtered set of users, obtained by calculating user similarity and removing related users whose similarity falls below a threshold; this reduces the number of neighbors by 26 to 29 percent and improves prediction error, ranking, and overall prediction accuracy.
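The combination of a score-distance predictor with a similarity threshold on neighbors can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the per-pair average rating offset stands in for the Slope One deviation, and the cosine similarity and threshold are assumptions.

```python
import math

ratings = {  # user -> {item: rating}
    "u1": {"a": 4, "b": 3, "c": 5},
    "u2": {"a": 4, "b": 2},
    "u3": {"a": 1, "c": 2},
}

def similarity(u, v):
    """Cosine similarity over co-rated items (0 if no overlap)."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    du = math.sqrt(sum(ratings[u][i] ** 2 for i in common))
    dv = math.sqrt(sum(ratings[v][i] ** 2 for i in common))
    return num / (du * dv)

def predict(u, item, threshold=0.5):
    """Score-distance prediction restricted to neighbors whose
    similarity to u exceeds the threshold."""
    preds = []
    for v, rv in ratings.items():
        if v == u or item not in rv or similarity(u, v) < threshold:
            continue
        # shift v's rating by the average per-item rating offset
        common = ratings[u].keys() & rv.keys()
        offset = sum(ratings[u][i] - rv[i] for i in common) / len(common)
        preds.append(rv[item] + offset)
    return sum(preds) / len(preds) if preds else None

print(predict("u2", "c"))  # -> 4.75
```

Raising the threshold shrinks the neighbor set, which is the source of the computation and memory savings the abstract reports.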
Conference Paper
Full-text available
The primary issue for a patient after lower limb arthroplasty is difficulty in walking. Recent advancements in sensor-based wearable technology enable the monitoring and quantification of gait in patients undergoing lower limb arthroplasty in both clinical and home environments. The goal of this systematic review was to offer an overview of sensor-based insole technology, sensor types, and their usefulness as a rehabilitation tool for enhancing and monitoring walking. Two reviewers, “SR” and “AS”, both physiotherapists, did a thorough search utilizing several electronic databases (PubMed, Science Direct, Scopus, Google Scholar, and Web of Science) and reviewed titles and abstracts for eligibility. The present systematic review includes fifteen studies (eligible: 33, excluded: 18, included: 15). All included studies covered the various types of sensor-based insole technologies, outcome measures, data-processing algorithms, and study populations, and were categorized and tabulated on the basis of the type of sensor-based insole technology used, the type of sensor, and the outcome measures taken to monitor and quantify gait and lower limb activity in patients with mobility impairments. This review summarizes the use of sensor-based insole technology as a rehabilitation tool for monitoring and quantifying lower extremity activity in individuals who have had lower limb arthroplasty or have another form of mobility limitation.
Article
Link prediction plays an important role in information filtering, and numerous research works have been carried out in this field. However, traditional link prediction algorithms mainly focus on overall prediction accuracy, ignoring the heterogeneity of the prediction accuracy across links. In this paper, we analyzed the prediction accuracy of each link in networks and found that the prediction accuracy for different links is severely polarized. Further analysis shows that the accuracy for edges with low edge betweenness is consistently high while that for edges with high edge betweenness is consistently low, i.e., the AUC follows a bimodal distribution with one peak around 0.5 and the other around 1. Our results indicate that link prediction algorithms should focus more on edges with high betweenness instead of edges with low betweenness. To improve the accuracy on edges with high betweenness, we proposed an improved algorithm, RA_LP, which takes advantage of resource transfer along the second-order and third-order paths of the local path index. Results show that this algorithm can improve the link prediction accuracy for edges with high betweenness as well as the overall accuracy.
Chapter
Delay-tolerant networks (DTNs) have the potential to work in disconnected environments and tolerate high delays. Due to this promising service behaviour, researchers are encouraged to work in the area of DTNs, and over the last few years a variety of work has been carried out. To enlighten researchers about current trends and the scope of further research, 134 research papers have been reviewed and organized into a graphical, structured perspective. We performed a bibliometric analysis by classifying the referred papers by research objective, citations, publisher, article efficiency, and article type. This analysis offers researchers, students, experts, and publishers a perspective from which to investigate modern research trends in the area of delay-tolerant networks.
Keywords: Delay-tolerant networks, Graphical interpretation, Trends analysis
Chapter
Traditional collaborative filtering disregards the granularity of users’ preference drift and item popularity bias in modeling, thus diminishing the accuracy of recommendation. This paper proposes a new collaborative filtering algorithm based on constrained random walk. Two new trust networks, user-based and item-based, are proposed, with a Restricted Random Walk (RW) used to adaptively track users’ preference drift and item popularity bias. Experimental results on social datasets show that the proposed method is superior to existing recommendation algorithms.
Keywords: Social networks, Collaborative filtering, Random walk, Cloud similarity
Article
Full-text available
Recommender systems and link prediction techniques have been widely used in areas such as online information filtering and improving user retrieval efficiency, and their performance and principles are of significant research interest. However, existing mainstream recommendation algorithms still face many challenges, such as the contradiction between prediction accuracy and recommendation diversity, and the limited scalability of algorithms due to the need to use a large number of neighbors for prediction. To address these two issues, this paper designs a user-item link prediction algorithm based on resource allocation within the user similarity network, aiming to enhance prediction accuracy while maintaining recommendation diversity and using as few neighbors as possible for better scalability. We first calculate inter-user similarity based on user rating history and construct a similarity network among users by filtering the similarity results; subsequently, based on the centrality and community features in this network, we design a similarity measure for resource allocation that incorporates the bipartite graph model and the similarity network; finally, we use this similarity method to select the set of prediction target neighbors, combining the similarity results, centrality, and community features for the prediction of user-item links. Experimental results on two well-known datasets against three state-of-the-art algorithms show that the proposed approach can improve the prediction accuracy by 2.34% to 15.76% in a shorter time while maintaining high recommendation diversity, and the ranking accuracy of recommendation is also improved. Compared with the benchmark algorithm with the second-highest performance ranking, the method designed in this paper can further reduce the number of neighbors required at the optimal prediction error by 25% to 56%. The study reveals that resource allocation in similarity networks successfully mines the features embedded in the recommender system, laying the foundation for further understanding the recommender system and improving the performance of related prediction methods.
Article
Full-text available
Recommender systems leverage product and community information to target products to consumers. Researchers have developed collaborative recommenders, content-based recommenders, and a few hybrid systems. We propose a unified probabilistic framework for merging collaborative and content-based recommendations. We extend Hofmann’s (1999) aspect model to incorporate three-way co-occurrence data among users, items, and item content. The relative influence of collaboration data versus content data is not imposed as an exogenous parameter, but rather emerges naturally from the given data sources. However, global probabilistic models coupled with standard EM learning algorithms tend to drastically overfit in the sparse data situations typical of recommendation applications. We show that secondary content information can often be used to overcome sparsity. Experiments on data from the ResearchIndex library of Computer Science publications show that appropriate mixture models incorporating secondary data produce significantly better quality recommenders than k-nearest neighbors (k-NN). Global probabilistic models also allow more general inferences than local methods like k-NN.
Article
Full-text available
Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.
Conference Paper
Full-text available
We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment (over half a million runs of C4.5 and a Naive-Bayes algorithm) to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation, we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-world datasets similar to ours, the best method to use for model selection is ten-fold stratified cross-validation, even if computation power allows using more folds.
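Stratified k-fold splitting, the method the study recommends with k = 10, can be sketched in pure Python: indices are grouped by class and dealt round-robin into folds, so each fold preserves the class proportions of the full dataset.

```python
from collections import defaultdict
import random

def stratified_kfold(labels, k=10, seed=0):
    """Split indices into k folds whose class proportions match the
    full label set, as in stratified cross-validation."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)   # deal indices round-robin per class
    return folds

labels = [0] * 60 + [1] * 40
folds = stratified_kfold(labels, k=10)
print([sum(labels[i] for i in f) for f in folds])  # -> [4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
```

Each fold here holds 6 negatives and 4 positives, matching the 60/40 split of the whole set; a plain (unstratified) split could produce folds with badly skewed class ratios.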
Book
Online social networks collect information from users' social contacts and their daily interactions (co-tagging of photos, co-rating of products, etc.) to provide them with recommendations of new products or friends. Lately, technological advances in mobile devices (e.g., smartphones) have enabled the incorporation of geo-location data into traditional web-based online social networks, bringing the new era of the Social and Mobile Web. The goal of this book is to bring together important research in a new family of recommender systems aimed at serving Location-based Social Networks (LBSNs). The chapters introduce a wide variety of recent approaches, from the most basic to the state-of-the-art, for providing recommendations in LBSNs. The book is organized into three parts. Part 1 provides introductory material on recommender systems, online social networks, and LBSNs. Part 2 presents a wide variety of recommendation algorithms, ranging from basic to cutting edge, as well as a comparison of the characteristics of these recommender systems. Part 3 provides a step-by-step case study on the technical aspects of deploying and evaluating a real-world LBSN, which provides location, activity, and friend recommendations. The material covered in the book is intended for graduate students, teachers, researchers, and practitioners in the areas of web data mining, information retrieval, and machine learning.
Article
This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.
Article
Focusing on the problems of over-specialization and concentration bias, this paper presents a novel probabilistic method for recommending items in the neighborhood-based collaborative filtering framework. For the probabilistic neighborhood selection phase, we use an efficient method for weighted sampling of k neighbors that takes into consideration the similarity levels between the target user (or item) and the candidate neighbors. We conduct an empirical study showing that the proposed method increases the coverage, dispersion, and diversity reinforcement of recommendations by selecting diverse sets of representative neighbors. We also demonstrate that the proposed approach outperforms popular methods in terms of item prediction accuracy, utility-based ranking, and other popular measures, across various experimental settings. This performance improvement is in accordance with ensemble learning theory and the phenomenon of “hubness” in recommender systems.
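Weighted sampling of k neighbors proportional to similarity, as opposed to deterministically taking the top-k, can be sketched as follows. This is an illustrative draw-without-replacement scheme, not necessarily the paper's exact sampling method.

```python
import random

def weighted_sample_neighbors(sims, k, seed=0):
    """Sample k distinct neighbors with probability proportional to
    similarity, instead of deterministically selecting the top-k."""
    rng = random.Random(seed)
    candidates = dict(sims)   # neighbor -> similarity weight
    chosen = []
    for _ in range(min(k, len(candidates))):
        names = list(candidates)
        weights = [candidates[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del candidates[pick]  # sample without replacement
    return chosen

sims = {"n1": 0.9, "n2": 0.7, "n3": 0.4, "n4": 0.1}
print(weighted_sample_neighbors(sims, k=2))
```

Highly similar neighbors are still chosen most often, but weaker neighbors occasionally enter the set, which is what yields the coverage and dispersion gains the abstract reports.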
Article
Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy-novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them. In this work, we propose an algorithm for providing novel and accurate recommendations to users. We consider the standard definition of accuracy and an effective self-information-based measure to assess the novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing the popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, moves toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as add-ons to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.
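A minimal sketch of the popularity-based filter described above. The paper forecasts popularity trends with a wavelet transform; here a simple moving-average forecast stands in as an assumed placeholder, and the filter re-ranks a candidate list toward items predicted to become popular.

```python
def forecast_popularity(series, window=3):
    """Naive stand-in for the wavelet-based trend forecast:
    the average of the last `window` time steps."""
    tail = series[-window:]
    return sum(tail) / len(tail)

def popularity_filter(candidates, popularity_series, top_n=2):
    """Re-rank candidate items, giving a higher chance to items whose
    forecast popularity is high."""
    scored = sorted(candidates,
                    key=lambda i: forecast_popularity(popularity_series[i]),
                    reverse=True)
    return scored[:top_n]

votes = {  # item -> votes received per time step
    "i1": [1, 2, 8, 9],    # rising popularity
    "i2": [9, 8, 2, 1],    # fading popularity
    "i3": [3, 3, 3, 3],    # flat popularity
}
print(popularity_filter(["i1", "i2", "i3"], votes))  # -> ['i1', 'i2']
```

The novelty-oriented variant would additionally require low popularity in past timesteps, so that only items that are both currently obscure and predicted to rise pass the filter.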