
A Deep Autoencoder-Based Hybrid Recommender System

Yahya Bougteb, Brahim Ouhbi, Bouchra Frikh, Elmoukhtar Zemmouri
Copyright: © 2022 | Volume: 13 | Issue: 1 | Pages: 19
ISSN: 1937-9412 | EISSN: 1937-9404 | EISBN13: 9781683180449 | DOI: 10.4018/IJMCMC.297963

Abstract

Recommender systems build their suggestions on rating data given explicitly or implicitly by users on items. These ratings form a huge, sparse user-item rating matrix that raises two challenges for researchers in the field: the sparsity of the rating matrix and the scalability of the data. This article proposes a hybrid recommender system that uses a deep autoencoder to learn user interests and reconstruct the missing ratings. In parallel, SVD++ decomposition is employed to capture correlations between the different feature factors. Additionally, the authors give a detailed analysis of the top-N recommendation list from the user's perspective to ensure that the proposed model is suitable for practical applications. Experiments showed that the proposed model performs better on high-dimensional datasets and outperforms various hybrid algorithms in terms of RMSE, MAE, and the final top-N recommendation list.
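To illustrate the autoencoder half of such a hybrid model, the sketch below shows a deep autoencoder that takes each user's sparse rating vector and reconstructs it, so the reconstructed entries act as predictions for unrated items. This is a minimal illustration, not the authors' implementation: the layer sizes, activations, optimizer settings, and the masked loss are assumptions chosen for clarity, and the SVD++ branch of the hybrid is omitted.

```python
# Minimal sketch (assumed architecture, not the paper's exact model) of a deep
# autoencoder that reconstructs missing entries of a sparse user-item rating matrix.
import torch
import torch.nn as nn

class RatingAutoencoder(nn.Module):
    def __init__(self, n_items: int, hidden: int = 256, latent: int = 64):
        super().__init__()
        # Encoder compresses a user's rating vector into a latent interest representation.
        self.encoder = nn.Sequential(
            nn.Linear(n_items, hidden), nn.ReLU(),
            nn.Linear(hidden, latent), nn.ReLU(),
        )
        # Decoder reconstructs the full rating vector, filling in the missing ratings.
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, n_items),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def masked_mse(pred: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Compute the reconstruction loss only on observed ratings, so the
    # sparsity of the matrix does not pull predictions toward zero.
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    n_users, n_items = 100, 500
    # Toy sparse matrix: ~5% of entries hold ratings in [1, 5]; the rest are 0 (unrated).
    mask = (torch.rand(n_users, n_items) < 0.05).float()
    ratings = mask * torch.randint(1, 6, (n_users, n_items)).float()

    model = RatingAutoencoder(n_items)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(20):
        opt.zero_grad()
        loss = masked_mse(model(ratings), ratings, mask)
        loss.backward()
        opt.step()
    # model(ratings) now yields predicted scores for every user-item pair,
    # including the previously missing entries used for top-N recommendation.
```

In a hybrid setup like the one described above, these reconstructed scores would be combined with the SVD++ predictions before the final top-N list is produced.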