This paper introduces a novel deep matrix factorization model for recommender systems that predicts personalized item rankings for individual users. The model leverages both explicit ratings and implicit feedback to learn a common low-dimensional space for user and item representations. The authors propose a deep structure learning architecture, inspired by deep structured semantic models, that maps users and items into a latent space through multiple layers of non-linear projections. They also design a new loss function, normalized cross entropy (NCE) loss, which incorporates both explicit ratings and implicit feedback for better optimization. Experiments on several benchmark datasets show that the proposed model outperforms state-of-the-art methods in recommendation performance. The paper further examines the impact of different hyper-parameters and input matrices, offering insight into the model's sensitivity and effectiveness. Future work includes extending the model to pairwise objective functions and incorporating auxiliary data to handle sparsity and missing values.
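The main ingredients the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions, not the paper's implementation: the ReLU non-linearity, layer sizes, cosine-similarity scoring, and the maximum rating of 5 are all placeholders, and real training would backpropagate through the two towers rather than use random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_tower(x, weights):
    """Project an input (a user's rating row or an item's rating column)
    through successive non-linear layers into the shared latent space."""
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU non-linearity (an assumption)
    return h

def predict(u, v, eps=1e-8):
    """Score a user-item pair by the similarity of their latent vectors,
    clipped into (0, 1) so the score can feed a cross entropy loss."""
    s = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps)
    return np.clip(s, eps, 1.0 - eps)

def normalized_cross_entropy(r, p, max_rating=5.0):
    """Cross entropy where the explicit rating r is scaled into [0, 1]
    by the maximum rating; unobserved (implicit) entries use r = 0."""
    y = r / max_rating
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Toy interaction matrix: explicit ratings, 0 = unobserved.
R = np.array([[5.0, 0.0, 3.0],
              [0.0, 4.0, 0.0]])
user_weights = [rng.standard_normal((3, 8)), rng.standard_normal((8, 4))]
item_weights = [rng.standard_normal((2, 8)), rng.standard_normal((8, 4))]

u = mlp_tower(R[0], user_weights)     # latent vector for user 0
v = mlp_tower(R[:, 0], item_weights)  # latent vector for item 0
loss = normalized_cross_entropy(R[0, 0], predict(u, v))
```

Scaling the rating by its maximum is what lets a single loss treat a 5-star rating as a strong positive and an unobserved entry as a (soft) negative, which is the sense in which the loss combines explicit and implicit signals.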