This paper introduces collective matrix factorization, a method for relational learning that simultaneously factors multiple matrices, sharing parameters among factors whenever an entity participates in more than one relation. The approach allows nonlinear relationships between parameters and outputs, using Bregman divergences to measure error. It extends standard alternating-projection algorithms, deriving an efficient Newton update for each projection step, and proposes stochastic optimization methods to handle large, sparse matrices. The model generalizes several existing matrix factorization methods, yields new large-scale optimization algorithms for these problems, and can handle any pairwise relational schema together with a wide variety of error models. The paper demonstrates both the efficiency of the algorithms and the benefit of sharing parameters among relations.
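To make the parameter sharing concrete, here is a minimal sketch of two jointly factored relations in NumPy, assuming squared loss and gradient descent (the paper's full model allows general Bregman divergences and links, and alternating Newton projections). The entity types, sizes, and step sizes below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, n_genres, k = 30, 20, 5, 4

# Two relations sharing the "movie" entity type (synthetic data)
X = rng.random((n_users, n_movies))   # users x movies (e.g. ratings)
Y = rng.random((n_movies, n_genres))  # movies x genres

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_movies, k))  # shared movie factor
Z = rng.normal(scale=0.1, size=(n_genres, k))

def total_loss(U, V, Z):
    return np.sum((U @ V.T - X) ** 2) + np.sum((V @ Z.T - Y) ** 2)

lr, reg = 0.01, 0.1
loss_before = total_loss(U, V, Z)
for _ in range(500):
    Ex = U @ V.T - X  # residual of the ratings relation
    Ey = V @ Z.T - Y  # residual of the genre relation
    U_new = U - lr * (Ex @ V + reg * U)
    Z_new = Z - lr * (Ey.T @ V + reg * Z)
    # The shared factor V receives gradient contributions from BOTH relations
    V_new = V - lr * (Ex.T @ U + Ey @ Z + reg * V)
    U, V, Z = U_new, V_new, Z_new
loss_after = total_loss(U, V, Z)
```

The key line is the update of `V`: because the movie factor appears in both factorizations, evidence from the genre relation influences rating predictions and vice versa, which is the mechanism behind the parameter sharing described above.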
The paper applies collective matrix factorization to relational data, which consists of entities and the relations between them. It presents a unified view of matrix factorization, showing how different models arise from choices of prediction link, loss function, and regularizers; it also discusses Bregman divergences for modeling error and extends matrix factorization to relational schemas with multiple relations.
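To illustrate the role of the prediction link, here is a hedged single-matrix sketch: when the loss is the Bregman divergence matched to the link `f`, the gradient with respect to the parameters `U @ V.T` reduces to `f(U @ V.T) - X`, so one gradient-descent rule covers, e.g., squared loss (identity link) and logistic loss (sigmoid link). The function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def factorize(X, link, k=4, lr=0.01, reg=0.1, iters=300, seed=0):
    """Factor X ~ link(U @ V.T) under the Bregman loss matched to `link`."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    for _ in range(iters):
        E = link(U @ V.T) - X  # matched-loss gradient w.r.t. U @ V.T
        U, V = U - lr * (E @ V + reg * U), V - lr * (E.T @ U + reg * V)
    return U, V

rng = np.random.default_rng(1)
B = (rng.random((20, 15)) > 0.5).astype(float)  # binary relation

# Identity link + squared loss: an SVD-like factorization
U1, V1 = factorize(B, lambda t: t)
# Sigmoid link + log loss: a logistic-PCA-style factorization
U2, V2 = factorize(B, lambda t: 1.0 / (1.0 + np.exp(-t)))
P = 1.0 / (1.0 + np.exp(-(U2 @ V2.T)))  # sigmoid predictions lie in (0, 1)
```

Swapping the `link` argument is the only change needed to move between models, which is the point of the unified view: the link, loss, and regularizer are independent design choices layered on one factorization template.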
The paper presents experiments on movie rating prediction, showing that collective matrix factorization improves prediction accuracy compared to using a single matrix. It also discusses the use of stochastic approximation for optimizing collective matrix factorization, showing that it provides an efficient alternative to Newton updates in the alternating projections algorithm. The paper compares collective matrix factorization to pLSI-pHITS, showing that the additional flexibility of collective matrix factorization can lead to better results in some cases.
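The stochastic-approximation idea can be sketched as per-entry SGD over the observed cells of a sparse matrix, avoiding the full Newton systems of the alternating-projection updates. The sampling scheme, learning rate, and synthetic data below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 50, 40, 5
U_true = rng.normal(size=(m, k))
V_true = rng.normal(size=(n, k))
X = U_true @ V_true.T  # synthetic low-rank ground truth

# Keep only a sparse set of observed entries, as (row, col) index pairs
obs = np.argwhere(rng.random((m, n)) < 0.3)

U = rng.normal(scale=0.1, size=(m, k))
V = rng.normal(scale=0.1, size=(n, k))

def rmse(U, V):
    err = np.array([U[i] @ V[j] - X[i, j] for i, j in obs])
    return np.sqrt(np.mean(err ** 2))

lr = 0.02
rmse_before = rmse(U, V)
for _ in range(30):        # epochs
    rng.shuffle(obs)       # visit observed entries in random order
    for i, j in obs:
        e = U[i] @ V[j] - X[i, j]  # residual at a single cell
        grad_u = e * V[j]          # compute both gradients before updating
        grad_v = e * U[i]
        U[i] -= lr * grad_u
        V[j] -= lr * grad_v
rmse_after = rmse(U, V)
```

Each update touches only one row of `U` and one row of `V`, so the cost per step is O(k) regardless of matrix size; this is what makes the stochastic route attractive for large, sparse relations compared with a full Newton projection.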