This survey provides a comprehensive overview of poisoning attacks on recommender systems and their countermeasures. Poisoning attacks inject malicious data into a recommender system's training process to manipulate its recommendations. They fall into two categories: model-agnostic attacks, which apply to any recommender system, and model-intrinsic attacks, which are tailored to specific types of systems. The survey presents a novel taxonomy of poisoning attacks, formalises its dimensions, and organises 30+ attacks described in the literature; it also reviews 40+ countermeasures for detecting and preventing such attacks, evaluating their effectiveness against specific attack types. Building on this taxonomy, the survey gives a formal view of the associated threat model, covering the adversary's goal, knowledge, capabilities, and impact, and argues that understanding these dimensions is essential for designing robust models and for protecting recommender systems against malicious manipulation. It further discusses the challenges of securing recommender systems against poisoning, including openness, concept drift, and imbalanced data; examines application perspectives in e-commerce, social media, and news recommendation; and outlines open issues and impactful directions for future research. A rich repository of resources associated with poisoning attacks is available at https://github.com/tamlhp/awesome-recsys-poisoning.
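As a concrete illustration of the model-agnostic attacks surveyed above, the sketch below shows a classic shilling-style "push" attack: an adversary appends fake user profiles to the training ratings, each giving the target item the maximum rating plus a few filler ratings to mimic genuine users. All names, sizes, and parameters here (`n_fake`, `n_filler`, the target item, the rating scale) are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical genuine user-item rating matrix (0 = unrated), ratings in 1..5.
n_users, n_items = 100, 50
ratings = rng.integers(0, 6, size=(n_users, n_items)).astype(float)

target_item = 7   # item the attacker wants promoted (assumed for illustration)
n_fake = 10       # number of attacker-controlled fake profiles
n_filler = 5      # filler items rated per fake profile to look "normal"

fake_profiles = np.zeros((n_fake, n_items))
other_items = np.delete(np.arange(n_items), target_item)
for row in fake_profiles:
    # Filler items get ordinary-looking ratings to evade simple detection.
    filler = rng.choice(other_items, size=n_filler, replace=False)
    row[filler] = rng.integers(1, 6, size=n_filler)
    # The target item always receives the maximum rating (a "push" attack).
    row[target_item] = 5.0

# The poisoned training set is the genuine data plus the injected profiles.
poisoned = np.vstack([ratings, fake_profiles])

def mean_rating(matrix: np.ndarray, item: int) -> float:
    """Mean observed (non-zero) rating of one item."""
    col = matrix[:, item]
    return float(col[col > 0].mean())

print(f"target item mean before attack: {mean_rating(ratings, target_item):.2f}")
print(f"target item mean after attack:  {mean_rating(poisoned, target_item):.2f}")
```

Because the injected profiles never touch the recommender's internals, this attack is model-agnostic in the survey's sense; the model-intrinsic attacks it catalogues instead exploit knowledge of a specific algorithm (e.g. matrix factorisation gradients) to craft far fewer, more effective fake ratings.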