This paper addresses the challenge of incomplete multi-view partial multi-label classification (iMvPLMC) by proposing a novel attention-induced embedding imputation technique. The authors aim to enhance the generalization ability of models by approximating the latent features of missing instances in the embedding space, rather than recovering missing views in kernel space or original feature space. The proposed method, called Attention-Induced IMputation Network (AIMNet), consists of two branches: a multi-view feature extraction module and a graph attention network (GAT) based label semantic feature extraction module. The GAT module learns label semantic features by aggregating inter-instance attention scores, while the multi-view feature extraction module completes missing instances in the embedding space using cross-view joint attention. The completed features are dynamically weighted by confidence derived from joint attention in the late fusion phase. Extensive experiments on five datasets confirm the effectiveness and superiority of AIMNet in handling incomplete multi-view and partial multi-label data. The method outperforms existing state-of-the-art approaches, demonstrating its robustness and adaptability.
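To make the core idea of cross-view attention imputation concrete, the following is a minimal sketch (not the authors' implementation): attention scores are computed among samples in an *available* view, then transferred to combine the *present* embeddings of the missing view. The function name `impute_missing` and the scaled dot-product scoring are illustrative assumptions; AIMNet's actual joint-attention and confidence-weighting mechanisms are more involved.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def impute_missing(z_avail, z_target, missing_idx):
    """Sketch of cross-view attention imputation (hypothetical helper).

    z_avail:     (n, d) embeddings of all n samples in an available view
    z_target:    (n, d) embeddings in the target view; rows at missing_idx
                 correspond to missing instances and are not used
    missing_idx: indices of samples whose target-view instance is missing
    Returns imputed target-view embeddings for the missing samples.
    """
    present = np.setdiff1d(np.arange(len(z_avail)), missing_idx)
    # inter-instance attention scores computed in the available view
    # (scaled dot product between missing samples and present samples)
    scores = z_avail[missing_idx] @ z_avail[present].T / np.sqrt(z_avail.shape[1])
    attn = softmax(scores, axis=1)
    # transfer attention weights to the target view: each imputed embedding
    # is a convex combination of the present target-view embeddings
    return attn @ z_target[present]
```

Because the output is a convex combination of observed embeddings, the imputed features stay inside the span of real instances in the embedding space, which is the motivation the abstract gives for imputing there rather than in the original feature space.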