This paper proposes a method for class-incremental learning (CIL) of multi-label audio classification tasks, focusing on potentially overlapping sounds. The authors design an incremental learner that independently learns new classes while preserving knowledge of old classes. They introduce a cosine similarity-based distillation loss to minimize the discrepancy between feature representations and a Kullback-Leibler divergence-based distillation loss to minimize the discrepancy between outputs. The method is evaluated on a dataset of 50 sound classes, using an initial classification task with 30 base classes followed by four incremental phases of 5 classes each. The proposed method achieves an average F1-score of 40.9% over the five phases, with only a 0.7 percentage point degradation from the initial F1-score of 45.2%. The results demonstrate that the method effectively balances plasticity and stability, making it suitable for sequential sound classification tasks.
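
To make the two distillation terms concrete, the sketch below shows one way they could be implemented, assuming a PyTorch setup in which a frozen copy of the previous-phase model (the teacher) supervises the current model (the student). This is an illustrative reconstruction from the description above, not the authors' code; all names (`feat_student`, `logits_teacher`, etc.) are hypothetical, and the Bernoulli form of the KL term is an assumption motivated by the multi-label (sigmoid) output setting.

```python
import torch
import torch.nn.functional as F

def distillation_losses(feat_student: torch.Tensor,
                        feat_teacher: torch.Tensor,
                        logits_student: torch.Tensor,
                        logits_teacher: torch.Tensor,
                        eps: float = 1e-8):
    """Sketch of the two distillation terms described in the summary.

    feat_*:   (batch, dim) feature embeddings from the current (student)
              and frozen previous-phase (teacher) models.
    logits_*: (batch, n_old_classes) logits for the old classes only.
    """
    # Cosine similarity-based feature distillation: penalize angular
    # drift of the student's features away from the teacher's.
    cos = F.cosine_similarity(feat_student, feat_teacher, dim=-1)
    loss_feat = (1.0 - cos).mean()

    # KL divergence-based output distillation. With sigmoid outputs,
    # each class defines a Bernoulli distribution, so we sum the
    # per-class Bernoulli KL(teacher || student).
    p_t = torch.sigmoid(logits_teacher).clamp(eps, 1 - eps)
    p_s = torch.sigmoid(logits_student).clamp(eps, 1 - eps)
    kl = p_t * torch.log(p_t / p_s) + (1 - p_t) * torch.log((1 - p_t) / (1 - p_s))
    loss_out = kl.sum(dim=-1).mean()

    return loss_feat, loss_out
```

In a CIL training loop, these terms would typically be added to the standard multi-label classification loss on the new-phase data, e.g. `loss = bce_new + lam_feat * loss_feat + lam_out * loss_out`, where the weights `lam_feat` and `lam_out` (hypothetical names) control the stability-plasticity trade-off highlighted in the results.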