The paper introduces the Diffusion Language-Shapelets (DiffShape) model for semi-supervised time series classification, addressing the scarcity of labeled data while improving model interpretability. DiffShape combines two mechanisms: self-supervised diffusion learning and contrastive language-shapelets learning. The self-supervised diffusion learning mechanism uses real subsequences as conditions to guide shapelet generation, making the generated shapelets more similar to the original subsequences. The contrastive language-shapelets learning mechanism leverages natural language descriptions of the time series to improve the discriminability of the generated shapelets. Extensive experiments on the UCR time series archive demonstrate that DiffShape outperforms existing methods in both classification performance and interpretability. Ablation studies and visualization analyses further confirm that both mechanisms contribute significantly to its success.
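To make the contrastive language-shapelets idea concrete, the core alignment objective can be sketched as an InfoNCE-style loss that pulls each generated shapelet's embedding toward the embedding of its matching language description and pushes it away from the descriptions of other classes. This is a minimal illustration, not the paper's exact formulation: the function names, the temperature value, and the use of plain NumPy are assumptions made for clarity.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_loss(shapelet_emb, text_emb, temperature=0.1):
    """InfoNCE-style loss: row i of shapelet_emb is the positive pair of
    row i of text_emb; all other rows act as negatives (hypothetical sketch)."""
    s = l2_normalize(shapelet_emb)
    t = l2_normalize(text_emb)
    logits = s @ t.T / temperature  # (n, n) cosine-similarity matrix
    # Cross-entropy with the diagonal entries as the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
shapelets = rng.normal(size=(4, 16))                  # 4 shapelet embeddings
aligned_texts = shapelets + 0.05 * rng.normal(size=(4, 16))  # matching descriptions
random_texts = rng.normal(size=(4, 16))               # unrelated descriptions
print(contrastive_loss(shapelets, aligned_texts))  # low: pairs agree
print(contrastive_loss(shapelets, random_texts))   # higher: no alignment
```

Minimizing this loss makes shapelets whose language descriptions differ map to distant embeddings, which is one way the text signal can sharpen the discriminability of the generated shapelets.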