The paper introduces SmartFRZ, an efficient training framework that uses attention-based layer freezing to reduce computational cost and accelerate training. Layer freezing stops updating selected layers during training to save resources, but existing freezing methods often lack generalizability and sacrifice accuracy. SmartFRZ addresses these issues with an attention mechanism that automatically selects which layers to freeze based on their training history. The attention-based predictor is trained offline using a layer representational similarity method, which allows it to transfer across different datasets and network architectures. Experimental results show that SmartFRZ substantially reduces training time and computation while matching or improving accuracy relative to existing layer freezing methods. The framework is evaluated on both computer vision and natural language processing tasks, demonstrating its effectiveness across a range of scenarios.
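To make the core idea concrete, below is a minimal sketch of an attention-based freezing predictor, not the paper's actual architecture. It assumes a hypothetical `FreezePredictor` that applies self-attention over a short history of per-layer weight snapshots (compressed into fixed-size feature vectors by a made-up `layer_feature` helper) and outputs the probability that the layer has stabilized and can be frozen; the real feature construction, thresholds, and predictor design in SmartFRZ may differ.

```python
import torch
import torch.nn as nn


class FreezePredictor(nn.Module):
    """Toy attention-based predictor: given a history of per-layer weight
    features, output the probability that the layer can be frozen."""

    def __init__(self, feature_dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feature_dim, num_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(feature_dim, 1), nn.Sigmoid())

    def forward(self, history):                  # history: (batch, steps, feature_dim)
        attended, _ = self.attn(history, history, history)
        return self.head(attended[:, -1])        # read off the most recent step's context


def layer_feature(param, dim=64):
    """Compress a layer's weights into a fixed-size feature vector
    (naive down-sampling here; purely illustrative)."""
    flat = param.detach().flatten()
    idx = torch.linspace(0, flat.numel() - 1, dim).long()
    return flat[idx]


# Usage sketch: at a freezing decision point, stack recent weight snapshots of a
# layer and freeze it when the predicted probability exceeds a chosen threshold.
predictor = FreezePredictor()
layer = nn.Linear(128, 128)
history = torch.stack([layer_feature(layer.weight) for _ in range(8)]).unsqueeze(0)
if predictor(history).item() > 0.5:
    for p in layer.parameters():
        p.requires_grad = False                  # frozen layers skip gradient updates
```

Freezing a layer this way removes its parameters from backpropagation, which is where the training-time and computation savings described above come from; the attention over the snapshot history is what lets the decision depend on how the layer has been evolving rather than on a fixed schedule.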