Learning with Logical Constraints but without Shortcut Satisfaction


2023 | Zenan Li, Zehua Liu, Yuan Yao, Jingwei Xu, Taolue Chen, Xiaoxing Ma, Jian Lü
This paper proposes a new framework for learning with logical constraints that avoids shortcut satisfaction. The key idea is to encode each logical constraint as a distributional loss that is compatible with the original training loss and monotonic with respect to logical entailment. The framework introduces dual variables for logical connectives to capture how a constraint is satisfied, giving a more faithful representation of logical satisfaction than a single scalar penalty. Built on a variational formulation, the constraint loss and the original training loss are optimized jointly, with guarantees of compatibility and convergence.

Theoretical analysis establishes these desirable properties, and experiments demonstrate superior performance in both model generalizability and constraint satisfaction. The method is evaluated on handwritten digit recognition, handwritten formula recognition, shortest-distance prediction, and image classification, showing consistent improvements over existing approaches. By explicitly modeling which satisfying assignment of a constraint applies to each input, the framework avoids shortcut satisfaction, improves the stability of numerical computation, and yields a more interpretable model in which the satisfaction degree of individual atomic formulas can be inspected and controlled.
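To make the joint-training idea concrete, below is a minimal sketch, not the authors' implementation: a model is updated on its task loss plus a constraint loss weighted by learnable "dual" variables, one per disjunct (satisfying assignment) of the constraint, and those duals are updated in an alternating step. The class name `ConstrainedTrainer`, the toy two-disjunct constraint in `disjunct_losses`, and all learning rates are illustrative assumptions; the paper's actual distributional loss and variational treatment are not reproduced here.

```python
# Minimal sketch (not the authors' code) of alternating primal-dual training:
# the model is updated on the task loss plus a weighted constraint loss, and
# learnable "dual" weights over the constraint's disjuncts record which
# satisfying assignment is pursued for the current data.
import torch
import torch.nn.functional as F


class ConstrainedTrainer:
    def __init__(self, model, num_disjuncts=2, lr=1e-3, dual_lr=1e-2):
        self.model = model
        # One variable per disjunct; a softmax turns them into a distribution
        # over the constraint's satisfying assignments.
        self.duals = torch.zeros(num_disjuncts, requires_grad=True)
        self.opt = torch.optim.Adam(model.parameters(), lr=lr)
        self.dual_opt = torch.optim.Adam([self.duals], lr=dual_lr)

    def disjunct_losses(self, logits, y):
        # Toy constraint with two satisfying assignments:
        # "predict the labelled class" OR "predict the next class".
        # Purely illustrative -- a real constraint comes from the task's logic.
        loss_a = F.cross_entropy(logits, y)
        loss_b = F.cross_entropy(logits, (y + 1) % logits.size(1))
        return torch.stack([loss_a, loss_b])

    def step(self, x, y):
        # Primal step: update the model on task loss + weighted constraint loss.
        logits = self.model(x)
        task_loss = F.cross_entropy(logits, y)
        weights = torch.softmax(self.duals, dim=0).detach()
        loss = task_loss + (weights * self.disjunct_losses(logits, y)).sum()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

        # Dual step: shift weight towards the disjunct the model can actually
        # satisfy on these inputs, instead of forcing one global shortcut.
        with torch.no_grad():
            violations = self.disjunct_losses(self.model(x), y)
        dual_loss = (torch.softmax(self.duals, dim=0) * violations).sum()
        self.dual_opt.zero_grad()
        dual_loss.backward()
        self.dual_opt.step()
        return loss.item()
```

In this sketch the dual step softly selects, per batch, the satisfying assignment with the lowest violation, which is one way to read "capturing how a constraint is satisfied"; the paper's formulation additionally ensures compatibility with the task loss and convergence, which this toy loop does not.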