2024 | Rui Zhang, Dawei Cheng, Jie Yang, Yi Ouyang, Xian Wu, Yefeng Zheng, Changjun Jiang
This paper addresses the challenge of medical insurance fraud detection, which is crucial in the healthcare industry. Traditional offline learning models struggle to adapt to evolving fraud patterns, leading to suboptimal performance. To tackle this issue, the authors propose Pre-trained Online Contrastive Learning (POCL), an online learning method that combines contrastive learning pre-training with online updating strategies. In the pre-training stage, POCL uses contrastive learning to extract deep features from historical data, yielding rich risk representations. In the online learning stage, it employs a Temporal Memory Aware Synapses (MAS) strategy to incrementally learn from and optimize on newly arriving data, ensuring timely adaptation to emerging fraud patterns while reducing forgetting of past knowledge.
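Since the method pairs contrastive pre-training with an MAS-style online update, a minimal sketch of those two ingredients may help clarify the mechanics. The code below is an illustrative PyTorch sketch of a standard InfoNCE-style contrastive loss and the generic Memory Aware Synapses importance weighting and penalty; the function names, the cross-entropy task loss, and the `reg_lambda` weight are assumptions for illustration, not the authors' actual Temporal MAS implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive pre-training loss: pull two views of the same claim
    together and push apart embeddings of different claims (InfoNCE-style)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def estimate_mas_importance(model, data_loader, device="cpu"):
    """MAS importance: average gradient magnitude of the squared L2 norm of
    the model output over past samples (labels are not needed)."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_samples = 0
    model.eval()
    for x, _ in data_loader:
        x = x.to(device)
        model.zero_grad()
        model(x).pow(2).sum().backward()            # d||f(x)||^2 / d theta
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_samples += x.size(0)
    return {n: imp / max(n_samples, 1) for n, imp in importance.items()}

def online_step(model, optimizer, x, y, importance, old_params, reg_lambda=1.0):
    """One online update on new claims: task loss plus an MAS penalty that
    keeps parameters important to past data close to their previous values
    (old_params are detached copies taken before the update)."""
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    for n, p in model.named_parameters():
        loss = loss + reg_lambda * (importance[n] * (p - old_params[n]).pow(2)).sum()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the importance weights computed on historical data act as per-parameter brakes during the online update, which is the general mechanism behind MAS-style mitigation of catastrophic forgetting.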
The model is evaluated on real-world insurance fraud datasets, where it achieves significantly higher accuracy than state-of-the-art baseline methods while requiring less running time and memory. The authors conduct extensive experiments and ablation studies to validate the effectiveness of each component of the POCL model, and a case study further illustrates the model's ability to remain robust and adapt to evolving fraud patterns.
Overall, POCL provides a robust solution for medical insurance fraud detection, offering high accuracy, efficiency, and adaptability to changing fraud patterns.