26 Jul 2024 | Chang Yu, Yongshun Xu, Jin Cao, Ye Zhang, Yixin Jin, Mengran Zhu
This study presents an innovative application of the Transformer model for credit card fraud detection, aiming to improve the accuracy and efficiency of fraud detection systems. With the increasing prevalence of online and mobile payment systems, credit card fraud has become a significant threat to financial security. The research focuses on leveraging the latest Transformer models to achieve more robust and precise fraud detection. To ensure data reliability, the dataset was meticulously processed, with a focus on balancing the classes, since fraudulent transactions are extremely sparse relative to legitimate ones. Features highly correlated with the fraud label were selected to strengthen the training process.
The study compares the performance of the Transformer model with several widely used models, including Support Vector Machine (SVM), Random Forest, Neural Network, Logistic Regression, XGBoost, and TabNet, using metrics such as Precision, Recall, F1 score, and ROC AUC. The results demonstrate that the Transformer model not only excels in traditional applications but also shows great potential in niche areas like fraud detection, offering a substantial advancement in the field.
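To make the comparison concrete, the four metrics above can be computed directly from a model's binary predictions and scores. The sketch below is illustrative, not taken from the paper; in practice one would use a library such as scikit-learn, and all function names here are hypothetical.

```python
# Illustrative sketch: the evaluation metrics used to compare the models.
# Positive class (label 1) denotes fraud.

def precision_recall_f1(y_true, y_pred):
    """Precision, Recall, and F1 for the positive (fraud) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def roc_auc(y_true, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation: the
    probability that a random fraud case outranks a random legitimate one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Precision and recall matter more than raw accuracy here: with fraud rates well below 1%, a model that predicts "legitimate" for everything is highly accurate yet useless.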
The dataset used consists of European credit card transaction data, comprising over 550,000 transaction records through 2023. The data was processed to address class imbalance, using resampling, feature-correlation analysis, outlier detection, and dimensionality-reduction techniques such as t-SNE, PCA, and Truncated SVD.
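The resampling step can be sketched as random undersampling of the majority class, one common way to balance a fraud dataset. This is a minimal illustration only; the paper does not specify the exact resampling scheme, and the function and variable names are hypothetical.

```python
# Illustrative sketch: balancing a heavily skewed fraud dataset by
# randomly undersampling the majority (legitimate) class so that both
# classes contribute equally to training.
import random

def undersample(records, labels, seed=42):
    """Keep all fraud cases (label 1) plus an equal-sized random
    sample of legitimate cases (label 0)."""
    rng = random.Random(seed)
    fraud = [r for r, y in zip(records, labels) if y == 1]
    legit = [r for r, y in zip(records, labels) if y == 0]
    legit_sample = rng.sample(legit, k=min(len(fraud), len(legit)))
    balanced = [(r, 1) for r in fraud] + [(r, 0) for r in legit_sample]
    rng.shuffle(balanced)  # avoid blocks of a single class
    return balanced
```

An alternative is to oversample the minority class (e.g. SMOTE), which keeps all legitimate transactions at the cost of synthetic fraud examples; undersampling is shown here only because it is the simplest to state.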
The Transformer model was implemented with a self-attention mechanism and a feed-forward neural network. Its performance was evaluated on both the 2023 and 2013 datasets, showing significant improvements in Precision, Recall, F1 score, and ROC AUC over the other models. The results indicate that the Transformer maintains high precision and recall across different time periods, demonstrating strong generalization ability and stability.
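The self-attention mechanism at the core of that architecture computes Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The sketch below shows a single head in pure Python for clarity; the weight matrices and dimensions are hypothetical, and a real implementation would use a deep-learning framework with learned parameters, multiple heads, and the feed-forward sublayer.

```python
# Illustrative sketch: single-head scaled dot-product self-attention,
# the building block of the Transformer encoder described above.
import math

def matmul(a, b):
    """Naive matrix product of two list-of-lists matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    """Numerically stable softmax over one row of attention scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x, wq, wk, wv):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d_k = len(k[0])
    scores = [[sum(qi * ki for qi, ki in zip(qr, kr)) / math.sqrt(d_k)
               for kr in k] for qr in q]
    weights = [softmax(row) for row in scores]  # each row sums to 1
    return matmul(weights, v)
```

Because every position attends to every other position, the output for one transaction feature can depend on all the others at once, which is the mechanism behind the paper's claim about capturing complex feature interactions and long-range dependencies.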
The study concludes that the Transformer model is an ideal choice for addressing complex fraud detection problems and provides strong support for its application in other domains. The results validate the Transformer's ability to capture complex feature interactions and long-range dependencies, enabling it to maintain consistently high performance on data from different time periods. The research contributes innovative methods that advance the state of the art in credit card fraud detection, offering more effective solutions for the industry.