26 Jul 2024 | Chang Yu, Yongshun Xu, Jin Cao, Ye Zhang, Yixin Jin, Mengran Zhu
This study explores the application of advanced Transformer models for credit card fraud detection, addressing the challenge of detecting fraudulent transactions in datasets where fraudulent samples make up only a small proportion. The authors, from various universities across the United States, carefully processed the data to ensure reliability and balanced the dataset to mitigate data sparsity. They compared the performance of their Transformer model against several widely adopted models, including Support Vector Machine (SVM), Random Forest, Neural Network, Logistic Regression, XGBoost, and TabNet, using metrics such as Precision, Recall, F1 Score, and ROC AUC.
The study begins with an introduction to the problem of credit card fraud, highlighting the importance of accurate fraud detection in the context of growing internet financial services. It reviews existing machine learning techniques, such as Neural Networks (NN) and XGBoost, and discusses the limitations of these methods in handling complex, high-dimensional data and long-range dependencies. The authors then introduce the Transformer model, emphasizing its ability to capture intricate patterns and relationships within data through its self-attention mechanism.
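The self-attention mechanism the paper emphasizes can be illustrated with a minimal scaled dot-product attention sketch. This is not the authors' architecture, just the generic operation: each input vector is projected to queries, keys, and values, and the output is an attention-weighted mix of the values. The weight matrices and dimensions below are arbitrary illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    X: (seq_len, d_model) inputs; Wq/Wk/Wv: (d_model, d_k) projections.
    Returns the attended values and the attention weight matrix.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise similarity logits
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Because every position attends to every other position, this operation captures the long-range dependencies that the paper argues tree-based and shallow models struggle with.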
The methodology section details the data processing pipeline, including resampling and preprocessing to address class imbalance, feature correlation analysis, outlier detection, and dimensionality reduction using techniques such as t-SNE, PCA, and Truncated SVD. The Transformer model architecture is then described, focusing on the self-attention mechanism and the feed-forward neural network.
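The paper does not specify its exact resampling procedure, so the sketch below uses simple random undersampling of the majority class followed by PCA, as one plausible instance of the balancing-plus-dimensionality-reduction step described above. The synthetic data and all parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def undersample(X, y, seed=0):
    """Balance a binary dataset by randomly undersampling the majority class."""
    rng = np.random.default_rng(seed)
    idx_pos = np.flatnonzero(y == 1)  # fraud (minority)
    idx_neg = np.flatnonzero(y == 0)  # legitimate (majority)
    keep_neg = rng.choice(idx_neg, size=len(idx_pos), replace=False)
    idx = np.concatenate([idx_pos, keep_neg])
    rng.shuffle(idx)
    return X[idx], y[idx]

# synthetic stand-in for a highly imbalanced transaction set (~2% fraud)
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 30))
y = (rng.random(1000) < 0.02).astype(int)

X_bal, y_bal = undersample(X, y)
X_red = PCA(n_components=10).fit_transform(X_bal)  # dimensionality reduction
print(X_bal.shape, y_bal.mean(), X_red.shape)
```

Oversampling techniques such as SMOTE are a common alternative when discarding majority-class samples is too costly.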
The evaluation section presents the results of the experiments, showing that the Transformer model outperforms other models in terms of Precision, Recall, F1 Score, and ROC AUC. Cross-validation using data from 2013 and 2023 further confirms the model's stability and generalization ability. The authors conclude that the Transformer model is a promising solution for credit card fraud detection, offering superior performance and potential for further development in financial security and other critical areas.
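The four metrics used in the evaluation can be computed with scikit-learn as sketched below. This is not the paper's experiment; the logistic regression baseline and the synthetic imbalanced dataset are stand-in assumptions to show how Precision, Recall, F1, and ROC AUC are obtained from predictions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# synthetic imbalanced data standing in for transaction features (~5% positive)
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)                 # hard labels for P/R/F1
proba = clf.predict_proba(X_te)[:, 1]    # scores for ROC AUC

print(f"Precision: {precision_score(y_te, pred):.3f}")
print(f"Recall:    {recall_score(y_te, pred):.3f}")
print(f"F1:        {f1_score(y_te, pred):.3f}")
print(f"ROC AUC:   {roc_auc_score(y_te, proba):.3f}")
```

Note that ROC AUC is computed from predicted probabilities rather than hard labels, which is why fraud-detection comparisons like the paper's typically report it alongside the threshold-dependent metrics.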