This study focuses on enhancing sentiment prediction in code-mixed tweets that combine English and Roman Urdu. The authors apply transformer-based models, namely ELECTRA, cm-BERT, and mBART, to address the challenges of syntactic ambiguity and semantic interpretation in code-mixed text. The research aims to improve sentiment prediction accuracy for low-resource languages such as Urdu, Arabic, and Hindi. The study uses the MultiSenti dataset, which includes tweets from Pakistan's 2019 general election, and tunes decoding hyperparameters such as temperature, top-k, and top-p to improve model performance. The results show that mBART outperforms ELECTRA and cm-BERT in sentiment prediction, achieving an overall F1-score of 0.73. In addition, topic modeling with Latent Dirichlet Allocation (LDA) is used to uncover shared characteristics and patterns across the sentiment classes, contributing to a deeper understanding of the data. The study concludes by discussing future work, including the application of more advanced transformers to handle more diverse and informal texts in multiple languages.
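To make the classification setup concrete, here is a minimal sketch of scoring a code-mixed tweet with a sequence-classification head. It assumes the Hugging Face transformers library; the checkpoint name, the three-way label set, and the example tweet are illustrative, not the authors' exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "facebook/mbart-large-50"  # one plausible mBART checkpoint (assumption)
labels = ["negative", "neutral", "positive"]  # illustrative label set

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Attaches a freshly initialized classification head on top of mBART;
# in practice this head would be fine-tuned on the MultiSenti training split.
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels)
)

# Hypothetical code-mixed (English + Roman Urdu) tweet.
tweet = "election ka result acha nahi tha, very disappointing"
inputs = tokenizer(tweet, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, num_labels]
print(labels[logits.argmax(dim=-1).item()])
```

The same pattern applies to the ELECTRA and cm-BERT baselines by swapping the checkpoint name.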
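The temperature, top-k, and top-p hyperparameters named above are decoding controls; they matter when a seq2seq model such as mBART is run generatively. The sketch below shows how they are typically passed to Hugging Face's generate; the checkpoint, prompt, and values are illustrative assumptions, not the authors' settings.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "facebook/mbart-large-50"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("siyasat par ek tweet likho", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,    # <1 sharpens the next-token distribution, >1 flattens it
    top_k=50,           # sample only from the 50 most likely tokens
    top_p=0.9,          # nucleus sampling: smallest token set with cumulative prob >= 0.9
    max_new_tokens=10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```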
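For the topic-modeling step, a minimal LDA sketch using scikit-learn is shown below. The example tweets, vectorizer settings, and topic count are hypothetical; the study applies this kind of analysis per sentiment class on the MultiSenti data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical code-mixed tweets from one sentiment class.
tweets = [
    "election ka din bohat exciting tha",
    "results announce hone tak sab tense the",
    "naya government se expectations high hain",
    "polling station par long queues thi",
]

vectorizer = CountVectorizer()                 # bag-of-words term counts
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words for each discovered topic.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"Topic {idx}: {', '.join(top)}")
```

Comparing the top words per topic across the positive, neutral, and negative subsets is what surfaces the shared characteristics and patterns the study reports.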