This paper introduces a novel neural network architecture for sequence labeling tasks, specifically part-of-speech (POS) tagging and named entity recognition (NER). The architecture combines bidirectional LSTMs (BLSTMs), convolutional neural networks (CNNs), and conditional random fields (CRFs) to automatically exploit both word- and character-level representations. The system is trained end-to-end, requiring no feature engineering or data preprocessing, which makes it applicable to a wide range of sequence labeling tasks. The model is evaluated on the Penn Treebank WSJ corpus for POS tagging and the CoNLL 2003 corpus for NER, achieving state-of-the-art performance: 97.55% accuracy for POS tagging and 91.21% F1 for NER. The contributions of the work are a novel neural network architecture, empirical evaluations on two benchmark datasets, and state-of-the-art results with a truly end-to-end system.
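The described pipeline (a character-level CNN feeding word representations into a BLSTM, topped by a tag-scoring layer) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all class names and hyperparameters here are assumptions, and the paper's final CRF layer is simplified to a per-token linear projection to keep the example short.

```python
# Hypothetical sketch of a BLSTM-CNN tagger in PyTorch. Hyperparameters are
# illustrative; the paper's CRF output layer is reduced to a linear projection.
import torch
import torch.nn as nn

class BLSTMCNNTagger(nn.Module):
    def __init__(self, word_vocab, char_vocab, n_tags,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # CNN over the characters of each word, max-pooled to a fixed-size vector
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        # BLSTM over the concatenated word-level + character-level representations
        self.blstm = nn.LSTM(word_dim + char_filters, hidden,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, max_word_len)
        b, t, w = chars.shape
        c = self.char_emb(chars).view(b * t, w, -1).transpose(1, 2)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values.view(b, t, -1)
        x = torch.cat([self.word_emb(words), c], dim=-1)
        h, _ = self.blstm(x)
        return self.out(h)  # (batch, seq_len, n_tags): per-token tag scores

model = BLSTMCNNTagger(word_vocab=1000, char_vocab=50, n_tags=9)
scores = model(torch.randint(0, 1000, (2, 7)),
               torch.randint(0, 50, (2, 7, 12)))
```

In the full model, the per-token scores would instead feed a CRF layer that models dependencies between adjacent tags and decodes the best tag sequence jointly.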