**Trigrams'n'Tags (TnT)** is an efficient statistical part-of-speech tagger developed by Thorsten Brants. The paper argues that TnT, based on Markov models, performs at least as well as other current approaches, including the Maximum Entropy framework, and even outperforms it in some tests. The model uses second-order Markov models where states represent tags and outputs represent words. Transition probabilities depend on pairs of tags, while output probabilities depend only on the most recent tag. The paper details the smoothing techniques used to handle sparse data and the handling of unknown words through suffix analysis. The tagger also incorporates capitalization information and uses beam search to reduce processing time. Evaluations on the NEGRA corpus and the Penn Treebank show high accuracy, with reliable assignments achieving over 99% accuracy. The paper concludes that TnT, despite its simplicity, yields state-of-the-art results and is freely available for research purposes.
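The core model described above can be illustrated with a minimal sketch: a second-order (trigram) HMM tagger whose transition probabilities P(t_i | t_{i-2}, t_{i-1}) are smoothed by linear interpolation of unigram, bigram, and trigram estimates, and whose output probabilities P(w_i | t_i) depend only on the current tag, decoded with Viterbi over tag pairs. The toy corpus, tag set, and fixed interpolation weights are illustrative assumptions; TnT itself estimates the weights by deleted interpolation and adds suffix-based unknown-word handling and beam search, which are omitted here.

```python
from collections import defaultdict

# Toy trigram-HMM POS tagger in the spirit of TnT (not Brants's implementation).
# Corpus, tag set, and interpolation weights below are illustrative assumptions.
corpus = [
    [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
    [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
    [("the", "DET"), ("cat", "NOUN"), ("barks", "VERB")],
]

uni, bi, tri, emit = defaultdict(int), defaultdict(int), defaultdict(int), defaultdict(int)
n_tokens = 0
for sent in corpus:
    tags = ["<s>", "<s>"] + [t for _, t in sent]  # pad with sentence-start markers
    for w, t in sent:
        emit[(t, w)] += 1
    for i in range(2, len(tags)):
        uni[tags[i]] += 1
        bi[(tags[i - 1], tags[i])] += 1
        tri[(tags[i - 2], tags[i - 1], tags[i])] += 1
        n_tokens += 1

# Fixed interpolation weights (assumed; TnT estimates these by deleted interpolation).
L1, L2, L3 = 0.1, 0.3, 0.6

def p_trans(t2, t1, t):
    """Smoothed P(t | t2, t1) = L1*P(t) + L2*P(t|t1) + L3*P(t|t2,t1)."""
    p_uni = uni[t] / n_tokens
    p_bi = bi[(t1, t)] / uni[t1] if uni.get(t1) else 0.0
    p_tri = tri[(t2, t1, t)] / bi[(t2, t1)] if bi.get((t2, t1)) else 0.0
    return L1 * p_uni + L2 * p_bi + L3 * p_tri

def viterbi(words):
    """Decode the best tag sequence; states are tag pairs (t_{i-1}, t_i)."""
    tagset = list(uni)
    V = {("<s>", "<s>"): (1.0, [])}  # state -> (probability, tag sequence)
    for w in words:
        nxt = {}
        for (t2, t1), (p, seq) in V.items():
            for t in tagset:
                pe = emit[(t, w)] / uni[t]  # emission P(w | t)
                if pe == 0.0:
                    continue  # no unknown-word model in this sketch
                np = p * p_trans(t2, t1, t) * pe
                if np > nxt.get((t1, t), (0.0, None))[0]:
                    nxt[(t1, t)] = (np, seq + [t])
        V = nxt
    return max(V.values(), key=lambda x: x[0])[1]

print(viterbi("the dog sleeps".split()))  # a tag sequence such as ['DET', 'NOUN', 'VERB']
```

Note that the interpolated transition probability is always nonzero for tags seen in training (the unigram term guarantees it), which is exactly why TnT's smoothing handles sparse trigram counts gracefully; the sketch still fails on unseen words, the gap TnT closes with its suffix analysis.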