This study explores the use of large language models (LLMs), particularly generative pre-trained transformers (GPT), to enhance materials language processing (MLP) tasks. MLP aims to automate the extraction of structured data from research papers, accelerating materials science research. Traditional MLP models often require complex architectures, extensive fine-tuning, and large labeled datasets. In contrast, GPT models can achieve high performance in text classification, named entity recognition (NER), and extractive question answering (QA) with limited data and few-shot learning. GPT models prove effective at classifying papers, recognizing named entities, and answering questions related to materials science, even with minimal training data. This approach reduces researchers' workload by automating initial labeling and verifying human annotations. The study also highlights the reliability and generative properties of GPT models, making them valuable tools for materials scientists, especially those with limited machine-learning expertise. However, limitations such as overconfidence and the need for domain-specific prompts are noted, underscoring the importance of continuous monitoring and adaptation. Overall, the integration of GPT models into MLP tasks represents a significant advance, opening new avenues for knowledge extraction from the materials science literature.