GPT Understands, Too

25 Oct 2023 | Xiao Liu1*, Yanan Zheng1*, Zhengxiao Du1, Ming Ding1, Yujie Qian2, Zhilin Yang1†, Jie Tang1†
The paper introduces P-Tuning, a method that combines trainable continuous prompt embeddings with discrete prompts to improve performance and training stability on natural language understanding (NLU) tasks. The authors observe that manual discrete prompts often yield unstable results, with even a single-word change in the prompt causing a significant drop in performance. P-Tuning addresses this by training continuous prompts that are concatenated with the discrete prompts, which stabilizes training and improves results on NLU benchmarks such as LAMA and SuperGLUE. The method is effective for both frozen and tuned language models, and it performs well in both fully-supervised and few-shot settings. The paper's experiments show significant improvements over existing methods in terms of both performance and stability.
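To make the core idea concrete, below is a minimal sketch of how continuous prompt embeddings can be learned and concatenated with the embeddings of discrete prompt tokens before being fed to a (possibly frozen) language model. It assumes a PyTorch backbone that accepts precomputed input embeddings (e.g., an `inputs_embeds`-style argument); the class and parameter names (`PromptEncoder`, `n_prompt_tokens`, `build_inputs`) are illustrative, not the authors' released code.

```python
# Sketch of the P-Tuning idea: trainable continuous prompts, reparameterized
# through an LSTM + MLP, concatenated with discrete-token embeddings.
# Names and sizes are illustrative assumptions, not the paper's exact code.
import torch
import torch.nn as nn


class PromptEncoder(nn.Module):
    """Produces continuous prompt embeddings from trainable pseudo-token
    embeddings, passed through a bidirectional LSTM and a small MLP."""

    def __init__(self, n_prompt_tokens: int, hidden_size: int):
        super().__init__()
        self.register_buffer("ids", torch.arange(n_prompt_tokens))
        self.embedding = nn.Embedding(n_prompt_tokens, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2,
                            batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self) -> torch.Tensor:
        x = self.embedding(self.ids).unsqueeze(0)  # (1, n_prompt, hidden)
        x, _ = self.lstm(x)
        return self.mlp(x)                          # (1, n_prompt, hidden)


def build_inputs(discrete_embeds: torch.Tensor,
                 prompt_encoder: PromptEncoder) -> torch.Tensor:
    """Concatenate continuous prompt embeddings with the embeddings of the
    discrete prompt/input tokens. When the backbone LM is frozen, only the
    prompt encoder's parameters receive gradients."""
    batch = discrete_embeds.size(0)
    continuous = prompt_encoder().expand(batch, -1, -1)
    return torch.cat([continuous, discrete_embeds], dim=1)
```

In this sketch, the concatenated embedding sequence would be fed to the language model in place of its normal token embeddings, and training updates only the prompt encoder (plus, optionally, the backbone when full tuning is desired), mirroring the frozen vs. tuned settings discussed in the paper.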