6 Jun 2024 | Nihal V. Nayak, Yiyang Nan, Avi Trost, Stephen H. Bach
The paper introduces Bonito, an open-source model designed to convert unannotated text from specialized domains into task-specific training datasets for instruction tuning. The goal is to enable zero-shot task adaptation of large language models on users' specialized, private data. Bonito is trained on a large-scale dataset called Conditional Task Generation with Attributes (CTGA), which is created by remixing existing instruction tuning datasets into meta-templates. These meta-templates produce training examples where the input is the unannotated text paired with a task attribute, and the output consists of the instruction and the response.
The authors demonstrate that Bonito significantly improves the performance of pretrained and instruction-tuned models over self-supervised baselines. They also show that Bonito-generated tasks can further improve the performance of instruction-tuned models, with an average improvement of 22.1 F1 points. The paper includes experiments on seven datasets across three task types—yes-no question answering, extractive question answering, and natural language inference—and discusses the effects of domain, training size, and alternative task generators. Overall, the results highlight the effectiveness of learning with synthetic instruction tuning datasets for adapting language models to new domains.
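To make the meta-template idea concrete, here is a minimal sketch of how an annotated example might be remixed into a (input, output) training pair for a task generator like Bonito. The function name, special-token markers, and prompt layout are illustrative assumptions, not the paper's actual template format.

```python
# Hypothetical sketch of a CTGA-style meta-template. The marker tokens
# (<|tasktype|>, <|context|>, etc.) and field layout are assumptions for
# illustration, not Bonito's real templates.

def make_meta_template_example(passage: str, task_type: str,
                               instruction: str, response: str) -> dict:
    """Remix one annotated example into a training pair for a task generator."""
    # Input side: the unannotated text plus the desired task attribute.
    model_input = (
        f"<|tasktype|>\n{task_type}\n"
        f"<|context|>\n{passage}"
    )
    # Output side: the generated task (instruction) and its response.
    model_output = f"<|task|>\n{instruction}\n<|response|>\n{response}"
    return {"input": model_input, "output": model_output}

example = make_meta_template_example(
    passage="The mitochondrion is the powerhouse of the cell.",
    task_type="yes-no question answering",
    instruction="Is the mitochondrion described as the powerhouse of the cell?",
    response="Yes",
)
```

At inference time, only the input side (unannotated text plus task attribute) is supplied, and the trained model generates the instruction and response, yielding a synthetic instruction-tuning dataset for the new domain.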