Overview
We fine-tune large language models (LLMs) for BatchPrompting, the ability to answer multiple questions in a single inference pass. Existing BatchPrompting techniques rely on lengthy prompts that require few-shot examples, and their performance degrades as the number of questions grows. We demonstrate that after fine-tuning, LLMs maintain consistent performance across a wide range of batch sizes without lengthy prompts or few-shot examples. This lets users pack any number of questions into a single prompt while preserving the model's response quality.

Fine-tuned Model
Answer as many questions as you like in a single prompt without a decrease in response quality.
Note: See the BatchPrompting dataset for the expected input format; following it yields the best results.
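As a minimal sketch of what batched inference might look like (using the Hugging Face transformers library; the checkpoint name and the numbered-question prompt layout below are illustrative placeholders, not the verified format):

```python
# Sketch of single-pass batched inference with transformers.
# The checkpoint name and the "Q1:/Q2:/..." layout are assumptions;
# consult the BatchPrompting dataset for the expected format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/batchprompting-model"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

questions = [
    "What is the capital of France?",
    "Is the sentiment of 'The movie was dull' positive or negative?",
    "If you drop a glass on concrete, is it likely to break?",
]

# Pack every question into one prompt so the model answers all of them
# in a single forward pass instead of one pass per question.
prompt = "\n".join(f"Q{i + 1}: {q}" for i, q in enumerate(questions))

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```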
Novel Dataset
A comprehensive collection of text-based question-answer pairs for fine-tuning and evaluating LLMs across a diverse range of tasks. The dataset provides a standardized benchmark for assessing LLM capabilities in domains such as commonsense reasoning, textual entailment, sentiment analysis, and question answering, with the aim of supporting research and development in natural language processing (NLP).
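For illustration only, a single batched record might pair numbered questions with numbered answers; the field names and formatting below are hypothetical, not the dataset's actual schema:

```python
# Hypothetical shape of one batched question-answer record.
# The field names ("input"/"output") and Q/A numbering are assumptions.
example = {
    "input": (
        "Q1: Does 'A man is cooking' entail 'A person is preparing food'?\n"
        "Q2: What is the sentiment of 'I loved every minute of it'?\n"
        "Q3: Where would you most likely find a stapler: an office or a forest?"
    ),
    "output": "A1: Yes\nA2: Positive\nA3: An office",
}
```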