Recent works leveraging the in-context learning capabilities of modern large language models (LLMs) have independently proposed variations of a novel technique: Batch Prompting (BP), wherein b question prompts are indexed and concatenated into a single BatchPrompt. The queried LLM then learns from batch-formatted in-context examples how to answer all b questions in one generation. In this work, we consider the sufficient conditions for a BatchPrompt in order to explore the technique's efficacy in the zero-shot and multi-task scenarios. We present new BatchPrompt templates and demonstrate, on both open- and closed-source models of varying size, that BP is possible without few-shot exemplars and is robust to in-batch task diversity provided the instructions are sufficiently detailed. We conclude with a modification to the token-efficiency metric η proposed in the original work and a discussion of which regimes of natural language generation (NLG) are best suited to the technique.
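To make the index-and-concatenate construction concrete, the sketch below assembles b questions into a single zero-shot BatchPrompt. It is a minimal illustration under assumed conventions: the function name `build_batch_prompt` and the `Q[i]`/`A[i]` labels are hypothetical and do not reproduce the paper's exact templates.

```python
# Minimal sketch of zero-shot BatchPrompt construction (illustrative template,
# not the paper's verbatim format).

def build_batch_prompt(questions, instruction):
    """Index and concatenate b questions into a single BatchPrompt string."""
    lines = [instruction, ""]
    # Index each question so the model can reference it when answering.
    for i, q in enumerate(questions, start=1):
        lines.append(f"Q[{i}]: {q}")
    lines.append("")
    lines.append("Answer every question in order, prefixing each answer with A[i]:")
    return "\n".join(lines)


if __name__ == "__main__":
    instruction = (
        "You will receive several independent questions, each labeled Q[i]. "
        "Answer all of them in a single response."
    )
    questions = [
        "What is the capital of France?",
        "Compute 17 * 24.",
        "Is the sentence 'The cat sat on the mat.' grammatical?",
    ]
    print(build_batch_prompt(questions, instruction))
```

In this sketch, the single instruction block stands in for the few-shot exemplars used in earlier BP work; the in-batch questions may come from different tasks, matching the multi-task setting studied here.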
