Chain-of-thought prompting elicits reasoning in large language models

J. Wei, X. Wang, D. Schuurmans, et al. Advances in Neural Information Processing Systems, 2022. proceedings.neurips.cc
Abstract
We explore how generating a chain of thought (a series of intermediate reasoning steps) significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain-of-thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
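
As a concrete illustration of the method the abstract describes, here is a minimal sketch of how a few-shot chain-of-thought prompt can be assembled. The exemplar wording is paraphrased from the paper's illustrative math word problems rather than quoted verbatim, and `build_cot_prompt` is a hypothetical helper for this sketch, not code released with the paper.

```python
# Few-shot chain-of-thought prompting: each exemplar pairs a question with a
# worked-out reasoning chain and a final answer; the target question is
# appended so the model continues the same pattern, emitting its own
# reasoning chain before the answer.

EXEMPLARS = [
    {
        "question": (
            "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
            "Each can has 3 tennis balls. How many tennis balls does he have now?"
        ),
        # The intermediate reasoning steps are spelled out explicitly.
        "chain_of_thought": (
            "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
            "6 tennis balls. 5 + 6 = 11."
        ),
        "answer": "11",
    },
]


def build_cot_prompt(exemplars, target_question):
    """Concatenate (question, reasoning, answer) exemplars and the new question."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['chain_of_thought']} The answer is {ex['answer']}.\n"
        )
    # Ending with "A:" invites the model to produce a reasoning chain first.
    parts.append(f"Q: {target_question}\nA:")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_cot_prompt(
        EXEMPLARS,
        "A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
        "How many apples do they have?",
    )
    print(prompt)  # Send this string to any text-completion model.
```

The paper's central observation is that this prompt-only change, with no finetuning, is enough to elicit multi-step reasoning, but only once the model is sufficiently large.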