Language models are few-shot learners
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot
performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning
approaches. Specifically, we train GPT-3, an autoregressive language model with 175
billion parameters, 10x more than any previous non-sparse language model, and test its
performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient
updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text …
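The abstract describes specifying a task purely via text: an instruction plus a handful of worked demonstrations, followed by the new query, with no gradient updates. A minimal sketch of how such a few-shot prompt can be assembled as plain text (the task, helper name, and demonstration pairs here are illustrative assumptions, not from the paper):

```python
# Hypothetical sketch of few-shot prompt construction: the model is
# conditioned on text alone; no weights are updated.

def build_few_shot_prompt(instruction, demonstrations, query):
    """Concatenate an instruction, k demonstrations, and the new query."""
    lines = [instruction, ""]
    for source, target in demonstrations:
        lines.append(f"Q: {source}")
        lines.append(f"A: {target}")
        lines.append("")  # blank line separates demonstrations
    lines.append(f"Q: {query}")
    lines.append("A:")  # the model continues from here
    return "\n".join(lines)

# Illustrative translation task with k = 2 demonstrations.
demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
prompt = build_few_shot_prompt(
    "Translate English to French.", demos, "plush giraffe"
)
print(prompt)
```

The resulting string would be passed to the model as ordinary input text; in the zero-shot setting the `demonstrations` list would simply be empty.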