Textbooks Are All You Need II: phi-1.5 technical report

Y Li, S Bubeck, R Eldan, A Del Giorno… - arXiv preprint arXiv:2309.05463, 2023 - arxiv.org
We continue the investigation into the power of smaller Transformer-based language models initiated by TinyStories, a 10 million parameter model that can produce coherent English, and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state of the art. The latter work proposed using existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good, such as the ability to "think step by step" or perform some rudimentary in-context learning, and bad, including hallucinations and the potential for toxic and biased generations; encouragingly, though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.
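Since the abstract notes that phi-1.5 is open-sourced, a minimal sketch of querying the released weights may be useful. It assumes the checkpoint is published on the Hugging Face Hub under the id microsoft/phi-1_5 and uses the standard transformers generation API; the prompt and generation settings are illustrative, not taken from the paper.

```python
# Minimal sketch: prompting phi-1.5 via Hugging Face transformers.
# Assumes the open-sourced checkpoint lives at "microsoft/phi-1_5";
# the prompt and decoding settings below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"  # assumed Hub id for the released model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# A grade-school math prompt with an explicit "think step by step" cue,
# mirroring the chain-of-thought behavior the abstract describes.
prompt = (
    "Alice has 3 apples and buys 2 bags of 4 apples each. "
    "How many apples does she have? Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On older transformers versions the checkpoint may require trust_remote_code=True in from_pretrained; newer releases include native support for the Phi architecture.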