From the course: GPT-4 Turbo: The New GPT Model and What You Need to Know

GPT-4 Turbo with 128K context

- [Instructor] When having a conversation with a large language model, how can you figure out how much of the conversation it remembers? That's what context windows are all about. Now, a prompt is the text you input into the model, and it's made up of a number of tokens. One token is roughly three quarters of a word, so 100 tokens is around 75 words. The completion is the text that the model outputs, and it's also made up of tokens. The sum of the tokens in the prompt and the completion is known as the context window, or context length. Now, the longer the context length, the more information the model has available for generating a response. For a language model to produce a meaningful and relevant response, it needs to be able to consider the entire conversation. Different large language models have different context lengths. So for example, GPT-4 supported a context length of up to 8,000 tokens, and…
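The token arithmetic described above can be sketched in a few lines of Python. Note this is a rough word-count heuristic based on the 1 token ≈ 3/4 word rule of thumb, not a real tokenizer (a library such as OpenAI's tiktoken gives exact counts), and the function names here are illustrative, not part of any API:

```python
# Rough token arithmetic: 1 token ~ 3/4 of a word, so ~4/3 tokens per word.
# Context window = prompt tokens + completion tokens.
# (estimate_tokens and fits_in_context are illustrative names, not a real API.)

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a text using the 4/3 tokens-per-word heuristic."""
    words = len(text.split())
    return round(words * 4 / 3)

def fits_in_context(prompt: str, max_completion_tokens: int,
                    context_window: int = 128_000) -> bool:
    """Check whether the prompt plus the planned completion fit in the window."""
    return estimate_tokens(prompt) + max_completion_tokens <= context_window

# 75 words should come out to roughly 100 tokens, as in the transcript.
print(estimate_tokens(" ".join(["word"] * 75)))  # → 100
```

With a 128,000-token window, GPT-4 Turbo can consider roughly 96,000 words of conversation, versus about 6,000 words for the original GPT-4's 8,000-token window.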