From the course: Building Apps with AI Tools: ChatGPT, Semantic Kernel, and Langchain

Introduction to ChatGPT and its parameters

- [Instructor] Understanding ChatGPT's parameters will help us build more effective ChatGPT apps. Let's dive into ChatGPT and its key parameters. You've likely seen the UI console, which is a simplified ChatGPT interface. As a quick recap, ChatGPT works by taking a prompt and generating a response based on the data it's been trained on. To dive deeper into ChatGPT, let's head over to the playground by going to platform.openai.com/playground and walk through the different settings we have.

On the left, we have our system prompt, which allows us to provide an objective to our ChatGPT-based model. The default one is "You are a helpful assistant." The middle is where we can add our standard prompts: any message under the user tag. We can also change a user prompt to an assistant response if we want to mimic a past conversation.

Now, let's head over to the right-hand side. At the top, we have our mode. Previously there were many modes, or tasks, for OpenAI models to accomplish, but currently chat is the only one supported going forward. Next, we have our model. Three different items make up a model name. First is the model family, like GPT-3.5 or GPT-4. Second, we have the context window, for example, 16K or 32K. This is the total number of tokens shared between what you input to and what you get out of the model. Third, we have the model version. If we click on "show more models," we can see the dates each model snapshot is supported for. For example, GPT-4 0613 is the June 13th snapshot. Also, the models with no date suffix are the latest ones. You can think of this as analogous to the latest code versus a stable, supported version.

Moving on, we have the temperature. This setting changes how much variation there is in your responses: the higher the number, the more random the responses. Next, we have maximum length. This is the number of tokens that can be generated in response to a prompt, and you can slide it all the way up to your model's maximum context length. Now we have stop sequences. These are custom characters you can add that will force the model to stop generating a response, which can be useful when you want the model to follow a certain format. We'll skip over top P since it's beyond the scope of this course. And finally, we have the frequency penalty and presence penalty, which let us adjust the output of the model to discourage repeated tokens.

And there we go. That was a quick walkthrough of all the different settings we can use in the OpenAI API. We'll be using these to customize our outputs while building our apps.
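To make those settings concrete before we start building, here's a minimal sketch of how the same playground parameters map onto an API call, assuming the `openai` Python package (v1-style client) and an `OPENAI_API_KEY` environment variable. The model name, prompt, and parameter values are illustrative, not taken from the video.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # model family (+ optional context/version suffix)
    messages=[
        # System prompt: the objective for the model
        {"role": "system", "content": "You are a helpful assistant."},
        # User prompt; add "assistant" messages to mimic a past conversation
        {"role": "user", "content": "Explain what a context window is."},
    ],
    temperature=0.7,         # higher = more random responses
    max_tokens=256,          # maximum length of the generated response
    stop=["END"],            # stop sequence: generation halts here (illustrative)
    frequency_penalty=0.0,   # penalize tokens by how often they've appeared
    presence_penalty=0.0,    # penalize tokens that have appeared at all
)

print(response.choices[0].message.content)
```

For reference, `temperature` ranges from 0 (nearly deterministic) to 2 (very random), and both penalties range from -2 to 2, with positive values discouraging repetition.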
