OpenAI .NET Samples

A collection of OpenAI samples written in .NET, similar to the samples on the OpenAI website.

Open in GitHub Codespaces

Open in Remote - Containers

Prerequisites

Running samples

  1. Open the repository in VS Code. To minimize setup, it's highly recommended that you use Codespaces.

  2. Configure environment variables. (A sketch of how these values might look in the devcontainer file follows these steps.)

    1. Open the .devcontainer.json file

    2. Replace the following values with your own:

      Azure OpenAI Service - For more details on how to get these variables, see the Azure OpenAI documentation.

      • AOAI_ENDPOINT - The endpoint for your Azure OpenAI Service resource.
      • AOAI_KEY - The access key for your Azure OpenAI Service resource.
      • AOAI_DEPLOYMENTID - The name of your model deployment (gpt-35-turbo-instruct-deployment).

      OpenAI

      • AOAI_KEY - The API key for your OpenAI account. For more details on getting your API keys, see the OpenAI documentation.
      • AOAI_DEPLOYMENTID - The model name (e.g. gpt-35-turbo-instruct). For more details on models, see the OpenAI model documentation.
    3. Save your changes

  3. Rebuild the container

    1. Open the command palette. In the menu bar, select View > Command Palette.
    2. Enter the following command into the command palette >Codespaces: Rebuild Container.
  4. When your Codespace rebuilds, open a notebook and run it. For more information on getting started with notebooks, see the Polyglot Notebooks documentation.
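
For reference, here is a sketch of how these environment variables might appear in the devcontainer file. The containerEnv block and placeholder values below are illustrative only; the actual file in this repository may structure these settings differently.

{
  "containerEnv": {
    "AOAI_ENDPOINT": "https://<your-resource-name>.openai.azure.com/",
    "AOAI_KEY": "<your-key>",
    "AOAI_DEPLOYMENTID": "gpt-35-turbo-instruct-deployment"
  }
}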

Azure OpenAI .NET SDK Notes

NOTE: The Azure OpenAI Service .NET SDK is currently in preview

The following are things to be mindful of when using the Azure OpenAI .NET SDK with each of the OpenAI model providers. For more information on each of the services, see the comparing Azure OpenAI and OpenAI documentation.

Azure OpenAI Service

Deployment ID

Deployments are a way to provide a user-friendly name for OpenAI models. These deployments are backed by OpenAI models such as gpt-35-turbo-instruct. When using the Azure OpenAI .NET SDK, your Deployment ID is the name you provided to your deployment, not the name of the OpenAI model.

For more details, see the deploy a model documentation.
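
To make this concrete, here is a minimal sketch of creating a client for Azure OpenAI Service and passing the deployment name (not the underlying model name) to a completion call. It assumes the preview Azure.AI.OpenAI package; the GetCompletionsAsync and CompletionsOptions names reflect that preview API and may differ in later versions, and the prompt is only an example.

using Azure;
using Azure.AI.OpenAI;

// Values configured earlier in the devcontainer file.
var endpoint = Environment.GetEnvironmentVariable("AOAI_ENDPOINT");
var key = Environment.GetEnvironmentVariable("AOAI_KEY");
var deploymentId = Environment.GetEnvironmentVariable("AOAI_DEPLOYMENTID");

var client = new OpenAIClient(new Uri(endpoint), new AzureKeyCredential(key));

// The first argument is the deployment name you chose in Azure, not the OpenAI model name.
var completions = await client.GetCompletionsAsync(
    deploymentId,
    new CompletionsOptions { Prompts = { "Say hello to the reader." }, MaxTokens = 50 });

Console.WriteLine(completions.Value.Choices[0].Text);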

OpenAI

API Keys

When initializing the client using OpenAI as the model service provider, the only credential you need to provide is your API key. Use the Azure OpenAI .NET SDK to initialize the client as follows:

var AOAI_KEY = Environment.GetEnvironmentVariable("AOAI_KEY");
var openAIClient = new OpenAIClient(AOAI_KEY);

Deployment ID

Unlike Azure OpenAI Service, OpenAI doesn't use deployments. Instead, it uses the model names. The value of your AOAI_DEPLOYMENTID environment variable should be the name of the OpenAI model. For almost all of these samples, the model used is gpt-35-turbo-instruct. For chat samples use gpt-35-turbo.
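
The sketch below shows that same preview API with OpenAI as the provider: the openAIClient created from just an API key (as in the snippet above) and the AOAI_DEPLOYMENTID value, which here is simply an OpenAI model name, passed where Azure would expect a deployment name. The prompt is only an example.

var AOAI_DEPLOYMENTID = Environment.GetEnvironmentVariable("AOAI_DEPLOYMENTID");

// With OpenAI as the provider, the "deployment" argument is just the model name,
// e.g. gpt-35-turbo-instruct for most of these samples.
var completions = await openAIClient.GetCompletionsAsync(
    AOAI_DEPLOYMENTID,
    new CompletionsOptions { Prompts = { "Explain deployments in one sentence." } });

Console.WriteLine(completions.Value.Choices[0].Text);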

Settings

• Model - The name of the OpenAI model. For more details, see the Azure OpenAI models documentation.
• Max tokens - The maximum number of tokens to generate. The value can't exceed the number of tokens supported by the model. See the Azure OpenAI models documentation for more details on token limits.
• Temperature - A value between 0 and 1 that controls how confident the model is when making predictions. Lower temperatures mean less randomness in the completion output.
• Top p - A value between 0 and 1 that controls which tokens are considered in the results. For example, a value of 0.1 means only the tokens in the top 10% of probability mass are considered.
• Frequency penalty - A value between -2 and 2. Positive values penalize tokens based on how frequently they have already appeared, making repetition less likely.
• Presence penalty - A value between -2 and 2. Positive values increase the likelihood that the returned text talks about new topics.
• Stop sequence - String values that indicate when the model should stop generating new text. The returned text won't contain the stop sequence.
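
These settings correspond to properties on CompletionsOptions in the preview Azure.AI.OpenAI SDK. The sketch below shows one possible mapping; the property names (for example NucleusSamplingFactor for top p and StopSequences for the stop sequence) reflect that preview API and may differ in other SDK versions.

var options = new CompletionsOptions
{
    Prompts = { "Write a tagline for an ice cream shop." },
    MaxTokens = 100,               // Max tokens
    Temperature = 0.7f,            // Temperature
    NucleusSamplingFactor = 0.95f, // Top p
    FrequencyPenalty = 0f,         // Frequency penalty
    PresencePenalty = 0f,          // Presence penalty
    StopSequences = { "\n" }       // Stop sequence
};

// The model is selected by the deployment or model name passed to the call.
var response = await openAIClient.GetCompletionsAsync(AOAI_DEPLOYMENTID, options);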

Examples

Tokenization

Classification

Generation

Translation

Code

ChatGPT

Note: These samples use GPT-3.5 Turbo models

Chat

Additional Resources