{"payload":{"categories":{"apps":[{"name":"API management","slug":"api-management","description_html":"

Structure your API infrastructure to enable various internet gateways to interact with your service.

\n"},{"name":"Backup Utilities","slug":"backup-utilities","description_html":"

Utilities providing periodic backups of your GitHub data.

\n"},{"name":"Chat","slug":"chat","description_html":"

Bring GitHub into your conversations.

\n"},{"name":"Code quality","slug":"code-quality","description_html":"

Automate your code review with style, quality, security, and test-coverage checks when you need them.

\n"},{"name":"Code review","slug":"code-review","description_html":"

Ensure your code meets quality standards and ship with confidence.

\n"},{"name":"Container CI","slug":"container-ci","description_html":"

Continuous integration for container applications.

\n"},{"name":"Continuous integration","slug":"continuous-integration","description_html":"

Automatically build and test your code as you push it to GitHub, preventing bugs from being deployed to production.

\n"},{"name":"Dependency management","slug":"dependency-management","description_html":"

Secure and manage your third-party dependencies.

\n"},{"name":"Deployment","slug":"deployment","description_html":"

Streamline your code deployment so you can focus on your product.

\n"},{"name":"Deployment Protection Rules","slug":"deployment-protection-rules","description_html":"

Enables custom protection rules to gate deployments with third-party services.

\n"},{"name":"Game CI","slug":"game-ci","description_html":"

Tools for building a CI pipeline for game development.

\n"},{"name":"IDEs","slug":"ides","description_html":"

Find the right interface to build, debug, and deploy your source code.

\n"},{"name":"Learning","slug":"learning","description_html":"

Get the skills you need to level up.

\n"},{"name":"Localization","slug":"localization","description_html":"

Extend your software's reach. Localize and translate continuously from GitHub.

\n"},{"name":"Mobile","slug":"mobile","description_html":"

Improve your workflow for the small screen.

\n"},{"name":"Mobile CI","slug":"mobile-ci","description_html":"

Continuous integration for mobile applications.

\n"},{"name":"Monitoring","slug":"monitoring","description_html":"

Monitor the impact of your code changes. Measure performance, track errors, and analyze your application.

\n"},{"name":"Project management","slug":"project-management","description_html":"

Organize, manage, and track your project with tools that build on top of issues and pull requests.

\n"},{"name":"Publishing","slug":"publishing","description_html":"

Get your site ready for production so you can get the word out.

\n"},{"name":"Recently added","slug":"recently-added","description_html":"

The latest tools that help you and your team build software better, together.

\n"},{"name":"Security","slug":"security","description_html":"

Find, fix, and prevent security vulnerabilities before they can be exploited.

\n"},{"name":"Support","slug":"support","description_html":"

Get your team and customers the help they need.

\n"},{"name":"Testing","slug":"testing","description_html":"

Eliminate bugs and ship with more confidence by adding these tools to your workflow.

\n"},{"name":"Utilities","slug":"utilities","description_html":"

Auxiliary tools to enhance your experience on GitHub.

\n"}],"actions":[{"name":"API management","slug":"api-management","description_html":"

Structure your API infrastructure to enable various internet gateways to interact with your service.

\n"},{"name":"Backup Utilities","slug":"backup-utilities","description_html":"

Utilities providing periodic backups of your GitHub data.

\n"},{"name":"Chat","slug":"chat","description_html":"

Bring GitHub into your conversations.

\n"},{"name":"Code quality","slug":"code-quality","description_html":"

Automate your code review with style, quality, security, and test-coverage checks when you need them.

\n"},{"name":"Code review","slug":"code-review","description_html":"

Ensure your code meets quality standards and ship with confidence.

\n"},{"name":"Container CI","slug":"container-ci","description_html":"

Continuous integration for container applications.

\n"},{"name":"Continuous integration","slug":"continuous-integration","description_html":"

Automatically build and test your code as you push it to GitHub, preventing bugs from being deployed to production.

\n"},{"name":"Dependency management","slug":"dependency-management","description_html":"

Secure and manage your third-party dependencies.

\n"},{"name":"Deployment","slug":"deployment","description_html":"

Streamline your code deployment so you can focus on your product.

\n"},{"name":"Deployment Protection Rules","slug":"deployment-protection-rules","description_html":"

Enables custom protection rules to gate deployments with third-party services.

\n"},{"name":"Game CI","slug":"game-ci","description_html":"

Tools for building a CI pipeline for game development.

\n"},{"name":"GitHub Sponsors","slug":"github-sponsors","description_html":"

Tools to manage your GitHub Sponsors community.

\n"},{"name":"IDEs","slug":"ides","description_html":"

Find the right interface to build, debug, and deploy your source code.

\n"},{"name":"Learning","slug":"learning","description_html":"

Get the skills you need to level up.

\n"},{"name":"Localization","slug":"localization","description_html":"

Extend your software's reach. Localize and translate continuously from GitHub.

\n"},{"name":"Mobile","slug":"mobile","description_html":"

Improve your workflow for the small screen.

\n"},{"name":"Mobile CI","slug":"mobile-ci","description_html":"

Continuous integration for mobile applications.

\n"},{"name":"Monitoring","slug":"monitoring","description_html":"

Monitor the impact of your code changes. Measure performance, track errors, and analyze your application.

\n"},{"name":"Project management","slug":"project-management","description_html":"

Organize, manage, and track your project with tools that build on top of issues and pull requests.

\n"},{"name":"Publishing","slug":"publishing","description_html":"

Get your site ready for production so you can get the word out.

\n"},{"name":"Security","slug":"security","description_html":"

Find, fix, and prevent security vulnerabilities before they can be exploited.

\n"},{"name":"Support","slug":"support","description_html":"

Get your team and customers the help they need.

\n"},{"name":"Testing","slug":"testing","description_html":"

Eliminate bugs and ship with more confidence by adding these tools to your workflow.

\n"},{"name":"Utilities","slug":"utilities","description_html":"

Auxiliary tools to enhance your experience on GitHub.

\n"}]},"models":[{"id":"azureml://registries/azureml-ai21/models/AI21-Jamba-1.5-Large/versions/3","registry":"azureml-ai21","name":"AI21-Jamba-1-5-Large","original_name":"AI21-Jamba-1.5-Large","friendly_name":"AI21 Jamba 1.5 Large","task":"chat-completion","publisher":"AI21 Labs","license":"custom","description":"","summary":"A 398B parameters (94B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation.","model_family":"AI21 Labs","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/ai21 labs.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-ai21/models/AI21-Jamba-1.5-Mini/versions/3","registry":"azureml-ai21","name":"AI21-Jamba-1-5-Mini","original_name":"AI21-Jamba-1.5-Mini","friendly_name":"AI21 Jamba 1.5 Mini","task":"chat-completion","publisher":"AI21 Labs","license":"custom","description":"","summary":"A 52B parameters (12B active) multilingual model, offering a 256K long context window, function calling, structured output, and grounded generation.","model_family":"AI21 Labs","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/ai21 labs.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-ai21/models/AI21-Jamba-Instruct/versions/2","registry":"azureml-ai21","name":"AI21-Jamba-Instruct","original_name":"AI21-Jamba-Instruct","friendly_name":"AI21-Jamba-Instruct","task":"chat-completion","publisher":"AI21 Labs","license":"custom","description":"","summary":"A production-grade Mamba-based LLM model to achieve best-in-class performance, quality, and cost efficiency.","model_family":"AI21 
Labs","model_version":"2","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/ai21 labs.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-cohere/models/Cohere-command-r/versions/3","registry":"azureml-cohere","name":"Cohere-command-r","original_name":"Cohere-command-r","friendly_name":"Cohere Command R","task":"chat-completion","publisher":"cohere","license":"custom","description":"","summary":"Command R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise.","model_family":"cohere","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/cohere.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-cohere/models/Cohere-command-r-plus/versions/3","registry":"azureml-cohere","name":"Cohere-command-r-plus","original_name":"Cohere-command-r-plus","friendly_name":"Cohere Command R+","task":"chat-completion","publisher":"cohere","license":"custom","description":"","summary":"Command R+ is a state-of-the-art RAG-optimized model designed to tackle enterprise-grade workloads.","model_family":"cohere","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/cohere.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-cohere/models/Cohere-embed-v3-english/versions/3","registry":"azureml-cohere","name":"Cohere-embed-v3-english","original_name":"Cohere-embed-v3-english","friendly_name":"Cohere Embed v3 
English","task":"embeddings","publisher":"cohere","license":"custom","description":"","summary":"Cohere Embed English is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering.","model_family":"cohere","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/cohere.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-cohere/models/Cohere-embed-v3-multilingual/versions/3","registry":"azureml-cohere","name":"Cohere-embed-v3-multilingual","original_name":"Cohere-embed-v3-multilingual","friendly_name":"Cohere Embed v3 Multilingual","task":"embeddings","publisher":"cohere","license":"custom","description":"","summary":"Cohere Embed Multilingual is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering.","model_family":"cohere","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/cohere.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-meta/models/Meta-Llama-3-70B-Instruct/versions/6","registry":"azureml-meta","name":"Meta-Llama-3-70B-Instruct","original_name":"Meta-Llama-3-70B-Instruct","friendly_name":"Meta-Llama-3-70B-Instruct","task":"chat-completion","publisher":"meta","license":"custom","description":"","summary":"A powerful 70-billion parameter model excelling in reasoning, coding, and broad language 
applications.","model_family":"meta","model_version":"6","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/meta.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct/versions/6","registry":"azureml-meta","name":"Meta-Llama-3-8B-Instruct","original_name":"Meta-Llama-3-8B-Instruct","friendly_name":"Meta-Llama-3-8B-Instruct","task":"chat-completion","publisher":"meta","license":"custom","description":"","summary":"A versatile 8-billion parameter model optimized for dialogue and text generation tasks.","model_family":"meta","model_version":"6","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/meta.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-meta/models/Meta-Llama-3.1-405B-Instruct/versions/1","registry":"azureml-meta","name":"Meta-Llama-3-1-405B-Instruct","original_name":"Meta-Llama-3.1-405B-Instruct","friendly_name":"Meta-Llama-3.1-405B-Instruct","task":"chat-completion","publisher":"meta","license":"custom","description":"","summary":"The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry 
benchmarks.","model_family":"meta","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/meta.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-meta/models/Meta-Llama-3.1-70B-Instruct/versions/1","registry":"azureml-meta","name":"Meta-Llama-3-1-70B-Instruct","original_name":"Meta-Llama-3.1-70B-Instruct","friendly_name":"Meta-Llama-3.1-70B-Instruct","task":"chat-completion","publisher":"meta","license":"custom","description":"","summary":"The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.","model_family":"meta","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/meta.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-meta/models/Meta-Llama-3.1-8B-Instruct/versions/1","registry":"azureml-meta","name":"Meta-Llama-3-1-8B-Instruct","original_name":"Meta-Llama-3.1-8B-Instruct","friendly_name":"Meta-Llama-3.1-8B-Instruct","task":"chat-completion","publisher":"meta","license":"custom","description":"","summary":"The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry 
benchmarks.","model_family":"meta","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/meta.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-mistral/models/Mistral-large/versions/1","registry":"azureml-mistral","name":"Mistral-large","original_name":"Mistral-large","friendly_name":"Mistral Large","task":"chat-completion","publisher":"Mistral AI","license":"custom","description":"","summary":"Mistral's flagship model that's ideal for complex tasks that require large reasoning capabilities or are highly specialized (Synthetic Text Generation, Code Generation, RAG, or Agents).","model_family":"Mistral AI","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/mistral ai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-mistral/models/Mistral-large-2407/versions/1","registry":"azureml-mistral","name":"Mistral-large-2407","original_name":"Mistral-large-2407","friendly_name":"Mistral Large (2407)","task":"chat-completion","publisher":"Mistral AI","license":"custom","description":"","summary":"Mistral Large (2407) is an advanced Large Language Model (LLM) with state-of-the-art reasoning, knowledge and coding capabilities.","model_family":"Mistral AI","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/mistral 
ai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-mistral/models/Mistral-Nemo/versions/1","registry":"azureml-mistral","name":"Mistral-Nemo","original_name":"Mistral-Nemo","friendly_name":"Mistral Nemo","task":"chat-completion","publisher":"Mistral AI","license":"custom","description":"","summary":"Mistral Nemo is a cutting-edge Language Model (LLM) boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.","model_family":"Mistral AI","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/mistral ai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml-mistral/models/Mistral-small/versions/1","registry":"azureml-mistral","name":"Mistral-small","original_name":"Mistral-small","friendly_name":"Mistral Small","task":"chat-completion","publisher":"Mistral AI","license":"custom","description":"","summary":"Mistral Small can be used on any language-based task that requires high efficiency and low latency.","model_family":"Mistral AI","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/mistral ai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azure-openai/models/gpt-4o/versions/2024-05-13","registry":"azure-openai","name":"gpt-4o","original_name":"gpt-4o","friendly_name":"OpenAI GPT-4o","task":"chat-completion","publisher":"openai","license":"custom","description":"","summary":"OpenAI's most advanced multimodal model in the GPT-4 family. 
Can handle both text and image inputs.","model_family":"openai","model_version":"2024-05-13","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/openai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azure-openai/models/gpt-4o-mini/versions/2","registry":"azure-openai","name":"gpt-4o-mini","original_name":"gpt-4o-mini","friendly_name":"OpenAI GPT-4o mini","task":"chat-completion","publisher":"openai","license":"custom","description":"","summary":"An affordable, efficient AI solution for diverse text and image tasks.","model_family":"openai","model_version":"2","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/openai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azure-openai/models/text-embedding-3-large/versions/1","registry":"azure-openai","name":"text-embedding-3-large","original_name":"text-embedding-3-large","friendly_name":"OpenAI Text Embedding 3 (large)","task":"embeddings","publisher":"openai","license":"custom","description":"","summary":"Text-embedding-3 series models are the latest and most capable embedding model from OpenAI.","model_family":"openai","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/openai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azure-openai/models/text-embedding-3-small/versions/1","registry":"azure-openai","name":"text-embedding-3-small","original_name":"text-embedding-3-small","friendly_name":"OpenAI Text Embedding 3 
(small)","task":"embeddings","publisher":"openai","license":"custom","description":"","summary":"Text-embedding-3 series models are the latest and most capable embedding model from OpenAI.","model_family":"openai","model_version":"1","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/openai.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-medium-128k-instruct/versions/4","registry":"azureml","name":"Phi-3-medium-128k-instruct","original_name":"Phi-3-medium-128k-instruct","friendly_name":"Phi-3-medium instruct (128k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"Same Phi-3-medium model, but with a larger context size for RAG or few shot prompting.","model_family":"microsoft","model_version":"4","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-medium-4k-instruct/versions/3","registry":"azureml","name":"Phi-3-medium-4k-instruct","original_name":"Phi-3-medium-4k-instruct","friendly_name":"Phi-3-medium instruct (4k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"A 14B parameters model, proves better quality than Phi-3-mini, with a focus on high-quality, reasoning-dense 
data.","model_family":"microsoft","model_version":"3","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-mini-128k-instruct/versions/11","registry":"azureml","name":"Phi-3-mini-128k-instruct","original_name":"Phi-3-mini-128k-instruct","friendly_name":"Phi-3-mini instruct (128k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"Same Phi-3-mini model, but with a larger context size for RAG or few shot prompting.","model_family":"microsoft","model_version":"11","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-mini-4k-instruct/versions/11","registry":"azureml","name":"Phi-3-mini-4k-instruct","original_name":"Phi-3-mini-4k-instruct","friendly_name":"Phi-3-mini instruct (4k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"Tiniest member of the Phi-3 family. 
Optimized for both quality and low latency.","model_family":"microsoft","model_version":"11","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-small-128k-instruct/versions/4","registry":"azureml","name":"Phi-3-small-128k-instruct","original_name":"Phi-3-small-128k-instruct","friendly_name":"Phi-3-small instruct (128k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"Same Phi-3-small model, but with a larger context size for RAG or few shot prompting.","model_family":"microsoft","model_version":"4","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3-small-8k-instruct/versions/4","registry":"azureml","name":"Phi-3-small-8k-instruct","original_name":"Phi-3-small-8k-instruct","friendly_name":"Phi-3-small instruct (8k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"A 7B parameters model, proves better quality than Phi-3-mini, with a focus on high-quality, reasoning-dense data.","model_family":"microsoft","model_version":"4","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""},{"id":"azureml://registries/azureml/models/Phi-3.5-mini-instruct/versions/2","registry":"azureml","name":"Phi-3-5-mini-instruct","original_name":"Phi-3.5-mini-instruct","friendly_name":"Phi-3.5-mini instruct 
(128k)","task":"chat-completion","publisher":"microsoft","license":"mit","description":"","summary":"Refresh of Phi-3-mini model.","model_family":"microsoft","model_version":"2","notes":"","tags":[],"rate_limit_tier":null,"supported_languages":[],"max_output_tokens":null,"max_input_tokens":0,"training_data_date":"","logo_url":"/images/modules/marketplace/models/families/microsoft.svg","evaluation":"","license_description":""}],"on_waitlist":false},"title":"Marketplace"}