Consolidate Copilot AI model documentation to reduce duplication and improve usability (#55996)
Co-authored-by: Sunbrye Ly <56200261+sunbrye@users.noreply.github.com>
Co-authored-by: Siara <108543037+SiaraMist@users.noreply.github.com>
@@ -12,6 +12,8 @@ By default, {% data variables.copilot.copilot_chat_short %} uses {% data variabl

However, you are not limited to using this model. You can choose from a selection of other models, each with its own particular strengths. You may have a favorite model that you like to use, or you might prefer to use a particular model for inquiring about a specific subject.

To view the available models per client, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/supported-ai-models-in-copilot#supported-models-per-client).

> [!NOTE] Different models have different premium request multipliers, which can affect how much of your monthly usage allowance is consumed. For details, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).
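
As a rough sketch of how a multiplier scales consumption (the model names and multiplier values below are illustrative placeholders, not official rates):

```python
# Illustrative sketch only: model names and multiplier values are
# placeholders, not GitHub's official premium request rates.
ILLUSTRATIVE_MULTIPLIERS = {
    "included-model": 0.0,   # some included models may not consume premium requests
    "standard-model": 1.0,
    "heavy-model": 10.0,
}

def premium_requests_used(model: str, requests: int) -> float:
    """Each request draws on the monthly allowance at the model's multiplier."""
    return requests * ILLUSTRATIVE_MULTIPLIERS[model]

# The same number of requests consumes ten times more allowance at 10x than at 1x.
print(premium_requests_used("heavy-model", 30))     # 300.0
print(premium_requests_used("standard-model", 30))  # 30.0
```

See the linked article for the actual multiplier of each model.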

{% data variables.product.prodname_copilot_short %} allows you to change the model during a chat and have the alternative model used to generate responses to your prompts.

@@ -21,29 +23,7 @@ Changing the model that's used by {% data variables.copilot.copilot_chat_short %

{% webui %}

> [!NOTE]
> * Support for {% data variables.copilot.copilot_gpt_45 %}, {% data variables.copilot.copilot_claude_opus %}, and {% data variables.copilot.copilot_o3 %} is only available on {% data variables.copilot.copilot_pro_plus_short %}{% ifversion copilot-enterprise %} and {% data variables.copilot.copilot_enterprise_short %}{% endif %}.
> * You can only use an alternative AI model in the immersive view of {% data variables.copilot.copilot_chat_short %} on GitHub.com. This is the full-page version of {% data variables.copilot.copilot_chat_short %} that's displayed at [https://github.com/copilot](https://github.com/copilot). The {% data variables.copilot.copilot_chat_short %} panel always uses the default model.

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available in the immersive mode of {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_claude_sonnet_40 %} (preview)
* {% data variables.copilot.copilot_claude_opus %} (preview)
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
> You can only use an alternative AI model in the immersive view of {% data variables.copilot.copilot_chat_short %} on {% data variables.product.prodname_dotcom_the_website %}. This is the full-page version of {% data variables.copilot.copilot_chat_short %} that's displayed at [https://github.com/copilot](https://github.com/copilot). The {% data variables.copilot.copilot_chat_short %} panel always uses the default model.

### Limitations of AI models for {% data variables.copilot.copilot_chat_short %}

@@ -70,29 +50,7 @@ These instructions are for {% data variables.product.prodname_copilot_short %} o

{% vscode %}

> [!NOTE]
> * Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
> * Support for {% data variables.copilot.copilot_gpt_45 %}, {% data variables.copilot.copilot_claude_opus %}, and {% data variables.copilot.copilot_o3 %} is only available on {% data variables.copilot.copilot_pro_plus_short %}{% ifversion copilot-enterprise %} and {% data variables.copilot.copilot_enterprise_short %}{% endif %}.

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available through multi-model {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_claude_sonnet_40 %} (preview)
* {% data variables.copilot.copilot_claude_opus %} (preview)
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
> Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.

## Changing the AI model

@@ -109,25 +67,6 @@ These instructions are for {% data variables.product.prodname_vscode %}. For ins

{% visualstudio %}

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available through multi-model {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).

## Changing the AI model

These instructions are for {% data variables.product.prodname_vs %}. For instructions on different clients, click the appropriate tab at the top of this page.
@@ -146,27 +85,7 @@ To use multi-model {% data variables.copilot.copilot_chat_short %}, you must use

{% jetbrains %}

> [!NOTE]
> * Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
> * Support for {% data variables.copilot.copilot_gpt_45 %} and {% data variables.copilot.copilot_o3 %} is only available on {% data variables.copilot.copilot_pro_plus_short %}{% ifversion copilot-enterprise %} and {% data variables.copilot.copilot_enterprise_short %}{% endif %}.

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available through multi-model {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
> Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.

## Changing the AI model

@@ -185,27 +104,7 @@ These instructions are for the JetBrains IDEs. For instructions on different cli

{% eclipse %}

> [!NOTE]
> * Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
> * Support for {% data variables.copilot.copilot_gpt_45 %} and {% data variables.copilot.copilot_o3 %} is only available on {% data variables.copilot.copilot_pro_plus_short %}{% ifversion copilot-enterprise %} and {% data variables.copilot.copilot_enterprise_short %}{% endif %}.

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available through multi-model {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
> Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.

## Changing the AI model

@@ -225,26 +124,6 @@ These instructions are for the Eclipse IDE. For instructions on different client

> [!NOTE]
> * Multiple model support in {% data variables.copilot.copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
> * Support for {% data variables.copilot.copilot_gpt_45 %} and {% data variables.copilot.copilot_o3 %} is only available on {% data variables.copilot.copilot_pro_plus_short %}{% ifversion copilot-enterprise %} and {% data variables.copilot.copilot_enterprise_short %}{% endif %}.

## AI models for {% data variables.copilot.copilot_chat_short %}

The following models are currently available through multi-model {% data variables.copilot.copilot_chat_short %}:

* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_gpt_41 %}
* {% data variables.copilot.copilot_gpt_45 %} (preview)
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking
* {% data variables.copilot.copilot_gemini_flash %}
* {% data variables.copilot.copilot_gemini_25_pro %} (preview)
* {% data variables.copilot.copilot_o1 %} (preview)
* {% data variables.copilot.copilot_o3 %} (preview)
* {% data variables.copilot.copilot_o3_mini %}
* {% data variables.copilot.copilot_o4_mini %} (preview)

For more information about these models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).

## Changing the AI model

@@ -264,6 +143,4 @@ To use multi-model {% data variables.copilot.copilot_chat_short %}, you must ins

## Further reading

* [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion)
* [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-in-github-copilot)
* [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot)
* [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task)

@@ -66,7 +66,7 @@ Changing the model that's used for {% data variables.product.prodname_copilot_sh

There are no changes to the data collection and usage policy if you change the AI model.

If you are on a {% data variables.copilot.copilot_free_short %} subscription, all completions count against your completions quota regardless of the model used. See [AUTOTITLE](/copilot/about-github-copilot/subscription-plans-for-github-copilot#comparing-copilot-subscriptions).
If you are on a {% data variables.copilot.copilot_free_short %} plan, all completions count against your completions quota regardless of the model used. See [AUTOTITLE](/copilot/about-github-copilot/subscription-plans-for-github-copilot#comparing-copilot-subscriptions).

The setting to enable or disable suggestions that match public code is applied irrespective of which model you choose. See [AUTOTITLE](/enterprise-cloud@latest/copilot/using-github-copilot/finding-public-code-that-matches-github-copilot-suggestions).

@@ -74,7 +74,7 @@ The setting to enable or disable suggestions that match public code are applied

{% ifversion fpt %}

If you have a {% data variables.copilot.copilot_free_short %} or {% data variables.copilot.copilot_pro_short %} subscription, the model switcher for {% data variables.product.prodname_copilot_short %} code completion is automatically enabled.
If you have a {% data variables.copilot.copilot_free_short %} or {% data variables.copilot.copilot_pro_short %} plan, the model switcher for {% data variables.product.prodname_copilot_short %} code completion is automatically enabled.

{% endif %}

@@ -10,489 +10,141 @@ topics:

## Comparison of AI models for {% data variables.product.prodname_copilot %}

{% data variables.product.prodname_copilot %} supports multiple AI models with different capabilities. The model you choose affects the quality and relevance of responses by {% data variables.copilot.copilot_chat_short %} and {% data variables.product.prodname_copilot_short %} code completion. Some models offer lower latency, while others offer fewer hallucinations or better performance on specific tasks.
{% data variables.product.prodname_copilot %} supports multiple AI models with different capabilities. The model you choose affects the quality and relevance of responses by {% data variables.copilot.copilot_chat_short %} and {% data variables.product.prodname_copilot_short %} code completion. Some models offer lower latency, while others offer fewer hallucinations or better performance on specific tasks. This guide helps you pick the best model based on your task, not just model names.

This article helps you compare the available models, understand the strengths of each model, and choose the model that best fits your task. For guidance across different models using real-world tasks, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/comparing-ai-models-using-different-tasks).

The best model depends on your use case:

* For **balance between cost and performance**, try {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_claude_sonnet_37 %}.
* For **fast, low-cost support for basic tasks**, try {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_claude_sonnet_35 %}.
* For **deep reasoning or complex coding challenges**, try {% data variables.copilot.copilot_o3 %}, GPT-4.5, or {% data variables.copilot.copilot_claude_sonnet_37 %}.
* For **multimodal inputs and real-time performance**, try {% data variables.copilot.copilot_gemini_flash %} or {% data variables.copilot.copilot_gpt_41 %}.

You can click a model name in the list below to jump to a detailed overview of its strengths and use cases.
* [{% data variables.copilot.copilot_gpt_41 %}](#gpt-41)
* [{% data variables.copilot.copilot_gpt_4o %}](#gpt-4o)
* [{% data variables.copilot.copilot_gpt_45 %}](#gpt-45)
* [{% data variables.copilot.copilot_o1 %}](#o1)
* [{% data variables.copilot.copilot_o3 %}](#o3)
* [{% data variables.copilot.copilot_o3_mini %}](#o3-mini)
* [{% data variables.copilot.copilot_o4_mini %}](#o4-mini)
* [{% data variables.copilot.copilot_claude_sonnet_35 %}](#claude-sonnet-35)
* [{% data variables.copilot.copilot_claude_sonnet_37 %}](#claude-sonnet-37)
* [{% data variables.copilot.copilot_claude_sonnet_40 %}](#claude-sonnet-4)
* [{% data variables.copilot.copilot_claude_opus %}](#claude-opus-4)
* [{% data variables.copilot.copilot_gemini_flash %}](#gemini-20-flash)
* [{% data variables.copilot.copilot_gemini_25_pro %}](#gemini-25-pro)
> [!NOTE] Different models have different premium request multipliers, which can affect how much of your monthly usage allowance is consumed. For details, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).

## {% data variables.copilot.copilot_gpt_41 %}
### Recommended models by task

OpenAI’s latest model, {% data variables.copilot.copilot_gpt_41 %}, is now available in {% data variables.product.prodname_copilot %} and {% data variables.product.prodname_github_models %}, bringing OpenAI’s newest model to your coding workflow. This model outperforms {% data variables.copilot.copilot_gpt_4o %} across the board, with major gains in coding, instruction following, and long-context understanding. It has a larger context window and features a refreshed knowledge cutoff of June 2024.
Use this table to find a suitable model quickly; see more detail in the sections below.

OpenAI has optimized {% data variables.copilot.copilot_gpt_41 %} for real-world use based on direct developer feedback about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more. This model is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning.
| Model | Task area | Excels at (primary use case) | Additional capabilities |
|-------|-----------|-------------------------------|--------------------------|
| {% data variables.copilot.copilot_gpt_41 %} | General-purpose coding and writing | Fast, accurate code completions and explanations | Agent mode, visual |
| {% data variables.copilot.copilot_gpt_45 %} | Deep reasoning and debugging | Multi-step reasoning and complex code generation | Reasoning |
| {% data variables.copilot.copilot_gpt_4o %} | General-purpose coding and writing | Fast completions and visual input understanding | Agent mode, visual |
| {% data variables.copilot.copilot_o1 %} | Deep reasoning and debugging | Step-by-step problem solving and deep logic analysis | Reasoning |
| {% data variables.copilot.copilot_o3 %} | Deep reasoning and debugging | Multi-step problem solving and architecture-level code analysis | Reasoning |
| {% data variables.copilot.copilot_o3_mini %} | Fast help with simple or repetitive tasks | Quick responses for code snippets, explanations, and prototyping | Lower latency |
| {% data variables.copilot.copilot_o4_mini %} | Fast help with simple or repetitive tasks | Fast, reliable answers to lightweight coding questions | Lower latency |
| {% data variables.copilot.copilot_claude_opus %} | Deep reasoning and debugging | Advanced agentic workflows over large codebases, long-horizon projects | Reasoning |
| {% data variables.copilot.copilot_claude_sonnet_35 %} | Fast help with simple or repetitive tasks | Quick responses for code, syntax, and documentation | Agent mode |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | Deep reasoning and debugging | Structured reasoning across large, complex codebases | Agent mode |
| {% data variables.copilot.copilot_claude_sonnet_40 %} | Deep reasoning and debugging | High-performance code review, bug fixes, and efficient research workflows | Agent mode |
| {% data variables.copilot.copilot_gemini_25_pro %} | Deep reasoning and debugging | Complex code generation, debugging, and research workflows | Reasoning |
| {% data variables.copilot.copilot_gemini_flash %} | Working with visuals (diagrams, screenshots) | Real-time responses and visual reasoning for UI and diagram-based tasks | Visual |
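The task-to-model groupings above can be read as a simple lookup; a hypothetical sketch (the display names are shorthand for the product variables, not official API identifiers):

```python
# Groupings mirror the comparison table above; display names are
# shorthand for the product variables, not official API identifiers.
MODELS_BY_TASK = {
    "general-purpose coding and writing": ["GPT-4.1", "GPT-4o"],
    "deep reasoning and debugging": [
        "GPT-4.5", "o1", "o3", "Claude Opus 4",
        "Claude Sonnet 3.7", "Claude Sonnet 4", "Gemini 2.5 Pro",
    ],
    "fast help with simple or repetitive tasks": ["o3-mini", "o4-mini", "Claude Sonnet 3.5"],
    "working with visuals (diagrams, screenshots)": ["Gemini 2.0 Flash"],
}

def suggest_models(task_area: str) -> list[str]:
    """Return the models the table lists for a task area (empty if unknown)."""
    return MODELS_BY_TASK.get(task_area.lower(), [])

print(suggest_models("Working with visuals (diagrams, screenshots)"))  # ['Gemini 2.0 Flash']
```

The sections below explain the same groupings in prose.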
### Use cases
## Task: General-purpose coding and writing

{% data reusables.copilot.model-use-cases.gpt-41 %}
Use these models for common development tasks that require a balance of quality, speed, and cost efficiency. These models are a good default when you don't have specific requirements.

### Strengths
| Model | Why it's a good fit |
|-------|---------------------|
| {% data variables.copilot.copilot_gpt_41 %} | Reliable default for most coding and writing tasks. Fast, accurate, and works well across languages and frameworks. |
| {% data variables.copilot.copilot_gpt_4o %} | Delivers GPT-4–level performance with lower latency. |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | Produces clear, structured output. Follows formatting instructions and maintains consistent style. |
| {% data variables.copilot.copilot_gemini_flash %} | Fast and cost-effective. Well suited for quick questions, short code snippets, and lightweight writing tasks. |
| {% data variables.copilot.copilot_o4_mini %} | Optimized for speed and cost efficiency. Ideal for real-time suggestions with low usage overhead. |

The following table summarizes the strengths of {% data variables.copilot.copilot_gpt_41 %}:
### When to use these models

{% rowheaders %}
Use one of these models if you want to:

| Task | Description | Why {% data variables.copilot.copilot_gpt_41 %} is a good fit |
|-----------------------------------|---------------------------------------------------------------------|-----------------------------------------------------------------|
| Code explanation | Understand what a block of code does or walk through logic. | Fast and accurate explanations. |
| Code commenting and documentation | Generate or refine comments and documentation. | Writes clear, concise explanations. |
| Bug investigation | Get a quick explanation or suggestion for an error. | Provides fast diagnostic insight. |
| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |
| Multilingual prompts | Work with non-English prompts or identifiers. | Improved multilingual comprehension. |
* Write or review functions, short files, or code diffs.
* Generate documentation, comments, or summaries.
* Explain errors or unexpected behavior quickly.
* Work in a non-English programming environment.

{% endrowheaders %}
### When to use a different model

### Alternative options
If you're working on complex refactoring, architectural decisions, or multi-step logic, consider a model from [Deep reasoning and debugging](#task-deep-reasoning-and-debugging). For faster, simpler tasks like repetitive edits or one-off code suggestions, see [Fast help with simple or repetitive tasks](#task-fast-help-with-simple-or-repetitive-tasks).

| Task | Description | Why another model may be better |
|------------------------------------|-------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |
| System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |
## Task: Fast help with simple or repetitive tasks

## {% data variables.copilot.copilot_gpt_4o %}
These models are optimized for speed and responsiveness. They’re ideal for quick edits, utility functions, syntax help, and lightweight prototyping. You’ll get fast answers without waiting for unnecessary depth or long reasoning chains.

OpenAI {% data variables.copilot.copilot_gpt_4o %} is a multimodal model that supports text and images. It responds in real time and works well for lightweight development tasks and conversational prompts in {% data variables.copilot.copilot_chat_short %}.
### Recommended models

Compared to previous models, {% data variables.copilot.copilot_gpt_4o %} improves performance in multilingual contexts and demonstrates stronger capabilities when interpreting visual content. It delivers GPT-4 Turbo–level performance with lower latency and cost, making it a good default choice for many common developer tasks.
| Model | Why it's a good fit |
|-------|---------------------|
| {% data variables.copilot.copilot_o4_mini %} | A quick and cost-effective model for repetitive or simple coding tasks. Offers clear, concise suggestions. |
| {% data variables.copilot.copilot_o3_mini %} | Provides low-latency, accurate responses. Great for real-time suggestions and code walkthroughs. |
| {% data variables.copilot.copilot_claude_sonnet_35 %} | Balances fast responses with quality output. Ideal for small tasks and lightweight code explanations. |
| {% data variables.copilot.copilot_gemini_flash %} | Extremely low latency and multimodal support (where available). Great for fast, interactive feedback. |

For more information about {% data variables.copilot.copilot_gpt_4o %}, see [OpenAI's documentation](https://platform.openai.com/docs/models/gpt-4o).
### When to use these models

### Use cases
Use one of these models if you want to:

{% data reusables.copilot.model-use-cases.gpt-4o %}
* Write or edit small functions or utility code.
* Ask quick syntax or language questions.
* Prototype ideas with minimal setup.
* Get fast feedback on simple prompts or edits.

### Strengths
### When to use a different model

The following table summarizes the strengths of {% data variables.copilot.copilot_gpt_4o %}:
If you’re working on complex refactoring, architectural decisions, or multi-step logic, see [Deep reasoning and debugging](#task-deep-reasoning-and-debugging).
For tasks that need stronger general-purpose reasoning or more structured output, see [General-purpose coding and writing](#task-general-purpose-coding-and-writing).

{% rowheaders %}
## Task: Deep reasoning and debugging

| Task | Description | Why {% data variables.copilot.copilot_gpt_4o %} is a good fit |
|-----------------------------------|---------------------------------------------------------------------|---------------------------------------------------------------|
| Code explanation | Understand what a block of code does or walk through logic. | Fast and accurate explanations. |
| Code commenting and documentation | Generate or refine comments and documentation. | Writes clear, concise explanations. |
| Bug investigation | Get a quick explanation or suggestion for an error. | Provides fast diagnostic insight. |
| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |
| Multilingual prompts | Work with non-English prompts or identifiers. | Improved multilingual comprehension. |
| Image-based questions | Ask about a diagram or screenshot (where image input is supported). | Supports visual reasoning. |
These models are designed for tasks that require step-by-step reasoning, complex decision-making, or high-context awareness. They work well when you need structured analysis, thoughtful code generation, or multi-file understanding.

{% endrowheaders %}
### Recommended models

### Alternative options
| Model | Why it's a good fit |
|-------|---------------------|
| {% data variables.copilot.copilot_gpt_45 %} | Delivers consistent results for multi-step logic, long-context tasks, and complex reasoning. Ideal for debugging and planning. |
| {% data variables.copilot.copilot_o3 %} | Strong at algorithm design, system debugging, and architecture decisions. Balances performance and reasoning. |
| {% data variables.copilot.copilot_o1 %} | Excels at deliberate, structured reasoning and deep analysis. Good for performance tuning and problem-solving. |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | Provides hybrid reasoning that adapts to both fast tasks and deeper thinking. |
| {% data variables.copilot.copilot_claude_sonnet_40 %} | Improves on 3.7 with more reliable completions and smarter reasoning under pressure. |
| {% data variables.copilot.copilot_claude_opus %} | Anthropic’s most powerful model. Strong at strategy, debugging, and multi-layered logic. |
| {% data variables.copilot.copilot_gemini_25_pro %} | Advanced reasoning across long contexts and scientific or technical analysis. |

The following table summarizes when an alternative model may be a better choice:
### When to use these models

{% rowheaders %}
Use one of these models if you want to:

| Task | Description | Why another model may be better |
|------------------------------------|--------------------------------------------------------------|-------------------------------------------------------------|
| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |
| System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |
* Debug complex issues with context across multiple files.
|
||||
* Refactor large or interconnected codebases.
|
||||
* Plan features or architecture across layers.
|
||||
* Weigh trade-offs between libraries, patterns, or workflows.
|
||||
* Analyze logs, performance data, or system behavior.
|
||||
|
||||
{% endrowheaders %}
|
||||
### When to use a different model
|
||||
|
||||
## GPT-4.5
|
||||
For fast iteration or lightweight tasks, see [Fast help with simple or repetitive tasks](#task-fast-help-with-simple-or-repetitive-tasks).
|
||||
For general development workflows or content generation, see [General-purpose coding and writing](#task-general-purpose-coding-and-writing).
|
||||
|
||||
> [!NOTE]
|
||||
> GPT-4.5 in {% data variables.copilot.copilot_chat_short %} is currently in {% data variables.release-phases.public_preview %} and subject to change.
|
||||
## Task: Working with visuals (diagrams, screenshots)
|
||||
|
||||
OpenAI GPT-4.5 improves reasoning, reliability, and contextual understanding. It works well for development tasks that involve complex logic, high-quality code generation, or interpreting nuanced intent.
|
||||
Use these models when you want to ask questions about screenshots, diagrams, UI components, or other visual input. These models support multimodal input and are well suited for front-end work or visual debugging.
|
||||
|
||||
Compared to {% data variables.copilot.copilot_gpt_41 %}, GPT-4.5 produces more consistent results for multi-step reasoning, long-form content, and complex problem-solving. It may have slightly higher latency and costs than {% data variables.copilot.copilot_gpt_41 %} and other smaller models.
|
||||
| Model | Why it's a good fit |
|
||||
|-------|---------------------|
|
||||
| {% data variables.copilot.copilot_gpt_4o %} | Supports image input. Great for interpreting screenshots or debugging UI issues with visual context. |
|
||||
| {% data variables.copilot.copilot_gemini_flash %} | Fast, multimodal model optimized for real-time interaction. Useful for feedback on diagrams, visual prototypes, and UI layouts. |
|
||||
|
||||
For more information about GPT-4.5, see [OpenAI's documentation](https://platform.openai.com/docs/models/gpt-4.5-preview).
|
||||
### When to use these models
|
||||
|
||||
### Use cases
|
||||
Use one of these models if you want to:
|
||||
|
||||
{% data reusables.copilot.model-use-cases.gpt-45 %}
|
||||
* Ask questions about diagrams, screenshots, or UI components.
|
||||
* Get feedback on visual drafts or workflows.
|
||||
* Understand front-end behavior from visual context.
|
||||
|
||||
### Strengths
|
||||
> [!TIP]
|
||||
> If you're using a model in a context that doesn’t support image input (like a code editor), you won’t see visual reasoning benefits. You may be able to use an MCP server to get access to visual input indirectly. See [AUTOTITLE](/copilot/customizing-copilot/using-model-context-protocol/extending-copilot-chat-with-mcp).
|
||||
|
||||
The following table summarizes the strengths of GPT-4.5:
|
||||
### When to use a different model
|
||||
|
||||
{% rowheaders %}
|
||||
If your task involves deep reasoning or large-scale refactoring, consider a model from [Deep reasoning and debugging](#task-deep-reasoning-and-debugging). For text-only tasks or simpler code edits, see [Fast help with simple or repetitive tasks](#task-fast-help-with-simple-or-repetitive-tasks).
|
||||
|
||||
| Task | Description | Why GPT-4.5 is a good fit |
|
||||
|-------------------------|--------------------------------------------------------------|-----------------------------------------------------------------|
|
||||
| Code documentation | Draft README files, or technical explanations. | Generates clear, context-rich writing with minimal editing. |
|
||||
| Complex code generation | Write full functions, classes, or multi-file logic. | Provides better structure, consistency, and fewer logic errors. |
|
||||
| Bug investigation | Trace errors or walk through multi-step issues. | Maintains state and offers reliable reasoning across steps. |
|
||||
| Decision-making prompts | Weigh pros and cons of libraries, patterns, or architecture. | Provides balanced, contextualized reasoning. |
|
||||
## Next steps
|
||||
|
||||
{% endrowheaders %}
|
||||
Choosing the right model helps you get the most out of {% data variables.product.prodname_copilot_short %}. If you're not sure which model to use, start with a general-purpose option like {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_gpt_4o %}, then adjust based on your needs.
|
||||
|
||||
### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|--------------------------|------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|
| High-speed iteration | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} responds faster with similar quality for lightweight tasks. |
| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_o4_mini %} are more cost-effective. |

{% endrowheaders %}
## o1

OpenAI o1 is an older reasoning model that supports complex, multi-step tasks and deep logical reasoning to find the best solution.

For more information about o1, see [OpenAI's documentation](https://platform.openai.com/docs/models/o1).

### Use cases

{% data reusables.copilot.model-use-cases.o1 %}

### Strengths

The following table summarizes the strengths of o1:

{% rowheaders %}

| Task | Description | Why o1 is a good fit |
|----------------------------|----------------------------------------------------------------------|--------------------------------------------------------------------|
| Code optimization | Analyze and improve performance-critical or algorithmic code. | Excels at deep reasoning and identifying non-obvious improvements. |
| Debugging complex systems | Isolate and fix performance bottlenecks or multi-file issues. | Provides step-by-step analysis and high reasoning accuracy. |
| Structured code generation | Generate reusable functions, typed outputs, or structured responses. | Supports function calling and structured output natively. |
| Analytical summarization | Interpret logs, benchmark results, or code behavior. | Translates raw data into clear, actionable insights. |
| Refactoring code | Improve maintainability and modularity of existing systems. | Applies deliberate and context-aware suggestions. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|---------------------------|------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |

{% endrowheaders %}
## {% data variables.copilot.copilot_o3 %}

{% data reusables.copilot.o3-public-preview-note %}

OpenAI {% data variables.copilot.copilot_o3 %} is the most capable reasoning model in the o-series. It is ideal for deep coding workflows and complex, multi-step tasks.

For more information about {% data variables.copilot.copilot_o3 %}, see [OpenAI's documentation](https://platform.openai.com/docs/models/o3).

### Use cases

{% data reusables.copilot.model-use-cases.o3 %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_o3 %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_o3 %} is a good fit |
|----------------------------|----------------------------------------------------------------------|--------------------------------------------------------------------|
| Code optimization | Analyze and improve performance-critical or algorithmic code. | Excels at deep reasoning and identifying non-obvious improvements. |
| Debugging complex systems | Isolate and fix performance bottlenecks or multi-file issues. | Provides step-by-step analysis and high reasoning accuracy. |
| Structured code generation | Generate reusable functions, typed outputs, or structured responses. | Supports function calling and structured output natively. |
| Analytical summarization | Interpret logs, benchmark results, or code behavior. | Translates raw data into clear, actionable insights. |
| Refactoring code | Improve maintainability and modularity of existing systems. | Applies deliberate and context-aware suggestions. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|---------------------------|------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |

{% endrowheaders %}
## o3-mini

OpenAI o3-mini is a fast, cost-effective reasoning model designed to deliver coding performance while maintaining lower latency and resource usage. o3-mini outperforms o1 on coding benchmarks with response times that are comparable to o1-mini. {% data variables.product.prodname_copilot_short %} is configured to use OpenAI's "medium" reasoning effort.
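For context, the "medium" setting mentioned above corresponds to the `reasoning_effort` parameter in OpenAI's own API. This is illustration only, not something you configure in {% data variables.product.prodname_copilot_short %}; a hypothetical request payload with that setting might look like:

```python
# Illustration only: Copilot manages this setting for you. This sketch shows
# the OpenAI-style request field that "medium" reasoning effort refers to.
payload = {
    "model": "o3-mini",
    "reasoning_effort": "medium",  # accepted values: "low", "medium", "high"
    "messages": [{"role": "user", "content": "Explain this function."}],
}
print(payload["reasoning_effort"])  # → medium
```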
For more information about o3-mini, see [OpenAI's documentation](https://platform.openai.com/docs/models/o3-mini).

### Use cases

{% data reusables.copilot.model-use-cases.o3-mini %}

### Strengths

The following table summarizes the strengths of o3-mini:

{% rowheaders %}

| Task | Description | Why o3-mini is a good fit |
|----------------------------|-------------------------------------------------------------|--------------------------------------------------------------|
| Real-time code suggestions | Write or extend basic functions and utilities. | Responds quickly with accurate, concise suggestions. |
| Code explanation | Understand what a block of code does or walk through logic. | Fast, accurate summaries with clear language. |
| Learn new concepts | Ask questions about programming concepts or patterns. | Offers helpful, accessible explanations with quick feedback. |
| Quick prototyping | Try out small ideas or test simple code logic quickly. | Fast, low-latency responses for iterative feedback. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|-----------------------------|---------------------------------------------------|---------------------------------------------------------------|
| Deep reasoning tasks | Multi-step analysis or architectural decisions. | GPT-4.5 or o1 provide more structured, thorough reasoning. |
| Creative or long-form tasks | Writing docs, refactoring across large codebases. | o3-mini is less expressive and structured than larger models. |
| Complex code generation | Write full functions, classes, or multi-file logic. | Larger models handle complexity and structure more reliably. |

{% endrowheaders %}
## {% data variables.copilot.copilot_o4_mini %}

{% data reusables.copilot.o4-mini-public-preview-note %}

OpenAI {% data variables.copilot.copilot_o4_mini %} is the most efficient model in the o-series. It is a cost-effective reasoning model designed to deliver coding performance while maintaining lower latency and resource usage.

For more information about {% data variables.copilot.copilot_o4_mini %}, see [OpenAI's documentation](https://platform.openai.com/docs/models/o4-mini).

### Use cases

{% data reusables.copilot.model-use-cases.o4-mini %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_o4_mini %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_o4_mini %} is a good fit |
|----------------------------|-------------------------------------------------------------|--------------------------------------------------------------|
| Real-time code suggestions | Write or extend basic functions and utilities. | Responds quickly with accurate, concise suggestions. |
| Code explanation | Understand what a block of code does or walk through logic. | Fast, accurate summaries with clear language. |
| Learn new concepts | Ask questions about programming concepts or patterns. | Offers helpful, accessible explanations with quick feedback. |
| Quick prototyping | Try out small ideas or test simple code logic quickly. | Fast, low-latency responses for iterative feedback. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|-----------------------------|---------------------------------------------------|----------------------------------------------------------------------------------------------------|
| Deep reasoning tasks | Multi-step analysis or architectural decisions. | GPT-4.5 or {% data variables.copilot.copilot_o3 %} provide more structured, thorough reasoning. |
| Creative or long-form tasks | Writing docs, refactoring across large codebases. | {% data variables.copilot.copilot_o4_mini %} is less expressive and structured than larger models. |
| Complex code generation | Write full functions, classes, or multi-file logic. | Larger models handle complexity and structure more reliably. |

{% endrowheaders %}
## {% data variables.copilot.copilot_claude_sonnet_35 %}

{% data variables.copilot.copilot_claude_sonnet_35 %} is a fast and cost-efficient model designed for everyday developer tasks. While it doesn't have the deeper reasoning capabilities of {% data variables.copilot.copilot_claude_sonnet_37 %}, it still performs well on coding tasks that require quick responses, clear summaries, and basic logic.

For more information about {% data variables.copilot.copilot_claude_sonnet_35 %}, see [Anthropic's documentation](https://www.anthropic.com/news/claude-3-5-sonnet).
For more information on using Claude in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-in-github-copilot).

### Use cases

{% data reusables.copilot.model-use-cases.claude-35-sonnet %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_claude_sonnet_35 %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_claude_sonnet_35 %} is a good fit |
|-----------------------------------|-------------------------------------------------------------|----------------------------------------|
| Code explanation | Understand what a block of code does or walk through logic. | Fast and accurate explanations. |
| Code commenting and documentation | Generate or refine comments and documentation. | Writes clear, concise explanations. |
| Quick language questions | Ask syntax, idiom, or feature-specific questions. | Offers fast and accurate explanations. |
| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|------------------------------------|-------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|
| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} handle context and code dependencies more robustly. |
| System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |

{% endrowheaders %}
## {% data variables.copilot.copilot_claude_sonnet_37 %}

{% data variables.copilot.copilot_claude_sonnet_37 %} is a powerful model that excels in development tasks that require structured reasoning across large or complex codebases. Its hybrid approach to reasoning responds quickly when needed, while still supporting slower step-by-step analysis for deeper tasks.

For more information about {% data variables.copilot.copilot_claude_sonnet_37 %}, see [Anthropic's documentation](https://www.anthropic.com/claude/sonnet).
For more information on using Claude in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-in-github-copilot).

### Use cases

{% data reusables.copilot.model-use-cases.claude-37-sonnet %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_claude_sonnet_37 %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_claude_sonnet_37 %} is a good fit |
|------------------------|-----------------------------------------------------------------------------|----------------------------------------------------------------------|
| Multi-file refactoring | Improve structure and maintainability across large codebases. | Handles multi-step logic and retains cross-file context. |
| Architectural planning | Support mixed task complexity, from small queries to strategic work. | Fine-grained "thinking" controls adapt to the scope of each task. |
| Feature development | Build and implement functionality across frontend, backend, and API layers. | Supports tasks with structured reasoning and reliable completions. |
| Algorithm design | Design, test, and optimize complex algorithms. | Balances rapid prototyping with deep analysis when needed. |
| Analytical insights | Combine high-level summaries with deep dives into code behavior. | Hybrid reasoning lets the model shift based on user needs. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|--------------------------|------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} responds faster for lightweight tasks. |
| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. {% data variables.copilot.copilot_claude_sonnet_35 %} is cheaper, simpler, and still advanced enough for similar tasks. |
| Lightweight prototyping | Rapid back-and-forth code iterations with minimal context. | {% data variables.copilot.copilot_claude_sonnet_37 %} may over-engineer or apply unnecessary complexity. |

{% endrowheaders %}
## {% data variables.copilot.copilot_claude_sonnet_40 %}

{% data reusables.copilot.claude-sonnet-40-public-preview-note %}

{% data variables.copilot.copilot_claude_sonnet_40 %} improves on {% data variables.copilot.copilot_claude_sonnet_37 %} with more reliable completions and smarter reasoning under pressure.

For more information about {% data variables.copilot.copilot_claude_sonnet_40 %}, see [Anthropic's documentation](https://www.anthropic.com/claude/).
For more information on using Claude in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-in-github-copilot).

## {% data variables.copilot.copilot_claude_opus %}

{% data reusables.copilot.claude-opus-public-preview-note %}

{% data variables.copilot.copilot_claude_opus %} is Anthropic's most powerful model, strong at strategy, debugging, and multi-layered logic.

For more information about {% data variables.copilot.copilot_claude_opus %}, see [Anthropic's documentation](https://www.anthropic.com/claude/).
For more information on using Claude in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-in-github-copilot).
## {% data variables.copilot.copilot_gemini_flash %}

{% data variables.copilot.copilot_gemini_flash %} is Google's high-speed, multimodal model optimized for real-time, interactive applications that benefit from visual input and agentic reasoning. In {% data variables.copilot.copilot_chat_short %}, {% data variables.copilot.copilot_gemini_flash %} enables fast responses and cross-modal understanding.

For more information about {% data variables.copilot.copilot_gemini_flash %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash).
For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).

### Use cases

{% data reusables.copilot.model-use-cases.gemini-20-flash %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_gemini_flash %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_gemini_flash %} is a good fit |
|-------------------------|---------------------------------------------------------------------|--------------------------------------------------------|
| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |
| Design feedback loops | Get suggestions from sketches, diagrams, or visual drafts. | Supports visual reasoning. |
| Image-based analysis | Ask about a diagram or screenshot (where image input is supported). | Supports visual reasoning. |
| Front-end prototyping | Build and test UIs or workflows involving visual elements. | Supports multimodal reasoning and lightweight context. |
| Bug investigation | Get a quick explanation or suggestion for an error. | Provides fast diagnostic insight. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|------------------------------------|-------------------------------------------------------------------|--------------------------------------------------------------|
| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |

{% endrowheaders %}
## {% data variables.copilot.copilot_gemini_25_pro %}

{% data reusables.copilot.gemini-25-pro-public-preview-note %}

{% data variables.copilot.copilot_gemini_25_pro %} is Google's latest AI model, designed to handle complex tasks with advanced reasoning and coding capabilities. It also works well for heavy research workflows that require long-context understanding and analysis.

For more information about {% data variables.copilot.copilot_gemini_25_pro %}, see [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro).
For more information on using {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot).

### Use cases

{% data reusables.copilot.model-use-cases.gemini-25-pro %}

### Strengths

The following table summarizes the strengths of {% data variables.copilot.copilot_gemini_25_pro %}:

{% rowheaders %}

| Task | Description | Why {% data variables.copilot.copilot_gemini_25_pro %} is a good fit |
|---------------------------|-------------------------------------------------------------------|---------------------------------------------------------------------|
| Complex code generation | Write full functions, classes, or multi-file logic. | Provides better structure, consistency, and fewer logic errors. |
| Debugging complex systems | Isolate and fix performance bottlenecks or multi-file issues. | Provides step-by-step analysis and high reasoning accuracy. |
| Scientific research | Analyze data and generate insights across scientific disciplines. | Supports complex analysis with strong research capabilities. |
| Long-context processing | Analyze extensive documents, datasets, or codebases. | Handles long-context inputs effectively. |

{% endrowheaders %}

### Alternative options

The following table summarizes when an alternative model may be a better choice:

{% rowheaders %}

| Task | Description | Why another model may be better |
|---------------------------|------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |

{% endrowheaders %}
## Further reading

* For detailed model specs and pricing, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/supported-ai-models-in-copilot).
* For more examples of how to use different models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/comparing-ai-models-using-different-tasks).
* To switch between models, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat) or [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion).
* {% data variables.copilot.copilot_gpt_4o %} can recognize the pattern and provide a clear, concise explanation.
* The task doesn't require deep reasoning or complex logic.
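The `grant_editor_access` code under discussion is elided in this excerpt. As a hypothetical stand-in (the real example in the article may differ), a permission-update pattern like the following is the kind of straightforward code {% data variables.copilot.copilot_gpt_4o %} summarizes well:

```python
# Hypothetical stand-in for the elided grant_editor_access example; the
# article's actual code may differ. The pattern: look up a document's editor
# set, creating it if missing, then add the user.
def grant_editor_access(user_id, doc_id, permissions):
    editors = permissions.setdefault(doc_id, set())
    editors.add(user_id)
    return permissions

perms = {}
grant_editor_access("alice", "doc-1", perms)
grant_editor_access("bob", "doc-1", perms)
print(sorted(perms["doc-1"]))  # → ['alice', 'bob']
```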
## {% data variables.copilot.copilot_o3_mini %}

OpenAI {% data variables.copilot.copilot_o3_mini %} is a fast, cost-effective reasoning model designed to deliver coding performance while maintaining lower latency and resource usage. {% data variables.copilot.copilot_o3_mini %} outperforms {% data variables.copilot.copilot_o1 %} on coding benchmarks with response times that are comparable to o1-mini. {% data variables.product.prodname_copilot_short %} is configured to use OpenAI's "medium" reasoning effort.

### Example scenario
---
title: Configuring access to AI models in Copilot
shortTitle: Configure access to AI models
intro: 'Learn how to configure access to AI models in {% data variables.product.prodname_copilot_short %}.'
versions:
  feature: copilot
topics:
  - Copilot
---

Your access to {% data variables.product.prodname_copilot %} models depends on:

* Your {% data variables.product.prodname_copilot_short %} plan.
* The client you're using (for example, {% data variables.product.prodname_dotcom_the_website %}, {% data variables.product.prodname_vscode %}, or JetBrains IDEs).
* Whether your organization or enterprise restricts access to specific models.

For information about the AI models available in {% data variables.product.prodname_copilot_short %}, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/supported-ai-models-in-copilot).

## Setup for individual use

If you have a {% data variables.copilot.copilot_free_short %}, {% data variables.copilot.copilot_pro_short %}, or {% data variables.copilot.copilot_pro_plus_short %} plan, you may need to enable access to certain models before using them.

You can enable access in two ways:

* The first time you use a model with {% data variables.copilot.copilot_chat_short %} in your editor or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Click **Allow** to enable the AI model and update the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

> [!NOTE]
> Some models may not be available depending on your plan. See [AUTOTITLE](/copilot/about-github-copilot/plans-for-github-copilot#models).

## Setup for organization and enterprise use

As an enterprise or organization owner, you can enable or disable access to AI models for everyone who has been assigned a {% data variables.copilot.copilot_enterprise_short %} or {% data variables.copilot.copilot_business_short %} seat through your enterprise or organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization) and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise).
|
||||
@@ -7,13 +7,10 @@ versions:
topics:
  - Copilot
children:
  - /changing-the-ai-model-for-copilot-chat
  - /changing-the-ai-model-for-copilot-code-completion
  - /supported-ai-models-in-copilot
  - /choosing-the-right-ai-model-for-your-task
  - /comparing-ai-models-using-different-tasks
  - /using-claude-in-github-copilot
  - /using-gemini-in-github-copilot
  - /using-openai-gpt-41-in-github-copilot
  - /using-openai-o3-in-github-copilot
  - /using-openai-o4-mini-in-github-copilot
  - /configuring-access-to-ai-models-in-copilot
  - /changing-the-ai-model-for-copilot-chat
  - /changing-the-ai-model-for-copilot-code-completion
---

@@ -0,0 +1,116 @@
---
title: Supported AI models in Copilot
intro: 'Learn about the supported AI models in {% data variables.product.prodname_copilot %}.'
type: reference
versions:
  feature: copilot
topics:
  - Copilot
redirect_from:
  - /copilot/using-github-copilot/using-claude-sonnet-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-claude-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-gemini-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-openai-gpt-41-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-openai-o3-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-openai-o4-mini-in-github-copilot
---

{% data variables.product.prodname_copilot %} supports multiple models, each with different strengths. Some models prioritize speed and cost-efficiency, while others are optimized for accuracy, reasoning, or working with multimodal inputs (like images and code together).

Depending on your {% data variables.product.prodname_copilot_short %} plan and where you're using it—such as {% data variables.product.prodname_dotcom_the_website %} or an IDE—you may have access to different models.

> [!NOTE] Model availability is subject to change. Some models may be replaced or updated over time.

For all AI models, input prompts and output completions run through {% data variables.product.prodname_copilot %}'s content filters for harmful, offensive, or off-topic content, and for public code matching when enabled.

## Supported AI models in {% data variables.product.prodname_copilot_short %}

This table lists the AI models available in {% data variables.product.prodname_copilot_short %}, along with their release status and availability in different modes.

{% rowheaders %}

| Model name | Provider | Release status | Agent mode | Ask mode | Edit mode |
|------------|----------|----------------|------------|----------------------|---------------|
| {% data variables.copilot.copilot_gpt_41 %} | OpenAI | GA | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gpt_45 %} | OpenAI | {% data variables.release-phases.public_preview_caps %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gpt_4o %} | OpenAI | GA | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o1 %} | OpenAI | {% data variables.release-phases.public_preview_caps %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o3 %} | OpenAI | {% data variables.release-phases.public_preview_caps %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o3_mini %} | OpenAI | GA | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o4_mini %} | OpenAI | {% data variables.release-phases.public_preview_caps %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_opus %} | Anthropic | {% data variables.release-phases.public_preview_caps %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_35 %} | Anthropic | GA | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | Anthropic | GA | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | Anthropic | GA | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_40 %} | Anthropic | {% data variables.release-phases.public_preview_caps %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gemini_25_pro %} | Google | {% data variables.release-phases.public_preview_caps %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gemini_flash %} | Google | GA | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |

{% endrowheaders %}

## Supported AI models per client

The following table shows which models are available in each client.

{% rowheaders %}

| Model | {% data variables.product.prodname_dotcom_the_website %} | {% data variables.product.prodname_vscode %} | {% data variables.product.prodname_vs %} | Eclipse | Xcode | JetBrains IDEs |
|---------------------------|------------|---------|----------------|---------|--------|------------|
| {% data variables.copilot.copilot_gpt_41 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gpt_45 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gpt_4o %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o1 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o3 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o3_mini %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_o4_mini %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_opus %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} |
| {% data variables.copilot.copilot_claude_sonnet_35 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_claude_sonnet_40 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} |
| {% data variables.copilot.copilot_gemini_25_pro %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gemini_flash %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |

{% endrowheaders %}

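The availability matrix above lends itself to a simple data-driven lookup. Below is a minimal sketch in Python; the model labels, client keys, and `is_available` helper are illustrative assumptions for this example, not part of any GitHub API:

```python
# Hypothetical encoding of a few rows of the per-client table above.
CLIENTS = {"github.com", "vscode", "visual_studio", "eclipse", "xcode", "jetbrains"}

AVAILABILITY = {
    # model label -> clients where the model is offered
    "GPT-4.1": set(CLIENTS),                      # available in every client
    "Claude Opus 4": {"github.com", "vscode"},    # GitHub.com and VS Code only
    "Claude Sonnet 4": {"github.com", "vscode"},  # GitHub.com and VS Code only
}

def is_available(model: str, client: str) -> bool:
    """Return True if the given model is offered in the given client."""
    return client in AVAILABILITY.get(model, set())
```

For example, `is_available("Claude Opus 4", "jetbrains")` returns `False`, matching the table.
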
## Supported AI models per {% data variables.product.prodname_copilot_short %} plan

The following table shows which AI models are available in each {% data variables.product.prodname_copilot_short %} plan. For more information about the plans, see [AUTOTITLE](/copilot/about-github-copilot/plans-for-github-copilot).

{% data reusables.copilot.available-models-per-plan %}

## Model multipliers

Each model has a premium request multiplier, based on its complexity and resource usage. If you are on a paid {% data variables.product.prodname_copilot_short %} plan, your premium request allowance is deducted according to this multiplier.

For more information about premium requests, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).

{% rowheaders %}

| Model | Multiplier for **paid plans** | Multiplier for **{% data variables.copilot.copilot_free_short %}** |
|-------|-------------------------------|-------------------------------------------------|
| {% data variables.copilot.copilot_gpt_41 %} | 0 | 1 |
| {% data variables.copilot.copilot_gpt_45 %} | 50 | Not applicable |
| {% data variables.copilot.copilot_gpt_4o %} | 0 | 1 |
| {% data variables.copilot.copilot_o1 %} | 10 | Not applicable |
| {% data variables.copilot.copilot_o3 %} | 1 | Not applicable |
| {% data variables.copilot.copilot_o3_mini %} | 0.33 | 1 |
| {% data variables.copilot.copilot_o4_mini %} | 0.33 | Not applicable |
| {% data variables.copilot.copilot_claude_opus %} | 10 | Not applicable |
| {% data variables.copilot.copilot_claude_sonnet_35 %} | 1 | 1 |
| {% data variables.copilot.copilot_claude_sonnet_37 %} | 1 | Not applicable |
| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | 1.25 | Not applicable |
| {% data variables.copilot.copilot_claude_sonnet_40 %} | 1 | Not applicable |
| {% data variables.copilot.copilot_gemini_25_pro %} | 1 | Not applicable |
| {% data variables.copilot.copilot_gemini_flash %} | 0.25 | 1 |

{% endrowheaders %}

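The deduction described above is simple arithmetic: each request is multiplied by the model's multiplier before being subtracted from the monthly premium allowance. The following sketch makes that concrete; the dictionary and function are illustrative assumptions, not a GitHub API, and only mirror a few rows of the paid-plan column:

```python
# Multipliers for paid plans, mirroring a few rows of the table above.
PAID_MULTIPLIERS = {
    "GPT-4.5": 50,
    "o1": 10,
    "o3-mini": 0.33,
    "Claude 3.7 Sonnet Thinking": 1.25,
    "Gemini 2.0 Flash": 0.25,
    "GPT-4.1": 0,  # 0x models consume no premium allowance on paid plans
}

def premium_requests_consumed(requests_by_model):
    """Total premium requests deducted for a batch of requests.

    `requests_by_model` maps a model name to a request count. Models not
    listed here default to a multiplier of 1.
    """
    return sum(
        count * PAID_MULTIPLIERS.get(model, 1)
        for model, count in requests_by_model.items()
    )
```

For example, two GPT-4.5 requests and four Gemini 2.0 Flash requests consume `2 * 50 + 4 * 0.25 = 101` premium requests on a paid plan.
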
## Next steps

* For task-based guidance on selecting a model, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
* To configure which models are available to you, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/configuring-access-to-ai-models-in-copilot).
* To learn how to change your current model, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat) or [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion).
* To learn more about Responsible Use and Responsible AI, see [{% data variables.product.prodname_copilot_short %} Trust Center](https://copilot.github.trust.page/) and [AUTOTITLE](/copilot/responsible-use-of-github-copilot-features).

@@ -1,88 +0,0 @@
---
title: Using Claude in Copilot Chat
allowTitleToDifferFromFilename: true
shortTitle: 'Use {% data variables.copilot.copilot_claude %}'
intro: 'Learn how to enable {% data variables.copilot.copilot_claude %} in {% data variables.copilot.copilot_chat %} for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
redirect_from:
  - /copilot/using-github-copilot/using-claude-sonnet-in-github-copilot
  - /copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot
---

## About {% data variables.copilot.copilot_claude %} in {% data variables.copilot.copilot_chat %}

{% data reusables.copilot.claude-public-preview-note %}

{% data variables.copilot.copilot_claude %} is a family of large language models that you can use as an alternative to the default model used by {% data variables.copilot.copilot_chat_short %}. {% data variables.copilot.copilot_claude %} excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about [Claude's capabilities](https://www.anthropic.com/claude).

* {% data variables.copilot.copilot_claude_opus %} is available in:

  * {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
  * Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

* {% data variables.copilot.copilot_claude_sonnet_40 %} is available in:

  * {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
  * Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

* {% data variables.copilot.copilot_claude_sonnet_35 %} and {% data variables.copilot.copilot_claude_sonnet_37 %} are available in:

  * {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
  * {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vs %} 2022
    * **3.5**: Version 17.12 or later
    * **3.7**: Version 17.13 or later
  * {% data variables.copilot.copilot_chat_short %} in Xcode
  * {% data variables.copilot.copilot_chat_short %} in Eclipse
  * {% data variables.copilot.copilot_chat_short %} in JetBrains
  * Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

{% data variables.copilot.copilot_claude_opus %} and {% data variables.copilot.copilot_claude_sonnet_40 %} are hosted by Anthropic PBC and Google Cloud Platform. {% data variables.copilot.copilot_claude_sonnet_37 %} is hosted by Amazon Web Services, Anthropic PBC, and Google Cloud Platform. {% data variables.copilot.copilot_claude_sonnet_35 %} is hosted exclusively by Amazon Web Services. {% data variables.product.github %} has provider agreements in place to ensure data is not used for training. Additional details for each provider are included below:

* Amazon Bedrock: Amazon makes the [following data commitments](https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html): _Amazon Bedrock doesn't store or log your prompts and completions. Amazon Bedrock doesn't use your prompts and completions to train any AWS models and doesn't distribute them to third parties_.
* Anthropic PBC: {% data variables.product.github %} maintains a [zero data retention agreement](https://privacy.anthropic.com/en/articles/8956058-i-have-a-zero-retention-agreement-with-anthropic-what-products-does-it-apply-to) with Anthropic.
* Google Cloud: [Google commits to not training on {% data variables.product.github %} data as part of their service terms](https://cloud.google.com/vertex-ai/generative-ai/docs/data-governance). {% data variables.product.github %} is additionally not subject to prompt logging for abuse monitoring.

In order to provide better service quality and reduce latency, {% data variables.product.github %} uses [prompt caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching). You can read more about prompt caching on [Anthropic PBC](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching), [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html), and [Google Cloud](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/claude-prompt-caching).

When using {% data variables.copilot.copilot_claude %}, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.

## Configuring access

You must enable access to each {% data variables.copilot.copilot_claude %} model individually before you can use it.

{% ifversion fpt %}

### Setup for individual use

> [!NOTE]
> * {% data variables.copilot.copilot_claude_opus %} is not currently available for {% data variables.copilot.copilot_free_short %} and {% data variables.copilot.copilot_pro_short %}.
> * {% data variables.copilot.copilot_claude_sonnet_40 %} and {% data variables.copilot.copilot_claude_sonnet_37 %} are not currently available for {% data variables.copilot.copilot_free_short %}.

If you have a {% data variables.copilot.copilot_free_short %} or {% data variables.copilot.copilot_pro_short %} subscription, you can enable {% data variables.copilot.copilot_claude %} in two ways:

* The first time you choose to use {% data variables.copilot.copilot_claude %} models with {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Clicking **Allow** enables you to use {% data variables.copilot.copilot_claude %} and updates the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

{% endif %}

### Setup for organization {% ifversion ghec %}and enterprise{% endif %} use

> [!NOTE]
> {% data variables.copilot.copilot_claude_opus %} is not currently available for {% data variables.copilot.copilot_business_short %}.

As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can enable or disable {% data variables.copilot.copilot_claude %} models for everyone who has been assigned a {% ifversion ghec %}{% data variables.copilot.copilot_enterprise_short %} or {% endif %}{% data variables.copilot.copilot_business_short %} seat through your {% ifversion ghec %}enterprise or {% endif %}organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise){% endif %}.

## Using {% data variables.copilot.copilot_claude %}

For details on how to change the model that {% data variables.copilot.copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).

## Leaving feedback

To leave feedback about {% data variables.copilot.copilot_claude %} in {% data variables.product.prodname_copilot_short %}, or to ask a question, see the {% data variables.product.prodname_github_community %} discussion [{% data variables.copilot.copilot_claude_sonnet_35 %} is now available to all {% data variables.product.prodname_copilot_short %} users in Public Preview](https://github.com/orgs/community/discussions/143337).

@@ -1,54 +0,0 @@
---
title: Using Gemini in Copilot Chat
allowTitleToDifferFromFilename: true
shortTitle: 'Use {% data variables.copilot.copilot_gemini %}'
intro: 'Learn how to enable {% data variables.copilot.copilot_gemini %} in {% data variables.copilot.copilot_chat %}, for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
redirect_from:
  - /copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot
---

## About {% data variables.copilot.copilot_gemini %} in {% data variables.product.prodname_copilot %}

{% data variables.copilot.copilot_gemini %} models are large language models (LLMs) that you can use as an alternative to the default model used by {% data variables.copilot.copilot_chat_short %}. {% data variables.copilot.copilot_gemini %} models are responsive LLMs that can empower you to build apps faster and more easily, so you can focus on great experiences for your users. {% data reusables.copilot.gemini-model-info %}

{% data variables.copilot.copilot_gemini %} is currently available in:

* {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
* Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}
* {% data variables.copilot.copilot_chat_short %} in JetBrains IDEs

{% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_gemini_flash %} and {% data variables.copilot.copilot_gemini_25_pro %} hosted on Google Cloud Platform (GCP). When using {% data variables.copilot.copilot_gemini %} models, prompts and metadata are sent to GCP, which makes the [following data commitment](https://cloud.google.com/gemini/docs/discover/data-governance): _{% data variables.copilot.copilot_gemini %} doesn't use your prompts, or its responses, as data to train its models._

When using {% data variables.copilot.copilot_gemini %} models, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.

## Configuring access

You must enable access to {% data variables.copilot.copilot_gemini_flash %} and {% data variables.copilot.copilot_gemini_25_pro %} before you can use these models.

{% ifversion fpt %}

### Setup for individual use

> [!NOTE] {% data variables.copilot.copilot_gemini_25_pro %} is not currently available for {% data variables.copilot.copilot_free_short %}.

If you have a {% data variables.copilot.copilot_free_short %}, {% data variables.copilot.copilot_pro_short %}, or {% data variables.copilot.copilot_pro_plus_short %} subscription, you can enable the {% data variables.copilot.copilot_gemini %} models available to your plan in two ways:

* The first time you choose to use {% data variables.copilot.copilot_gemini %} models with {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Clicking **Allow** enables you to use {% data variables.copilot.copilot_gemini %} and updates the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

{% endif %}

### Setup for organization {% ifversion ghec %}and enterprise{% endif %} use

As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can enable or disable both {% data variables.copilot.copilot_gemini_flash %} and {% data variables.copilot.copilot_gemini_25_pro %} for everyone who has been assigned a {% ifversion ghec %}{% data variables.copilot.copilot_enterprise_short %} or {% endif %}{% data variables.copilot.copilot_business_short %} seat through your {% ifversion ghec %}enterprise or {% endif %}organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise#copilot-access-to-alternative-ai-models){% endif %}.

## Using {% data variables.copilot.copilot_gemini %}

For details on how to change the model that {% data variables.copilot.copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).

@@ -1,49 +0,0 @@
---
title: Using OpenAI GPT-4.1 in Copilot Chat
allowTitleToDifferFromFilename: true
shortTitle: 'Use OpenAI {% data variables.copilot.copilot_gpt_41 %}'
intro: 'Learn how to enable OpenAI {% data variables.copilot.copilot_gpt_41 %} in {% data variables.copilot.copilot_chat %}, for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
---

## About OpenAI {% data variables.copilot.copilot_gpt_41 %} in {% data variables.copilot.copilot_chat %}

OpenAI has a family of large language models that you can use as an alternative to the default model used by {% data variables.copilot.copilot_chat_short %}. {% data variables.copilot.copilot_gpt_41 %} is one of those models and excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. For information about the capabilities of {% data variables.copilot.copilot_gpt_41 %}, see the [OpenAI documentation](https://platform.openai.com/docs/models).

{% data variables.copilot.copilot_gpt_41 %} is currently available in:

* {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
* Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

{% data variables.copilot.copilot_gpt_41 %} is hosted by GitHub's Azure tenant when used in {% data variables.product.prodname_copilot %}.

When using {% data variables.copilot.copilot_gpt_41 %}, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.

## Configuring access

You must enable access to OpenAI {% data variables.copilot.copilot_gpt_41 %} individually before you can use the model.

{% ifversion fpt %}

### Setup for individual use

If you have a {% data variables.copilot.copilot_free_short %}, {% data variables.copilot.copilot_pro_short %}, or {% data variables.copilot.copilot_pro_plus_short %} subscription, you can enable OpenAI {% data variables.copilot.copilot_gpt_41 %} in two ways:

* The first time you choose to use {% data variables.copilot.copilot_gpt_41 %} with {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Clicking **Allow** enables you to use {% data variables.copilot.copilot_gpt_41 %} and updates the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

{% endif %}

### Setup for organization {% ifversion ghec %}and enterprise{% endif %} use

As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can enable or disable {% data variables.copilot.copilot_gpt_41 %} for everyone who has been assigned a {% ifversion ghec %}{% data variables.copilot.copilot_enterprise_short %} or {% endif %}{% data variables.copilot.copilot_business_short %} seat through your {% ifversion ghec %}enterprise or {% endif %}organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise){% endif %}.

## Using {% data variables.copilot.copilot_gpt_41 %}

For details of how to change the model that {% data variables.copilot.copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
@@ -1,55 +0,0 @@
---
title: Using OpenAI o3 in Copilot Chat
allowTitleToDifferFromFilename: true
shortTitle: 'Use OpenAI {% data variables.copilot.copilot_o3 %}'
intro: 'Learn how to enable OpenAI {% data variables.copilot.copilot_o3 %} in {% data variables.copilot.copilot_chat %}, for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
---

## About OpenAI {% data variables.copilot.copilot_o3 %} in {% data variables.copilot.copilot_chat %}

{% data reusables.copilot.o3-public-preview-note %}

OpenAI has a family of large language models that you can use as an alternative to the default model used by {% data variables.copilot.copilot_chat_short %}. {% data variables.copilot.copilot_o3 %} is one of those models and excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. For information about the capabilities of {% data variables.copilot.copilot_o3 %}, see the [OpenAI documentation](https://platform.openai.com/docs/models).

{% data variables.copilot.copilot_o3 %} is currently available in:

* {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
* Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

{% data variables.copilot.copilot_o3 %} is hosted by OpenAI and GitHub's Azure tenant when used in {% data variables.product.prodname_copilot %}. OpenAI makes the [following data commitment](https://openai.com/enterprise-privacy/): _We [OpenAI] do not train our models on your business data by default_. GitHub maintains a [zero data retention agreement](https://platform.openai.com/docs/guides/your-data) with OpenAI.

When using {% data variables.copilot.copilot_o3 %}, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.

## Configuring access

You must enable access to OpenAI {% data variables.copilot.copilot_o3 %} individually before you can use the model.

{% ifversion fpt %}

### Setup for individual use

If you have a {% data variables.copilot.copilot_pro_plus_short %} subscription, you can enable OpenAI {% data variables.copilot.copilot_o3 %} in two ways:

* The first time you choose to use {% data variables.copilot.copilot_o3 %} with {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Clicking **Allow** enables you to use {% data variables.copilot.copilot_o3 %} and updates the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

{% endif %}

{% ifversion ghec %}

### Setup for enterprise use

As an enterprise owner, you can enable or disable {% data variables.copilot.copilot_o3 %} for everyone who has been assigned a {% data variables.copilot.copilot_enterprise_short %} seat through your enterprise. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise).

{% endif %}

## Using {% data variables.copilot.copilot_o3 %}

For details of how to change the model that {% data variables.copilot.copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
@@ -1,51 +0,0 @@
---
title: Using OpenAI o4-mini in Copilot Chat
allowTitleToDifferFromFilename: true
shortTitle: 'Use OpenAI {% data variables.copilot.copilot_o4_mini %}'
intro: 'Learn how to enable OpenAI {% data variables.copilot.copilot_o4_mini %} in {% data variables.copilot.copilot_chat %}, for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
---

## About OpenAI {% data variables.copilot.copilot_o4_mini %} in {% data variables.copilot.copilot_chat %}

{% data reusables.copilot.o4-mini-public-preview-note %}

OpenAI has a family of large language models that you can use as an alternative to the default model used by {% data variables.copilot.copilot_chat_short %}. {% data variables.copilot.copilot_o4_mini %} is one of those models and excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. For information about the capabilities of {% data variables.copilot.copilot_o4_mini %}, see the [OpenAI documentation](https://platform.openai.com/docs/models).

{% data variables.copilot.copilot_o4_mini %} is currently available in:

* {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}
* Immersive mode in {% data variables.copilot.copilot_chat_short %} in {% data variables.product.github %}

{% data variables.copilot.copilot_o4_mini %} is hosted by OpenAI and GitHub's Azure tenant when used in {% data variables.product.prodname_copilot %}. OpenAI makes the [following data commitment](https://openai.com/enterprise-privacy/): _We [OpenAI] do not train our models on your business data by default_. GitHub maintains a [zero data retention agreement](https://platform.openai.com/docs/guides/your-data) with OpenAI.

When using {% data variables.copilot.copilot_o4_mini %}, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.

## Configuring access

You must enable access to OpenAI {% data variables.copilot.copilot_o4_mini %} individually before you can use the model.

{% ifversion fpt %}

### Setup for individual use

If you have a {% data variables.copilot.copilot_pro_short %} or {% data variables.copilot.copilot_pro_plus_short %} subscription, you can enable OpenAI {% data variables.copilot.copilot_o4_mini %} in two ways:

* The first time you choose to use {% data variables.copilot.copilot_o4_mini %} with {% data variables.copilot.copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.copilot.copilot_chat_short %}, you will be prompted to allow access to the model.

  Clicking **Allow** enables you to use {% data variables.copilot.copilot_o4_mini %} and updates the policy in your personal settings on {% data variables.product.github %}.

* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).

{% endif %}

### Setup for organization {% ifversion ghec %}and enterprise{% endif %} use

As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can enable or disable {% data variables.copilot.copilot_o4_mini %} for everyone who has been assigned a {% ifversion ghec %}{% data variables.copilot.copilot_enterprise_short %} or {% endif %}{% data variables.copilot.copilot_business_short %} seat through your {% ifversion ghec %}enterprise or {% endif %}organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise){% endif %}.

## Using {% data variables.copilot.copilot_o4_mini %}

For details of how to change the model that {% data variables.copilot.copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).