Mirror, synced 2025-12-19 18:10:59 -05:00

GPT-4.1 [GA] (#55541)

Commit 00a637aa8a (parent bc4821717f)
Author: Sunbrye Ly, 2025-05-08 07:59:13 -07:00, committed by GitHub
15 changed files with 86 additions and 88 deletions


@@ -22,7 +22,7 @@ redirect_from:
 {% data reusables.rai.code-scanning.copilot-autofix-note %}
-{% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model GPT-4o from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
+{% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model {% data variables.copilot.copilot_gpt_4o %} from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
 {% data variables.product.prodname_copilot_autofix_short %} is allowed by default and enabled for every repository using {% data variables.product.prodname_codeql %}, but you can choose to opt out and disable {% data variables.product.prodname_copilot_autofix_short %}. To learn how to disable {% data variables.product.prodname_copilot_autofix_short %} at the enterprise, organization and repository levels, see [AUTOTITLE](/code-security/code-scanning/managing-code-scanning-alerts/disabling-autofix-for-code-scanning).


@@ -43,17 +43,18 @@ Each model has a premium request multiplier, based on its complexity and resourc
 | Model | Premium requests |
 |-------------------------------------------------------------------------|------------------------------------------------------------------------------|
-| Base model (currently {% data variables.copilot.copilot_gpt_4o %}) [^1] | 0 (paid users), 1 ({% data variables.product.prodname_copilot_free_short %}) |
+| Base model (currently {% data variables.copilot.copilot_gpt_41 %}) [^1] | 0 (paid users), 1 ({% data variables.product.prodname_copilot_free_short %}) |
+| {% data variables.copilot.copilot_gpt_4o %} | 1 |
+| {% data variables.copilot.copilot_gpt_45 %} | 50 |
 | {% data variables.copilot.copilot_claude_sonnet_35 %} | 1 |
 | {% data variables.copilot.copilot_claude_sonnet_37 %} | 1 |
 | {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | 1.25 |
 | {% data variables.copilot.copilot_gemini_flash %} | 0.25 |
 | {% data variables.copilot.copilot_gemini_25_pro %} | 1 |
-| GPT-4.5 | 50 |
 | {% data variables.copilot.copilot_o1 %} | 10 |
 | {% data variables.copilot.copilot_o3_mini %} | 0.33 |
-[^1]: The base model at the time of writing is {% data variables.copilot.copilot_gpt_4o %}. This is subject to change. Response times for the base model may vary during periods of high usage. Requests to the base model may be subject to rate limiting.
+[^1]: The base model at the time of writing is powered by {% data variables.copilot.copilot_gpt_41 %}. This is subject to change. Response times for the base model may vary during periods of high usage. Requests to the base model may be subject to rate limiting.
 ## Additional premium requests
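The multiplier table above implies a simple accounting rule: each request consumes `multiplier × 1` premium requests from the monthly allowance. A minimal sketch of that arithmetic, where the `MULTIPLIERS` mapping and the `premium_requests_used` helper are illustrative (the model keys are informal names, not any GitHub API identifiers), assuming a paid plan where the base model costs 0:

```python
# Illustrative premium-request accounting based on the multiplier table above.
# MULTIPLIERS and premium_requests_used are hypothetical helpers, not a GitHub API.
MULTIPLIERS = {
    "base": 0.0,        # 0 for paid users; 1 on Copilot Free
    "gpt-4o": 1.0,
    "gpt-4.5": 50.0,
    "claude-sonnet-3.5": 1.0,
    "claude-sonnet-3.7": 1.0,
    "claude-sonnet-3.7-thinking": 1.25,
    "gemini-flash": 0.25,
    "gemini-2.5-pro": 1.0,
    "o1": 10.0,
    "o3-mini": 0.33,
}

def premium_requests_used(requests_by_model):
    """Total premium requests consumed for a mapping of model -> request count."""
    return sum(MULTIPLIERS[model] * count for model, count in requests_by_model.items())

# Example: 10 base-model requests (free on a paid plan), 4 o1 requests, 3 o3-mini requests.
usage = {"base": 10, "o1": 4, "o3-mini": 3}
print(round(premium_requests_used(usage), 2))  # 40.99
```

The per-request multiplier, not the raw request count, is what depletes the allowance: four o1 requests alone cost as much as forty GPT-4o requests.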


@@ -147,7 +147,7 @@ To use multi-model {% data variables.product.prodname_copilot_chat_short %}, you
 The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
 * {% data variables.copilot.copilot_gpt_4o %}
-* {% data variables.copilot.copilot_gpt_41 %} (preview)
+* {% data variables.copilot.copilot_gpt_41 %}
 * {% data variables.copilot.copilot_gpt_45 %} (preview)
 * {% data variables.copilot.copilot_claude_sonnet_35 %}
 * {% data variables.copilot.copilot_claude_sonnet_37 %}
@@ -221,7 +221,7 @@ These instructions are for the Eclipse IDE. For instructions on different client
 The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
 * {% data variables.copilot.copilot_gpt_4o %}
-* {% data variables.copilot.copilot_gpt_41 %} (preview)
+* {% data variables.copilot.copilot_gpt_41 %}
 * {% data variables.copilot.copilot_gpt_45 %} (preview)
 * {% data variables.copilot.copilot_claude_sonnet_35 %}
 * {% data variables.copilot.copilot_claude_sonnet_37 %}


@@ -10,7 +10,7 @@ topics:
 ## Overview
-By default, {% data variables.product.prodname_copilot_short %} code completion uses the GPT-4o {% data variables.product.prodname_copilot_short %}, a fine-tuned GPT-4o mini based large language model (LLM). This model has been trained on a wide range of high quality public {% data variables.product.github %} repositories, providing coverage of over 30 programming languages. Its knowledge base is more current than the default model and you may find that it generates completion suggestions more quickly.
+By default, {% data variables.product.prodname_copilot_short %} code completion uses the {% data variables.copilot.copilot_gpt_4o %} {% data variables.product.prodname_copilot_short %}, a fine-tuned GPT-4o mini based large language model (LLM). This model has been trained on a wide range of high quality public {% data variables.product.github %} repositories, providing coverage of over 30 programming languages. Its knowledge base is more current than the default model and you may find that it generates completion suggestions more quickly.
 <details>
 <summary>View the list of programming languages and technologies included in the training data.</summary>


@@ -19,11 +19,11 @@ The best model depends on your use case:
 * For **balance between cost and performance**, try {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_claude_sonnet_37 %}.
 * For **fast, low-cost support for basic tasks**, try {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_claude_sonnet_35 %}.
 * For **deep reasoning or complex coding challenges**, try {% data variables.copilot.copilot_o3 %}, GPT-4.5, or {% data variables.copilot.copilot_claude_sonnet_37 %}.
-* For **multimodal inputs and real-time performance**, try {% data variables.copilot.copilot_gemini_flash %} or {% data variables.copilot.copilot_gpt_4o %}.
+* For **multimodal inputs and real-time performance**, try {% data variables.copilot.copilot_gemini_flash %} or {% data variables.copilot.copilot_gpt_41 %}.
 You can click a model name in the list below to jump to a detailed overview of its strengths and use cases.
-* [{% data variables.copilot.copilot_gpt_4o %}](#gpt-4o)
 * [{% data variables.copilot.copilot_gpt_41 %}](#gpt-41)
+* [{% data variables.copilot.copilot_gpt_4o %}](#gpt-4o)
 * [{% data variables.copilot.copilot_gpt_45 %}](#gpt-45)
 * [{% data variables.copilot.copilot_o1 %}](#o1)
 * [{% data variables.copilot.copilot_o3 %}](#o3)
@@ -35,54 +35,9 @@ You can click a model name in the list below to jump to a detailed overview of i
 * [{% data variables.copilot.copilot_gemini_25_pro %}](#gemini-25-pro)
 > [!NOTE] Different models have different premium request multipliers, which can affect how much of your monthly usage allowance is consumed. For details, see [AUTOTITLE](/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests).
-## GPT-4o
-OpenAI GPT-4o is a multimodal model that supports text and images. It responds in real time and works well for lightweight development tasks and conversational prompts in {% data variables.product.prodname_copilot_chat_short %}.
-Compared to previous models, GPT-4o improves performance in multilingual contexts and demonstrates stronger capabilities when interpreting visual content. It delivers GPT-4 Turbo-level performance with lower latency and cost, making it a good default choice for many common developer tasks.
-For more information about GPT-4o, see [OpenAI's documentation](https://platform.openai.com/docs/models/gpt-4o).
-### Use cases
-{% data reusables.copilot.model-use-cases.gpt-4o %}
-### Strengths
-The following table summarizes the strengths of GPT-4o:
-{% rowheaders %}
-| Task | Description | Why GPT-4o is a good fit |
-|-----------------------------------|---------------------------------------------------------------------|----------------------------------------|
-| Code explanation | Understand what a block of code does or walk through logic. | Fast and accurate explanations. |
-| Code commenting and documentation | Generate or refine comments and documentation. | Writes clear, concise explanations. |
-| Bug investigation | Get a quick explanation or suggestion for an error. | Provides fast diagnostic insight. |
-| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |
-| Multilingual prompts | Work with non-English prompts or identifiers. | Improved multilingual comprehension. |
-| Image-based questions | Ask about a diagram or screenshot (where image input is supported). | Supports visual reasoning. |
-{% endrowheaders %}
-### Alternative options
-The following table summarizes when an alternative model may be a better choice:
-{% rowheaders %}
-| Task | Description | Why another model may be better |
-|------------------------------------|--------------------------------------------------------------|-------------------------------------------------------------|
-| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
-| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |
-| System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |
-{% endrowheaders %}
 ## {% data variables.copilot.copilot_gpt_41 %}
-{% data reusables.copilot.gpt-41-public-preview-note %}
-OpenAI's latest model, {% data variables.copilot.copilot_gpt_41 %}, is now available in {% data variables.product.prodname_copilot %} and {% data variables.product.prodname_github_models %}, bringing OpenAI's newest model to your coding workflow. This model outperforms GPT-4o across the board, with major gains in coding, instruction following, and long-context understanding. It has a larger context window and features a refreshed knowledge cutoff of June 2024.
+OpenAI's latest model, {% data variables.copilot.copilot_gpt_41 %}, is now available in {% data variables.product.prodname_copilot %} and {% data variables.product.prodname_github_models %}, bringing OpenAI's newest model to your coding workflow. This model outperforms {% data variables.copilot.copilot_gpt_4o %} across the board, with major gains in coding, instruction following, and long-context understanding. It has a larger context window and features a refreshed knowledge cutoff of June 2024.
 OpenAI has optimized {% data variables.copilot.copilot_gpt_41 %} for real-world use based on direct developer feedback about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more. This model is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning.
@@ -114,11 +69,54 @@ The following table summarizes the strengths of {% data variables.copilot.copilo
 | Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |
 | System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |
+## {% data variables.copilot.copilot_gpt_4o %}
+OpenAI {% data variables.copilot.copilot_gpt_4o %} is a multimodal model that supports text and images. It responds in real time and works well for lightweight development tasks and conversational prompts in {% data variables.product.prodname_copilot_chat_short %}.
+Compared to previous models, {% data variables.copilot.copilot_gpt_4o %} improves performance in multilingual contexts and demonstrates stronger capabilities when interpreting visual content. It delivers GPT-4 Turbo-level performance with lower latency and cost, making it a good default choice for many common developer tasks.
+For more information about {% data variables.copilot.copilot_gpt_4o %}, see [OpenAI's documentation](https://platform.openai.com/docs/models/gpt-4o).
+### Use cases
+{% data reusables.copilot.model-use-cases.gpt-4o %}
+### Strengths
+The following table summarizes the strengths of {% data variables.copilot.copilot_gpt_4o %}:
+{% rowheaders %}
+| Task | Description | Why {% data variables.copilot.copilot_gpt_4o %} is a good fit |
+|-----------------------------------|---------------------------------------------------------------------|---------------------------------------------------------------|
+| Code explanation | Understand what a block of code does or walk through logic. | Fast and accurate explanations. |
+| Code commenting and documentation | Generate or refine comments and documentation. | Writes clear, concise explanations. |
+| Bug investigation | Get a quick explanation or suggestion for an error. | Provides fast diagnostic insight. |
+| Code snippet generation | Generate small, reusable pieces of code. | Delivers high-quality results quickly. |
+| Multilingual prompts | Work with non-English prompts or identifiers. | Improved multilingual comprehension. |
+| Image-based questions | Ask about a diagram or screenshot (where image input is supported). | Supports visual reasoning. |
+{% endrowheaders %}
+### Alternative options
+The following table summarizes when an alternative model may be a better choice:
+{% rowheaders %}
+| Task | Description | Why another model may be better |
+|------------------------------------|--------------------------------------------------------------|-------------------------------------------------------------|
+| Multi-step reasoning or algorithms | Design complex logic or break down multi-step problems. | GPT-4.5 or {% data variables.copilot.copilot_claude_sonnet_37 %} provide better step-by-step thinking. |
+| Complex refactoring | Refactor large codebases or update multiple interdependent files. | GPT-4.5 handles context and code dependencies more robustly. |
+| System review or architecture | Analyze structure, patterns, or architectural decisions in depth. | {% data variables.copilot.copilot_claude_sonnet_37 %} or GPT-4.5 offer deeper analysis. |
+{% endrowheaders %}
 ## GPT-4.5
 OpenAI GPT-4.5 improves reasoning, reliability, and contextual understanding. It works well for development tasks that involve complex logic, high-quality code generation, or interpreting nuanced intent.
-Compared to GPT-4o, GPT-4.5 produces more consistent results for multi-step reasoning, long-form content, and complex problem-solving. It may have slightly higher latency and costs than GPT-4o and other smaller models.
+Compared to {% data variables.copilot.copilot_gpt_41 %}, GPT-4.5 produces more consistent results for multi-step reasoning, long-form content, and complex problem-solving. It may have slightly higher latency and costs than {% data variables.copilot.copilot_gpt_41 %} and other smaller models.
 For more information about GPT-4.5, see [OpenAI's documentation](https://platform.openai.com/docs/models/gpt-4.5-preview).
@@ -147,10 +145,10 @@ The following table summarizes when an alternative model may be a better choice:
 {% rowheaders %}
 | Task | Description | Why another model may be better |
 |--------------------------|------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|
-| High-speed iteration | Rapid back-and-forth prompts or code tweaks. | GPT-4o responds faster with similar quality for lightweight tasks. |
-| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | GPT-4o or {% data variables.copilot.copilot_o4_mini %} are more cost-effective. |
+| High-speed iteration | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} responds faster with similar quality for lightweight tasks. |
+| Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_o4_mini %} are more cost-effective. |
 {% endrowheaders %}
@@ -186,10 +184,10 @@ The following table summarizes when an alternative model may be a better choice:
 {% rowheaders %}
 | Task | Description | Why another model may be better |
 |---------------------------|----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Quick iterations | Rapid back-and-forth prompts or code tweaks. | GPT-4o or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
+| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
 | Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |
 {% endrowheaders %}
@@ -226,9 +224,9 @@ The following table summarizes when an alternative model may be a better choice:
 {% rowheaders %}
 | Task | Description | Why another model may be better |
 |---------------------------|----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
-| Quick iterations | Rapid back-and-forth prompts or code tweaks. | GPT-4o or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
+| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} or {% data variables.copilot.copilot_gemini_flash %} responds faster for lightweight tasks. |
 | Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. |
 {% endrowheaders %}
@@ -386,11 +384,11 @@ The following table summarizes when an alternative model may be a better choice:
 {% rowheaders %}
 | Task | Description | Why another model may be better |
 |--------------------------|----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Quick iterations | Rapid back-and-forth prompts or code tweaks. | GPT-4o responds faster for lightweight tasks. |
+| Quick iterations | Rapid back-and-forth prompts or code tweaks. | {% data variables.copilot.copilot_gpt_41 %} responds faster for lightweight tasks. |
 | Cost-sensitive scenarios | Tasks where performance-to-cost ratio matters. | {% data variables.copilot.copilot_o4_mini %} or {% data variables.copilot.copilot_gemini_flash %} are more cost-effective for basic use cases. {% data variables.copilot.copilot_claude_sonnet_35 %} is cheaper, simpler, and still advanced enough for similar tasks. |
 | Lightweight prototyping | Rapid back-and-forth code iterations with minimal context. | {% data variables.copilot.copilot_claude_sonnet_37 %} may over-engineer or apply unnecessary complexity. |
 {% endrowheaders %}


@@ -18,7 +18,7 @@ These examples show how models vary in their reasoning style, response depth, an
 For a full list of supported models and side-by-side feature comparisons, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task).
-## GPT-4o
+## {% data variables.copilot.copilot_gpt_4o %}
 {% data reusables.copilot.model-use-cases.gpt-4o %}
@@ -68,10 +68,10 @@ def grant_editor_access(user_id, doc_id):
 )
 ```
-### Why GPT-4o is a good fit
+### Why {% data variables.copilot.copilot_gpt_4o %} is a good fit
 * The function is short and self-contained, making it ideal for quick docstring generation.
-* GPT-4o can recognize the pattern and provide a clear, concise explanation.
+* {% data variables.copilot.copilot_gpt_4o %} can recognize the pattern and provide a clear, concise explanation.
 * The task doesn't require deep reasoning or complex logic.
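The hunk above only shows the tail of the page's `grant_editor_access(user_id, doc_id)` example. For context, a hypothetical sketch of that kind of short, self-contained helper, carrying the sort of concise docstring the bullets describe a model generating (the body and the in-memory `permissions` store are illustrative, not taken from the original page):

```python
# In-memory permissions store; an illustrative stand-in for whatever
# backend the original page's example uses.
permissions: dict[str, dict[str, str]] = {}

def grant_editor_access(user_id: str, doc_id: str) -> None:
    """Grant `user_id` the "editor" role on document `doc_id`.

    Creates the document's permission map on first use; idempotent
    for repeated grants to the same user.
    """
    permissions.setdefault(doc_id, {})[user_id] = "editor"

grant_editor_access("u1", "doc42")
print(permissions["doc42"]["u1"])  # editor
```

A function this small gives the model everything it needs in one screenful, which is exactly why the page cites it as a good docstring-generation case.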
 ## o3-mini


@@ -11,8 +11,6 @@ topics:
 ## About OpenAI {% data variables.copilot.copilot_gpt_41 %} in {% data variables.product.prodname_copilot_chat %}
-{% data reusables.copilot.gpt-41-public-preview-note %}
 OpenAI has a family of large language models that you can use as an alternative to the default model used by {% data variables.product.prodname_copilot_chat_short %}. {% data variables.copilot.copilot_gpt_41 %} is one of those models and excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. For information about the capabilities of {% data variables.copilot.copilot_gpt_41 %}, see the [OpenAI documentation](https://platform.openai.com/docs/models).
 {% data variables.copilot.copilot_gpt_41 %} is currently available in:


@@ -41,7 +41,7 @@ In immersive view, you can also preview how some file formats, such as Markdown,
 ## Powered by skills
-When using the GPT-4o and {% data variables.copilot.copilot_claude_sonnet %} models, {% data variables.product.prodname_copilot_short %} has access to a collection of skills to fetch data from {% data variables.product.github %}, which are dynamically selected based on the question you ask. You can tell which skill {% data variables.product.prodname_copilot_short %} used by clicking {% octicon "chevron-down" aria-label="the down arrow" %} to expand the status information in the chat window.
+When using the {% data variables.copilot.copilot_gpt_4o %} and {% data variables.copilot.copilot_claude_sonnet %} models, {% data variables.product.prodname_copilot_short %} has access to a collection of skills to fetch data from {% data variables.product.github %}, which are dynamically selected based on the question you ask. You can tell which skill {% data variables.product.prodname_copilot_short %} used by clicking {% octicon "chevron-down" aria-label="the down arrow" %} to expand the status information in the chat window.
 ![Screenshot of the {% data variables.product.prodname_copilot_short %} chat panel with the status information expanded and the skill that was used highlighted with an orange outline.](/assets/images/help/copilot/chat-show-skill.png)
@@ -351,7 +351,8 @@ You can attach an image to {% data variables.product.prodname_copilot_short %} a
 1. Go to the immersive view of {% data variables.product.prodname_copilot_chat_short %} ([https://github.com/copilot](https://github.com/copilot)).
 1. If you see the AI model picker at the top of the page, select one of the models that supports adding images to prompts:
-* {% data variables.copilot.copilot_gpt_4o %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_41 %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_4o %}
 * {% data variables.copilot.copilot_claude_sonnet_35 %}
 * {% data variables.copilot.copilot_claude_sonnet_37 %}
 * {% data variables.copilot.copilot_gemini_flash %}

@@ -143,7 +143,8 @@ When you use {% data variables.product.prodname_copilot_agent_short %} mode, {%
1. If you see the AI model picker at the bottom right of the chat view, select one of the models that supports adding images to prompts:
-* {% data variables.copilot.copilot_gpt_4o %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_41 %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_gemini_flash %}
@@ -271,7 +272,8 @@ See [Ask questions in the inline chat view](https://learn.microsoft.com/visualst
1. If you see the AI model picker at the bottom right of the chat view, select one of the models that supports adding images to prompts:
-* {% data variables.copilot.copilot_gpt_4o %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_41 %} (the default that's used if you don't see a model picker)
+* {% data variables.copilot.copilot_gpt_4o %}
* {% data variables.copilot.copilot_claude_sonnet_35 %}
* {% data variables.copilot.copilot_claude_sonnet_37 %}
* {% data variables.copilot.copilot_gemini_flash %}

@@ -107,7 +107,7 @@ template
└── template.php
```
-This example gives the prompts you can enter into {% data variables.product.prodname_copilot_chat_short %} to complete the migration, and the responses {% data variables.product.prodname_copilot_short %} returned for one instance of this migration. The default GPT 4o model was used to generate these responses. {% data variables.product.prodname_copilot_chat_short %} responses are non-deterministic, so you will probably get slightly different responses to the ones shown here.
+This example gives the prompts you can enter into {% data variables.product.prodname_copilot_chat_short %} to complete the migration, and the responses {% data variables.product.prodname_copilot_short %} returned for one instance of this migration. The {% data variables.copilot.copilot_gpt_4o %} model was used to generate these responses. {% data variables.product.prodname_copilot_chat_short %} responses are non-deterministic, so you will probably get slightly different responses to the ones shown here.
During a migration process you are likely to get errors that you need to fix before moving ahead. {% data variables.product.prodname_copilot_short %} can help you with this. The example includes some errors and shows how you can get {% data variables.product.prodname_copilot_short %} to help you fix them.

@@ -103,5 +103,5 @@ To see a list of all available commands, run `gh models`.
There are a few key ways you can use the extension:
* **To ask a model multiple questions using a chat experience**, run `gh models run`. Select your model from the listed models, then send your prompts.
-* **To ask a model a single question**, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the GPT 4o model why the sky is blue, you can run `gh models run gpt-4o "why is the sky blue?"`.
+* **To ask a model a single question**, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the {% data variables.copilot.copilot_gpt_41 %} model why the sky is blue, you can run `gh models run gpt-4.1 "why is the sky blue?"`.
-* **To provide the output of a command as context when you call a model**, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the GPT 4o model, you can run `cat README.md | gh models run gpt-4o "summarize this text"`.
+* **To provide the output of a command as context when you call a model**, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the {% data variables.copilot.copilot_gpt_41 %} model, you can run `cat README.md | gh models run gpt-4.1 "summarize this text"`.

@@ -48,7 +48,7 @@
| {% data variables.copilot.copilot_claude_sonnet_37 %} Thinking | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gemini_flash %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gemini_25_pro %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
-| GPT-4o | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
+| {% data variables.copilot.copilot_gpt_4o %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| {% data variables.copilot.copilot_gpt_41 %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |
| GPT-4.5 | {% octicon "x" aria-label="Not included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} |
| o1 | {% octicon "x" aria-label="Not included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} | {% octicon "check" aria-label="Included" %} |

@@ -1,2 +0,0 @@
-> [!NOTE]
-> {% data variables.copilot.copilot_gpt_41 %} in {% data variables.product.prodname_copilot_chat_short %} is currently in {% data variables.release-phases.public_preview %} and subject to change.

@@ -1 +1 @@
-{% data variables.copilot.copilot_gpt_41 %} is a revamped version of OpenAI's GPT-4o model. This model is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning. If you're working on tasks that require broad knowledge, fast iteration, or basic code understanding, {% data variables.copilot.copilot_gpt_41 %} makes large improvements over GPT-4o.
+{% data variables.copilot.copilot_gpt_41 %} is a revamped version of OpenAI's {% data variables.copilot.copilot_gpt_4o %} model. This model is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning. If you're working on tasks that require broad knowledge, fast iteration, or basic code understanding, {% data variables.copilot.copilot_gpt_41 %} makes large improvements over {% data variables.copilot.copilot_gpt_4o %}.

@@ -1 +1 @@
-GPT-4o is a strong default choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning. If you're working on tasks that require broad knowledge, fast iteration, or basic code understanding, GPT-4o is likely the best model to use.
+{% data variables.copilot.copilot_gpt_4o %} is a good choice for common development tasks that benefit from speed, responsiveness, and general-purpose reasoning. If you're working on tasks that require broad knowledge, fast iteration, or basic code understanding, {% data variables.copilot.copilot_gpt_4o %} is likely the model to use.