Add Copilot docs for the Gemini 2.0 Flash model (#54076)
Co-authored-by: Sophie <29382425+sophietheking@users.noreply.github.com>
Co-authored-by: Paul Loeb <90000203+thispaul@users.noreply.github.com>
Co-authored-by: Melanie Yarbrough <11952755+myarb@users.noreply.github.com>
assets/images/help/copilot/copilot-immersive-button.png (new binary file, 47 KiB; not shown)
@@ -22,7 +22,7 @@ redirect_from:
{% data reusables.rai.code-scanning.copilot-autofix-note %}
- {% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model GPT-4o from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
+ {% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model GPT 4o from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
{% data variables.product.prodname_copilot_autofix_short %} is allowed by default and enabled for every repository using {% data variables.product.prodname_codeql %}, but you can choose to opt out and disable {% data variables.product.prodname_copilot_autofix_short %}. To learn how to disable {% data variables.product.prodname_copilot_autofix_short %} at the enterprise, organization and repository levels, see [AUTOTITLE](/code-security/code-scanning/managing-code-scanning-alerts/disabling-autofix-for-code-scanning).
@@ -26,7 +26,7 @@ topics:
* {% data variables.product.prodname_copilot_edits_vscode_short %} to make changes across multiple files (**only in {% data variables.product.prodname_vscode %} and {% data variables.product.prodname_vs %}**)
* {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vscode %}, {% data variables.product.prodname_vs %}, JetBrains IDEs, and {% data variables.product.prodname_dotcom_the_website %}
* Block suggestions matching public code
- * Access to {% data variables.copilot.copilot_claude_sonnet %} models
+ * Access to the {% data variables.copilot.copilot_claude_sonnet %} and {% data variables.copilot.copilot_gemini_flash %} models
* Access to {% data variables.product.prodname_copilot_extensions_short %} in {% data variables.product.prodname_vscode %}, {% data variables.product.prodname_vs %}, JetBrains IDEs, {% data variables.product.prodname_dotcom_the_website %}, and {% data variables.product.prodname_mobile %}
## What are the limitations of {% data variables.product.prodname_copilot_free_short %}?
@@ -38,12 +38,15 @@ You can choose whether your prompts and {% data variables.product.prodname_copil
{% data reusables.user-settings.copilot-settings %}
1. To allow or prevent {% data variables.product.prodname_dotcom %} using your data, select or deselect **Allow {% data variables.product.prodname_dotcom %} to use my code snippets from the code editor for product improvements**.
- ## Enabling or disabling {% data variables.copilot.copilot_claude_sonnet %}
+ ## Enabling or disabling alternative AI models
- You can choose whether to allow use of Anthropic's {% data variables.copilot.copilot_claude_sonnet %} model as an alternative to {% data variables.product.prodname_copilot_short %}'s default model. For more information, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
+ You can choose whether to allow the following AI models to be used as an alternative to {% data variables.product.prodname_copilot_short %}'s default model.
* {% data variables.copilot.copilot_claude_sonnet %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)
* {% data variables.copilot.copilot_gemini_flash %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)
{% data reusables.user-settings.copilot-settings %}
- 1. To the right of **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, select the dropdown menu, then click **Enabled** or **Disabled**.
+ 1. To the right of the model name, select the dropdown menu, then click **Enabled** or **Disabled**.
## Enabling or disabling web search for {% data variables.product.prodname_copilot_chat %}
@@ -33,8 +33,7 @@ You can configure any of the following policies for your enterprise:
* [{% data variables.product.prodname_copilot_extensions %}](#github-copilot-extensions)
* [Suggestions matching public code](#suggestions-matching-public-code)
* [Give {% data variables.product.prodname_copilot_short %} access to Bing](#give-copilot-access-to-bing)
- * [{% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}](#copilot-access-to-claude-35-sonnet)
- * [{% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models](#copilot-access-to-the-o1-and-o3-families-of-models)
+ * [{% data variables.product.prodname_copilot_short %} access to alternative AI models](#copilot-access-to-alternative-ai-models)
### {% data variables.product.prodname_copilot_short %} in {% data variables.product.prodname_dotcom_the_website %}
@@ -75,25 +74,17 @@ You can chat with {% data variables.product.prodname_copilot %} in your IDE to g
{% data variables.product.prodname_copilot_chat %} can use Bing to provide enhanced responses by searching the internet for information related to a question. Bing search is particularly helpful when discussing new technologies or highly specific subjects.
- ### {% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}
+ ### {% data variables.product.prodname_copilot_short %} access to alternative AI models
- {% data reusables.copilot.claude-sonnet-preview-note %}
+ > [!NOTE] The following models are currently in {% data variables.release-phases.public_preview %} as AI models for {% data variables.product.prodname_copilot %}, and are subject to change. The [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms) apply to your use of these products.
- By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, members of your enterprise can choose to use this model rather than the default `GPT 4o` model. See [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
+ By default, {% data variables.product.prodname_copilot_chat_short %} uses the GPT 4o model. If you grant access to the alternative models, members of your enterprise can choose to use these models rather than the default GPT 4o model. The available alternative models are:
- ### {% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models
-
- {% data reusables.models.o1-models-preview-note %}
-
- By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 or o3 models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.
-
- The o1 family of models includes the following models:
-
- * `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, in particular in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.
-
- The o3 family of models includes one model:
-
- * `o3-mini`: This is the next generation of reasoning models, following from `o1` and `o1-mini`. The `o3-mini` model outperforms `o1` on coding benchmarks with response times that are comparable to `o1-mini`, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
+ * **{% data variables.copilot.copilot_claude_sonnet %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
+ * **{% data variables.copilot.copilot_gemini_flash %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
+ * **OpenAI's o1 and o3 models**
+   * **o1**: This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT 4o model. Each member of your enterprise can make 10 requests to this model per day.
+   * **o3-mini**: This is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
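The per-member caps quoted above (10 requests per day for o1, 50 requests per 12 hours for o3-mini) can be tracked on the client side with a simple sliding-window counter, so a tool can avoid submitting a request it knows will be rejected. The sketch below is purely illustrative — the `RequestBudget` class is a hypothetical helper, not part of any GitHub or Copilot API:

```python
import time
from collections import deque

class RequestBudget:
    """Track client-side usage against a per-window request cap."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # monotonic times of accepted requests

    def try_request(self, now=None):
        """Record a request if the budget allows it; return True on success."""
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False

# Budgets matching the documented caps:
# o1 allows 10 requests per day, o3-mini allows 50 requests every 12 hours.
o1_budget = RequestBudget(10, 24 * 60 * 60)
o3_mini_budget = RequestBudget(50, 12 * 60 * 60)
```

A shared budget instance per model would be consulted before each chat request; when `try_request` returns `False`, the caller can fall back to the default model instead of burning a rate-limited request.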
### {% data variables.product.prodname_copilot_short %} Metrics API access
@@ -32,8 +32,9 @@ Organization owners can set policies to govern how {% data variables.product.pro
* {% data variables.product.prodname_copilot_cli_short %} and {% data variables.product.prodname_windows_terminal %}
* Suggestions matching public code
- * Anthropic {% data variables.copilot.copilot_claude_sonnet %} in Copilot
- * OpenAI o1 and o3 models in Copilot
+ * Access to alternative models for {% data variables.product.prodname_copilot_short %}
+ * Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}
+ * Google {% data variables.copilot.copilot_gemini_flash %} in {% data variables.product.prodname_copilot_short %}
+ * OpenAI o1 and o3 models in {% data variables.product.prodname_copilot_short %}
The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.
@@ -64,7 +64,7 @@ For more information, see the [{% data variables.product.prodname_copilot_cli_sh
This error suggests that you have exceeded the rate limit for {% data variables.product.prodname_copilot_short %} requests. {% data variables.product.github %} uses rate limits to ensure everyone has fair access to the {% data variables.product.prodname_copilot_short %} service and to protect against abuse.
- Most people see rate limiting for preview models, like OpenAI’s o1 and o1-mini, which are rate-limited due to limited capacity.
+ Most people see rate limiting for preview models, like OpenAI’s o1 and o3-mini, which are rate-limited due to limited capacity.
Service-level request rate limits ensure high service quality for all {% data variables.product.prodname_copilot_short %} users and should not affect typical or even deeply engaged {% data variables.product.prodname_copilot_short %} usage. We are aware of some use cases that are affected by it. {% data variables.product.github %} is iterating on {% data variables.product.prodname_copilot_short %}’s rate-limiting heuristics to ensure it doesn’t block legitimate use cases.
@@ -0,0 +1,90 @@
---
title: Changing the AI model for Copilot Chat
shortTitle: 'Change the AI model'
intro: 'Learn how to change the default LLM for {% data variables.product.prodname_copilot_chat_short %} to a different model.'
versions:
  feature: copilot
topics:
  - Copilot
---
By default, {% data variables.product.prodname_copilot_chat_short %} uses OpenAI's GPT 4o large language model. This is a highly proficient model that performs well for text generation tasks, such as summarization and knowledge-based chat. The model is also capable of reasoning, solving complex math problems and coding.
However, you are not limited to using this model. You can choose from a selection of other models, each with its own particular strengths. You may have a favorite model that you like to use, or you might prefer to use a particular model for inquiring about a specific subject.
{% data variables.product.prodname_copilot_short %} allows you to change the model during a chat and have the alternative model used to generate responses to your prompts.
{% webui %}
> [!NOTE]
> * Multiple model support in {% data variables.product.prodname_copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
> * You can only use an alternative AI model in the immersive view of {% data variables.product.prodname_copilot_chat_short %}. This is the full-page version of {% data variables.product.prodname_copilot_chat_short %} that's displayed at [https://github.com/copilot](https://github.com/copilot). The {% data variables.product.prodname_copilot_chat_short %} panel always uses the default model.
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-list %}
### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}
* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT 4o, {% data variables.copilot.copilot_claude_sonnet %}, and {% data variables.copilot.copilot_gemini_flash %} models are supported.
* Experimental pre-release versions of the models may not interact with all filters correctly, including the duplication detection filter.
## Changing your AI model
These instructions are for {% data variables.product.prodname_copilot_short %} on the {% data variables.product.github %} website. For {% data variables.product.prodname_vs %} or {% data variables.product.prodname_vscode_shortname %}, click the appropriate tab at the top of this page.
{% data reusables.copilot.model-picker-enable-o1-models %}
> [!NOTE] If you use {% data variables.product.prodname_copilot_extensions_short %}, they may override the model you select.
1. In the top right of any page on {% data variables.product.github %}, click the down arrow beside the **{% octicon "copilot" aria-hidden="true" %}** icon and click **Immersive** in the dropdown menu.

1. At the top of the immersive view, select the **CURRENT-MODEL** {% octicon "chevron-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% endwebui %}
{% vscode %}
> [!NOTE] Multiple model support in {% data variables.product.prodname_copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-list %}
## Changing your AI model
These instructions are for {% data variables.product.prodname_vscode_shortname %}. For {% data variables.product.prodname_vs %} or for {% data variables.product.prodname_copilot_short %} on the {% data variables.product.github %} website, click the appropriate tab at the top of this page.
{% data reusables.copilot.model-picker-enable-o1-models %}
{% data reusables.copilot.chat-model-limitations-ide %}
{% data reusables.copilot.open-chat-vs-code %}
1. In the bottom right of the chat view, select the **CURRENT-MODEL** {% octicon "chevron-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% endvscode %}
{% visualstudio %}
> [!NOTE] Multiple model support in {% data variables.product.prodname_copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-list-visual-studio %}
## Changing your AI model
These instructions are for {% data variables.product.prodname_vs %}. For {% data variables.product.prodname_vscode_shortname %} or for {% data variables.product.prodname_copilot_short %} on the {% data variables.product.github %} website, click the appropriate tab at the top of this page.
To use multi-model {% data variables.product.prodname_copilot_chat_short %}, you must use {% data variables.product.prodname_vs %} 2022 version 17.12 or later. See the [{% data variables.product.prodname_vs %} downloads page](https://visualstudio.microsoft.com/downloads/).
{% data reusables.copilot.model-picker-enable-o1-models %}
{% data reusables.copilot.chat-model-limitations-ide %}
1. In the {% data variables.product.prodname_vs %} menu bar, click **View**, then click **{% data variables.product.prodname_copilot_chat %}**.
1. In the bottom right of the chat view, select the **CURRENT-MODEL** {% octicon "triangle-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% endvisualstudio %}
content/copilot/using-github-copilot/ai-models/index.md (new file, 13 lines)
@@ -0,0 +1,13 @@
---
title: AI models for Copilot Chat
shortTitle: AI models
intro: "Learn how to use alternative large language models for {% data variables.product.prodname_copilot_chat %}."
versions:
  feature: copilot
topics:
  - Copilot
children:
  - /changing-the-ai-model-for-copilot-chat
  - /using-claude-sonnet-in-github-copilot
  - /using-gemini-flash-in-github-copilot
---
@@ -7,9 +7,11 @@ versions:
  feature: copilot
topics:
  - Copilot
redirect_from:
  - /copilot/using-github-copilot/using-claude-sonnet-in-github-copilot
---
- {% data reusables.copilot.claude-sonnet-preview-note %}
+ > [!NOTE] {% data variables.copilot.copilot_claude_sonnet %} is in {% data variables.release-phases.public_preview %} and subject to change. The [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms) apply to your use of this product.
## About {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot %}
@@ -18,7 +20,7 @@ topics:
{% data variables.copilot.copilot_claude_sonnet %} is currently available in:
* {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vscode %}
- * {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vs %} 17.12 Preview 3 or later
+ * {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vs %} 2022 version 17.12 or later
* Immersive mode in {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.github %}
{% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services. When using {% data variables.copilot.copilot_claude_sonnet %}, prompts and metadata are sent to Amazon's Bedrock service, which makes the [following data commitments](https://docs.aws.amazon.com/bedrock/latest/userguide/data-protection.html): _Amazon Bedrock doesn't store or log your prompts and completions. Amazon Bedrock doesn't use your prompts and completions to train any AWS models and doesn't distribute them to third parties_.
@@ -39,7 +41,7 @@ If you have a {% data variables.product.prodname_copilot_free_short %} or {% dat
Clicking **Allow** enables you to use {% data variables.copilot.copilot_claude_sonnet %} and updates the policy in your personal settings on {% data variables.product.github %}.
- * You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-claude-35-sonnet).
+ * You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).
{% endif %}
@@ -49,11 +51,8 @@ As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can e
## Using {% data variables.copilot.copilot_claude_sonnet %}
- For details of how to change the model for {% data variables.product.prodname_copilot_chat_short %}, see:
-
- * [AUTOTITLE](/copilot/using-github-copilot/asking-github-copilot-questions-in-githubcom#changing-your-ai-model)
- * [AUTOTITLE](/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide#changing-your-ai-model)
+ For details of how to change the model that {% data variables.product.prodname_copilot_chat_short %} uses, see: [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
## Leaving feedback
- To leave feedback about Claude 3.5 Sonnet in {% data variables.product.prodname_copilot %}, or to ask a question, see the {% data variables.product.prodname_github_community %} discussion [Claude 3.5 Sonnet is now available to all {% data variables.product.prodname_copilot_short %} users in Public Preview](https://github.com/orgs/community/discussions/143337).
+ To leave feedback about {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot %}, or to ask a question, see the {% data variables.product.prodname_github_community %} discussion [Claude 3.5 Sonnet is now available to all {% data variables.product.prodname_copilot_short %} users in Public Preview](https://github.com/orgs/community/discussions/143337).
@@ -0,0 +1,52 @@
---
title: Using Gemini 2.0 Flash in GitHub Copilot
allowTitleToDifferFromFilename: true
shortTitle: 'Use {% data variables.copilot.copilot_gemini_flash %}'
intro: 'Learn how to enable {% data variables.copilot.copilot_gemini_flash %} for {% ifversion fpt %}yourself or{% endif %} your organization{% ifversion ghec %} or enterprise{% endif %}.'
versions:
  feature: copilot
topics:
  - Copilot
---
> [!NOTE] {% data variables.copilot.copilot_gemini_flash %} is in {% data variables.release-phases.public_preview %} and subject to change. The [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms) apply to your use of this product.
## About {% data variables.copilot.copilot_gemini_flash %} in {% data variables.product.prodname_copilot %}
{% data variables.copilot.copilot_gemini_flash %} is a large language model (LLM) that you can use as an alternative to the default model used by {% data variables.product.prodname_copilot_chat_short %}. {% data variables.copilot.copilot_gemini_flash %} is a responsive LLM that can empower you to build apps faster and more easily, so you can focus on great experiences for your users. {% data reusables.copilot.gemini-model-info %}
{% data variables.copilot.copilot_gemini_flash %} is currently available in:
* {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vscode %}
* {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vs %} 2022 version 17.12 or later
* Immersive mode in {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.github %}
{% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_gemini_flash %} hosted on Google Cloud Platform (GCP). When using {% data variables.copilot.copilot_gemini_flash %}, prompts and metadata are sent to GCP, which makes the [following data commitment](https://cloud.google.com/gemini/docs/discover/data-governance): _Gemini doesn't use your prompts, or its responses, as data to train its models._
When using {% data variables.copilot.copilot_gemini_flash %}, input prompts and output completions continue to run through {% data variables.product.prodname_copilot %}'s content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.
## Configuring access
You must enable access to {% data variables.copilot.copilot_gemini_flash %} before you can use the model.
{% ifversion fpt %}
### Setup for individual use
If you have a {% data variables.product.prodname_copilot_free_short %} or {% data variables.product.prodname_copilot_pro_short %} subscription, you can enable {% data variables.copilot.copilot_gemini_flash %} in two ways:
* The first time you choose to use {% data variables.copilot.copilot_gemini_flash %} with {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vscode %}, or in the immersive view of {% data variables.product.prodname_copilot_chat_short %}, you will be prompted to allow access to the model.
Clicking **Allow** enables you to use {% data variables.copilot.copilot_gemini_flash %} and updates the policy in your personal settings on {% data variables.product.github %}.
* You can enable the model directly in your personal settings on the {% data variables.product.github %} website. See [AUTOTITLE](/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-alternative-ai-models).
{% endif %}
### Setup for organization {% ifversion ghec %}and enterprise{% endif %} use
As an {% ifversion ghec %}enterprise or{% endif %} organization owner, you can enable or disable {% data variables.copilot.copilot_gemini_flash %} for everyone who has been assigned a {% ifversion ghec %}{% data variables.product.prodname_copilot_enterprise_short %} or {% endif %}{% data variables.product.prodname_copilot_business_short %} seat through your {% ifversion ghec %}enterprise or {% endif %}organization. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} and [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise#copilot-access-to-alternative-ai-models){% endif %}.
## Using {% data variables.copilot.copilot_gemini_flash %}
For details of how to change the model that {% data variables.product.prodname_copilot_chat_short %} uses, see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
@@ -35,7 +35,7 @@ On {% data variables.product.github %}, you can use {% data variables.product.pr
## Powered by skills
- When using the GPT-4o and {% data variables.copilot.copilot_claude_sonnet %} models, {% data variables.product.prodname_copilot_short %} has access to a collection of skills to fetch data from {% data variables.product.github %}, which are dynamically selected based on the question you ask. You can tell which skill {% data variables.product.prodname_copilot_short %} used by clicking {% octicon "chevron-down" aria-label="the down arrow" %} to expand the status information in the chat window.
+ When using the GPT 4o and {% data variables.copilot.copilot_claude_sonnet %} models, {% data variables.product.prodname_copilot_short %} has access to a collection of skills to fetch data from {% data variables.product.github %}, which are dynamically selected based on the question you ask. You can tell which skill {% data variables.product.prodname_copilot_short %} used by clicking {% octicon "chevron-down" aria-label="the down arrow" %} to expand the status information in the chat window.

@@ -65,27 +65,7 @@ The skills you can use in {% data variables.product.prodname_copilot_chat_dotcom
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-beta-note %}
{% data reusables.copilot.copilot-chat-models-list-o3 %}
### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}
* If you want to use the skills listed in the table above{% ifversion ghec %}, or knowledge bases{% endif %}, on the {% data variables.product.github %} website, only the GPT 4o and {% data variables.copilot.copilot_claude_sonnet %} models are supported.
* Experimental pre-release versions of the models may not interact with all filters correctly, including the duplication detection filter.
### Changing your AI model
> [!NOTE] If you use {% data variables.product.prodname_copilot_extensions_short %}, they may override the model you select.
{% data reusables.copilot.model-picker-enable-o1-models %}
1. In the top right of any page on {% data variables.product.github %}, click the **{% octicon "copilot" aria-hidden="true" %}** {% data variables.product.prodname_copilot %} icon next to the search bar.
1. If the panel contains a previous conversation you had with {% data variables.product.prodname_copilot_short %}, in the top right of the panel, click {% octicon "plus" aria-label="New conversation" %}.

1. In the top right of the panel, click **{% octicon "screen-full" aria-hidden="true" %} Take conversation to immersive**. Multi-model {% data variables.product.prodname_copilot_chat_short %} is currently only available in the immersive view.
1. In the top left of the immersive view, select the **CURRENT-MODEL** {% octicon "chevron-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% data reusables.copilot.change-the-ai-model %}
## Asking a general question about software development
@@ -151,17 +151,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-beta-note %}
{% data reusables.copilot.copilot-chat-models-list-o3 %}
### Changing your AI model
{% data reusables.copilot.chat-model-limitations-ide %}
{% data reusables.copilot.model-picker-enable-o1-models %}
{% data reusables.copilot.open-chat-vs-code %}
1. In the bottom right of the chat view, select the **CURRENT-MODEL** {% octicon "chevron-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% data reusables.copilot.change-the-ai-model %}
## Additional ways to access {% data variables.product.prodname_copilot_chat_short %}
@@ -306,19 +296,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que
## AI models for {% data variables.product.prodname_copilot_chat_short %}
{% data reusables.copilot.copilot-chat-models-beta-note %}
{% data reusables.copilot.copilot-chat-models-list-o1-preview %}
### Changing your AI model
To use multi-model {% data variables.product.prodname_copilot_chat_short %}, you must use {% data variables.product.prodname_vs %} 17.12 Preview 3 or later. See [{% data variables.product.prodname_vs %} 2022 Preview](https://visualstudio.microsoft.com/vs/preview/#download-preview) in the {% data variables.product.prodname_vs %} documentation.
{% data reusables.copilot.chat-model-limitations-ide %}
{% data reusables.copilot.model-picker-enable-o1-models %}
1. In the {% data variables.product.prodname_vs %} menu bar, click **View**, then click **{% data variables.product.prodname_copilot_chat %}**.
1. In the bottom right of the chat view, select the **CURRENT-MODEL** {% octicon "triangle-down" aria-hidden="true" %} dropdown menu, then click the AI model of your choice.
{% data reusables.copilot.change-the-ai-model %}
## Additional ways to access {% data variables.product.prodname_copilot_chat_short %}
@@ -19,7 +19,7 @@ children:
- /using-github-copilot-in-the-command-line
- /prompt-engineering-for-github-copilot
- /using-extensions-to-integrate-external-tools-with-copilot-chat
- /using-claude-sonnet-in-github-copilot
- /ai-models
- /finding-public-code-that-matches-github-copilot-suggestions
- /using-github-copilot-for-pull-requests
- /guides-on-using-github-copilot
@@ -12,7 +12,7 @@ With {% data variables.product.prodname_github_models %} extensions, you can cal
If you have a {% data variables.product.prodname_copilot_short %} subscription, you can work with AI models in {% data variables.product.prodname_copilot_chat_short %} in two different ways:
* Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}. With this extension, you can ask for model recommendations based on certain criteria and chat with specific models. See [Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}](#using-the-github-models-copilot-extension).
* Using multiple model support in {% data variables.product.prodname_copilot_chat_short %}. With multi-model {% data variables.product.prodname_copilot_chat_short %}, you can choose a specific model to use for a conversation, then prompt {% data variables.product.prodname_copilot_chat_short %} as usual. See [AUTOTITLE](/copilot/using-github-copilot/asking-github-copilot-questions-in-githubcom#ai-models-for-copilot-chat) and [AUTOTITLE](/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide#ai-models-for-copilot-chat).
* Using multiple model support in {% data variables.product.prodname_copilot_chat_short %}. With multi-model {% data variables.product.prodname_copilot_chat_short %}, you can choose a specific model to use for a conversation, then prompt {% data variables.product.prodname_copilot_chat_short %} as usual. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
### Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}
@@ -60,5 +60,5 @@ To see a list of all available commands, run `gh models`.
There are a few key ways you can use the extension:
* **To ask a model multiple questions using a chat experience**, run `gh models run`. Select your model from the listed models, then send your prompts.
* **To ask a model a single question**, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the `gpt-4o` model why the sky is blue, you can run `gh models run gpt-4o "why is the sky blue?"`.
* **To provide the output of a command as context when you call a model**, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the `gpt-4o` model, you can run `cat README.md | gh models run gpt-4o "summarize this text"`.
* **To ask a model a single question**, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the GPT 4o model why the sky is blue, you can run `gh models run gpt-4o "why is the sky blue?"`.
* **To provide the output of a command as context when you call a model**, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the GPT 4o model, you can run `cat README.md | gh models run gpt-4o "summarize this text"`.
@@ -18,7 +18,7 @@ To find an AI model:
The model is opened in the model playground. Details of the model are displayed in the sidebar on the right. If the sidebar is not displayed, expand it by clicking the **{% octicon "sidebar-expand" aria-label="Show parameters setting" %}** icon at the right of the playground.
{% data reusables.models.o1-models-preview-note %}
> [!NOTE] Access to OpenAI's models is in {% data variables.release-phases.public_preview %} and subject to change.
## Experimenting with AI models in the playground
data/reusables/copilot/change-the-ai-model.md
Normal file
@@ -0,0 +1 @@
You can change the large language model that {% data variables.product.prodname_copilot_short %} uses to generate responses to chat prompts. You may find that different models perform better, or provide more useful responses, depending on the type of questions you ask. For more information see [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
@@ -1 +0,0 @@
> [!NOTE] {% data variables.copilot.copilot_claude_sonnet %} is in {% data variables.release-phases.public_preview %} and subject to change. The [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms) apply to your use of this product.
@@ -1 +0,0 @@
> [!NOTE] Multiple model support in {% data variables.product.prodname_copilot_chat_short %} is in {% data variables.release-phases.public_preview %} and subject to change.
@@ -1,12 +1,9 @@
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). Gpt-4o is hosted on Azure.
* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT 4o is hosted on Azure.
* **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
* **o1-preview:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-preview is hosted on Azure.
* **o1-mini:** This is the faster version of the `o1-preview` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-mini is hosted on Azure.
> [!NOTE]
> Support for the `o1` model, replacing `o1-preview`, is coming soon to {% data variables.product.prodname_vs %}.
* **o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT 4o model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
* **o1-mini:** This is the faster version of the o1 model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1-mini is hosted on Azure.
For more information about the o1 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
@@ -1,10 +1,13 @@
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:
* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). Gpt-4o is hosted on Azure.
* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT 4o is hosted on Azure.
* **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
* **o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
* **{% data variables.copilot.copilot_gemini_flash %}:** This model has strong coding, math, and reasoning capabilities that makes it well suited to assist with software development. {% data reusables.copilot.gemini-model-info %}
* **o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT 4o model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
* **o3-mini:** This model is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.
For more information about the o1 and o3 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
For more information about these models, see:
For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
* **OpenAI's GPT 4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
* **Google's {% data variables.copilot.copilot_gemini_flash %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
data/reusables/copilot/gemini-model-info.md
Normal file
@@ -0,0 +1 @@
For information about the capabilities of {% data variables.copilot.copilot_gemini_flash %}, see the [Google for developers blog](https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/) and the [Google Cloud documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-2.0-flash). For details of Google's data handling policy, see [Generative AI and data governance](https://cloud.google.com/vertex-ai/generative-ai/docs/data-governance#prediction) on the Google website.
@@ -0,0 +1,64 @@
---
title: Integrating AI models into your development workflow
intro: 'Call AI models in the tools you use every day.'
versions:
feature: github-models
shortTitle: Integrate AI models
---
With {% data variables.product.prodname_github_models %} extensions, you can call specific AI models from both {% data variables.product.prodname_copilot_chat_short %} and {% data variables.product.prodname_cli %}. These extensions integrate directly into your development workflow, allowing you to prompt models without context switching.
## Using AI models in {% data variables.product.prodname_copilot_chat_short %}
If you have a {% data variables.product.prodname_copilot_short %} subscription, you can work with AI models in {% data variables.product.prodname_copilot_chat_short %} in two different ways:
* Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}. With this extension, you can ask for model recommendations based on certain criteria and chat with specific models. See [Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}](#using-the-github-models-copilot-extension).
* Using multiple model support in {% data variables.product.prodname_copilot_chat_short %}. With multi-model {% data variables.product.prodname_copilot_chat_short %}, you can choose a specific model to use for a conversation, then prompt {% data variables.product.prodname_copilot_chat_short %} as usual. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat).
### Using the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}
> [!NOTE] The {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %} is in {% data variables.release-phases.public_preview %} and is subject to change.
1. Install the [{% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}](https://github.com/marketplace/models-github).
* If you have a {% data variables.product.prodname_copilot_pro_short %} subscription, you can install the extension on your personal account.
* If you have access to {% data variables.product.prodname_copilot_short %} through a {% data variables.product.prodname_copilot_business_short %} or {% data variables.product.prodname_copilot_enterprise_short %} subscription:
* An organization owner or enterprise owner needs to enable the {% data variables.product.prodname_copilot_extensions_short %} policy for your organization or enterprise.
* An organization owner needs to install the extension for your organization.
1. Open any implementation of {% data variables.product.prodname_copilot_chat_short %} that supports {% data variables.product.prodname_copilot_extensions %}. For a list of supported {% data variables.product.prodname_copilot_chat_short %} implementations, see [AUTOTITLE](/copilot/using-github-copilot/using-extensions-to-integrate-external-tools-with-copilot-chat#supported-clients-and-ides).
1. In the chat window, type `@models YOUR-PROMPT`, then send your prompt. There are several use cases for the {% data variables.product.prodname_github_models %} {% data variables.product.prodname_copilot_extension_short %}, including:
* Recommending a particular model based on context and criteria you provide. For example, you can ask for a low-cost OpenAI model that supports function calling.
* Executing prompts using a particular model. This is especially useful when you want to use a model that is not currently available in multi-model {% data variables.product.prodname_copilot_chat_short %}.
* Listing models currently available through {% data variables.product.prodname_github_models %}.
## Using AI models from the command line
> [!NOTE] The {% data variables.product.prodname_github_models %} extension for {% data variables.product.prodname_cli %} is in {% data variables.release-phases.public_preview %} and is subject to change.
You can use the {% data variables.product.prodname_github_models %} extension for {% data variables.product.prodname_cli %} to prompt AI models from the command line, and even pipe in the output of a command as context.
### Prerequisites
To use the {% data variables.product.prodname_github_models %} CLI extension, you need to have {% data variables.product.prodname_cli %} installed. {% data reusables.cli.cli-installation %}
### Installing the extension
1. If you have not already authenticated to the {% data variables.product.prodname_cli %}, run the following command in your terminal.
```shell copy
gh auth login
```
1. To install the {% data variables.product.prodname_github_models %} extension, run the following command.
```shell copy
gh extension install https://github.com/github/gh-models
```
### Using the extension
To see a list of all available commands, run `gh models`.
There are a few key ways you can use the extension:
* **To ask a model multiple questions using a chat experience**, run `gh models run`. Select your model from the listed models, then send your prompts.
* **To ask a model a single question**, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the GPT 4o model why the sky is blue, you can run `gh models run gpt-4o "why is the sky blue?"`.
* **To provide the output of a command as context when you call a model**, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the GPT 4o model, you can run `cat README.md | gh models run gpt-4o "summarize this text"`.
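The pipe pattern accepts context from any command that writes to standard output, not just `cat`. The sketch below is illustrative and not part of the original docs: it builds a small example file (`buggy.py` is a hypothetical name) and shows where the model call would go. Only the last step needs the extension; it is shown as a comment because it requires `gh auth login` and the gh-models extension to be installed.

```shell
# Create a small file to act as context (hypothetical example file).
printf 'def add(a, b):\n    return a - b\n' > buggy.py

# Inspect what will be piped to the model: the file has 2 lines.
cat buggy.py | wc -l

# With the gh-models extension installed and authenticated, you could run:
#   cat buggy.py | gh models run gpt-4o "find the bug in this function"
```

Because the model only sees what arrives on stdin, you can narrow or enrich the context by adjusting the left-hand command, for example piping `git diff` instead of a whole file.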
@@ -1 +1 @@
1. If you access {% data variables.product.prodname_copilot_chat_short %} through a {% data variables.product.prodname_copilot_business_short %}{% ifversion ghec %} or {% data variables.product.prodname_copilot_enterprise_short %}{% endif %} subscription, your organization{% ifversion ghec %} or enterprise{% endif %} must grant members the ability to switch to a different model. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} or [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise){% endif %}.
If you access {% data variables.product.prodname_copilot_chat_short %} through a {% data variables.product.prodname_copilot_business_short %}{% ifversion ghec %} or {% data variables.product.prodname_copilot_enterprise_short %}{% endif %} subscription, your organization{% ifversion ghec %} or enterprise{% endif %} must grant members the ability to switch to a different model. See [AUTOTITLE](/copilot/managing-copilot/managing-github-copilot-in-your-organization/setting-policies-for-copilot-in-your-organization/managing-policies-for-copilot-in-your-organization){% ifversion ghec %} or [AUTOTITLE](/copilot/managing-copilot/managing-copilot-for-your-enterprise/managing-policies-and-features-for-copilot-in-your-enterprise#copilot-access-to-alternative-ai-models){% endif %}.
@@ -1 +0,0 @@
> [!NOTE] Access to OpenAI's `o1` and `o3` models is in {% data variables.release-phases.public_preview %} and subject to change.
@@ -29,3 +29,4 @@ copilot_code-review_short: 'Copilot code review'
## LLM models for Copilot
copilot_claude_sonnet: 'Claude 3.5 Sonnet'
copilot_gemini_flash: 'Gemini 2.0 Flash'