
Fix trailing spaces in content and data (#37904)

Grace Park
2023-06-23 10:45:10 -07:00
committed by GitHub
parent 07dcb8c294
commit cbda3f2434
369 changed files with 642 additions and 641 deletions

View File

@@ -33,7 +33,7 @@ The person you invite to be your successor must have a {% data variables.product
{% data reusables.user-settings.access_settings %}
{% data reusables.user-settings.account_settings %}
3. Under "Successor settings", to invite a successor, begin typing a username, full name, or email address, then click their name when it appears.
![Screenshot of the "Successor settings" section. The string "octocat" is entered in a search field, and Octocat's profile is listed in a dropdown below.](/assets/images/help/settings/settings-invite-successor-search-field.png)
4. Click **Add successor**.

View File

@@ -19,7 +19,7 @@ shortTitle: Add an email address
{% note %}
**Notes**:
- {% data reusables.user-settings.no-verification-disposable-emails %}
- If you're a member of an {% data variables.enterprise.prodname_emu_enterprise %}, you cannot make changes to your email address on {% data variables.product.prodname_dotcom_the_website %}. {% data reusables.enterprise-accounts.emu-more-info-account %}

View File

@@ -52,7 +52,7 @@ The repository owner has full control of the repository. In addition to the acti
| Archive the repository | "[AUTOTITLE](/repositories/archiving-a-github-repository/archiving-repositories)" |{% ifversion fpt or ghec %}
| Create security advisories | "[AUTOTITLE](/code-security/security-advisories/repository-security-advisories/about-repository-security-advisories)" |
| Display a sponsor button | "[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/displaying-a-sponsor-button-in-your-repository)" |{% endif %}
| Allow or disallow auto-merge for pull requests | "[AUTOTITLE](/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-auto-merge-for-pull-requests-in-your-repository)" |
| Manage webhooks and deploy keys | "[AUTOTITLE](/authentication/connecting-to-github-with-ssh/managing-deploy-keys#deploy-keys)" |
## Collaborator access for a repository owned by a personal account

View File

@@ -12,7 +12,7 @@ shortTitle: Manage multiple accounts
## About management of multiple accounts
In some cases, you may need to use multiple accounts on {% data variables.location.product_location %}. For example, you may have a personal account for open source contributions, and your employer may also create and manage a user account for you within an enterprise.
You cannot use your {% data variables.enterprise.prodname_managed_user %} to contribute to public projects on {% data variables.location.product_location %}, so you must contribute to those resources using your personal account. For more information, see "[About {% data variables.product.prodname_emus %}]({% ifversion fpt %}/enterprise-cloud@latest{% endif %}/admin/identity-and-access-management/using-enterprise-managed-users-for-iam/about-enterprise-managed-users#abilities-and-restrictions-of-managed-user-accounts){% ifversion fpt %}" in the {% data variables.product.prodname_ghe_cloud %} documentation.{% elsif ghec %}."{% endif %}
@@ -28,7 +28,7 @@ If you aren't required to use a {% data variables.enterprise.prodname_managed_us
## Contributing to two accounts using HTTPS and SSH
If you contribute with two accounts from one workstation, you can access repositories by using a different protocol and credentials for each account.
Git can use either the HTTPS or SSH protocol to access and update data in repositories on {% data variables.location.product_location %}. The protocol you use to clone a repository determines which credentials your workstation will use to authenticate when you access the repository. With this approach to account management, you store the credentials for one account to use for HTTPS connections and upload an SSH key to the other account to use for SSH connections.

View File

@@ -108,5 +108,5 @@ jobs:
## Specifying a .NET version
To use a preinstalled version of the .NET Core SDK on a {% data variables.product.prodname_dotcom %}-hosted runner, use the `setup-dotnet` action. This action finds a specific version of .NET from the tools cache on each runner, and adds the necessary binaries to `PATH`. These changes will persist for the remainder of the job.
The `setup-dotnet` action is the recommended way of using .NET with {% data variables.product.prodname_actions %}, because it ensures consistent behavior across different runners and different versions of .NET. If you are using a self-hosted runner, you must install .NET and add it to `PATH`. For more information, see the [`setup-dotnet`](https://github.com/marketplace/actions/setup-net-core-sdk) action.
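For example, a job might select an SDK version with a step like the following minimal sketch (the version number is illustrative):

```yaml
steps:
  - uses: actions/checkout@v3
  - name: Set up .NET
    uses: actions/setup-dotnet@v3
    with:
      dotnet-version: '6.0.x'   # illustrative version; use the SDK version your project targets
  - run: dotnet build
```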

View File

@@ -211,7 +211,7 @@ Checking in your `node_modules` directory can cause problems. As an alternative,
## Testing out your action in a workflow
Now you're ready to test your action out in a workflow.
Public actions can be used by workflows in any repository. When an action is in a private{% ifversion ghec or ghes or ghae%} or internal{% endif %} repository, the repository settings dictate whether the action is available only within the same repository or also to other repositories owned by the same {% ifversion ghec or ghes or ghae %}organization or enterprise{% else %}user or organization{% endif %}. For more information, see "[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository)."
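As a rough sketch (the input name here is hypothetical), a workflow in the action's own repository can exercise the action by referencing it with a relative path:

```yaml
on: [push]

jobs:
  test-action:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run the action from this repository
        uses: ./                    # the action defined at the root of the checked-out repository
        with:
          who-to-greet: 'Mona'      # hypothetical input defined in action.yml
```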

View File

@@ -23,7 +23,7 @@ Your action should:
- Work across {% data variables.product.product_name %}-hosted and self-hosted runners
- Leverage community tooling when possible
This article will demonstrate how to write an action that retrieves a specific version of your CLI, installs it, adds it to the path, and (optionally) caches it. This type of action (an action that sets up a tool) is often named `setup-$TOOL`.
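One way to structure such an action is as a composite action; the sketch below assumes a hypothetical tool name, download URL, and `version` input, and is not the implementation this article builds:

```yaml
# action.yml — hypothetical setup action for a CLI called "my-cli"
name: 'Setup my-cli'
description: 'Download a specific version of my-cli and add it to PATH'
inputs:
  version:
    description: 'The version of my-cli to install'
    required: true
runs:
  using: 'composite'
  steps:
    - name: Download my-cli and add it to PATH
      shell: bash
      run: |
        curl -fsSL "https://example.com/my-cli/${{ inputs.version }}/my-cli" -o "$RUNNER_TEMP/my-cli"
        chmod +x "$RUNNER_TEMP/my-cli"
        echo "$RUNNER_TEMP" >> "$GITHUB_PATH"
```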
## Prerequisites

View File

@@ -42,7 +42,7 @@ To draft a new release and publish the action to {% data variables.product.prodn
1. Under "Release Action", select **Publish this Action to the {% data variables.product.prodname_marketplace %}**.
{% note %}
**Note**: The "Publish" checkbox is disabled if the account that owns the repository has not yet accepted the {% data variables.product.prodname_marketplace %} Developer Agreement. If you own the repository or are an organization owner, click the link to "accept the GitHub Marketplace Developer Agreement", then accept the agreement. If there is no link, send the organization owner a link to this "Release Action" page and ask them to accept the agreement.
{% endnote %}

View File

@@ -12,13 +12,13 @@ shortTitle: Share from your private repository
## About {% data variables.product.prodname_actions %} access to private repositories
You can share actions and reusable workflows from your private repository, without making them public, by allowing {% data variables.product.prodname_actions %} workflows to access a private repository that contains the action or reusable workflow.
Any actions or reusable workflows stored in the private repository can be used in workflows defined in other private repositories owned by the same organization or user. Actions and reusable workflows stored in private repositories cannot be used in public repositories.
{% warning %}
**Warning**:
- If you make a private repository accessible to {% data variables.product.prodname_actions %} workflows in other repositories, outside collaborators on the other repositories can indirectly access the private repository, even though they do not have direct access to these repositories. The outside collaborators can view logs for workflow runs when actions or workflows from the private repository are used.
- {% data reusables.actions.scoped-token-note %}

View File

@@ -12,13 +12,13 @@ shortTitle: Share with your enterprise
## About {% data variables.product.prodname_actions %} access to internal {% ifversion private-actions %}and private {% endif %}repositories
If your organization is owned by an enterprise account, you can share actions and reusable workflows within your enterprise, without publishing them publicly, by allowing {% data variables.product.prodname_actions %} workflows to access an internal {% ifversion private-actions %}or private {% endif %}repository that contains the action or reusable workflow.
Any actions or reusable workflows stored in the internal {% ifversion private-actions %}or private {% endif %}repository can be used in workflows defined in other internal or private repositories owned by the same organization, or by any organization owned by the enterprise. Actions and reusable workflows stored in internal repositories cannot be used in public repositories {% ifversion private-actions %}and actions and reusable workflows stored in private repositories cannot be used in public or internal repositories{% endif %}.
{% warning %}
**Warning**:
- {% data reusables.actions.outside-collaborators-actions %}
- {% data reusables.actions.scoped-token-note %}

View File

@@ -12,13 +12,13 @@ shortTitle: Share with your organization
## About {% data variables.product.prodname_actions %} access to private {% ifversion internal-actions %} or internal {% endif %}repositories
You can share actions and reusable workflows within your organization, without publishing them publicly, by allowing {% data variables.product.prodname_actions %} workflows to access a private {% ifversion internal-actions %} or internal {% endif %}repository that contains the action or reusable workflow.
Any actions or reusable workflows stored in the private repository can be used in workflows defined in other private {% ifversion internal-actions %} or internal {% endif %}repositories owned by the same organization. Actions and reusable workflows stored in internal repositories cannot be used in public repositories {% ifversion private-actions %}and actions and reusable workflows stored in private repositories cannot be used in public or internal repositories{% endif %}.
{% warning %}
**Warning**:
- {% data reusables.actions.outside-collaborators-actions %}
- {% data reusables.actions.scoped-token-note %}

View File

@@ -134,6 +134,5 @@ The following resources may also be useful:
- For the original starter workflow, see [`azure-webapps-node.yml`](https://github.com/actions/starter-workflows/blob/main/deployments/azure-webapps-node.yml) in the {% data variables.product.prodname_actions %} `starter-workflows` repository.
- The action used to deploy the web app is the official Azure [`Azure/webapps-deploy`](https://github.com/Azure/webapps-deploy) action.
- For more examples of GitHub Action workflows that deploy to Azure, see the [actions-workflow-samples](https://github.com/Azure/actions-workflow-samples) repository.
- The "[Create a Node.js web app in Azure](https://docs.microsoft.com/azure/app-service/quickstart-nodejs)" quickstart in the Azure web app documentation demonstrates using {% data variables.product.prodname_vscode %} with the [Azure App Service extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice).

View File

@@ -32,7 +32,7 @@ This guide explains how to use {% data variables.product.prodname_actions %} to
Before creating your {% data variables.product.prodname_actions %} workflow, you will first need to complete the following setup steps:
1. Create an Azure Static Web App using the 'Other' option for deployment source. For more information, see "[Quickstart: Building your first static site in the Azure portal](https://docs.microsoft.com/azure/static-web-apps/get-started-portal)" in the Azure documentation.
2. Create a secret called `AZURE_STATIC_WEB_APPS_API_TOKEN` with the value of your static web app deployment token. For more information about how to find your deployment token, see "[Reset deployment tokens in Azure Static Web Apps](https://docs.microsoft.com/azure/static-web-apps/deployment-token-management)" in the Azure documentation.
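After those setup steps, the workflow's deployment step typically passes the secret to the deploy action. The following is only a sketch; the inputs and paths are assumptions based on the `Azure/static-web-apps-deploy` action:

```yaml
- name: Deploy to Azure Static Web Apps
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: upload
    app_location: '/'          # hypothetical location of your app source
    output_location: 'dist'    # hypothetical build output folder
```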

View File

@@ -15,7 +15,7 @@ topics:
## Overview
OpenID Connect (OIDC) allows your {% data variables.product.prodname_actions %} workflows to access resources in Amazon Web Services (AWS), without needing to store the AWS credentials as long-lived {% data variables.product.prodname_dotcom %} secrets.
This guide explains how to configure AWS to trust {% data variables.product.prodname_dotcom %}'s OIDC as a federated identity, and includes a workflow example for the [`aws-actions/configure-aws-credentials`](https://github.com/aws-actions/configure-aws-credentials) that uses tokens to authenticate to AWS and access resources.
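At a high level, the job requests an OIDC token and exchanges it for short-lived AWS credentials; a minimal sketch is shown below (the role ARN and region are placeholders):

```yaml
permissions:
  id-token: write   # required so the job can request the OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::123456789012:role/my-github-actions-role   # placeholder role ARN
          aws-region: us-east-1                                                   # placeholder region
```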

View File

@@ -26,11 +26,11 @@ You should be familiar with the concepts described in "[AUTOTITLE](/actions/usin
When combined with OpenID Connect (OIDC), reusable workflows let you enforce consistent deployments across your repository, organization, or enterprise. You can do this by defining trust conditions on cloud roles based on reusable workflows. The available options will vary depending on your cloud provider:
- **Using `job_workflow_ref`**:
- To create trust conditions based on reusable workflows, your cloud provider must support custom claims for `job_workflow_ref`. This allows your cloud provider to identify which repository the job originally came from.
- For clouds that only support the standard claims (audience (`aud`) and subject (`sub`)), you can use the API to customize the `sub` claim to include `job_workflow_ref`. For more information, see "[AUTOTITLE](/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-token-claims)". Support for custom claims is currently available for Google Cloud Platform and HashiCorp Vault.
- **Customizing the token claims**:
- You can configure more granular trust conditions by customizing the issuer (`iss`) and subject (`sub`) claims included with the JWT. For more information, see "[AUTOTITLE](/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#customizing-the-token-claims)".
## How the token works with reusable workflows
@@ -95,9 +95,9 @@ You can configure a custom claim that filters for any reusable workflow in a spe
You can configure a custom claim that filters for a specific reusable workflow. In this example, the workflow run must have originated from a job defined in the reusable workflow `octo-org/octo-automation/.github/workflows/deployment.yml`, and in any repository that is owned by the `octo-org` organization.
- **Subject**:
- Syntax: `repo:ORG_NAME/*`
- Example: `repo:octo-org/*`
- **Custom claim**:
- Syntax: `job_workflow_ref:ORG_NAME/REPO_NAME/.github/workflows/WORKFLOW_FILE@ref`
- Example: `job_workflow_ref:octo-org/octo-automation/.github/workflows/deployment.yml@10040c56a8c0253d69db7c1f26a0d227275512e2`

View File

@@ -289,7 +289,7 @@ For more information about security hardening for self-hosted runners, see "[AUT
### Restricting the use of self-hosted runners
{% data reusables.actions.disable-selfhosted-runners-crossrefs %}
{% endif %}

View File

@@ -56,7 +56,7 @@ You can add self-hosted runners to a single repository. To add a self-hosted run
{% note %}
**Note**: {% data reusables.actions.disable-selfhosted-runners-crossrefs %}
{% endnote %}

View File

@@ -24,7 +24,7 @@ shortTitle: Monitor & troubleshoot
You may not be able to create a self-hosted runner for an organization-owned repository.
{% data reusables.actions.disable-selfhosted-runners-crossrefs %}
{% endif %}

View File

@@ -119,7 +119,7 @@ env:
{% endraw %}
In this example, we're using a ternary operator to set the value of the `MY_ENV_VAR` environment variable based on whether the {% data variables.product.prodname_dotcom %} reference is set to `refs/heads/main` or not. If it is, the variable is set to `value_for_main_branch`. Otherwise, it is set to `value_for_other_branches`.
It is important to note that the first value after the `&&` condition must be `truthy`; otherwise, the value after the `||` will always be returned.
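For reference, the kind of `env` block this paragraph describes looks roughly like the following (the variable name and values mirror the description above):

```yaml
env:
  MY_ENV_VAR: ${{ github.ref == 'refs/heads/main' && 'value_for_main_branch' || 'value_for_other_branches' }}
```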
## Functions
@@ -317,7 +317,7 @@ steps:
### always
Causes the step to always execute, and returns `true`, even when canceled. The `always` expression is best used at the step level or on tasks that you expect to run even when a job is canceled. For example, you can use `always` to send logs even when a job is canceled.
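For example, a log-upload step could be written roughly like this sketch (the artifact name and path are illustrative):

```yaml
steps:
  - name: Upload logs
    if: ${{ always() }}              # run this step even if the job is canceled or an earlier step fails
    uses: actions/upload-artifact@v3
    with:
      name: debug-logs               # illustrative artifact name
      path: logs/
```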
{% note %}

View File

@@ -33,7 +33,7 @@ In the tutorial, you will first make a workflow file that uses the [`imjohnbo/is
{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}
{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}
name: Weekly Team Sync
on:
schedule:

View File

@@ -23,9 +23,9 @@ By default, {% data variables.product.product_name %} stores build logs and arti
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
1. In the "Artifacts" section, click the artifact you want to download.
![Screenshot of the "Artifacts" section of a workflow run. The name of an artifact generated by the run, "artifact," is highlighted with a dark orange outline.](/assets/images/help/repository/artifact-drop-down-updated.png)
{% endwebui %}
{% cli %}

View File

@@ -28,9 +28,9 @@ shortTitle: Remove workflow artifacts
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
1. Under **Artifacts**, click {% octicon "trash" aria-label="Remove artifact ARTIFACT-NAME" %} next to the artifact you want to remove.
![Screenshot showing artifacts created during a workflow run. A trash can icon, used to remove an artifact, is outlined in dark orange.](/assets/images/help/repository/actions-delete-artifact-updated.png)
## Setting the retention period for an artifact
Retention periods for artifacts and logs can be configured at the repository, organization, and enterprise level. For more information, see {% ifversion fpt or ghec or ghes %}"[AUTOTITLE](/actions/learn-github-actions/usage-limits-billing-and-administration#artifact-and-log-retention-policy)."{% elsif ghae %}"[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-the-retention-period-for-github-actions-artifacts-and-logs-in-your-repository)," "[AUTOTITLE](/organizations/managing-organization-settings/configuring-the-retention-period-for-github-actions-artifacts-and-logs-in-your-organization)," or "[AUTOTITLE](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise#enforcing-a-policy-for-artifact-and-log-retention-in-your-enterprise)."{% endif %}
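An individual artifact's retention can also be set when it is uploaded; a sketch using the `retention-days` input of `actions/upload-artifact` follows (the name and path are illustrative):

```yaml
- uses: actions/upload-artifact@v3
  with:
    name: my-artifact        # illustrative artifact name
    path: output/            # illustrative path
    retention-days: 5        # overrides the default retention period for this artifact only
```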

View File

@@ -166,7 +166,7 @@ For more information about setting up self-serve migrations with IssueOps, see t
## Using the {% data variables.product.prodname_actions_importer %} labs repository
The {% data variables.product.prodname_actions_importer %} labs repository contains platform-specific learning paths that teach you how to use {% data variables.product.prodname_actions_importer %} and how to approach migrations to {% data variables.product.prodname_actions %}. You can use this repository to learn how to use {% data variables.product.prodname_actions_importer %} to help plan, forecast, and automate your migration to {% data variables.product.prodname_actions %}.
To learn more, see the [GitHub Actions Importer labs repository](https://github.com/actions/importer-labs/tree/main#readme).

View File

@@ -26,7 +26,7 @@ shortTitle: 'Extending GitHub Actions Importer'
## Using custom transformers with {% data variables.product.prodname_actions_importer %}
A custom transformer contains mapping logic that {% data variables.product.prodname_actions_importer %} can use to transform your plugins, tasks, runner labels, or environment variables to work with {% data variables.product.prodname_actions %}. Custom transformers are written with a domain-specific language (DSL) built on top of Ruby, and are defined within a file with the `.rb` file extension.
You can use the `--custom-transformers` CLI option to specify which custom transformer files to use with the `audit`, `dry-run`, and `migrate` commands.

View File

@@ -69,7 +69,7 @@ The `configure` CLI command is used to set required credentials and options for
After creating the token, copy it and save it in a safe location for later use.
1. Create a Bamboo {% data variables.product.pat_generic %}. For more information, see [{% data variables.product.pat_generic_title_case_plural %}](https://confluence.atlassian.com/bamboo/personal-access-tokens-976779873.html) in the Bamboo documentation.
Your token must have the following permissions, depending on which resources will be transformed.
Resource Type | View | View Configuration | Edit
@@ -173,7 +173,7 @@ Listed below are some key terms that can appear in the forecast report:
- **Execution time** describes the amount of time a runner spent on a job. This metric can be used to help plan for the cost of {% data variables.product.prodname_dotcom %}-hosted runners.
- This metric is correlated to how much you should expect to spend in {% data variables.product.prodname_actions %}. This will vary depending on the hardware used for these minutes. You can use the [{% data variables.product.prodname_actions %} pricing calculator](https://github.com/pricing/calculator) to estimate the costs.
- **Queue time** metrics describe the amount of time a job spent waiting for a runner to be available to execute it.
- **Concurrent jobs** metrics describe the number of jobs running at any given time. This metric can be used to
## Perform a dry-run migration of a Bamboo pipeline
@@ -181,7 +181,7 @@ You can use the `dry-run` command to convert a Bamboo pipeline to an equivalent
### Running a dry-run migration for a build plan
To perform a dry run of migrating your Bamboo build plan to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `:my_plan_slug` with the plan's project and plan key in the format `<projectKey>-<planKey>` (for example: `PAN-SCRIP`).
```shell
gh actions-importer dry-run bamboo build --plan-slug :my_plan_slug --output-dir tmp/dry-run
```
@@ -308,7 +308,7 @@ gh actions-importer dry-run bamboo build --plan-slug IN-COM -o tmp/bamboo --conf
The following table shows the type of properties that {% data variables.product.prodname_actions_importer %} is currently able to convert.
| Bamboo | GitHub Actions | Status |
| :---------------------------------- | :-----------------------------------------------| ---------------------: |
| `environments` | `jobs` | Supported |
| `environments.<environment_id>` | `jobs.<job_id>` | Supported |
| `<job_id>.artifacts` | `jobs.<job_id>.steps.actions/upload-artifact` | Supported |

View File

@@ -77,7 +77,7 @@ The `configure` CLI command is used to set required credentials and options for
Enter the following values (leave empty to omit):
✔ {% data variables.product.pat_generic_caps %} for GitHub: ***************
✔ Base url of the GitHub instance: https://github.com
✔ {% data variables.product.pat_generic_caps %} for Jenkins: ***************
✔ Username of Jenkins user: admin
✔ Base url of the Jenkins instance: https://localhost
Environment variables successfully updated.

View File

@@ -136,6 +136,7 @@ The `metadata-action` option required for {% data variables.product.prodname_reg
{% endif %}
The `build-push-action` options required for {% data variables.product.prodname_registry %} are:{% ifversion fpt or ghec %}
- `context`: Defines the build's context as the set of files located in the specified path.{% endif %}
- `push`: If set to `true`, the image will be pushed to the registry if it is built successfully.{% ifversion fpt or ghec %}
- `tags` and `labels`: These are populated by output from `metadata-action`.{% else %}

View File

@@ -19,7 +19,7 @@ shortTitle: Quickstart
## Introduction
You only need a {% data variables.product.prodname_dotcom %} repository to create and run a {% data variables.product.prodname_actions %} workflow. In this guide, you'll add a workflow that demonstrates some of the essential features of {% data variables.product.prodname_actions %}.
The following example shows you how {% data variables.product.prodname_actions %} jobs can be automatically triggered, where they run, and how they can interact with the code in your repository.
@@ -71,7 +71,7 @@ Committing the workflow file to a branch in your repository triggers the `push`
1. The log shows you how each of the steps was processed. Expand any of the steps to view its details.
![Screenshot of steps run by the workflow.](/assets/images/help/repository/actions-quickstart-logs.png)
For example, you can see the list of files in your repository:
![Screenshot of the "List files in the repository" step expanded to show the log output. The output for the step is highlighted with a dark orange highlight.](/assets/images/help/repository/actions-quickstart-log-detail.png)

View File

@@ -309,7 +309,7 @@ SBOMs are available for Ubuntu, Windows, and macOS runner images. You can locate
{% ifversion actions-disable-repo-runners %}
{% data reusables.actions.disable-selfhosted-runners-crossrefs %}
{% endif %}

View File

@@ -22,7 +22,7 @@ topics:
Service containers are Docker containers that provide a simple and portable way for you to host services that you might need to test or operate your application in a workflow. For example, your workflow might need to run integration tests that require access to a database and memory cache.
You can configure service containers for each job in a workflow. {% data variables.product.prodname_dotcom %} creates a fresh Docker container for each service configured in the workflow, and destroys the service container when the job completes. Steps in a job can communicate with all service containers that are part of the same job. However, you cannot create and use service containers inside a composite action.
{% data reusables.actions.docker-container-os-support %}
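As a brief sketch, a job can declare a service container like this (the image, port mapping, and test command are illustrative):

```yaml
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis           # illustrative service image
        ports:
          - 6379:6379          # map the service port so steps can reach it on localhost
    steps:
      - uses: actions/checkout@v3
      - run: ./run-integration-tests.sh   # hypothetical test script that connects to localhost:6379
```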

View File

@@ -12,7 +12,7 @@ shortTitle: Customize runners
{% data reusables.actions.enterprise-github-hosted-runners %}
If you require additional software packages on {% data variables.product.prodname_dotcom %}-hosted runners, you can create a job that installs the packages as part of your workflow.
To see which packages are already installed by default, see "[AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#preinstalled-software)."
@@ -40,7 +40,7 @@ jobs:
{% note %}
**Note:** Always run `sudo apt-get update` before installing a package. In case the `apt` index is stale, this command fetches and re-indexes any available packages, which helps prevent package installation failures.
{% endnote %}
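Put together, an install step might look roughly like this sketch (the package chosen is illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Install packages
        run: |
          sudo apt-get update              # refresh the apt index first to avoid stale-package failures
          sudo apt-get install -y jq       # illustrative package
```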

View File

@@ -48,7 +48,7 @@ For more information on workflow run artifacts, see "[AUTOTITLE](/actions/using-
## Restrictions for accessing a cache
Access restrictions provide cache isolation and security by creating a logical boundary between different branches or tags.
Workflow runs can restore caches created in either the current branch or the default branch (usually `main`). If a workflow run is triggered for a pull request, it can also restore caches created in the base branch, including base branches of forked repositories. For example, if the branch `feature-b` has the base branch `feature-a`, a workflow run triggered on a pull request would have access to caches created in the default `main` branch, the base `feature-a` branch, and the current `feature-b` branch.
Workflow runs cannot restore caches created for child branches or sibling branches. For example, a cache created for the child `feature-b` branch would not be accessible to a workflow run triggered on the parent `main` branch. Similarly, a cache created for the `feature-a` branch with the base `main` would not be accessible to its sibling `feature-c` branch with the base `main`. Workflow runs also cannot restore caches created for different tag names. For example, a cache created for the tag `release-a` with the base `main` would not be accessible to a workflow run triggered for the tag `release-b` with the base `main`.
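These restrictions apply to caches created with the `cache` action; a minimal sketch of creating and restoring such a cache is shown below (the path and key are illustrative):

```yaml
- uses: actions/cache@v3
  with:
    # Cache the npm download cache, keyed on the lockfile contents (path and key are illustrative)
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: ${{ runner.os }}-npm-
```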
@@ -246,7 +246,7 @@ For example, if a pull request contains a `feature` branch and targets the defau
## Usage limits and eviction policy
{% data variables.product.prodname_dotcom %} will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited{% ifversion actions-cache-policy-apis %}. By default, the limit is 10 GB per repository, but this limit might be different depending on policies set by your enterprise owners or repository administrators.{% else %} to 10 GB.{% endif %}
{% data reusables.actions.cache-eviction-process %} {% ifversion actions-cache-ui %}The cache eviction process may cause cache thrashing, where caches are created and deleted at a high frequency. To reduce this, you can review the caches for a repository and take corrective steps, such as removing caching from specific workflows. For more information, see "[Managing caches](#managing-caches)."{% endif %}{% ifversion actions-cache-admin-ui %} You can also increase the cache size limit for a repository. For more information, see "[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
@@ -306,7 +306,7 @@ Users with `write` access to a repository can use the {% data variables.product.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.actions-cache-list %}
1. To the right of the cache entry you want to delete, click {% octicon "trash" aria-label="Delete cache" %}.
![Screenshot of the list of cache entries. A trash can icon, used to delete a cache, is highlighted with a dark orange outline.](/assets/images/help/repository/actions-cache-delete.png)
@@ -318,7 +318,7 @@ Users with `write` access to a repository can use the {% data variables.product.
### Force deleting cache entries
Caches have branch scope restrictions in place, which means some caches have limited usage options. For more information on cache scope restrictions, see "[AUTOTITLE](/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache)." If caches limited to a specific branch are using a lot of storage quota, it may cause caches from the `default` branch to be created and deleted at a high frequency.
For example, a repository could have many new pull requests opened, each with their own caches that are restricted to that branch. These caches could take up the majority of the cache storage for that repository. Once a repository has reached its maximum cache storage, the cache eviction policy will create space by deleting the oldest caches in the repository. In order to prevent cache thrashing when this happens, you can set up workflows to delete caches on a faster cadence than the cache eviction policy will. You can use the [`gh-actions-cache`](https://github.com/actions/gh-actions-cache/) CLI extension to delete caches for specific branches.
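For example, you could run the extension from a workflow that triggers when a pull request closes. The following is only a rough sketch: the extension's flags, its output format, and the required `actions: write` permission are assumptions, so check `gh actions-cache --help` before relying on it.

```yaml
name: Clean up branch caches
on:
  pull_request:
    types: [closed]

permissions:
  actions: write        # assumed requirement for deleting caches with the token

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Delete caches created for this pull request's merge ref
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh extension install actions/gh-actions-cache
          BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
          # List caches for the PR's merge ref, then delete each one (flags and output format are assumptions)
          gh actions-cache list -R ${{ github.repository }} -B "$BRANCH" | cut -f 1 | while read -r key; do
            gh actions-cache delete "$key" -R ${{ github.repository }} -B "$BRANCH" --confirm
          done
```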

View File

@@ -1154,7 +1154,7 @@ jobs:
{% note %}
**Notes**:
- The maximum number of top-level properties in `client_payload` is 10.
- The payload can contain a maximum of 65,535 characters.

View File

@@ -23,16 +23,16 @@ The instructions below assume that you need to set up {% data variables.product.
- No internet access.
- Access to limited internal resources, such as private registries for {% data variables.product.prodname_dependabot %}.
## Restricting internet access for {% data variables.product.prodname_dependabot %} runners
Before configuring {% data variables.product.prodname_dependabot %}, install Docker on your self-hosted runner. For more information, see "[AUTOTITLE](/admin/github-actions/enabling-github-actions-for-github-enterprise-server/managing-self-hosted-runners-for-dependabot-updates#configuring-self-hosted-runners-for-dependabot-updates)."
1. On {% data variables.location.product_location %}, navigate to the `github/dependabot-action` repository and retrieve information about the `dependabot-updater` and `dependabot-proxy` container images from the `containers.json` file.
Each release of {% data variables.product.product_name %} includes an updated `containers.json` file at: `https://HOSTNAME/github/dependabot-action/blob/ghes-VERSION/docker/containers.json`. You can see the {% data variables.product.prodname_dotcom_the_website %} version of the file at: [containers.json](https://github.com/github/dependabot-action/blob/main/docker/containers.json).
1. Preload all the container images from the {% data variables.product.prodname_dotcom %} {% data variables.product.prodname_container_registry %} onto the {% data variables.product.prodname_dependabot %} runner using the `docker pull` command. {% ifversion ghes > 3.8 %}Alternatively, preload the `dependabot-proxy` image and then preload only the container images for the ecosystems you require.
For example, to support npm and {% data variables.product.prodname_actions %} you could use the following commands, copying details of the images to load from the `containers.json` file to ensure that you have the correct version and SHA for each image.
```
@@ -48,7 +48,7 @@ Before configuring {% data variables.product.prodname_dependabot %}, install Doc
{% endnote %}
1. When you have finished adding these images to the runner, you are ready to restrict internet access to the {% data variables.product.prodname_dependabot %} runner, ensuring that it can still access your private registries for the required ecosystems and for {% data variables.location.product_location %}.
You must add the images first because {% data variables.product.prodname_dependabot %} runners pull `dependabot-updater` and `dependabot-proxy` from the {% data variables.product.prodname_dotcom %} {% data variables.product.prodname_container_registry %} when {% data variables.product.prodname_dependabot %} jobs start running.
@@ -60,6 +60,6 @@ Before configuring {% data variables.product.prodname_dependabot %}, install Doc
1. For ecosystems that you want to test, click **Last checked TIME ago** to display the "Update logs" view.
1. Click **Check for updates** to check for new updates to dependencies for that ecosystem.
When the check for updates is complete, you should check the "Update logs" view to verify that {% data variables.product.prodname_dependabot %} accessed the configured private registries on {% data variables.location.product_location %} to check for version updates.
After you have verified that the configuration is correct, ask repository administrators to update their {% data variables.product.prodname_dependabot %} configurations to use private registries only. For more information, see "[AUTOTITLE](/code-security/dependabot/working-with-dependabot/removing-dependabot-access-to-public-registries)."

View File

@@ -26,7 +26,7 @@ To get started with {% data variables.product.product_name %}, you first need to
The first time you access your enterprise, you will complete an initial configuration to get {% data variables.product.product_name %} ready to use. The initial configuration includes connecting your enterprise with an identity provider (IdP), authenticating with SAML SSO, configuring policies for repositories and organizations in your enterprise, and configuring SMTP for outbound email. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/initializing-github-ae)."
Later, you can use the site admin dashboard and enterprise settings to further configure your enterprise, manage users, organizations and repositories, and set policies that reduce risk and increase quality.
All enterprises are configured with subdomain isolation and support for TLS 1.2 and higher for encrypted traffic only.
{% endif %}

View File

@@ -30,8 +30,8 @@ You can enable a retention policy for checks, actions, and associated data by se
{% data reusables.enterprise_site_admin_settings.management-console %}
1. In the "Settings" sidebar, click **Checks**.
2. Select **Enable archiving of Checks-related data**.
3. Under "Archive threshold (days)", type the number of days for the archival threshold. Checks older than this number of days will be archived.
4. Under "Delete threshold (days)", type the number of days for the deletion threshold. Archived checks older than this number of days will be permanently deleted.
3. Under "Archive threshold (days)", type the number of days for the archival threshold. Checks older than this number of days will be archived.
4. Under "Delete threshold (days)", type the number of days for the deletion threshold. Archived checks older than this number of days will be permanently deleted.
{% data reusables.enterprise_management_console.save-settings %}
{% endif %}

View File

@@ -100,7 +100,7 @@ Backup snapshots are written to the disk path set by the `GHE_DATA_DIR` data dir
```shell
./bin/ghe-host-check
```
1. To create an initial full backup, run the following command.
```shell
@@ -244,7 +244,7 @@ Optionally, to validate the restore, configure an IP exception list to allow acc
{% note %}
**Note:**
**Note:**

View File

@@ -39,11 +39,11 @@ You can enable web commit signing, rotate the private key used for web commit si
ghe-config-apply
```
1. Create a new user on {% data variables.location.product_location %} via built-in authentication or external authentication. For more information, see "[AUTOTITLE](/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise)."
- The user's username must be the same username you used when creating the PGP key in step 1 above, for example, `web-flow`.
- The user's email address must be the same address you used when creating the PGP key.
{% data reusables.enterprise_site_admin_settings.add-key-to-web-flow-user %}
{% data reusables.enterprise_site_admin_settings.email-settings %}
1. Under "No-reply email address", type the same email address you used when creating the PGP key.
1. Under "No-reply email address", type the same email address you used when creating the PGP key.
{% note %}

View File

@@ -49,7 +49,7 @@ Otherwise, you can use the SSL Converter tool to convert your certificate into t
## Unresponsive installation after uploading a key
If {% data variables.location.product_location %} is unresponsive after uploading a TLS key, please [contact {% data variables.product.prodname_enterprise %} Support](https://enterprise.github.com/support) with specific details, including a copy of your TLS certificate. Ensure that your private key **is not** included.
## Certificate validity errors

View File

@@ -10,7 +10,7 @@ topics:
{% data reusables.enterprise.repository-caching-release-phase %}
If you have teams and CI farms located around the world, you may experience reduced performance on your primary {% data variables.product.prodname_ghe_server %} instance. While active geo-replicas can improve the performance of read requests, this comes at the cost of limiting write throughput. To reduce load on your primary instance and improve write throughput performance, you can configure a repository cache, an asynchronous read-only mirror of repositories located near these geographically-distributed clients.
A repository cache eliminates the need for {% data variables.product.product_name %} to transmit the same Git data over a long-haul network link multiple times to serve multiple clients, by serving your repository data close to CI farms and distributed teams. For instance, if your primary instance is in North America and you also have a large presence in Asia, you will benefit from setting up the repository cache in Asia for use by CI runners there.

View File

@@ -15,7 +15,7 @@ topics:
---
## About {% data variables.product.product_name %} cluster nodes
Each node in a {% data variables.product.product_name %} cluster is a virtual machine (VM) that runs the {% data variables.product.product_name %} software. Before you deploy a cluster, you can review hardware requirements, required services, and design recommendations.
{% data reusables.enterprise_clustering.clustering-requires-https %}
@@ -31,7 +31,7 @@ Each node must have a root volume, as well as a separate data volume. These are
## Services required for clustering
{% data variables.product.prodname_ghe_server %} comprises a set of services. In a cluster, these services run across multiple nodes, and the instance balances requests between the nodes. The instance automatically stores redundant copies of data on separate nodes. Most services are equal peers with other instances of the same service. The exceptions to this distribution are the `mysql-server` and `redis-server` services, which operate with a single primary node with one or more replica nodes.
For adequate redundancy, use these minimum nodes operating each service.

View File

@@ -68,7 +68,7 @@ By default, {% data variables.product.prodname_nes %} is disabled. You can enabl
nomad status nes
```
## Configuring TTL settings for {% data variables.product.prodname_nes %}
To determine how {% data variables.product.prodname_nes %} notifies you, you can configure TTL settings for `fail` and `warn` states. The TTL for the `fail` state must be higher than the TTL for the `warn` state.
@@ -137,7 +137,7 @@ After {% data variables.product.prodname_nes %} detects that a node has exceeded
```shell copy
nomad node status
```
- If the node's status is `ineligible`, make the node eligible by connecting to the node via SSH and running the following command.
```shell copy
@@ -165,14 +165,14 @@ You can view logs for {% data variables.product.prodname_nes %} from any node in
nomad alloc logs -job nes
```
1. Alternatively, you can view logs for {% data variables.product.prodname_nes %} on the node that runs the service. The service writes logs to the systemd journal.
- To determine which node runs {% data variables.product.prodname_nes %}, run the following command.
```shell copy
nomad job status "nes" | grep running | grep "${nomad_node_id}" | awk 'NR==2{ print $1 }' | xargs nomad alloc status | grep "Node Name"
```
- To view logs on the node, connect to the node via SSH, then run the following command.
```shell copy
journalctl -t nes
```

View File

@@ -29,12 +29,12 @@ The time required to failover depends on how long it takes to manually promote t
$ ghe-maintenance -s
```
- When the number of active Git operations, MySQL queries, and Resque jobs reaches zero, wait 30 seconds.
{% note %}
**Note:** Nomad will always have jobs running, even in maintenance mode, so you can safely ignore these jobs.
{% endnote %}
- To verify all replication channels report `OK`, use the `ghe-repl-status -vv` command.

View File

@@ -69,7 +69,7 @@ Replication requires that the primary node and all replica nodes can communicate
### Under-replication
If you run the `ghe-repl-status` command-line utility on a replica node and Git repositories, repository networks, or storage objects are under-replicated, one or more replica nodes are not fully synchronized with the primary node. Under-replication may occur if the primary node is unable to communicate with the replica nodes, or if the replica nodes are unable to communicate with the primary node.
If you've recently configured high availability or geo-replication, the initial sync will take some time. The duration of the initial sync depends on how much data exists and network conditions.
@@ -110,7 +110,7 @@ ghe-storage info OID
### Getting support from {% data variables.product.company_short %}
If you review the troubleshooting advice for replication and continue to experience issues on your instance, collect the following information, then contact {% data variables.contact.contact_ent_support %}.
- On each affected node, run `ghe-repl-status -vv`, then copy the output to your ticket. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/command-line-utilities#ghe-repl-status)."
- On each affected node, create a support bundle to attach to your ticket. For more information, see "[AUTOTITLE](/support/contacting-github-support/providing-data-to-github-support#creating-and-sharing-support-bundles)."

View File

@@ -77,7 +77,7 @@ As more users join {% data variables.location.product_location %}, you may need
```shell
$ ghe-repl-stop
```
1. To install the {% data variables.product.prodname_ghe_server %} software on the newly partitioned disk, run the `ghe-upgrade` command. You must replace **PACKAGE-NAME.pkg** with the path to a platform-specific upgrade package that matches the version of {% data variables.product.prodname_ghe_server %} already running on the appliance. You cannot use a universal hotpatch upgrade package, such as `github-enterprise-2.11.9.hpkg`. After the `ghe-upgrade` command completes, application services will automatically terminate.
```shell

View File

@@ -84,7 +84,7 @@ The following instructions are only intended for {% data variables.product.prod
{% data reusables.enterprise_installation.ssh-into-instance %}
1. To validate the current flushing method for InnoDB, run the following command.
```shell copy
ghe-config mysql.innodb-flush-no-fsync
```
@@ -101,7 +101,7 @@ The following instructions are only intended for {% data variables.product.prod
You can reduce pending operations, increase IOPS, and improve performance by provisioning faster storage for your instance's nodes. To upgrade your instance's storage, back up your instance and restore the backup to a new replacement instance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance)."
### Sharing data with {% data variables.product.company_short %}
Finally, if you're willing to help {% data variables.product.company_short %} understand the real-world impact of the upgrade to MySQL 8, you can provide the data you've collected to {% data variables.contact.github_support %}. Provide the baseline and post-upgrade observations from the monitor dashboard, along with a support bundle that covers the period when you collected data. For more information, see "[AUTOTITLE](/support/learning-about-github-support/about-github-support)" and "[AUTOTITLE](/support/contacting-github-support/providing-data-to-github-support)."
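As a sketch, a support bundle can be generated from an administrative workstation over SSH along these lines (replace HOSTNAME with your instance's hostname; see the linked articles for the authoritative steps):

```shell
# Create a support bundle on the instance and stream it to the local machine
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -o' > support-bundle.tgz
```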

View File

@@ -30,7 +30,7 @@ topics:
- Use the latest patch release when upgrading. {% data reusables.enterprise_installation.enterprise-download-upgrade-pkg %}
- Use a staging instance to test the upgrade steps. For more information, see "[AUTOTITLE](/admin/installation/setting-up-a-github-enterprise-server-instance/setting-up-a-staging-instance)."
- When running multiple upgrades, wait at least 24 hours between feature upgrades to allow data migrations and upgrade tasks running in the background to fully complete.
- Take a snapshot before upgrading your virtual machine. For more information, see "[AUTOTITLE](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources/upgrading-github-enterprise-server#taking-a-snapshot)."
- Ensure you have a recent, successful backup of your instance. For more information, see the [{% data variables.product.prodname_enterprise_backup_utilities %} README.md file](https://github.com/github/backup-utils#readme).
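For example, on a backup host with {% data variables.product.prodname_enterprise_backup_utilities %} installed and configured, a fresh snapshot can be taken with a command like the following (a sketch; your backup host and configuration may differ):

```shell
# Run a verbose backup snapshot from the configured backup host
ghe-backup -v
```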
## Requirements

View File

@@ -149,7 +149,7 @@ If the upgrade target you're presented with is a feature release instead of a pa
admin@HOSTNAME:~$ ghe-upgrade GITHUB-UPGRADE.hpkg
*** verifying upgrade package signature...
```
1. If at least one service or system component requires a reboot, the hotpatch upgrade script notifies you. For example, updates to the kernel, MySQL, or Elasticsearch may require a reboot.
### Upgrading an instance with multiple nodes using a hotpatch

View File

@@ -60,7 +60,7 @@ Determine the migration approach that will work best for your enterprise. Smalle
We recommend an iterative approach that combines active management with self service. Start with a small group of early adopters that can act as your internal champions. Identify a handful of workflows that are comprehensive enough to represent the breadth of your business. Work with your early adopters to migrate those workflows to {% data variables.product.prodname_actions %}, iterating as needed. This will give other teams confidence that their workflows can be migrated, too.
Then, make {% data variables.product.prodname_actions %} available to your larger organization. Provide resources to help these teams migrate their own workflows to {% data variables.product.prodname_actions %}, and inform the teams when the existing systems will be retired.
Finally, inform any teams that are still using your old systems to complete their migrations within a specific timeframe. You can point to the successes of other teams to reassure them that migration is possible and desirable.

View File

@@ -22,7 +22,7 @@ shortTitle: About actions in your enterprise
{% data variables.product.prodname_actions %} workflows can use _actions_, which are individual tasks that you can combine to create jobs and customize your workflow. You can create your own actions, or use and customize actions shared by the {% data variables.product.prodname_dotcom %} community.
{% data reusables.actions.enterprise-no-internet-actions %} You can restrict your developers to using actions that are stored on {% data variables.location.product_location %}, which includes most official {% data variables.product.company_short %}-authored actions, as well as any actions your developers create. Alternatively, to allow your developers to benefit from the full ecosystem of actions built by industry leaders and the open source community, you can configure access to other actions from {% data variables.product.prodname_dotcom_the_website %}.
We recommend allowing automatic access to all actions from {% data variables.product.prodname_dotcom_the_website %}. {% ifversion ghes %}However, this does require {% data variables.product.product_name %} to make outbound connections to {% data variables.product.prodname_dotcom_the_website %}. If you don't want to allow these connections, or{% else %}If{% endif %} you want to have greater control over which actions are used on your enterprise, you can manually sync specific actions from {% data variables.product.prodname_dotcom_the_website %}.
@@ -45,7 +45,7 @@ Each action is a repository in the `actions` organization, and each action repos
{% note %}
**Notes:**
- When using setup actions (such as `actions/setup-LANGUAGE`) on {% data variables.product.product_name %} with self-hosted runners, you might need to set up the tools cache on runners that do not have internet access. For more information, see "[AUTOTITLE](/admin/github-actions/managing-access-to-actions-from-githubcom/setting-up-the-tool-cache-on-self-hosted-runners-without-internet-access)."
- When {% data variables.product.product_name %} is updated, bundled actions are automatically replaced with default versions in the upgrade package.
@@ -55,7 +55,7 @@ Each action is a repository in the `actions` organization, and each action repos
{% data reusables.actions.access-actions-on-dotcom %}
The recommended approach is to enable automatic access to all actions from {% data variables.product.prodname_dotcom_the_website %}. You can do this by using {% data variables.product.prodname_github_connect %} to integrate {% data variables.product.product_name %} with {% data variables.product.prodname_ghe_cloud %}. For more information, see "[AUTOTITLE](/admin/github-actions/managing-access-to-actions-from-githubcom/enabling-automatic-access-to-githubcom-actions-using-github-connect)".
{% ifversion ghes %}
{% note %}

View File

@@ -34,7 +34,7 @@ The `actions-sync` tool can only download actions from {% data variables.product
{% note %}
**Note:** The `actions-sync` tool is intended for use in systems where {% data variables.product.prodname_github_connect %} is not enabled. If you run the tool on a system with {% data variables.product.prodname_github_connect %} enabled, you may see the error `The repository <repo_name> has been retired and cannot be reused`. This indicates that a workflow has used that action directly on {% data variables.product.prodname_dotcom_the_website %} and the namespace is retired on {% data variables.location.product_location %}. For more information, see "[AUTOTITLE](/admin/github-actions/managing-access-to-actions-from-githubcom/enabling-automatic-access-to-githubcom-actions-using-github-connect#automatic-retirement-of-namespaces-for-actions-accessed-on-githubcom)."
{% endnote %}
@@ -84,12 +84,12 @@ This example demonstrates using the `actions-sync` tool to sync an individual ac
- `--destination-token`: A {% data variables.product.pat_generic %} for the destination enterprise instance.
- `--destination-url`: The URL of the destination enterprise instance.
- `--repo-name`: The action repository to sync. This takes the format of `owner/repository:destination_owner/destination_repository`.
- The above example syncs the [`actions/stale`](https://github.com/actions/stale) repository to the `synced-actions/actions-stale` repository on the destination enterprise instance. You must create the organization named `synced-actions` in your enterprise before running the above command.
- If you omit `:destination_owner/destination_repository`, the tool uses the original owner and repository name for your enterprise. Before running the command, you must create a new organization in your enterprise that matches the owner name of the action. Consider using a central organization to store the synced actions in your enterprise, as this means you will not need to create multiple new organizations if you sync actions from different owners.
- You can sync multiple actions by replacing the `--repo-name` parameter with `--repo-name-list` or `--repo-name-list-file`. For more information, see the [`actions-sync` README](https://github.com/actions/actions-sync#actions-sync).
1. After the action repository is created in your enterprise, people in your enterprise can use the destination repository to reference the action in their workflows. For the example action shown above:
```yaml
uses: synced-actions/actions-stale@v1
```
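For reference, a full invocation corresponding to the parameters above might look like the following sketch (the cache directory, token variable, and HOSTNAME are placeholders; consult the `actions-sync` README for the authoritative syntax):

```shell
# Sync actions/stale into the synced-actions organization on the destination instance
actions-sync sync \
  --cache-dir "/tmp/actions-sync-cache" \
  --destination-token "$DESTINATION_PAT" \
  --destination-url "https://HOSTNAME" \
  --repo-name "actions/stale:synced-actions/actions-stale"
```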

View File

@@ -19,7 +19,7 @@ shortTitle: Use actions
Most official {% data variables.product.prodname_dotcom %}-authored actions are automatically bundled with {% data variables.product.prodname_ghe_managed %}, and are captured at a point in time from {% data variables.product.prodname_marketplace %}. When your {% data variables.product.prodname_ghe_managed %} instance is updated, the bundled official actions are also updated.
The bundled official actions include `actions/checkout`, `actions/upload-artifact`, `actions/download-artifact`, `actions/labeler`, and various `actions/setup-` actions, among others. To see which of the official actions are included, browse to the following organizations on your instance:
- <code>https://<em>HOSTNAME</em>/actions</code>
- <code>https://<em>HOSTNAME</em>/github</code>

View File

@@ -37,7 +37,7 @@ When a configuration error or an issue with your identity provider IdP prevents
Azure AD will retry SCIM provisioning attempts automatically during the next Azure AD sync cycle. The default SCIM provisioning interval for Azure AD is 40 minutes. For more information about this retry behavior, see the [Microsoft documentation](https://learn.microsoft.com/en-us/azure/active-directory/app-provisioning/how-provisioning-works#errors-and-retries) or contact Azure support if you need additional assistance.
Okta will retry failed SCIM provisioning attempts with manual Okta admin intervention. For more information about how an Okta admin can retry a failed task for a specific application, see the [Okta documentation](https://support.okta.com/help/s/article/How-to-retry-failed-tasks-for-a-specific-application?language=en_US) or contact Okta support if you need additional assistance.
{% endif %}
## SAML authentication errors

View File

@@ -13,7 +13,7 @@ topics:
permissions: Enterprise owners can use a recovery code to access an enterprise account.
---
You can use a recovery code to access your enterprise account when an authentication configuration error or an issue with your identity provider (IdP) prevents you from using SSO.
In order to access your enterprise account this way, you must have previously downloaded and stored the recovery codes for your enterprise. For more information, see "[AUTOTITLE](/admin/identity-and-access-management/managing-recovery-codes-for-your-enterprise/downloading-your-enterprise-accounts-single-sign-on-recovery-codes)."

View File

@@ -19,7 +19,7 @@ shortTitle: Invite people
You can disable unauthenticated sign-ups and require an invitation to create a new user account on your instance. For more information, see "[AUTOTITLE](/admin/identity-and-access-management/using-built-in-authentication/disabling-unauthenticated-sign-ups)."
{% data reusables.enterprise_user_management.alternatively-enable-external-authentication %}
## Inviting people to create a user account

View File

@@ -30,7 +30,7 @@ For more information about using OIDC with {% data variables.product.prodname_em
### {% data variables.product.prodname_actions %}
Actions that use a {% data variables.product.pat_generic %} will likely be blocked by your IdP's CAP. We recommend that {% data variables.product.pat_generic %}s are created by a service account which is then exempted from IP controls in your IdP's CAP.
If you're unable to use a service account, another option for unblocking actions that use {% data variables.product.pat_generic %}s is to allow the IP ranges used by {% data variables.product.prodname_actions %}. For more information, see "[AUTOTITLE](/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses)."
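For example, the IP ranges used by {% data variables.product.prodname_actions %} can be retrieved from the meta API (a sketch that assumes `curl` and `jq` are available on your workstation):

```shell
# List the IP ranges that GitHub Actions may use for outbound requests
curl -s https://api.github.com/meta | jq '.actions'
```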
@@ -38,12 +38,12 @@ If you're unable to use a service account, another option for unblocking actions
{% data variables.product.prodname_github_codespaces %} may not be available if your enterprise uses OIDC SSO with CAP to restrict access by IP addresses. This is because codespaces are created with dynamic IP addresses that your IdP's CAP is likely to block. Other CAP policies may also affect the availability of {% data variables.product.prodname_github_codespaces %}, depending on the policy's specific setup.
### {% data variables.product.prodname_github_apps %} and {% data variables.product.prodname_oauth_apps %}
When {% data variables.product.prodname_github_apps %} and {% data variables.product.prodname_oauth_apps %} sign a user in and make requests on that user's behalf, {% data variables.product.prodname_dotcom %} will send the IP address of the app's server to your IdP for validation. If the IP address of the app's server is not validated by your IdP's CAP, the request will fail.
When {% data variables.product.prodname_github_apps %} call {% data variables.product.prodname_dotcom %} APIs acting either as the app itself or as an installation, these calls are not performed on behalf of a user. Since your IdP's CAP executes and applies policies to user accounts, these application requests cannot be validated against CAP and are always allowed through. For more information on {% data variables.product.prodname_github_apps %} authenticating as themselves or as an installation, see "[AUTOTITLE](/apps/creating-github-apps/authenticating-with-a-github-app/about-authentication-with-a-github-app)".
You can contact the owners of the apps you want to use, ask for their IP ranges, and configure your IdP's CAP to allow access from those IP ranges. If you're unable to contact the owners, you can review your IdP sign-in logs to review the IP addresses seen in the requests, then allow-list those addresses.
If you do not wish to allow all of the IP ranges for all of your enterprise's apps, you can also exempt installed {% data variables.product.prodname_github_apps %} and authorized {% data variables.product.prodname_oauth_apps %} from the IdP allow list. If you do so, these apps will continue working regardless of the originating IP address. For more information, see "[AUTOTITLE](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-security-settings-in-your-enterprise#allowing-access-by-github-apps)."

View File

@@ -60,9 +60,9 @@ To configure your IdP, follow the instructions they provide for configuring the
- [{% data variables.product.prodname_emu_idp_application %} application on Azure Active Directory](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aad.githubenterprisemanageduser?tab=Overview)
- [{% data variables.product.prodname_emu_idp_application %} application on Okta](https://www.okta.com/integrations/github-enterprise-managed-user)
- [{% data variables.product.prodname_emu_idp_application %} connector on PingFederate](https://www.pingidentity.com/en/resources/downloads/pingfederate.html) (public beta)
To download the PingFederate connector, navigate to the **Add-ons** tab and select **GitHub EMU Connector 1.0**.
{% indented_data_reference reusables.enterprise-accounts.public-beta-pingfed-for-emu spaces=3 %}
2. To configure the {% data variables.product.prodname_emu_idp_application %} application and your IdP, click the link below and follow the instructions provided by your IdP:

View File

@@ -17,7 +17,7 @@ topics:
## About provisioning for {% data variables.product.prodname_emus %}
You must configure provisioning for {% data variables.product.prodname_emus %} to create, manage, and deactivate user accounts for your enterprise members.
After you configure provisioning for {% data variables.product.prodname_emus %}, users assigned to the {% data variables.product.prodname_emu_idp_application %} application in your identity provider are provisioned as new {% data variables.enterprise.prodname_managed_users %} on {% data variables.product.prodname_dotcom %} via SCIM, and the {% data variables.enterprise.prodname_managed_users %} are added to your enterprise. If you assign a group to the application, all users within the group will be provisioned as new {% data variables.enterprise.prodname_managed_users %}.
@@ -59,7 +59,7 @@ To configure provisioning for your {% data variables.enterprise.prodname_emu_ent
## Configuring provisioning for {% data variables.product.prodname_emus %}
After creating your {% data variables.product.pat_generic %} and storing it securely, you can configure provisioning on your identity provider.
{% data reusables.scim.emu-scim-rate-limit %}

View File

@@ -43,15 +43,15 @@ To migrate to a new IdP or tenant, you cannot edit your existing SAML configurat
1. Deactivate SAML for the {% data variables.enterprise.prodname_emu_enterprise %}.
- From your profile, click **Your enterprises**, and then click the appropriate enterprise.
- Click {% octicon "gear" aria-label="The Settings gear" %} **Settings**, and then click **Authentication security**.
- Under "SAML single sign-on", deselect **Require SAML authentication**, and then click **Save**.
1. Wait for all users in the enterprise to show as suspended.
1. While still signed in as the setup user, configure SAML and SCIM for the new IdP or tenant with a new {% data variables.product.prodname_emus %} application.
After you configure provisioning for the new application, the {% data variables.enterprise.prodname_managed_users %} will be unsuspended, and your developers will be able to sign into their existing accounts again.
By default, this process can take up to 40 minutes for Azure AD. To expedite the process for an individual user, click the **Provision on Demand** button in the "Provisioning" tab of the application for {% data variables.product.prodname_emus %}.
## Migrating when the normalized SCIM `userName` values will change

View File

@@ -69,7 +69,7 @@ In your Azure AD tenant, add the application for {% data variables.product.produ
{% endif %}
## Managing enterprise owners
The steps to make a person an enterprise owner depend on whether you only use SAML or also use SCIM. For more information about enterprise owners, see "[AUTOTITLE](/admin/user-management/managing-users-in-your-enterprise/roles-in-an-enterprise)."

View File

@@ -151,7 +151,7 @@ After you enable SCIM on a {% data variables.product.product_name %} instance, a
--header 'Content-Type: application/scim' \
--header 'Authorization: Bearer $GHES_PAT'
```
The command should return an empty array.
{%- endif %}
{%- ifversion ghae %}

View File

@@ -20,13 +20,13 @@ redirect_from:
{% data reusables.saml.dotcom-saml-explanation %} {% data reusables.saml.about-saml-enterprise-accounts %}
{% data reusables.saml.switching-from-org-to-enterprise %}
When you configure SAML SSO at the organization level, each organization must be configured with a unique SSO tenant in your IdP, which means that your members will be associated with a unique SAML identity record for each organization they have successfully authenticated with. If you configure SAML SSO for your enterprise account instead, each enterprise member will have one SAML identity that is used for all organizations owned by the enterprise account.
After you configure SAML SSO for your enterprise account, the new configuration will override any existing SAML SSO configurations for organizations owned by the enterprise account.
Enterprise members will not be notified when an enterprise owner enables SAML for the enterprise account. If SAML SSO was previously enforced at the organization level, members should not see a major difference when navigating directly to organization resources. The members will continue to be prompted to authenticate via SAML. If members navigate to organization resources via their IdP dashboard, they will need to click the new tile for the enterprise-level app, instead of the old tile for the organization-level app. The members will then be able to choose the organization to navigate to.
Any {% data variables.product.pat_generic %}s, SSH keys, {% data variables.product.prodname_oauth_apps %}, and {% data variables.product.prodname_github_apps %} that were previously authorized for the organization will continue to be authorized for the organization. However, members will need to authorize any PATs, SSH keys, {% data variables.product.prodname_oauth_apps %}, and {% data variables.product.prodname_github_apps %} that were never authorized for use with SAML SSO for the organization.

View File

@@ -104,7 +104,7 @@ To configure the instance, you must confirm the instance's status, upload a lice
## Azure extension features
{% data variables.product.product_name %} does not support the installation of Azure extension features. The {% data variables.product.prodname_ghe_server %} image is shipped with a customized `waagent` package which only supports basic VM management functions and blocks advanced VM management functions.
To avoid system instability of your {% data variables.product.prodname_ghe_server %} instance, the `walinuxagent` service intentionally runs in a restricted mode on {% data variables.product.prodname_ghe_server %}, explicitly disallowing the agent from installing other agents. VM management features that rely on additional agents and extensions beyond those that ship with the {% data variables.product.prodname_ghe_server %} image, such as the Monitoring Agent extension for Azure Insights or Azure Backups, are unsupported.

View File

@@ -8,9 +8,9 @@ redirect_from:
- /early-access/github/analyze-how-your-team-works-with-server-statistics/exploring-server-statistics
---
You can download up to the last 365 days of {% data variables.product.prodname_server_statistics %} data in a CSV or JSON file. This data, which includes aggregate metrics on repositories, issues, and pull requests, can help you anticipate the needs of your organization, understand how your team works, and show the value you get from {% data variables.product.prodname_ghe_server %}.
Before you can download this data, you must enable {% data variables.product.prodname_server_statistics %}. For more information, see "[AUTOTITLE](/admin/configuration/configuring-github-connect/enabling-server-statistics-for-your-enterprise)."
To preview the metrics available to download, see "[AUTOTITLE](/admin/monitoring-activity-in-your-enterprise/analyzing-how-your-team-works-with-server-statistics/about-server-statistics)."

View File

@@ -10,6 +10,6 @@ redirect_from:
You can request up to 365 days of metrics in a single {% data variables.product.prodname_server_statistics %} REST API request. This data, which includes aggregate metrics on repositories, issues, and pull requests, can help you anticipate the needs of your organization, understand how your team works, and show the value you get from {% data variables.product.prodname_ghe_server %}. For a list of the metrics collected, see "[AUTOTITLE](/admin/monitoring-activity-in-your-enterprise/analyzing-how-your-team-works-with-server-statistics/about-server-statistics#server-statistics-data-collected)."
Before you can use the {% data variables.product.prodname_server_statistics %} REST API, you must enable {% data variables.product.prodname_server_statistics %}. For more information, see "[AUTOTITLE](/admin/configuration/configuring-github-connect/enabling-server-statistics-for-your-enterprise)."
For more information about using the REST API to request server statistics, see "[AUTOTITLE](/enterprise-cloud@latest/rest/enterprise-admin/admin-stats#get-github-enterprise-server-statistics)" in the {% data variables.product.prodname_ghe_cloud %} REST API documentation.
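As an illustration only, a request might look like the following sketch; the exact path and parameters are an assumption here, so confirm them against the linked REST API reference before use (ENTERPRISE and the token are placeholders):

```shell
# Request up to one year of server statistics for an enterprise (hypothetical path)
curl -s \
  -H "Authorization: Bearer YOUR-TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/enterprise-installation/ENTERPRISE/server-statistics?date_start=2022-01-01&date_end=2022-12-31"
```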

View File

@@ -32,8 +32,8 @@ When you enable log forwarding, you must upload a CA certificate to encrypt comm
1. Select **Enable log forwarding**.
1. In the **Server address** field, type the address of the server to which you want to forward logs. You can specify multiple addresses in a comma-separated list.
1. In the Protocol drop-down menu, select the protocol to use to communicate with the log server. The protocol will apply to all specified log destinations.
1. Optionally, select **Enable TLS**. We recommend enabling TLS according to your local security policies, especially if there are untrusted networks between the appliance and any remote log servers.
2. To encrypt communication between syslog endpoints, click **Choose File** and choose a CA certificate for the remote syslog server. You should upload a CA bundle containing a concatenation of the certificates of the CAs involved in signing the certificate of the remote log server. The entire certificate chain will be validated, and must terminate in a root certificate.
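For example, if your CA chain is split across PEM files, a bundle can be assembled and checked with `openssl` along these lines (the file names are hypothetical):

```shell
# Concatenate the intermediate and root CA certificates into a single bundle
cat intermediate-ca.pem root-ca.pem > ca-bundle.pem

# Optionally confirm that the bundle validates the remote log server's certificate
openssl verify -CAfile ca-bundle.pem syslog-server.pem
```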
{% elsif ghae %}
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}

View File

@@ -573,7 +573,7 @@ Before you'll see `git` category actions, you must enable Git events in the audi
| `management_console.slack_app_generate` | An app for the Slack integration was generated. For more information, see "[AUTOTITLE](/get-started/exploring-integrations/github-extensions-and-integrations#team-communication-tools)." |
| `management_console.slack_app_update` | The app-level token for the Slack integration was updated. For more information, see "[AUTOTITLE](/get-started/exploring-integrations/github-extensions-and-integrations#team-communication-tools)." |
| `management_console.smtp_test` | An SMTP configuration was tested while enabling email notifications for the instance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/configuring-email-for-notifications#testing-email-delivery)." |
| `management_console.ssh_command` | A command was run using the administrative shell (SSH). For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/accessing-the-administrative-shell-ssh)." |
| `management_console.storage_actions_test` | A storage configuration for {% data variables.product.prodname_actions %} was tested. For more information, see "[AUTOTITLE](/admin/github-actions/enabling-github-actions-for-github-enterprise-server)." |
| `management_console.storage_migrations_test` | A storage configuration for {% data variables.product.prodname_importer_proper_name %} was tested. For more information, see "[AUTOTITLE](/migrations/using-github-enterprise-importer/migrating-repositories-with-github-enterprise-importer/migrating-repositories-from-github-enterprise-server-to-github-enterprise-cloud)." |
| `management_console.storage_packages_test` | A storage configuration for {% data variables.product.prodname_registry %} was tested. For more information, see "[AUTOTITLE](/admin/packages)." |

View File

@@ -51,11 +51,11 @@ Before you can enable Git events in the audit log, you must configure a retentio
1. Under "Git event opt-in", select or deselect **Enable git events in the audit-log**.
{% note %}
**Note:** The retention policy must be set to something other than infinite for this option to display.
{% endnote %}
![Screenshot of the audit log. The checkbox to enable Git events in the audit log is highlighted with an orange outline.](/assets/images/help/enterprises/enable-git-events-checkbox.png)
2. Click **Save**.

View File

@@ -13,11 +13,11 @@ topics:
{% data reusables.github-ae.github-ae-enables-you %} {% data variables.product.prodname_ghe_managed %} is fully managed, reliable, and scalable, allowing you to accelerate delivery while improving your risk and compliance posture.
{% data variables.product.prodname_ghe_managed %} offers one developer platform from idea to production. You can increase development velocity with the tools that teams know and love, while you maintain industry and regulatory compliance with security and access controls, workflow automation, and policy enforcement.
## A highly available and planet-scale cloud
{% data variables.product.prodname_ghe_managed %} is a fully managed service, hosted in a high availability architecture. {% data variables.product.prodname_ghe_managed %} is hosted globally in a cloud that can scale to support your full development lifecycle without limits. {% data variables.product.prodname_dotcom %} fully manages backups, failover, and disaster recovery, so you never need to worry about your service or data.
## Data residency
@@ -44,12 +44,12 @@ Secure access to your enterprise on {% data variables.product.prodname_ghe_manag
- FedRAMP High Authorization to Operate (ATO)
- SOC 1, SOC 2 Type II, and SOC 3
- ISO/IEC certifications
- ISO/IEC 27001:2013
- ISO/IEC 27701:2019
- ISO/IEC 9001:2015
- ISO/IEC 22301:2019
- ISO/IEC 27018:2014
- ISO/IEC 20000-1:2018
- ISO/IEC 27017:2015
## Further reading

View File

@@ -24,4 +24,4 @@ With the APIs, you can automate many administrative tasks. Some examples include
- Collect statistics about your enterprise. For more information, see "[AUTOTITLE](/rest/enterprise-admin#admin-stats)."
- Manage your enterprise account. For more information, see "[AUTOTITLE](/graphql/guides/managing-enterprise-accounts)."
For the complete documentation for the {% data variables.product.prodname_enterprise_api %}, see [{% data variables.product.prodname_dotcom %} REST API](/rest) and [{% data variables.product.prodname_dotcom %} GraphQL API](/graphql).
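For example, aggregate admin statistics can be retrieved from a {% data variables.product.prodname_ghe_server %} instance with a request along these lines (a sketch; HOSTNAME and the token are placeholders, and the endpoint path should be confirmed in the linked REST documentation):

```shell
# Retrieve aggregate statistics for the instance
curl -s \
  -H "Authorization: Bearer YOUR-TOKEN" \
  "https://HOSTNAME/api/v3/enterprise/stats/all"
```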

View File

@@ -18,7 +18,7 @@ shortTitle: Create enterprise account
If you currently use {% data variables.product.prodname_ghe_cloud %} with a single organization and by invoice, you can create an enterprise account through your billing page. If you need Enterprise Managed Users, {% data variables.product.prodname_ghe_server %}, or invoicing support, please contact {% data variables.contact.contact_enterprise_sales %}.
You can also create an enterprise account by setting up a free trial of {% data variables.product.prodname_ghe_cloud %}. Trials are limited to 50 seats. If you have an existing organization with more than 50 seats that you want to invite into the trial, please contact {% data variables.contact.contact_enterprise_sales %}. For more information, see "[AUTOTITLE](/get-started/signing-up-for-github/setting-up-a-trial-of-github-enterprise-cloud)".
When you create an enterprise account that owns your existing organization on {% data variables.product.product_name %}, the organization's resources remain accessible to members at the same URLs. After you add your organization to the enterprise account, the following changes will apply to the organization.

View File

@@ -82,7 +82,7 @@ For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-ent
{% data variables.product.product_name %} is provided as an appliance, and many of the operating system packages are modified compared to the usual Debian distribution. We do not support modifying the underlying operating system for this reason (including operating system upgrades), which is aligned with the [{% data variables.product.prodname_ghe_server %} license and support agreement](https://enterprise.github.com/license), under section 11.3 Exclusions.
Currently, the base operating system for {% data variables.product.product_name %} is Debian 10 (Buster), which receives support under the Debian Long Term Support program.
Regular patch updates are released on the {% data variables.product.product_name %} [releases](https://enterprise.github.com/releases) page, and the [release notes](/admin/release-notes) page provides more information. These patches typically contain upstream vendor and project security patches after they've been tested and quality approved by our engineering team. There can be a slight time delay from when the upstream update is released to when it's tested and bundled in an upcoming {% data variables.product.product_name %} patch release.

View File

@@ -11,7 +11,7 @@ topics:
- Policies
---
To help you enforce business rules and regulatory compliance, policies provide a single point of management for all the organizations owned by an enterprise account.
{% data reusables.enterprise.about-policies %}

View File

@@ -67,7 +67,7 @@ You can choose to disable {% data variables.product.prodname_actions %} for all
By default anyone with admin access to a repository can add a self-hosted runner for the repository. The enterprise settings allow you to disable the use of repository-level self-hosted runners across all repositories in your enterprise. If you allow repository-level self-hosted runners for your enterprise, organization owners can choose to allow or prevent creation of repository-level self-hosted runners for some or all repositories in their organization. For more information see, "[AUTOTITLE](/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization)."
{% data reusables.actions.disable-selfhosted-runners-note %}
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.policies-tab %}
@@ -75,9 +75,9 @@ By default anyone with admin access to a repository can add a self-hosted runner
1. In the "Runners" section, select **Disable for all organizations**.{% ifversion ghec %}
{% note %}
**Note**: Owners of an {% data variables.enterprise.prodname_emu_enterprise %} can also choose to select **Disable in all Enterprise Managed User (EMU) repositories** to restrict runner creation for repositories that are owned by managed user accounts.
{% endnote %}
{% endif %}

View File

@@ -56,7 +56,7 @@ Enterprise owners who create an organization owned by the enterprise account aut
## Inviting an organization to join your enterprise account
Enterprise owners can invite existing organizations to join their enterprise account.
If the organization you want to invite is already owned by another enterprise account, you must be an owner of both enterprise accounts. If you're not, you can ask an owner of the enterprise account that currently owns the organization to transfer the organization to your enterprise account instead. For more information, see "[Transferring an organization between enterprise accounts](#transferring-an-organization-between-enterprise-accounts)."

View File

@@ -23,7 +23,7 @@ Both {% data variables.product.prodname_oauth_app %}s and {% data variables.prod
{% data variables.product.prodname_oauth_app %}s can only act on behalf of a user while {% data variables.product.prodname_github_app %}s can either act on behalf of a user or independently of a user.
{% data variables.product.prodname_github_app %}s use fine-grained permissions, give the user more control over which repositories the app can access, and use short-lived tokens.
For more information, see "[AUTOTITLE](/apps/oauth-apps/building-oauth-apps/differences-between-github-apps-and-oauth-apps)" and "[AUTOTITLE](/apps/creating-github-apps/setting-up-a-github-app/about-creating-github-apps)."

View File

@@ -32,7 +32,7 @@ The rate limit for {% data variables.product.prodname_github_app %}s using an in
There is one case where an {% data variables.product.prodname_oauth_app %} is preferred over a {% data variables.product.prodname_github_app %}. If your app needs to access enterprise-level resources such as the enterprise object itself, you should use an {% data variables.product.prodname_oauth_app %} because a {% data variables.product.prodname_github_app %} cannot yet be given permissions against an enterprise. {% data variables.product.prodname_github_app %}s can still access enterprise-owned organization and repository resources.
For more information about {% data variables.product.prodname_github_app %}s, see "[AUTOTITLE](/apps/creating-github-apps/setting-up-a-github-app/about-creating-github-apps)."
For more information about migrating an existing {% data variables.product.prodname_oauth_app %} to a {% data variables.product.prodname_github_app %}, see "[AUTOTITLE](/apps/creating-github-apps/guides/migrating-oauth-apps-to-github-apps)."

View File

@@ -80,7 +80,7 @@ Disadvantages:
## The app code must be aware of feature differences
New REST API endpoints, GraphQL objects, and webhooks are released to {% data variables.product.prodname_ghe_server %} at a later date than {% data variables.product.prodname_free_user %}, {% data variables.product.prodname_pro %}, {% data variables.product.prodname_team %}, and {% data variables.product.prodname_ghe_cloud %}. Additionally, there are multiple versions of {% data variables.product.prodname_ghe_server %}, and older versions may have different REST API endpoints, GraphQL objects, and webhooks.
Therefore, the app code needs to be aware of these differences. API responses and webhook payloads include an `x-github-enterprise-version` header for {% data variables.product.prodname_ghe_server %} payloads to help you determine what version you are handling.
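For example, a quick way to check which version a {% data variables.product.prodname_ghe_server %} instance reports is to inspect the response headers (a sketch; HOSTNAME is a placeholder, and authentication may be required depending on the instance's configuration):

```shell
# Print the GitHub Enterprise Server version header from an API response
curl -sI "https://HOSTNAME/api/v3/meta" | grep -i 'x-github-enterprise-version'
```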

View File

@@ -28,7 +28,7 @@ In addition to reviewing {% data variables.product.prodname_github_app %}s that
1. Next to the {% data variables.product.prodname_github_app %} you want to review or modify, click **Configure**.
- For a {% data variables.product.prodname_github_app %} installed on your personal account:
1. In the upper-right corner of any page, click your profile photo, then click **Settings**.
1. Under "Integrations," click **Applications**.
1. Click **Installed GitHub Apps**. A list of the {% data variables.product.prodname_github_app %}s installed on your personal account will be displayed.
1. Next to the {% data variables.product.prodname_github_app %} you want to review or modify, click **Configure**.

View File

@@ -20,7 +20,7 @@ SAML SSO allows an enterprise owner to centrally control and secure access to {%
{% data reusables.saml.you-must-periodically-authenticate %}
If you can't access {% data variables.product.product_name %}, contact your local enterprise owner or administrator for {% data variables.product.product_name %}. You may be able to locate contact information for your enterprise by clicking **Support** at the bottom of any page on {% data variables.product.product_name %}. {% data variables.product.company_short %} and {% data variables.contact.github_support %} do not have access to your IdP, and cannot troubleshoot authentication problems.
{% endif %}
@@ -42,11 +42,11 @@ If you have recently authenticated with your organization's SAML IdP in your bro
## Linked SAML identities
When you authenticate with your IdP account and return to {% data variables.product.prodname_dotcom %}, {% data variables.product.prodname_dotcom %} will record a link in the organization or enterprise between your {% data variables.product.prodname_dotcom %} personal account and the SAML identity you signed into. This linked identity is used to validate your membership in that organization and, depending on your organization or enterprise setup, to determine which organizations and teams you're a member of. Each {% data variables.product.prodname_dotcom %} account can be linked to exactly one SAML identity per organization. Likewise, each SAML identity can be linked to exactly one {% data variables.product.prodname_dotcom %} account in an organization.
If you sign in with a SAML identity that is already linked to another {% data variables.product.prodname_dotcom %} account, you will receive an error message indicating that you cannot sign in with that SAML identity. This situation can occur if you are attempting to use a new {% data variables.product.prodname_dotcom %} account to work inside of your organization. If you didn't intend to use that SAML identity with that {% data variables.product.prodname_dotcom %} account, then you'll need to sign out of that SAML identity and then repeat the SAML login. If you do want to use that SAML identity with your {% data variables.product.prodname_dotcom %} account, you'll need to ask your admin to unlink your SAML identity from your old account, so that you can link it to your new account. Depending on the setup of your organization or enterprise, your admin may also need to reassign your identity within your SAML provider. For more information, see "[AUTOTITLE](/organizations/granting-access-to-your-organization-with-saml-single-sign-on/viewing-and-managing-a-members-saml-access-to-your-organization#viewing-and-revoking-a-linked-identity)."
If the SAML identity you sign in with does not match the SAML identity that is currently linked to your {% data variables.product.prodname_dotcom %} account, you'll receive a warning that you are about to relink your account. Because your SAML identity is used to govern access and team membership, continuing with the new SAML identity can cause you to lose access to teams and organizations inside of {% data variables.product.prodname_dotcom %}. Only continue if you know that you're supposed to use that new SAML identity for authentication in the future.
## Authorizing {% data variables.product.pat_generic %}s and SSH keys with SAML SSO
@@ -60,7 +60,7 @@ To use a new or existing {% data variables.product.pat_generic %} or SSH key wit
You must have an active SAML session each time you authorize an {% data variables.product.prodname_oauth_app %} or {% data variables.product.prodname_github_app %} to access an organization that uses or enforces SAML SSO. You can create an active SAML session by navigating to `https://github.com/orgs/ORGANIZATION-NAME/sso` in your browser.
After an enterprise or organization owner enables or enforces SAML SSO for an organization, and after you authenticate via SAML for the first time, you must reauthorize any {% data variables.product.prodname_oauth_apps %} or {% data variables.product.prodname_github_apps %} that you previously authorized to access the organization.
To see the {% data variables.product.prodname_oauth_apps %} you've authorized, visit your [{% data variables.product.prodname_oauth_apps %} page](https://github.com/settings/applications). To see the {% data variables.product.prodname_github_apps %} you've authorized, visit your [{% data variables.product.prodname_github_apps %} page](https://github.com/settings/apps/authorizations).

View File

@@ -14,7 +14,7 @@ topics:
- Identity
- Access management
---
To host your images, {% data variables.product.product_name %} uses the [open-source project Camo](https://github.com/atmos/camo). Camo generates an anonymous URL proxy for each file which hides your browser details and related information from other users. The URL starts with `https://<subdomain>.githubusercontent.com/`, with different subdomains depending on how you uploaded the image.
Videos also get anonymized URLs with the same format as image URLs, but are not processed through Camo. This is because {% data variables.product.prodname_dotcom %} does not support externally hosted videos, so the anonymized URL is a link to the uploaded video hosted by {% data variables.product.prodname_dotcom %}.

View File

@@ -28,7 +28,7 @@ To keep your account secure, we recommend you follow these best practices:
{% data reusables.repositories.blocked-passwords %}
You can only use your password to log on to {% data variables.product.product_name %} using your browser. When you authenticate to {% data variables.product.product_name %} with other means, such as the command line or API, you should use other credentials. For more information, see "[AUTOTITLE](/authentication/keeping-your-account-and-data-secure/about-authentication-to-github)."
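For example, a {% data variables.product.pat_generic %} can stand in for a password when you call the API from the command line. This is only an illustrative sketch; `YOUR-TOKEN` is a placeholder for a token you have created yourself.

```shell
# Authenticate an API request with a personal access token instead of a password
curl -H "Authorization: Bearer YOUR-TOKEN" https://api.github.com/user
```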
{% ifversion fpt or ghec %}{% data reusables.user-settings.password-authentication-deprecation %}{% endif %}

View File

@@ -24,9 +24,9 @@ You can remove the file from the latest commit with `git rm`. For information on
{% warning %}
**Warning**: This article tells you how to make commits with sensitive data unreachable from any branches or tags in your repository on {% ifversion ghae %}{% data variables.product.product_name %}{% else %}{% data variables.location.product_location %}{% endif %}. However, those commits may still be accessible in any clones or forks of your repository, directly via their SHA-1 hashes in cached views on {% data variables.product.product_name %}, and through any pull requests that reference them. You cannot remove sensitive data from other users' clones of your repository, but you can permanently remove cached views and references to the sensitive data in pull requests on {% data variables.product.product_name %} by contacting {% data variables.contact.contact_support %}.
Once you have pushed a commit to {% data variables.product.product_name %}, you should consider any sensitive data in the commit compromised. If you have committed a password, you should change it. If you have committed a key, generate a new one. Removing the compromised data doesn't resolve its initial exposure, especially in existing clones or forks of your repository.
If the commit that introduced the sensitive data exists in any forks of your repository, it will continue to be accessible unless the fork owner also removes the sensitive data from their fork or deletes the fork entirely. You will need to coordinate with the owners of any forks of your repository, asking them to take the appropriate actions.{% ifversion fpt or ghec %} Please note that {% data variables.product.company_short %} cannot provide contact information for these owners. {% endif %}
@@ -46,7 +46,7 @@ You can purge a file from your repository's history using either the `git filter
### Using the BFG
The [BFG Repo-Cleaner](https://rtyley.github.io/bfg-repo-cleaner/) is a tool that's built and maintained by the open source community. It provides a faster, simpler alternative to `git filter-repo` for removing unwanted data.
For example, to remove your file with sensitive data and leave your latest commit untouched, run:
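The hunk ends before the example command is shown. A typical BFG invocation, assuming you have downloaded the BFG jar and set up a `bfg` alias for running it, and substituting your own file name, looks something like this:

```shell
# Remove every file named YOUR-FILE-WITH-SENSITIVE-DATA from the repository's history,
# leaving the contents of your latest commit untouched
bfg --delete-files YOUR-FILE-WITH-SENSITIVE-DATA
```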

View File

@@ -34,7 +34,7 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
{% data reusables.command_line.start_ssh_agent %}
6. Find and take a note of your public key fingerprint.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
@@ -56,13 +56,13 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
{% endtip %}
4. Open Git Bash.
5. {% data reusables.desktop.windows_git_bash_turn_on_ssh_agent %}
{% data reusables.desktop.windows_git_for_windows_turn_on_ssh_agent %}
6. Find and take a note of your public key fingerprint.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
@@ -88,7 +88,7 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
{% data reusables.command_line.start_ssh_agent %}
6. Find and take a note of your public key fingerprint.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)

View File

@@ -38,7 +38,7 @@ After you authenticate to perform a sensitive action, your session is temporaril
{% note %}
**Note**: If your enterprise uses {% data variables.product.prodname_emus %}, you will not receive prompts to enter sudo mode, as your account doesn't have credentials stored on {% data variables.product.product_name %}.
{% endnote %}

View File

@@ -31,7 +31,7 @@ When you create a {% data variables.product.pat_generic %}, we recommend that yo
{% ifversion fpt or ghec %}
## Token revoked when pushed to a public repository or public gist
If a valid OAuth token, {% data variables.product.prodname_github_app %} token, or {% data variables.product.pat_generic %} is pushed to a public repository or public gist, the token will be automatically revoked.
{% endif %}

View File

@@ -20,12 +20,12 @@ You can view a list of devices that have logged into your account, and revoke an
1. To see the web session details, click **See more**.
1. To revoke a web session, click **Revoke session**.
{% ifversion fpt or ghec %}
1. Optionally, to revoke a {% data variables.product.prodname_mobile %} session, go back to the Sessions overview page and click **Revoke** next to the device you want to revoke.
{% note %}
**Note:** Revoking a mobile session signs you out of the {% data variables.product.prodname_mobile %} application on that device and removes it as a second-factor option.
{% endnote %}
{% endif %}

View File

@@ -41,12 +41,12 @@ If you chose to set up two-factor authentication using a TOTP application on you
If you delete your authenticator application after configuring two-factor authentication, you'll need to provide your recovery code to get access to your account. Many TOTP apps support the secure backup of your authentication codes in the cloud and can be restored if you lose access to your device. For more information, see "[AUTOTITLE](/authentication/securing-your-account-with-two-factor-authentication-2fa/recovering-your-account-if-you-lose-your-2fa-credentials)."
### Using a security key
If you've set up a security key on your account, and your browser supports security keys, you can use it to complete your sign in.
1. Using your username and password, sign in to {% data variables.product.product_name %} through your browser.
1. If you use a physical security key, ensure it's connected to your device.
1. To trigger the security key prompt from your operating system, select "Use security key".
1. Select the appropriate option in the prompt. Depending on your security key configuration, you may type a PIN, complete a biometric prompt, or use a physical security key.

View File

@@ -107,7 +107,7 @@ Before using this method, be sure that you can receive text messages. Carrier ra
{% data reusables.user-settings.security %}
{% data reusables.two_fa.enable-two-factor-authentication %}
1. At the bottom of the page, next to "SMS authentication", click **Select**.
1. Complete the CAPTCHA challenge, which helps protect against spam and abuse.
1. Under "Setup SMS authentication", select your country code and type your mobile phone number, including the area code. When your information is correct, click **Send authentication code**.
1. You'll receive a text message with a security code. On {% data variables.product.product_name %}, type the code into the field under "Verify the code sent to your phone" and click **Continue**.
- If you need to edit the phone number you entered, you'll need to complete another CAPTCHA challenge.
@@ -125,7 +125,7 @@ On most devices and browsers, you can use a physical security key over USB or NF
Registering a security key for your account is available after enabling 2FA with a TOTP application{% ifversion fpt or ghec %} or a text message{% endif %}. If you lose your security key, you'll still be able to use your phone's code to sign in.
1. You must have already configured 2FA via a TOTP mobile app{% ifversion fpt or ghec %} or via SMS{% endif %}.
1. Ensure that you have a WebAuthn compatible security key inserted into your device, or that your device has a built-in authenticator such as Windows Hello, Face ID, or Touch ID. Most computers, phones, and tablets support this as an easier-to-use alternative to physical security keys.
{% data reusables.user-settings.access_settings %}
{% data reusables.user-settings.security %}
1. Next to "Security keys", click **Add**.

View File

@@ -21,7 +21,7 @@ shortTitle: Disable 2FA
We strongly recommend using two-factor authentication to secure your account. If you need to disable 2FA, we recommend re-enabling it as soon as possible.
{% ifversion mandatory-2fa-dotcom-contributors %}
If you are part of the group that {% data variables.product.prodname_dotcom %} is requiring to enroll in 2FA in 2023, you cannot disable 2FA. A banner will display in your authentication settings to remind you that you are not allowed to disable 2FA. For more information about our 2023 2FA enrollment rollout for contributors to {% data variables.product.prodname_dotcom_the_website %}, see [this blog post](https://github.blog/2023-03-09-raising-the-bar-for-software-security-github-2fa-begins-march-13).
{% endif %}
{% warning %}

View File

@@ -57,7 +57,7 @@ $ ssh -T git@{% data variables.command_line.codeblock %}
{% mac %}
{% data reusables.command_line.open_the_multi_os_terminal %}
2. Verify that you have a private key generated and loaded into SSH.
```shell
# start the ssh-agent in the background
$ eval "$(ssh-agent -s)"
@@ -75,7 +75,7 @@ $ ssh -T git@{% data variables.command_line.codeblock %}
1. {% data reusables.desktop.windows_git_bash_turn_on_ssh_agent %}
{% data reusables.desktop.windows_git_for_windows_turn_on_ssh_agent %}
2. Verify that you have a private key generated and loaded into SSH.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
@@ -86,7 +86,7 @@ $ ssh -T git@{% data variables.command_line.codeblock %}
{% linux %}
{% data reusables.command_line.open_the_multi_os_terminal %}
2. Verify that you have a private key generated and loaded into SSH.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
@@ -146,7 +146,7 @@ You must provide your public key to {% data variables.product.product_name %} to
$ eval "$(ssh-agent -s)"
> Agent pid 59566
```
3. Find and take a note of your public key fingerprint.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
@@ -166,7 +166,7 @@ You must provide your public key to {% data variables.product.product_name %} to
$ ssh-agent -s
> Agent pid 59566
```
3. Find and take a note of your public key fingerprint.
```shell
$ ssh-add -l -E sha256
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)

View File

@@ -64,7 +64,7 @@ Enterprise owners and billing managers can manage the spending limit for {% data
{% data reusables.enterprise-accounts.settings-tab %}
{% data reusables.enterprise-accounts.billing-tab %}
1. On the "Billing" page, click the **Spending limit** tab.
![Screenshot of the "Billing" page. A tab labeled "Spending limit" is highlighted with an orange outline.](/assets/images/help/settings/spending-limit-tab-enterprise.png)
{% data reusables.dotcom_billing.monthly-spending-limit %}

View File

@@ -56,7 +56,7 @@ You can determine how many licenses you'll need for {% data variables.product.pr
{% endif %}
{% ifversion ghec %}
If you use {% data variables.product.prodname_ghe_cloud %} with an enterprise account and pay with a credit card, you can purchase a {% data variables.product.prodname_GH_advanced_security %} license or start a free trial from your enterprise account settings. For more information, see "[AUTOTITLE](/billing/managing-billing-for-github-advanced-security/signing-up-for-github-advanced-security)" and "[AUTOTITLE](/billing/managing-billing-for-github-advanced-security/setting-up-a-trial-of-github-advanced-security)."
You cannot purchase {% data variables.product.prodname_GH_advanced_security %} or start a {% data variables.product.prodname_GH_advanced_security %} trial if you are currently on a {% data variables.product.prodname_ghe_cloud %} trial.

View File

@@ -12,7 +12,7 @@ topics:
shortTitle: Manage Advanced Security licensing
---
## About licensing for GitHub Advanced Security
Each license for {% data variables.product.prodname_GH_advanced_security %} specifies a maximum number of accounts that can use these features. Each active committer to at least one repository with the feature enabled uses one {% ifversion ghas-billing-UI-update %}license{% else %}seat{% endif %}. A committer is considered active if one of their commits has been pushed to the repository within the last 90 days, regardless of when it was originally authored. For more information about committer numbers, see "[AUTOTITLE](/billing/managing-billing-for-github-advanced-security/about-billing-for-github-advanced-security)." For information about purchasing a license, see "[AUTOTITLE](/billing/managing-billing-for-github-advanced-security/signing-up-for-github-advanced-security)."
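As a rough, local approximation of who might count as an active committer in a single repository, you could list the unique author emails from recent history. The authoritative count is based on commits pushed to repositories with {% data variables.product.prodname_GH_advanced_security %} enabled within the last 90 days; the command below is only an illustrative sketch and is not part of the changed file.

```shell
# List unique author emails for commits from the last 90 days in the current repository
git log --since="90 days ago" --format='%ae' | sort -u
```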
## Managing the number of GitHub Advanced Security committers
{% data reusables.enterprise-accounts.access-enterprise %}
@@ -31,7 +31,7 @@ Each license for {% data variables.product.prodname_GH_advanced_security %} spec
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}
{% data reusables.enterprise-accounts.license-tab %}
1. To the right of "GitHub Advanced Security", click **Manage**, then click **Cancel Subscription**.
![Screenshot of the "Manage" dropdown in the {% data variables.product.prodname_GH_advanced_security %} licensing screen. The "Cancel Subscription" button is highlighted with an orange outline.](/assets/images/help/enterprises/ghas-cancel-subscription.png)
2. To confirm your cancellation, click **I understand, cancel Advanced Security**.

View File

@@ -21,7 +21,7 @@ shortTitle: Set up an Advanced Security trial
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}
{% data reusables.enterprise-accounts.license-tab %}
1. To the right of "{% data variables.product.prodname_GH_advanced_security %}", click **Start free trial**.
2. Click **Start trial**.
## Finishing your trial

View File

@@ -11,11 +11,11 @@ topics:
- Enterprise
shortTitle: Sign up for Advanced Security
---
## Purchasing {% data variables.product.prodname_GH_advanced_security %}
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}
{% data reusables.enterprise-accounts.license-tab %}
1. To the right of "GitHub Advanced Security", click **Buy Advanced Security**.
![Screenshot of the {% data variables.product.prodname_GH_advanced_security %} section of the enterprise licensing screen. The "Buy Advanced Security" button is highlighted with an orange outline.](/assets/images/help/enterprises/ghas-buy-advanced-security-button.png)
1. {% data reusables.advanced-security.purchase-ghas %}

View File

@@ -14,15 +14,15 @@ shortTitle: View Advanced Security committers
## About the "Advanced Security Committers" dashboard
You can estimate the number of licenses your enterprise might need for {% data variables.product.prodname_GH_advanced_security %} with the "Advanced Security Committers" section of the site admin dashboard.
If you currently use {% data variables.product.prodname_GH_advanced_security %}, this tool helps you understand how many committers are currently using licenses. It also helps you estimate how many additional licenses would be used if you enable {% data variables.product.prodname_GH_advanced_security %} for more organizations and repositories.
If you're considering using {% data variables.product.prodname_GH_advanced_security %}, you can use this tool to estimate potential costs to enable {% data variables.product.prodname_GH_advanced_security %}.
For more information about billing for {% data variables.product.prodname_advanced_security %}, see "[AUTOTITLE](/billing/managing-billing-for-github-advanced-security/about-billing-for-github-advanced-security)."
## Viewing committer information
1. In the upper-right corner of any page, click {% octicon "rocket" aria-label="Site admin" %}.
1. In the left sidebar, click **Advanced Security Committers**.

View File

@@ -177,7 +177,7 @@ If you delete a prebuild configuration, all the associated prebuilds are deleted
{% note %}
**Notes**:
- Prebuilds may be updated several times during a billing month. Newer versions of a prebuild may be larger or smaller than the previous versions. This will affect the storage charges. For details of how storage is calculated during a billing month, see "[About billing for storage usage](#about-billing-for-storage-usage)" earlier in this article.
- As with deleting codespaces, deleting prebuilds does not reduce your used storage amount for the current billing month as this is a cumulative figure.

View File

@@ -54,7 +54,7 @@ Organizations owners and billing managers can manage the spending limit for {% d
**Note:** If {% data variables.product.prodname_github_codespaces %} is enabled for your organization, scroll to "Actions & Packages", then choose to limit spending or allow unlimited spending.
{% endnote %}
{% data reusables.dotcom_billing.update-spending-limit %}
{% ifversion ghec %}

View File

@@ -49,7 +49,7 @@ In addition to licensed seats, your bill may include other charges, such as {% d
{% note %}
**Notes:**
- {% data variables.product.company_short %} counts each outside collaborator once for billing purposes, even if the user account has access to multiple repositories owned by your organization.
- {% data reusables.organizations.org-invite-scim %}
@@ -81,7 +81,7 @@ If your enterprise does not use {% data variables.product.prodname_emus %}, you
{% note %}
**Notes:**
- {% data variables.product.company_short %} counts each member or outside collaborator once for billing purposes, even if the user account has membership in multiple organizations in an enterprise or access to multiple repositories owned by your organization.
- {% data reusables.organizations.org-invite-scim %}

Some files were not shown because too many files have changed in this diff.