
Fix for blank lines around code fences (#38255)

Grace Park
2023-06-26 10:21:48 -07:00
committed by GitHub
parent a4913b5935
commit a8a6e4554a
272 changed files with 1552 additions and 2 deletions

View File

@@ -75,14 +75,18 @@ You can use the `git config` command to change the email address you associate w
{% data reusables.command_line.open_the_multi_os_terminal %}
2. {% data reusables.user-settings.set_your_email_address_in_git %}
```shell
git config --global user.email "YOUR_EMAIL"
```
3. {% data reusables.user-settings.confirm_git_email_address_correct %}
```shell
$ git config --global user.email
<span class="output">email@example.com</span>
```
4. {% data reusables.user-settings.link_email_with_your_account %}
### Setting your email address for a single repository
@@ -94,12 +98,16 @@ You can change the email address associated with commits you make in a single re
{% data reusables.command_line.open_the_multi_os_terminal %}
2. Change the current working directory to the local repository where you want to configure the email address that you associate with your Git commits.
3. {% data reusables.user-settings.set_your_email_address_in_git %}
```shell
git config user.email "YOUR_EMAIL"
```
4. {% data reusables.user-settings.confirm_git_email_address_correct %}
```shell
$ git config user.email
<span class="output">email@example.com</span>
```
5. {% data reusables.user-settings.link_email_with_your_account %}

View File

@@ -51,6 +51,7 @@ Alternatively, if you want to use the HTTPS protocol for both accounts, you can
```shell copy
git credential-osxkeychain erase https://github.com
```
{% data reusables.git.clear-stored-gcm-credentials %}
{% data reusables.git.cache-on-repository-path %}
{% data reusables.accounts.create-personal-access-tokens %}
@@ -70,6 +71,7 @@ Alternatively, if you want to use the HTTPS protocol for both accounts, you can
```shell copy
cmdkey /delete:LegacyGeneric:target=git:https://github.com
```
{% data reusables.git.cache-on-repository-path %}
{% data reusables.accounts.create-personal-access-tokens %}
{% data reusables.git.provide-credentials %}

View File

@@ -141,6 +141,7 @@ You can use the `cache-dependency-path` parameter for cases when multiple depend
go-version: '1.17'
cache-dependency-path: subdir/go.sum
```
{% else %}
When caching is enabled, the `setup-go` action searches for the dependency file, `go.sum`, in the repository root and uses the hash of the dependency file as a part of the cache key.
@@ -162,6 +163,7 @@ Alternatively, you can use the `cache-dependency-path` parameter for cases when
cache: true
cache-dependency-path: subdir/go.sum
```
{% endif %}
If you have a custom requirement or need finer controls for caching, you can use the [`cache` action](https://github.com/marketplace/actions/cache). For more information, see "[AUTOTITLE](/actions/using-workflows/caching-dependencies-to-speed-up-workflows)."

View File

@@ -73,6 +73,7 @@ jobs:
![Screenshot of a workflow run failure for a Pester test. Test reports "Expected $true, but got $false" and "Error: Process completed with exit code 1."](/assets/images/help/repository/actions-failed-pester-test-updated.png)
- `Invoke-Pester Unit.Tests.ps1 -Passthru` - Uses Pester to execute tests defined in a file called `Unit.Tests.ps1`. For example, to perform the same test described above, the `Unit.Tests.ps1` will contain the following:
```
Describe "Check results file is present" {
  It "Check results file is present" {

View File

@@ -89,11 +89,13 @@ Alternatively, you can check a `.ruby-version` file into the root of your repos
You can add a matrix strategy to run your workflow with more than one version of Ruby. For example, you can test your code against the latest patch releases of versions 3.1, 3.0, and 2.7.
{% raw %}
```yaml
strategy:
  matrix:
    ruby-version: ['3.1', '3.0', '2.7']
```
{% endraw %}
Each version of Ruby specified in the `ruby-version` array creates a job that runs the same steps. The {% raw %}`${{ matrix.ruby-version }}`{% endraw %} context is used to access the current job's version. For more information about matrix strategies and contexts, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions)" and "[AUTOTITLE](/actions/learn-github-actions/contexts)."
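For instance, a hedged sketch of a job step that consumes the matrix value (the `@v1` tags here are assumptions; the docs pin actions to full commit SHAs):

```yaml
steps:
  - uses: actions/checkout@v3
  # Install the Ruby version for the current matrix job
  - name: Set up Ruby ${{ matrix.ruby-version }}
    uses: ruby/setup-ruby@v1
    with:
      ruby-version: ${{ matrix.ruby-version }}
```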
@@ -156,12 +158,14 @@ The `setup-ruby` actions provides a method to automatically handle the caching o
To enable caching, set the following.
{% raw %}
```yaml
steps:
  - uses: ruby/setup-ruby@ec02537da5712d66d4d50a0f33b7eb52773b5ed1
    with:
      bundler-cache: true
```
{% endraw %}
This will configure bundler to install your gems to `vendor/cache`. For each successful run of your workflow, this folder will be cached by {% data variables.product.prodname_actions %} and re-downloaded for subsequent workflow runs. A hash of your gemfile.lock and the Ruby version are used as the cache key. If you install any new gems, or change a version, the cache will be invalidated and bundler will do a fresh install.

View File

@@ -101,6 +101,7 @@ jobs:
You can configure your job to use a single specific version of Swift, such as `5.3.3`.
{% raw %}
```yaml copy
steps:
  - uses: swift-actions/setup-swift@65540b95f51493d65f5e59e97dcef9629ddf11bf
@@ -109,6 +110,7 @@ steps:
- name: Get swift version
  run: swift --version # Swift 5.3.3
```
{% endraw %}
## Building and testing your code

View File

@@ -51,6 +51,7 @@ Before you begin, you'll create a repository on {% ifversion ghae %}{% data vari
```
1. From your terminal, check in your `goodbye.sh` file.
```shell copy
git add goodbye.sh
git commit -m "Add goodbye script"
@@ -63,6 +64,7 @@ Before you begin, you'll create a repository on {% ifversion ghae %}{% data vari
{% raw %}
**action.yml**
```yaml copy
name: 'Hello World'
description: 'Greet someone'
@@ -121,6 +123,7 @@ The following workflow code uses the completed hello world action that you made
Copy the workflow code into a `.github/workflows/main.yml` file in another repository, but replace `actions/hello-world-composite-action@v1` with the repository and tag you created. You can also replace the `who-to-greet` input with your name.
**.github/workflows/main.yml**
```yaml copy
on: [push]

View File

@@ -58,6 +58,7 @@ Before you begin, you'll need to create a {% data variables.product.prodname_dot
In your new `hello-world-docker-action` directory, create a new `Dockerfile` file. Make sure that your filename is capitalized correctly (use a capital `D` but not a capital `f`) if you're having issues. For more information, see "[AUTOTITLE](/actions/creating-actions/dockerfile-support-for-github-actions)."
**Dockerfile**
```Dockerfile copy
# Container image that runs your code
FROM alpine:3.10
@@ -75,6 +76,7 @@ Create a new `action.yml` file in the `hello-world-docker-action` directory you
{% raw %}
**action.yml**
```yaml copy
# action.yml
name: 'Hello World'
@@ -93,6 +95,7 @@ runs:
args:
  - ${{ inputs.who-to-greet }}
```
{% endraw %}
This metadata defines one `who-to-greet` input and one `time` output parameter. To pass inputs to the Docker container, you should declare the input using `inputs` and pass the input in the `args` keyword. Everything you include in `args` is passed to the container, but for better discoverability for users of your action, we recommend using inputs.
@@ -110,6 +113,7 @@ Next, the script gets the current time and sets it as an output variable that ac
1. Add the following code to your `entrypoint.sh` file.
**entrypoint.sh**
```shell copy
#!/bin/sh -l
@@ -120,6 +124,7 @@ Next, the script gets the current time and sets it as an output variable that ac
{%- else %}
echo "::set-output name=time::$time"
{%- endif %}
```
If `entrypoint.sh` executes without any errors, the action's status is set to `success`. You can also explicitly set exit codes in your action's code to provide an action's status. For more information, see "[AUTOTITLE](/actions/creating-actions/setting-exit-codes-for-actions)."
@@ -153,6 +158,7 @@ In your `hello-world-docker-action` directory, create a `README.md` file that sp
- An example of how to use your action in a workflow.
**README.md**
```markdown copy
# Hello world docker action
@@ -205,6 +211,7 @@ Now you're ready to test your action out in a workflow.
The following workflow code uses the completed _hello world_ action in the public [`actions/hello-world-docker-action`](https://github.com/actions/hello-world-docker-action) repository. Copy the following workflow example code into a `.github/workflows/main.yml` file, but replace the `actions/hello-world-docker-action` with your repository and action name. You can also replace the `who-to-greet` input with your name. {% ifversion fpt or ghec %}Public actions can be used even if they're not published to {% data variables.product.prodname_marketplace %}. For more information, see "[AUTOTITLE](/actions/creating-actions/publishing-actions-in-github-marketplace#publishing-an-action)." {% endif %}
**.github/workflows/main.yml**
```yaml copy
on: [push]
@@ -228,6 +235,7 @@ jobs:
Copy the following example workflow code into a `.github/workflows/main.yml` file in your action's repository. You can also replace the `who-to-greet` input with your name. {% ifversion fpt or ghec %}This private action can't be published to {% data variables.product.prodname_marketplace %}, and can only be used in this repository.{% endif %}
**.github/workflows/main.yml**
```yaml copy
on: [push]

View File

@@ -105,6 +105,7 @@ GitHub Actions provide context information about the webhook event, Git refs, wo
Add a new file called `index.js`, with the following code.
{% raw %}
```javascript copy
const core = require('@actions/core');
const github = require('@actions/github');
@@ -122,6 +123,7 @@ try {
core.setFailed(error.message);
}
```
{% endraw %}
If an error is thrown in the above `index.js` example, `core.setFailed(error.message);` uses the actions toolkit [`@actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) package to log a message and set a failing exit code. For more information, see "[AUTOTITLE](/actions/creating-actions/setting-exit-codes-for-actions)."
@@ -224,6 +226,7 @@ This example demonstrates how your new public action can be run from within an e
Copy the following YAML into a new file at `.github/workflows/main.yml`, and update the `uses: octocat/hello-world-javascript-action@v1.1` line with your username and the name of the public repository you created above. You can also replace the `who-to-greet` input with your name.
{% raw %}
```yaml copy
on: [push]
@@ -241,6 +244,7 @@ jobs:
- name: Get the output time
  run: echo "The time was ${{ steps.hello.outputs.time }}"
```
{% endraw %}
When this workflow is triggered, the runner will download the `hello-world-javascript-action` action from your public repository and then execute it.
@@ -250,6 +254,7 @@ When this workflow is triggered, the runner will download the `hello-world-javas
Copy the workflow code into a `.github/workflows/main.yml` file in your action's repository. You can also replace the `who-to-greet` input with your name.
**.github/workflows/main.yml**
```yaml copy
on: [push]

View File

@@ -36,6 +36,7 @@ The following script demonstrates how you can get a user-specified version as in
{% data variables.product.prodname_dotcom %} provides [`actions/toolkit`](https://github.com/actions/toolkit), which is a set of packages that helps you create actions. This example uses the [`actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) and [`actions/tool-cache`](https://github.com/actions/toolkit/tree/main/packages/tool-cache) packages.
{% raw %}
```javascript copy
const core = require('@actions/core');
const tc = require('@actions/tool-cache');
@@ -56,6 +57,7 @@ async function setup() {
module.exports = setup
```
{% endraw %}
To use this script, replace `getDownloadURL` with a function that downloads your CLI. You will also need to create an actions metadata file (`action.yml`) that accepts a `version` input and that runs this script. For full details about how to create an action, see "[AUTOTITLE](/actions/creating-actions/creating-a-javascript-action)."

View File

@@ -115,6 +115,7 @@ outputs:
### Example: Declaring outputs for composite actions
{% raw %}
```yaml
outputs:
  random-number:
@@ -131,6 +132,7 @@ runs:
{%- endif %}{% raw %}
shell: bash
```
{% endraw %}
### `outputs.<output_id>.value`
@@ -235,6 +237,7 @@ For example, this `cleanup.js` will only run on Linux-based runners:
**Optional** The command you want to run. This can be inline or a script in your action repository:
{% raw %}
```yaml
runs:
  using: "composite"
@@ -242,6 +245,7 @@ runs:
- run: ${{ github.action_path }}/test/script.sh
  shell: bash
```
{% endraw %}
Alternatively, you can use `$GITHUB_ACTION_PATH`:
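The example that follows in the source file is truncated by this hunk; it presumably mirrors the previous snippet with the environment variable swapped in, along these lines:

```yaml
runs:
  using: "composite"
  steps:
    # $GITHUB_ACTION_PATH resolves to the action's repository path at runtime
    - run: $GITHUB_ACTION_PATH/test/script.sh
      shell: bash
```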
@@ -447,6 +451,7 @@ For more information about using the `CMD` instruction with {% data variables.pr
#### Example: Defining arguments for the Docker container
{% raw %}
```yaml
runs:
  using: 'docker'
@@ -456,6 +461,7 @@ runs:
- 'foo'
- 'bar'
```
{% endraw %}
## `branding`

View File

@@ -47,6 +47,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you
aws ecr create-repository \
  --repository-name MY_ECR_REPOSITORY \
  --region MY_AWS_REGION
```{% endraw %}
Ensure that you use the same Amazon ECR repository name (represented here by `MY_ECR_REPOSITORY`) for the `ECR_REPOSITORY` variable in the workflow below.
@@ -65,6 +66,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you
{% raw %}```bash copy
aws ecs register-task-definition --generate-cli-skeleton
```{% endraw %}
Ensure that you set the `ECS_TASK_DEFINITION` variable in the workflow below as the path to the JSON file.
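For example, a hedged sketch of that workflow variable (the path is an assumption for illustration):

```yaml
env:
  # Path to the task definition JSON you saved from the skeleton command
  ECS_TASK_DEFINITION: .aws/task-definition.json
```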

View File

@@ -63,6 +63,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you
--name MY_WEBAPP_NAME \
--resource-group MY_RESOURCE_GROUP \
--settings DOCKER_REGISTRY_SERVER_URL=https://ghcr.io DOCKER_REGISTRY_SERVER_USERNAME=MY_REPOSITORY_OWNER DOCKER_REGISTRY_SERVER_PASSWORD=MY_PERSONAL_ACCESS_TOKEN
```
5. Optionally, configure a deployment environment. {% data reusables.actions.about-environments %}

View File

@@ -49,11 +49,13 @@ To create the GKE cluster, you will first need to authenticate using the `gcloud
For example:
{% raw %}
```bash copy
$ gcloud container clusters create $GKE_CLUSTER \
  --project=$GKE_PROJECT \
  --zone=$GKE_ZONE
```
{% endraw %}
### Enabling the APIs
@@ -61,11 +63,13 @@ $ gcloud container clusters create $GKE_CLUSTER \
Enable the Kubernetes Engine and Container Registry APIs. For example:
{% raw %}
```bash copy
$ gcloud services enable \
  containerregistry.googleapis.com \
  container.googleapis.com
```
{% endraw %}
### Configuring a service account and storing its credentials
@@ -74,18 +78,23 @@ This procedure demonstrates how to create the service account for your GKE integ
1. Create a new service account:
{% raw %}
```
gcloud iam service-accounts create $SA_NAME
```
{% endraw %}
1. Retrieve the email address of the service account you just created:
{% raw %}
```
gcloud iam service-accounts list
```
{% endraw %}
1. Add roles to the service account. Note: Apply more restrictive roles to suit your requirements.
{% raw %}
```
$ gcloud projects add-iam-policy-binding $GKE_PROJECT \
  --member=serviceAccount:$SA_EMAIL \
@@ -97,18 +106,23 @@ This procedure demonstrates how to create the service account for your GKE integ
--member=serviceAccount:$SA_EMAIL \
--role=roles/container.clusterViewer
```
{% endraw %}
1. Download the JSON keyfile for the service account:
{% raw %}
```
gcloud iam service-accounts keys create key.json --iam-account=$SA_EMAIL
```
{% endraw %}
1. Store the service account key as a secret named `GKE_SA_KEY`:
{% raw %}
```
export GKE_SA_KEY=$(cat key.json | base64)
```
{% endraw %}
For more information about how to store a secret, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

View File

@@ -50,6 +50,7 @@ Create secrets in your repository or organization for the following items:
```shell
base64 -i BUILD_CERTIFICATE.p12 | pbcopy
```
- The password for your Apple signing certificate.
- In this example, the secret is named `P12_PASSWORD`.
@@ -131,6 +132,7 @@ On self-hosted runners, the `$RUNNER_TEMP` directory is cleaned up at the end of
If you use self-hosted runners, you should add a final step to your workflow to help ensure that these sensitive files are deleted at the end of the job. The workflow step shown below is an example of how to do this.
{% raw %}
```yaml
- name: Clean up keychain and provisioning profile
  if: ${{ always() }}
@@ -138,4 +140,5 @@ If you use self-hosted runners, you should add a final step to your workflow to
security delete-keychain $RUNNER_TEMP/app-signing.keychain-db
rm ~/Library/MobileDevice/Provisioning\ Profiles/build_pp.mobileprovision
```
{% endraw %}

View File

@@ -68,6 +68,7 @@ Once a workflow reaches a job that references an environment that has the custom
} \
}'
```
1. Optionally, to add a status report without taking any other action to {% data variables.product.prodname_dotcom_the_website %}, send a `POST` request to `/repos/OWNER/REPO/actions/runs/RUN_ID/deployment_protection_rule`. In the request body, omit the `state`. For more information, see "[AUTOTITLE](/rest/actions/workflow-runs#review-custom-deployment-protection-rules-for-a-workflow-run)." You can post a status report on the same deployment up to 10 times. Status reports support Markdown formatting and can be up to 1024 characters long.
1. To approve or reject a request, send a `POST` request to `/repos/OWNER/REPO/actions/runs/RUN_ID/deployment_protection_rule`. In the request body, set the `state` property to either `approved` or `rejected`. For more information, see "[AUTOTITLE](/rest/actions/workflow-runs#review-custom-deployment-protection-rules-for-a-workflow-run)."

View File

@@ -57,6 +57,7 @@ The [`azure/login`](https://github.com/Azure/login) action receives a JWT from t
The following example exchanges an OIDC ID token with Azure to receive an access token, which can then be used to access cloud resources.
{% raw %}
```yaml copy
name: Run Azure Login with OIDC
on: [push]
@@ -80,4 +81,5 @@ jobs:
az account show
az group list
```
{% endraw %}

View File

@@ -62,6 +62,7 @@ This example has a job called `Get_OIDC_ID_token` that uses actions to request a
This action exchanges a {% data variables.product.prodname_dotcom %} OIDC token for a Google Cloud access token, using [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation).
{% raw %}
```yaml copy
name: List services in GCP
on:
@@ -89,4 +90,5 @@ jobs:
gcloud auth login --brief --cred-file="${{ steps.auth.outputs.credentials_file_path }}"
gcloud services list
```
{% endraw %}

View File

@@ -63,6 +63,7 @@ To configure your Vault server to accept JSON Web Tokens (JWT) for authenticatio
}
EOF
```
3. Configure roles to group different policies together. If the authentication is successful, these policies are attached to the resulting Vault access token.
```sh copy

View File

@@ -217,6 +217,7 @@ jobs:
```yaml copy
name: Node.js Tests
```
</td>
<td>
@@ -229,6 +230,7 @@ name: Node.js Tests
```yaml copy
on:
```
</td>
<td>
@@ -241,6 +243,7 @@ The `on` keyword lets you define the events that trigger when the workflow is ru
```yaml copy
workflow_dispatch:
```
</td>
<td>
@@ -253,6 +256,7 @@ Add the `workflow_dispatch` event if you want to be able to manually run this wo
```yaml copy
pull_request:
```
</td>
<td>
@@ -267,6 +271,7 @@ Add the `pull_request` event, so that the workflow runs automatically every time
branches:
  - main
```
</td>
<td>
@@ -281,6 +286,7 @@ permissions:
contents: read
pull-requests: read
```
</td>
<td>
@@ -294,6 +300,7 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
concurrency:
  group: {% raw %}'${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'{% endraw %}
```
</td>
<td>
@@ -306,6 +313,7 @@ Creates a concurrency group for specific events, and uses the `||` operator to d
```yaml copy
cancel-in-progress: true
```
</td>
<td>
@@ -318,6 +326,7 @@ Cancels any currently running job or workflow in the same concurrency group.
```yaml copy
jobs:
```
</td>
<td>
@@ -330,6 +339,7 @@ Groups together all the jobs that run in the workflow file.
```yaml copy
test:
```
</td>
<td>
@@ -342,6 +352,7 @@ Defines a job with the ID `test` that is stored within the `jobs` key.
```yaml copy
runs-on: {% raw %}${{ fromJSON('["ubuntu-latest", "self-hosted"]')[github.repository == 'github/docs-internal'] }}{% endraw %}
```
</td>
<td>
@@ -354,6 +365,7 @@ Configures the job to run on a {% data variables.product.prodname_dotcom %}-host
```yaml copy
timeout-minutes: 60
```
</td>
<td>
@@ -366,6 +378,7 @@ Sets the maximum number of minutes to let the job run before it is automatically
```yaml copy
strategy:
```
</td>
<td>
This section defines the build matrix for your jobs.
@@ -377,6 +390,7 @@ Sets the maximum number of minutes to let the job run before it is automatically
```yaml copy
fail-fast: false
```
</td>
<td>
@@ -400,6 +414,7 @@ Setting `fail-fast` to `false` prevents {% data variables.product.prodname_dotco
translations,
]
```
</td>
<td>
@@ -412,6 +427,7 @@ Creates a matrix named `test-group`, with an array of test groups. These values
```yaml copy
steps:
```
</td>
<td>
@@ -428,6 +444,7 @@ Groups together all the steps that will run as part of the `test` job. Each job
lfs: {% raw %}${{ matrix.test-group == 'content' }}{% endraw %}
persist-credentials: 'false'
```
</td>
<td>
@@ -468,6 +485,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
throw err
}
```
</td>
<td>
@@ -487,6 +505,7 @@ If the current repository is the `github/docs-internal` repository, this step us
path: docs-early-access
ref: {% raw %}${{ steps.check-early-access.outputs.result }}{% endraw %}
```
</td>
<td>
@@ -504,6 +523,7 @@ If the current repository is the `github/docs-internal` repository, this step ch
mv docs-early-access/data data/early-access
rm -r docs-early-access
```
</td>
<td>
@@ -517,6 +537,7 @@ If the current repository is the `github/docs-internal` repository, this step us
- name: Checkout LFS objects
  run: git lfs checkout
```
</td>
<td>
@@ -535,6 +556,7 @@ This step runs a command to check out LFS objects from the repository.
# a string like `foo.js path/bar.md`
output: ' '
```
</td>
<td>
@@ -549,6 +571,7 @@ This step uses the `trilom/file-changes-action` action to gather the files chang
run: |
  echo {% raw %}"${{ steps.get_diff_files.outputs.files }}" > get_diff_files.txt{% endraw %}
```
</td>
<td>
@@ -565,6 +588,7 @@ This step runs a shell command that uses an output from the previous step to cre
node-version: 16.14.x
cache: npm
```
</td>
<td>
@@ -578,6 +602,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Install dependencies
  run: npm ci
```
</td>
<td>
@@ -594,6 +619,7 @@ This step runs the `npm ci` shell command to install the npm software packages f
path: .next/cache
key: {% raw %}${{ runner.os }}-nextjs-${{ hashFiles('package*.json') }}{% endraw %}
```
</td>
<td>
@@ -608,6 +634,7 @@ This step uses the `actions/cache` action to cache the Next.js build, so that th
- name: Run build script
  run: npm run build
```
</td>
<td>
{% endif %}
@@ -625,6 +652,7 @@ This step runs the build script.
CHANGELOG_CACHE_FILE_PATH: tests/fixtures/changelog-feed.json
run: npm test -- {% raw %}tests/${{ matrix.test-group }}/{% endraw %}
```
</td>
<td>

View File

@@ -133,6 +133,7 @@ jobs:
```yaml copy
name: 'Link Checker: All English'
```
</td>
<td>
@@ -145,6 +146,7 @@ name: 'Link Checker: All English'
```yaml copy
on:
```
</td>
<td>
@@ -157,6 +159,7 @@ The `on` keyword lets you define the events that trigger when the workflow is ru
```yaml copy
workflow_dispatch:
```
</td>
<td>
@@ -171,6 +174,7 @@ Add the `workflow_dispatch` event if you want to be able to manually run this wo
branches:
  - main
```
</td>
<td>
@@ -183,6 +187,7 @@ Add the `push` event, so that the workflow runs automatically every time a commi
```yaml copy
pull_request:
```
</td>
<td>
@@ -197,6 +202,7 @@ permissions:
contents: read
pull-requests: read
```
</td>
<td>
@@ -207,10 +213,12 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
<td>
{% raw %}
```yaml copy
concurrency:
  group: '${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'
```
{% endraw %}
</td>
<td>
@@ -224,6 +232,7 @@ Creates a concurrency group for specific events, and uses the `||` operator to d
```yaml copy
cancel-in-progress: true
```
</td>
<td>
@@ -236,6 +245,7 @@ Cancels any currently running job or workflow in the same concurrency group.
```yaml copy
jobs:
```
</td>
<td>
@@ -248,6 +258,7 @@ Groups together all the jobs that run in the workflow file.
```yaml copy
check-links:
```
</td>
<td>
@@ -258,9 +269,11 @@ Defines a job with the ID `check-links` that is stored within the `jobs` key.
<td>
{% raw %}
```yaml copy
runs-on: ${{ fromJSON('["ubuntu-latest", "self-hosted"]')[github.repository == 'github/docs-internal'] }}
```
{% endraw %}
</td>
<td>
@@ -274,6 +287,7 @@ Configures the job to run on a {% data variables.product.prodname_dotcom %}-host
```yaml copy
steps:
```
</td>
<td>
@@ -287,6 +301,7 @@ Groups together all the steps that will run as part of the `check-links` job. Ea
- name: Checkout
  uses: {% data reusables.actions.action-checkout %}
```
</td>
<td>
@@ -303,6 +318,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
node-version: 16.13.x
cache: npm
```
</td>
<td>
@@ -317,6 +333,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Install
  run: npm ci
```
</td>
<td>
@@ -333,6 +350,7 @@ The `run` keyword tells the job to execute a command on the runner. In this case
with:
  fileOutput: 'json'
```
</td>
<td>
@@ -347,6 +365,7 @@ Uses the `trilom/file-changes-action` action to gather all the changed files. Th
- name: Show files changed
  run: cat $HOME/files.json
```
</td>
<td>
@@ -367,6 +386,7 @@ Lists the contents of `files.json`. This will be visible in the workflow run's l
--verbose \
--list $HOME/files.json
```
</td>
<td>
@@ -386,6 +406,7 @@ This step uses `run` command to execute a script that is stored in the repositor
--check-images \
--level critical
```
</td>
<td>

View File

@@ -163,6 +163,7 @@ jobs:
fi
done
```
## Understanding the example
{% data reusables.actions.example-explanation-table-intro %}
@@ -181,6 +182,7 @@ jobs:
```yaml copy
name: Check all English links
```
</td>
<td>
@@ -196,6 +198,7 @@ on:
schedule:
  - cron: '40 20 * * *' # once a day at 20:40 UTC / 12:40 PST
```
</td>
<td>
@@ -213,6 +216,7 @@ permissions:
contents: read
issues: write
```
</td>
<td>
@@ -225,6 +229,7 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
```yaml copy
jobs:
```
</td>
<td>
@@ -238,6 +243,7 @@ Groups together all the jobs that run in the workflow file.
check_all_english_links:
  name: Check all links
```
</td>
<td>
@@ -250,6 +256,7 @@ Defines a job with the ID `check_all_english_links`, and the name `Check all lin
```yaml copy
if: github.repository == 'github/docs-internal'
```
</td>
<td>
@@ -262,6 +269,7 @@ Only run the `check_all_english_links` job if the repository is named `docs-inte
```yaml copy
runs-on: ubuntu-latest
```
</td>
<td>
@@ -278,6 +286,7 @@ Configures the job to run on an Ubuntu Linux runner. This means that the job wil
REPORT_LABEL: broken link report
REPORT_REPOSITORY: github/docs-content
```
</td>
<td>
@@ -290,6 +299,7 @@ Creates custom environment variables, and redefines the built-in `GITHUB_TOKEN`
```yaml copy
steps:
```
</td>
<td>
@@ -303,6 +313,7 @@ Groups together all the steps that will run as part of the `check_all_english_li
- name: Check out repo's default branch
  uses: {% data reusables.actions.action-checkout %}
```
</td>
<td>
@@ -319,6 +330,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
node-version: 16.8.x
cache: npm
```
</td>
<td>
@@ -334,6 +346,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Run the "npm run build" command
  run: npm run build
```
</td>
<td>
@@ -348,6 +361,7 @@ The `run` keyword tells the job to execute a command on the runner. In this case
run: |
  script/check-english-links.js > broken_links.md
```
</td>
<td>
@@ -367,6 +381,7 @@ This `run` command executes a script that is stored in the repository at `script
run: echo "::set-output name=title::$(head -1 broken_links.md)"
{%- endif %}
```
</td>
<td>
@@ -389,6 +404,7 @@ If the `check-english-links.js` script detects broken links and returns a non-ze
repository: {% raw %}${{ env.REPORT_REPOSITORY }}{% endraw %}
labels: {% raw %}${{ env.REPORT_LABEL }}{% endraw %}
```
</td>
<td>
@@ -417,6 +433,7 @@ Uses the `peter-evans/create-issue-from-file` action to create a new {% data var
gh issue comment {% raw %}${{ env.NEW_REPORT_URL }}{% endraw %} --body "⬅️ [Previous report]($previous_report_url)"
```
</td>
<td>
@@ -437,6 +454,7 @@ Uses [`gh issue list`](https://cli.github.com/manual/gh_issue_list) to locate th
fi
done
```
</td>
<td>
@@ -458,6 +476,7 @@ If an issue from a previous run is open and assigned to someone, then use [`gh i
fi
done
```
</td>
<td>
View File

@@ -92,6 +92,7 @@ You can manage the runner service in the Windows **Services** application, or yo
```shell
./svc.sh install
```
{% endmac %}
## Starting the service
@@ -99,19 +100,25 @@ You can manage the runner service in the Windows **Services** application, or yo
Start the service with the following command:
{% linux %}
```shell
sudo ./svc.sh start
```
{% endlinux %}
{% windows %}
```shell
Start-Service "{{ service_win_name }}"
```
{% endwindows %}
{% mac %}
```shell
./svc.sh start
```
{% endmac %}
## Checking the status of the service
@@ -119,19 +126,25 @@ Start-Service "{{ service_win_name }}"
Check the status of the service with the following command:
{% linux %}
```shell
sudo ./svc.sh status
```
{% endlinux %}
{% windows %}
```shell
Get-Service "{{ service_win_name }}"
```
{% endwindows %}
{% mac %}
```shell
./svc.sh status
```
{% endmac %}
For more information on viewing the status of your self-hosted runner, see "[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/monitoring-and-troubleshooting-self-hosted-runners)."
@@ -141,19 +154,25 @@ Get-Service "{{ service_win_name }}"
Stop the service with the following command:
{% linux %}
```shell
sudo ./svc.sh stop
```
{% endlinux %}
{% windows %}
```shell
Stop-Service "{{ service_win_name }}"
```
{% endwindows %}
{% mac %}
```shell
./svc.sh stop
```
{% endmac %}
## Uninstalling the service
@@ -162,19 +181,25 @@ Stop-Service "{{ service_win_name }}"
1. Uninstall the service with the following command:
{% linux %}
```shell
sudo ./svc.sh uninstall
```
{% endlinux %}
{% windows %}
```shell
Remove-Service "{{ service_win_name }}"
```
{% endwindows %}
{% mac %}
```shell
./svc.sh uninstall
```
{% endmac %}
{% linux %}

View File

@@ -114,6 +114,7 @@ You can print the contents of contexts to the log for debugging. The [`toJSON` f
{% data reusables.actions.github-context-warning %}
{% raw %}
```yaml copy
name: Context testing
on: push
@@ -147,6 +148,7 @@ jobs:
MATRIX_CONTEXT: ${{ toJson(matrix) }}
run: echo '$MATRIX_CONTEXT'
```
{% endraw %}
## `github` context
@@ -312,6 +314,7 @@ This example workflow shows how the `env` context can be configured at the workf
{% data reusables.repositories.actions-env-var-note %}
{% raw %}
```yaml copy
name: Hi Mascot
on: push
@@ -334,6 +337,7 @@ jobs:
steps:
  - run: echo 'Hi ${{ env.mascot }}' # Hi Tux
```
{% endraw %}
{% ifversion actions-configuration-variables %}
@@ -458,6 +462,7 @@ This example `jobs` context contains the result and outputs of a job from a reus
This example reusable workflow uses the `jobs` context to set outputs for the reusable workflow. Note how the outputs flow up from the steps, to the job, then to the `workflow_call` trigger. For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows#using-outputs-from-a-reusable-workflow)."
{% raw %}
```yaml copy
name: Reusable workflow
@@ -494,6 +499,7 @@ jobs:
run: echo "::set-output name=secondword::world"
{%- endif %}{% raw %}
```
{% endraw %}
## `steps` context
@@ -813,6 +819,7 @@ jobs:
- uses: {% data reusables.actions.action-checkout %}
- run: ./debug
```
## `inputs` context
The `inputs` context contains input properties passed to an action{% ifversion actions-unified-inputs %},{% else %} or{% endif %} to a reusable workflow{% ifversion actions-unified-inputs %}, or to a manually triggered workflow{% endif %}. {% ifversion actions-unified-inputs %}For reusable workflows, the{% else %}The{% endif %} input names and types are defined in the [`workflow_call` event configuration](/actions/using-workflows/events-that-trigger-workflows#workflow-reuse-events) of a reusable workflow, and the input values are passed from [`jobs.<job_id>.with`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idwith) in an external workflow that calls the reusable workflow. {% ifversion actions-unified-inputs %}For manually triggered workflows, the inputs are defined in the [`workflow_dispatch` event configuration](/actions/using-workflows/events-that-trigger-workflows#workflow_dispatch) of a workflow.{% endif %}
@@ -843,6 +850,7 @@ The following example contents of the `inputs` context is from a workflow that h
This example reusable workflow uses the `inputs` context to get the values of the `build_id`, `deploy_target`, and `perform_deploy` inputs that were passed to the reusable workflow from the caller workflow.
{% raw %}
```yaml copy
name: Reusable deploy workflow
on:
@@ -866,6 +874,7 @@ jobs:
- name: Deploy build to target
  run: deploy --build ${{ inputs.build_id }} --target ${{ inputs.deploy_target }}
```
{% endraw %}
{% ifversion actions-unified-inputs %}
@@ -874,6 +883,7 @@ jobs:
This example workflow triggered by a `workflow_dispatch` event uses the `inputs` context to get the values of the `build_id`, `deploy_target`, and `perform_deploy` inputs that were passed to the workflow.
{% raw %}
```yaml copy
on:
  workflow_dispatch:
@@ -896,5 +906,6 @@ jobs:
- name: Deploy build to target
  run: deploy --build ${{ inputs.build_id }} --target ${{ inputs.deploy_target }}
```
{% endraw %}
{% endif %}

View File

@@ -38,10 +38,12 @@ steps:
### Example setting an environment variable
{% raw %}
```yaml
env:
  MY_ENV_VAR: ${{ <expression> }}
```
{% endraw %}
## Literals
@@ -184,9 +186,11 @@ Replaces values in the `string`, with the variable `replaceValueN`. Variables in
#### Example of `format`
{% raw %}
```js
format('Hello {0} {1} {2}', 'Mona', 'the', 'Octocat')
```
{% endraw %}
Returns 'Hello Mona the Octocat'.
@@ -194,9 +198,11 @@ Returns 'Hello Mona the Octocat'.
#### Example escaping braces
{% raw %}
```js
format('{{Hello {0} {1} {2}!}}', 'Mona', 'the', 'Octocat')
```
{% endraw %}
Returns '{Hello Mona the Octocat!}'.
@@ -232,6 +238,7 @@ Returns a JSON object or JSON data type for `value`. You can use this function t
This workflow sets a JSON matrix in one job, and passes it to the next job using an output and `fromJSON`.
{% raw %}
```yaml
name: build
on: push
@@ -255,6 +262,7 @@ jobs:
steps:
  - run: build
```
{% endraw %}
#### Example returning a JSON data type
@@ -262,6 +270,7 @@ jobs:
This workflow uses `fromJSON` to convert environment variables from a string to a Boolean or integer.
{% raw %}
```yaml
name: print
on: push
@@ -276,6 +285,7 @@ jobs:
timeout-minutes: ${{ fromJSON(env.time) }}
run: echo ...
```
{% endraw %}
### hashFiles
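For example, `hashFiles` is commonly used to build cache keys that change whenever a lockfile changes. A minimal sketch (the glob pattern is an assumption):

{% raw %}

```yaml
# The key changes whenever any package-lock.json in the repository changes
key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
```

{% endraw %}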

View File

@@ -52,6 +52,7 @@ To set a custom environment variable{% ifversion actions-configuration-variables
- A specific step within a job, by using [`jobs.<job_id>.steps[*].env`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsenv).
{% raw %}
```yaml copy
name: Greeting on variable day
@@ -72,6 +73,7 @@ jobs:
env:
  First_Name: Mona
```
{% endraw %}
You can access `env` variable values using runner environment variables or using contexts. The example above shows three custom variables being used as environment variables in an `echo` command: `$DAY_OF_WEEK`, `$Greeting`, and `$First_Name`. The values for these variables are set, and scoped, at the workflow, job, and step level respectively. For more information on accessing variable values using contexts, see "[Using contexts to access variable values](#using-contexts-to-access-variable-values)."
@@ -222,6 +224,7 @@ In addition to runner environment variables, {% data variables.product.prodname_
Runner environment variables are always interpolated on the runner machine. However, parts of a workflow are processed by {% data variables.product.prodname_actions %} and are not sent to the runner. You cannot use environment variables in these parts of a workflow file. Instead, you can use contexts. For example, an `if` conditional, which determines whether a job or step is sent to the runner, is always processed by {% data variables.product.prodname_actions %}. You can use a context in an `if` conditional statement to access the value of a variable.
{% raw %}
```yaml copy
env:
  DAY_OF_WEEK: Monday
@@ -238,6 +241,7 @@ jobs:
env:
  First_Name: Mona
```
{% endraw %}
In this modification of the earlier example, we've introduced an `if` conditional. The workflow step is now only run if `DAY_OF_WEEK` is set to "Monday". We access this value from the `if` conditional statement by using the [`env` context](/actions/learn-github-actions/contexts#env-context).
@@ -343,6 +347,7 @@ We strongly recommend that actions use variables to access the filesystem rather
You can write a single workflow file that can be used for different operating systems by using the `RUNNER_OS` default environment variable and the corresponding context property <span style="white-space: nowrap;">{% raw %}`${{ runner.os }}`{% endraw %}</span>. For example, the following workflow could be run successfully if you changed the operating system from `macos-latest` to `windows-latest` without having to alter the syntax of the environment variables, which differs depending on the shell being used by the runner.
{% raw %}
```yaml copy
jobs:
  if-Windows-else:
@@ -355,6 +360,7 @@ jobs:
if: runner.os != 'Windows'
run: echo "The operating system on the runner is not Windows, it's $RUNNER_OS."
```
{% endraw %}
In this example, the two `if` statements check the `os` property of the `runner` context to determine the operating system of the runner. `if` conditionals are processed by {% data variables.product.prodname_actions %}, and only steps where the check resolves as `true` are sent to the runner. Here one of the checks will always be `true` and the other `false`, so only one of these steps is sent to the runner. Once the job is sent to the runner, the step is executed and the environment variable in the `echo` command is interpolated using the appropriate syntax (`$env:NAME` for PowerShell on Windows, and `$NAME` for bash and sh on Linux and macOS). In this example, the statement `runs-on: macos-latest` means that the second step will be run.
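As a minimal sketch of the PowerShell variant (assuming a Windows runner, where the default shell is PowerShell):

```yaml
jobs:
  windows-example:
    runs-on: windows-latest
    steps:
      # PowerShell reads environment variables with the $env: prefix
      - run: echo "The operating system on the runner is $env:RUNNER_OS."
```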

View File

@@ -30,6 +30,7 @@ In the tutorial, you will first make a workflow file that uses the [`peter-evans
3. Copy the following YAML contents into your workflow file.
```yaml copy
{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}
{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

View File

@@ -31,6 +31,7 @@ In the tutorial, you will first make a workflow file that uses the [`alex-page/g
4. Copy the following YAML contents into your workflow file.
```yaml copy
{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}
{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

View File

@@ -30,6 +30,7 @@ In the tutorial, you will first make a workflow file that uses the [`imjohnbo/is
3. Copy the following YAML contents into your workflow file.
```yaml copy
{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}
{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

View File

@@ -107,6 +107,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Azure DevOps organization name: :organization
✔ Azure DevOps project name: :project
Environment variables successfully updated.
```
3. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to the {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:
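For example:
```shell
gh actions-importer update
```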

View File

@@ -105,6 +105,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Base url of the Bamboo instance: https://bamboo.example.com
Environment variables successfully updated.
```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:
```shell
@@ -182,6 +183,7 @@ You can use the `dry-run` command to convert a Bamboo pipeline to an equivalent
### Running a dry-run migration for a build plan
To perform a dry run of migrating your Bamboo build plan to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `:my_plan_slug` with the plan's project and plan key in the format `<projectKey>-<planKey>` (for example: `PAN-SCRIP`).
```shell
gh actions-importer dry-run bamboo build --plan-slug :my_plan_slug --output-dir tmp/dry-run
```

View File

@@ -86,6 +86,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ CircleCI organization name: mycircleciorganization
Environment variables successfully updated.
```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:
```shell

View File

@@ -85,6 +85,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Private token for GitLab: ***************
✔ Base url of the GitLab instance: http://localhost
Environment variables successfully updated.
```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

View File

@@ -81,6 +81,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Username of Jenkins user: admin
✔ Base url of the Jenkins instance: https://localhost
Environment variables successfully updated.
```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

View File

@@ -88,6 +88,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Base url of the Travis CI instance: https://travis-ci.com
✔ Travis CI organization name: actions-importer-labs
Environment variables successfully updated.
```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

View File

@@ -101,6 +101,7 @@ The supported values for `--features` are:
- `ghes-<number>`, where `<number>` is the version of {% data variables.product.prodname_ghe_server %}, `3.0` or later. For example, `ghes-3.3`.
You can view the list of feature flags available in {% data variables.product.prodname_actions_importer %} by running the `list-features` command. For example:
```shell copy
gh actions-importer list-features
```

View File

@@ -59,6 +59,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for script steps
{% raw %}
```yaml
jobs:
- job: scripts
@@ -72,11 +73,13 @@ jobs:
  inputs:
    script: Write-Host "This step runs in PowerShell"
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for script steps
{% raw %}
```yaml
jobs:
  scripts:
@@ -90,6 +93,7 @@ jobs:
- run: Write-Host "This step runs in PowerShell"
  shell: powershell
```
{% endraw %}
## Differences in script error handling
@@ -109,6 +113,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax using CMD by default
{% raw %}
```yaml
jobs:
- job: run_command
@@ -117,11 +122,13 @@ jobs:
  steps:
  - script: echo "This step runs in CMD on Windows by default"
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for specifying CMD
{% raw %}
```yaml
jobs:
  run_command:
@@ -131,6 +138,7 @@ jobs:
- run: echo "This step runs in CMD on Windows explicitly"
  shell: cmd
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#using-a-specific-shell)."
@@ -146,6 +154,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for conditional expressions
{% raw %}
```yaml
jobs:
- job: conditional
@@ -155,11 +164,13 @@ jobs:
  - script: echo "This step runs with str equals 'ABC' and num equals 123"
    condition: and(eq(variables.str, 'ABC'), eq(variables.num, 123))
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for conditional expressions
{% raw %}
```yaml
jobs:
  conditional:
@@ -168,6 +179,7 @@ jobs:
      - run: echo "This step runs with str equals 'ABC' and num equals 123"
        if: ${{ env.str == 'ABC' && env.num == 123 }}
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/learn-github-actions/expressions)."
@@ -181,6 +193,7 @@ Below is an example of the syntax for each system. The workflows start a first j
### Azure Pipelines syntax for dependencies between jobs
{% raw %}
```yaml
jobs:
- job: initial
@@ -207,11 +220,13 @@ jobs:
  steps:
  - script: echo "This job will run after fanout1 and fanout2 have finished."
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for dependencies between jobs
{% raw %}
```yaml
jobs:
  initial:
@@ -234,6 +249,7 @@ jobs:
    steps:
      - run: echo "This job will run after fanout1 and fanout2 have finished."
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds)."
@@ -247,6 +263,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for tasks
{% raw %}
```yaml
jobs:
- job: run_python
@@ -259,6 +276,7 @@ jobs:
      architecture: 'x64'
  - script: python script.py
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for actions

View File

@@ -87,12 +87,14 @@ Below is an example of the syntax for each system.
### CircleCI syntax for caching
{% raw %}
```yaml
- restore_cache:
    keys:
      - v1-npm-deps-{{ checksum "package-lock.json" }}
      - v1-npm-deps-
```
{% endraw %}
### GitHub Actions syntax for caching
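A sketch of the equivalent step with the `actions/cache` action (the paths and key shown are assumptions for an npm project):

{% raw %}

```yaml
- name: Cache node modules
  uses: actions/cache@v3
  with:
    path: ~/.npm
    key: v1-npm-deps-${{ hashFiles('**/package-lock.json') }}
    restore-keys: v1-npm-deps-
```

{% endraw %}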
@@ -123,6 +125,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
### CircleCI syntax for persisting data between jobs
{% raw %}
```yaml
- persist_to_workspace:
    root: workspace
@@ -134,11 +137,13 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
- attach_workspace:
    at: /tmp/workspace
```
{% endraw %}
### GitHub Actions syntax for persisting data between jobs
{% raw %}
```yaml
- name: Upload math result for job 1
  uses: {% data reusables.actions.action-upload-artifact %}
@@ -153,6 +158,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
  with:
    name: homework
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts)."
@@ -168,6 +174,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
### CircleCI syntax for using databases and service containers
{% raw %}
```yaml
---
version: 2.1
@@ -218,11 +225,13 @@ workflows:
- attach_workspace:
    at: /tmp/workspace
```
{% endraw %}
### GitHub Actions syntax for using databases and service containers
{% raw %}
```yaml
name: Containers
@@ -267,6 +276,7 @@ jobs:
- name: Run tests
  run: bundle exec rake
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-containerized-services/about-service-containers)."
@@ -278,6 +288,7 @@ Below is a real-world example. The left shows the actual CircleCI _config.yml_ f
### Complete example for CircleCI
{% raw %}
```yaml
---
version: 2.1
@@ -359,6 +370,7 @@ workflows:
- ruby-26
- ruby-25
```
{% endraw %}
### Complete example for GitHub Actions

View File

@@ -46,6 +46,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for jobs
{% raw %}
```yaml
job1:
  variables:
@@ -53,11 +54,13 @@ job1:
  script:
    - echo "Run your script here"
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for jobs
{% raw %}
```yaml
jobs:
  job1:
@@ -65,6 +68,7 @@ jobs:
      - uses: {% data reusables.actions.action-checkout %}
      - run: echo "Run your script here"
```
{% endraw %}
## Runners
@@ -76,6 +80,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for runners
{% raw %}
```yaml
windows_job:
  tags:
@@ -89,11 +94,13 @@ linux_job:
  script:
    - echo "Hello, $USER!"
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for runners
{% raw %}
```yaml
windows_job:
  runs-on: windows-latest
@@ -105,6 +112,7 @@ linux_job:
  steps:
    - run: echo "Hello, $USER!"
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on)."
@@ -118,20 +126,24 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for Docker images
{% raw %}
```yaml
my_job:
  image: node:10.16-jessie
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for Docker images
{% raw %}
```yaml
jobs:
  my_job:
    container: node:10.16-jessie
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainer)."
@@ -145,6 +157,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for conditions and expressions
{% raw %}
```yaml
deploy_prod:
  stage: deploy
@@ -153,11 +166,13 @@ deploy_prod:
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for conditions and expressions
{% raw %}
```yaml
jobs:
  deploy_prod:
@@ -166,6 +181,7 @@ jobs:
    steps:
      - run: echo "Deploy to production server"
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/learn-github-actions/expressions)."
@@ -179,6 +195,7 @@ Below is an example of the syntax for each system. The workflows start with two
### GitLab CI/CD syntax for dependencies between jobs
{% raw %}
```yaml
stages:
  - build
@@ -205,11 +222,13 @@ deploy_ab:
  script:
    - echo "This job will run after test_ab is complete"
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for dependencies between jobs
{% raw %}
```yaml
jobs:
  build_a:
@@ -234,6 +253,7 @@ jobs:
    steps:
      - run: echo "This job will run after test_ab is complete"
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds)."
@@ -261,6 +281,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for caching
{% raw %}
```yaml
image: node:latest
@@ -276,6 +297,7 @@ test_async:
  script:
    - node ./specs/start.js ./specs/async.spec.js
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for caching
@@ -308,17 +330,20 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for artifacts
{% raw %}
```yaml
script:
artifacts:
  paths:
    - math-homework.txt
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for artifacts
{% raw %}
```yaml
- name: Upload math result for job 1
  uses: {% data reusables.actions.action-upload-artifact %}
@@ -326,6 +351,7 @@ artifacts:
    name: homework
    path: math-homework.txt
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts)."
@@ -341,6 +367,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for databases and service containers
{% raw %}
```yaml
container-job:
  variables:
@@ -363,11 +390,13 @@ container-job:
  tags:
    - docker
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for databases and service containers
{% raw %}
```yaml
jobs:
  container-job:
@@ -400,6 +429,7 @@ jobs:
        # The default PostgreSQL port
        POSTGRES_PORT: 5432
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-containerized-services/about-service-containers)."

View File

@@ -69,17 +69,20 @@ Below is an example comparing the syntax for each system.
#### Travis CI syntax for a matrix
{% raw %}
```yaml
matrix:
  include:
    - rvm: 2.5
    - rvm: 2.6.3
```
{% endraw %}
#### {% data variables.product.prodname_actions %} syntax for a matrix
{% raw %}
```yaml
jobs:
  build:
@@ -87,6 +90,7 @@ jobs:
      matrix:
        ruby: [2.5, 2.6.3]
```
{% endraw %}
### Targeting specific branches
@@ -98,17 +102,20 @@ Below is an example of the syntax for each system.
#### Travis CI syntax for targeting specific branches
{% raw %}
```yaml
branches:
  only:
    - main
    - 'mona/octocat'
```
{% endraw %}
#### {% data variables.product.prodname_actions %} syntax for targeting specific branches
{% raw %}
```yaml
on:
  push:
@@ -116,6 +123,7 @@ on:
      - main
      - 'mona/octocat'
```
{% endraw %}
### Checking out submodules
@@ -127,20 +135,24 @@ Below is an example of the syntax for each system.
#### Travis CI syntax for checking out submodules
{% raw %}
```yaml
git:
  submodules: false
```
{% endraw %}
#### {% data variables.product.prodname_actions %} syntax for checking out submodules
{% raw %}
```yaml
- uses: {% data reusables.actions.action-checkout %}
  with:
    submodules: false
```
{% endraw %}
### Using environment variables in a matrix
@@ -232,6 +244,7 @@ Below is an example of the syntax for each system.
### Travis CI syntax for phases and steps
{% raw %}
```yaml
language: python
python:
@@ -240,11 +253,13 @@ python:
script:
  - python script.py
```
{% endraw %}
### {% data variables.product.prodname_actions %} syntax for steps and actions
{% raw %}
```yaml
jobs:
  run_python:
@@ -256,6 +271,7 @@ jobs:
          architecture: 'x64'
      - run: python script.py
```
{% endraw %}
## Caching dependencies
@@ -269,10 +285,12 @@ These examples demonstrate the cache syntax for each system.
### Travis CI syntax for caching
{% raw %}
```yaml
language: node_js
cache: npm
```
{% endraw %}
### GitHub Actions syntax for caching
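A minimal sketch of the equivalent in {% data variables.product.prodname_actions %}, using the built-in dependency caching of the `setup-node` action (the Node.js version is an assumption):

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
      # Caches the npm cache directory automatically
      cache: npm
```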
@@ -321,6 +339,7 @@ jobs:
##### Travis CI for building with Node.js
{% raw %}
```yaml
install:
  - npm install
@@ -328,6 +347,7 @@ script:
  - npm run build
  - npm test
```
{% endraw %}
##### {% data variables.product.prodname_actions %} workflow for building with Node.js
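A sketch of a comparable workflow (the workflow name, trigger, and Node.js version are assumptions):

```yaml
name: Node.js CI

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      # Mirrors the Travis CI install and script phases above
      - run: npm install
      - run: npm run build
      - run: npm test
```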

View File

@@ -50,6 +50,7 @@ Each time you create a new release, you can trigger a workflow to publish your p
You can define a new Maven repository in the publishing block of your _build.gradle_ file that points to your package repository. For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _build.gradle_ could specify a repository with the name `"OSSRH"`.
{% raw %}
```groovy copy
plugins {
  ...
@@ -71,6 +72,7 @@ publishing {
  }
}
```
{% endraw %}
With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by running the `gradle publish` command. In the deploy step, you'll need to set environment variables for the username and password or token that you use to authenticate to the Maven repository. For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."
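A sketch of such a deploy step (the secret names are assumptions; store your own credentials as encrypted secrets):

{% raw %}

```yaml
- name: Publish package
  run: gradle publish
  env:
    # Hypothetical secret names for the Maven repository credentials
    MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
    MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
```

{% endraw %}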
@@ -122,6 +124,7 @@ You can define a new Maven repository in the publishing block of your _build.gra
For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _build.gradle_ would look similar to the below example.
{% raw %}
```groovy copy
plugins {
  ...
@@ -143,6 +146,7 @@ publishing {
  }
}
```
{% endraw %}
With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by running the `gradle publish` command.
@@ -195,6 +199,7 @@ For example, if you deploy to the Central Repository through the OSSRH hosting p
If your organization is named "octocat" and your repository is named "hello-world", then the configuration in _build.gradle_ would look similar to the below example.
{% raw %}
```groovy copy
plugins {
  ...
@@ -224,6 +229,7 @@ publishing {
  }
}
```
{% endraw %}
With this configuration, you can create a workflow that publishes your package to both the Maven Central Repository and {% data variables.product.prodname_registry %} by running the `gradle publish` command.

View File

@@ -54,6 +54,7 @@ In this workflow, you can use the `setup-java` action. This action installs the
For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _pom.xml_ could specify a distribution management repository with the `id` of `ossrh`.
{% raw %}
```xml copy
<project ...>
  ...
@@ -66,6 +67,7 @@ For example, if you were deploying to the Maven Central Repository through the O
  </distributionManagement>
</project>
```
{% endraw %}
With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by specifying the repository management `id` to the `setup-java` action. You'll also need to provide environment variables that contain the username and password to authenticate to the repository.
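For example, the publishing steps might look like the following sketch (the Java version, distribution, and secret names are assumptions; the `server-id` matches the `ossrh` value above):

{% raw %}

```yaml
- name: Set up Java
  uses: actions/setup-java@v3
  with:
    java-version: '11'
    distribution: 'temurin'
    server-id: ossrh
    # Environment variable names that the generated settings.xml will reference
    server-username: MAVEN_USERNAME
    server-password: MAVEN_PASSWORD
- name: Publish package
  run: mvn --batch-mode deploy
  env:
    MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
    MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
```

{% endraw %}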
@@ -118,6 +120,7 @@ For a Maven-based project, you can make use of these settings by creating a dist
For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _pom.xml_ would look similar to the below example.
{% raw %}
```xml copy
<project ...>
  ...
@@ -130,6 +133,7 @@ For example, if your organization is named "octocat" and your repository is name
  </distributionManagement>
</project>
```
{% endraw %}
With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by making use of the automatically generated _settings.xml_.

View File

@@ -51,6 +51,7 @@ The following example shows you how {% data variables.product.prodname_actions %
ls {% raw %}${{ github.workspace }}{% endraw %}
- run: echo "🍏 This job's status is {% raw %}${{ job.status }}{% endraw %}."
```
1. Scroll to the bottom of the page and select **Create a new branch for this commit and start a pull request**. Then, to create a pull request, click **Propose new file**.
![Screenshot of the "Commit new file" area of the page.](/assets/images/help/repository/actions-quickstart-commit-new-file.png)

View File

@@ -232,6 +232,7 @@ You can check which access policies are being applied to a secret in your organi
To provide an action with a secret as an input or environment variable, you can use the `secrets` context to access secrets you've created in your repository. For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts)" and "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions)."
{% raw %}
```yaml
steps:
  - name: Hello world action
@@ -240,6 +241,7 @@ steps:
    env: # Or as an environment variable
      super_secret: ${{ secrets.SuperSecret }}
```
{% endraw %}
Secrets cannot be directly referenced in `if:` conditionals. Instead, consider setting secrets as job-level environment variables, then referencing the environment variables to conditionally run steps in the job. For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts#context-availability)" and [`jobs.<job_id>.steps[*].if`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsif).
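A minimal sketch of that pattern, reusing the `SuperSecret` example above (the job and step names are assumptions):

{% raw %}

```yaml
jobs:
  example-job:
    runs-on: ubuntu-latest
    env:
      SUPER_SECRET: ${{ secrets.SuperSecret }}
    steps:
      # The step runs only when the secret has a value
      - name: Run when secret is set
        if: ${{ env.SUPER_SECRET != '' }}
        run: example-command "$SUPER_SECRET"
```

{% endraw %}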
@@ -253,6 +255,7 @@ If you must pass secrets within a command line, then enclose them within the pro
### Example using Bash
{% raw %}
```yaml
steps:
  - shell: bash
@@ -261,11 +264,13 @@ steps:
    run: |
      example-command "$SUPER_SECRET"
```
{% endraw %}
### Example using PowerShell
{% raw %}
```yaml
steps:
  - shell: pwsh
@@ -274,11 +279,13 @@ steps:
    run: |
      example-command "$env:SUPER_SECRET"
```
{% endraw %}
### Example using Cmd.exe
{% raw %}
```yaml
steps:
  - shell: cmd
@@ -287,6 +294,7 @@ steps:
    run: |
      example-command "%SUPER_SECRET%"
```
{% endraw %}
## Limits for secrets

View File

@@ -75,6 +75,7 @@ The following sections explain how you can help mitigate the risk of script inje
A script injection attack can occur directly within a workflow's inline script. In the following example, an action uses an expression to test the validity of a pull request title, but also adds the risk of script injection:
{% raw %}
```
- name: Check PR title
  run: |
@@ -87,6 +88,7 @@ A script injection attack can occur directly within a workflow's inline script.
      exit 1
    fi
```
{% endraw %}
This example is vulnerable to script injection because the `run` command executes within a temporary shell script on the runner. Before the shell script is run, the expressions inside {% raw %}`${{ }}`{% endraw %} are evaluated and then substituted with the resulting values, which can make it vulnerable to shell command injection.
@@ -113,11 +115,13 @@ There are a number of different approaches available to help you mitigate the ri
The recommended approach is to create an action that processes the context value as an argument. This approach is not vulnerable to the injection attack, as the context value is not used to generate a shell script, but is instead passed to the action as an argument:
{% raw %}
```
uses: fakeaction/checktitle@v3
with:
  title: ${{ github.event.pull_request.title }}
```
{% endraw %}
### Using an intermediate environment variable
@@ -127,6 +131,7 @@ For inline scripts, the preferred approach to handling untrusted input is to set
The following example uses Bash to process the `github.event.pull_request.title` value as an environment variable:
{% raw %}
```
- name: Check PR title
  env:
@@ -140,6 +145,7 @@ The following example uses Bash to process the `github.event.pull_request.title`
      exit 1
    fi
```
{% endraw %}
In this example, the attempted script injection is unsuccessful, which is reflected by the following lines in the log:
@@ -244,11 +250,13 @@ Workflows triggered using the `pull_request` event have read-only permissions an
- For a custom action, the risk can vary depending on how a program is using the secret it obtained from the argument:
{% raw %}
```
uses: fakeaction/publish@v3
with:
  key: ${{ secrets.PUBLISH_KEY }}
```
{% endraw %}
Although {% data variables.product.prodname_actions %} scrubs secrets from memory that are not referenced in the workflow (or an included action), the `GITHUB_TOKEN` and any referenced secrets can be harvested by a determined attacker.

View File

@@ -51,6 +51,7 @@ You can use the `services` keyword to create service containers that are part of
This example creates a service called `redis` in a job called `container-job`. The Docker host in this example is the `node:16-bullseye` container.
{% raw %}
```yaml copy
name: Redis container example
on: push
@@ -70,6 +71,7 @@ jobs:
        # Docker Hub image
        image: redis
```
{% endraw %}
## Mapping Docker host and service container ports
@@ -93,6 +95,7 @@ When you specify the Docker host port but not the container port, the container
This example maps the service container `redis` port 6379 to the Docker host port 6379.
{% raw %}
```yaml copy
name: Redis Service Example
on: push
@@ -114,6 +117,7 @@ jobs:
          # Opens tcp port 6379 on the host and service container
          - 6379:6379
```
{% endraw %}
## Further reading

View File

@@ -73,6 +73,7 @@ jobs:
The following example demonstrates how to use [Chocolatey](https://community.chocolatey.org/packages) to install the {% data variables.product.prodname_dotcom %} CLI as part of a job.
{% raw %}
```yaml
name: Build on Windows
on: push
@@ -83,4 +84,5 @@ jobs:
      - run: choco install gh
      - run: gh version
```
{% endraw %}

View File

@@ -62,6 +62,7 @@ If your workflows use sensitive data, such as passwords or certificates, you can
This example job demonstrates how to reference an existing secret as an environment variable, and send it as a parameter to an example command.
{% raw %}
```yaml
jobs:
  example-job:
@@ -73,6 +74,7 @@ jobs:
        run: |
          example-command "$super_secret"
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

View File

@@ -80,17 +80,20 @@ You cannot change the contents of an existing cache. Instead, you can create a n
~/.gradle/caches
~/.gradle/wrapper
```
- You can specify either directories or single files, and glob patterns are supported.
- You can specify absolute paths, or paths relative to the workspace directory.
- `restore-keys`: **Optional** A string containing alternative restore keys, with each restore key placed on a new line. If no cache hit occurs for `key`, these restore keys are used sequentially in the order provided to find and restore a cache. For example:
{% raw %}
```yaml
restore-keys: |
  npm-feature-${{ hashFiles('package-lock.json') }}
  npm-feature-
  npm-
```
{% endraw %}
- `enableCrossOsArchive`: **Optional** A boolean value that when enabled, allows Windows runners to save or restore caches independent of the operating system the cache was created on. If this parameter is not set, it defaults to `false`. For more information, see [Cross OS cache](https://github.com/actions/cache/blob/main/tips-and-workarounds.md#cross-os-cache) in the Actions Cache documentation.
@@ -165,9 +168,11 @@ Using expressions to create a `key` allows you to automatically create a new cac
For example, you can create a `key` using an expression that calculates the hash of an npm `package-lock.json` file. So, when the dependencies that make up the `package-lock.json` file change, the cache key changes and a new cache is automatically created.
{% raw %}
```yaml
npm-${{ hashFiles('package-lock.json') }}
```
{% endraw %}
{% data variables.product.prodname_dotcom %} evaluates the expression `hashFiles('package-lock.json')` to derive the final `key`.
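For illustration, if the hash evaluated to `d5ea0750` (a hypothetical value, echoing the one used in the example below), the resolved key would be:
```yaml
npm-d5ea0750
```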
@@ -200,23 +205,27 @@ Cache version is a way to stamp a cache with metadata of the `path` and the comp
### Example using multiple restore keys
{% raw %}
```yaml
restore-keys: |
  npm-feature-${{ hashFiles('package-lock.json') }}
  npm-feature-
  npm-
```
{% endraw %}
The runner evaluates the expressions, which resolve to these `restore-keys`:
{% raw %}
```yaml
restore-keys: |
  npm-feature-d5ea0750
  npm-feature-
  npm-
```
{% endraw %}
The restore key `npm-feature-` matches any key that starts with the string `npm-feature-`. For example, both of the keys `npm-feature-fd3052de` and `npm-feature-a9b253ff` match the restore key. The cache with the most recent creation date would be used. The keys in this example are searched in the following order:

View File

@@ -68,7 +68,9 @@ This procedure demonstrates how to create a starter workflow and metadata file.
      - name: Run a one-line script
        run: echo Hello from Octo Organization
```
4. Create a metadata file inside the `workflow-templates` directory. The metadata file must have the same name as the workflow file, but instead of the `.yml` extension, it must be appended with `.properties.json`. For example, this file named `octo-organization-ci.properties.json` contains the metadata for a workflow file named `octo-organization-ci.yml`:
```json copy
{
    "name": "Octo Organization Workflow",
@@ -84,6 +86,7 @@ This procedure demonstrates how to create a starter workflow and metadata file.
    ]
}
```
- `name` - **Required.** The name of the workflow. This is displayed in the list of available workflows.
- `description` - **Required.** The description of the workflow. This is displayed in the list of available workflows.
- `iconName` - **Optional.** Specifies an icon for the workflow that is displayed in the list of workflows. `iconName` can be one of the following types:

View File

@@ -1046,11 +1046,13 @@ on:
**Note**: When pushing multi-architecture container images, this event occurs once per manifest, so you might observe your workflow triggering multiple times. To mitigate this, and only run your workflow job for the event that contains the actual image tag information, use a conditional:
{% raw %}
```yaml
jobs:
  job_name:
    if: ${{ github.event.registry_package.package_version.container_metadata.tag.name != '' }}
```
{% endraw %}
{% endnote %}

View File

@@ -110,6 +110,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
1. In the reusable workflow, use the `inputs` and `secrets` keywords to define inputs or secrets that will be passed from a caller workflow.
{% raw %}
```yaml
on:
  workflow_call:
@@ -121,6 +122,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
      envPAT:
        required: true
```
{% endraw %}
For details of the syntax for defining inputs and secrets, see [`on.workflow_call.inputs`](/actions/using-workflows/workflow-syntax-for-github-actions#onworkflow_callinputs) and [`on.workflow_call.secrets`](/actions/using-workflows/workflow-syntax-for-github-actions#onworkflow_callsecrets).
{% ifversion actions-inherit-secrets-reusable-workflows %}
@@ -136,6 +138,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
{%- endif %}
{% raw %}
```yaml
jobs:
  reusable_workflow_job:
@@ -147,6 +150,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
          repo-token: ${{ secrets.envPAT }}
          configuration-path: ${{ inputs.config-path }}
```
{% endraw %}
In the example above, `envPAT` is an environment secret that's been added to the `production` environment. This environment is therefore referenced within the job.
@@ -165,6 +169,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
This reusable workflow file named `workflow-B.yml` (we'll refer to this later in the [example caller workflow](#example-caller-workflow)) takes an input string and a secret from the caller workflow and uses them in an action.
{% raw %}
```yaml copy
name: Reusable workflow example
@@ -187,6 +192,7 @@ jobs:
          repo-token: ${{ secrets.token }}
          configuration-path: ${{ inputs.config-path }}
```
{% endraw %}
## Calling a reusable workflow
@@ -217,6 +223,7 @@ A matrix strategy lets you use variables in a single job definition to automatic
The example job below calls a reusable workflow and references the matrix context by defining the variable `target` with the values `[dev, stage, prod]`. It will run three jobs, one for each value in the variable.
{% raw %}
```yaml copy
jobs:
  ReuseableMatrixJobForDeployment:
@@ -227,6 +234,7 @@ jobs:
    with:
      target: ${{ matrix.target }}
```
{% endraw %}
{% endif %}
@@ -265,6 +273,7 @@ When you call a reusable workflow, you can only use the following keywords in th
This workflow file calls two workflow files. The second of these, `workflow-B.yml` (shown in the [example reusable workflow](#example-reusable-workflow)), is passed an input (`config-path`) and a secret (`token`).
{% raw %}
```yaml copy
name: Call a reusable workflow
@@ -287,6 +296,7 @@ jobs:
    secrets:
      token: ${{ secrets.GITHUB_TOKEN }}
```
{% endraw %}
{% ifversion nested-reusable-workflow %}
@@ -297,6 +307,7 @@ You can connect a maximum of four levels of workflows - that is, the top-level c
From within a reusable workflow you can call another reusable workflow.
{% raw %}
```yaml copy
name: Reusable workflow
@@ -307,6 +318,7 @@ jobs:
  call-another-reusable:
    uses: octo-org/example-repo/.github/workflows/another-reusable.yml@v1
```
{% endraw %}
### Passing secrets to nested workflows
@@ -316,6 +328,7 @@ You can use `jobs.<job_id>.secrets` in a calling workflow to pass named secrets
In the following example, workflow A passes all of its secrets to workflow B, by using the `inherit` keyword, but workflow B only passes one secret to workflow C. Any of the other secrets passed to workflow B are not available to workflow C.
{% raw %}
```yaml
jobs:
  workflowA-calls-workflowB:
@@ -330,6 +343,7 @@ jobs:
    secrets:
      envPAT: ${{ secrets.envPAT }} # pass just this secret
```
{% endraw %}
### Access and permissions
@@ -351,6 +365,7 @@ That means if the last successful completing reusable workflow sets an empty str
The following reusable workflow has a single job containing two steps. In each of these steps we set a single word as the output: "hello" and "world." In the `outputs` section of the job, we map these step outputs to job outputs called `output1` and `output2`. In the `on.workflow_call.outputs` section we then define two outputs for the workflow itself, one called `firstword` which we map to `output1`, and one called `secondword` which we map to `output2`.
{% raw %}
```yaml copy
name: Reusable workflow
@@ -387,11 +402,13 @@ jobs:
run: echo "::set-output name=secondword::world" run: echo "::set-output name=secondword::world"
{%- endif %}{% raw %} {%- endif %}{% raw %}
``` ```
{% endraw %} {% endraw %}
We can now use the outputs in the caller workflow, in the same way you would use the outputs from a job within the same workflow. We reference the outputs using the names defined at the workflow level in the reusable workflow: `firstword` and `secondword`. In this workflow, `job1` calls the reusable workflow and `job2` prints the outputs from the reusable workflow ("hello world") to standard output in the workflow log. We can now use the outputs in the caller workflow, in the same way you would use the outputs from a job within the same workflow. We reference the outputs using the names defined at the workflow level in the reusable workflow: `firstword` and `secondword`. In this workflow, `job1` calls the reusable workflow and `job2` prints the outputs from the reusable workflow ("hello world") to standard output in the workflow log.
{% raw %}
```yaml copy
name: Call a reusable workflow and use its outputs
@@ -408,6 +425,7 @@ jobs:
    steps:
      - run: echo ${{ needs.job1.outputs.firstword }} ${{ needs.job1.outputs.secondword }}
```
{% endraw %}
For more information on using job outputs, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idoutputs)."

View File

@@ -256,6 +256,7 @@ Creates a warning message and prints the message to the log. {% data reusables.a
```bash copy
echo "::warning file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endbash %}
{% powershell %}
@@ -524,6 +525,7 @@ jobs:
echo "::add-mask::$RETRIEVED_SECRET" echo "::add-mask::$RETRIEVED_SECRET"
echo "We retrieved our masked secret: $RETRIEVED_SECRET" echo "We retrieved our masked secret: $RETRIEVED_SECRET"
``` ```
{% endbash %} {% endbash %}
{% powershell %} {% powershell %}
@@ -563,6 +565,7 @@ jobs:
echo "::add-mask::$Retrieved_Secret" echo "::add-mask::$Retrieved_Secret"
echo "We retrieved our masked secret: $Retrieved_Secret" echo "We retrieved our masked secret: $Retrieved_Secret"
``` ```
{% endpowershell %} {% endpowershell %}
## Stopping and starting workflow commands ## Stopping and starting workflow commands
@@ -603,6 +606,7 @@ jobs:
echo "::$stopMarker::" echo "::$stopMarker::"
echo '::warning:: This is a warning again, because stop-commands has been turned off.' echo '::warning:: This is a warning again, because stop-commands has been turned off.'
``` ```
{% endbash %} {% endbash %}
{% powershell %} {% powershell %}
@@ -715,6 +719,7 @@ This example uses JavaScript to run the `save-state` command. The resulting envi
```javascript copy
console.log('::save-state name=processID::12345')
```
{% endif %}
The `STATE_processID` variable is then exclusively available to the cleanup script running under the `main` action. This example runs in `main` and uses JavaScript to display the value assigned to the `STATE_processID` environment variable:
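A minimal sketch of such a `main` script, assuming the state is simply printed to the log (the message text is illustrative, not taken from the original):
```javascript
// Reads the state saved by the `pre:` script from the environment and logs it.
console.log('The running PID from the main action is: ' + process.env.STATE_processID)
```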
@@ -888,6 +893,7 @@ Sets a step's output parameter. Note that the step will need an `id` to be defin
```bash copy
echo "{name}={value}" >> "$GITHUB_OUTPUT"
```
{% endbash %}
{% powershell %}
@@ -1087,6 +1093,7 @@ Prepends a directory to the system `PATH` variable and automatically makes it av
```bash copy
echo "{path}" >> $GITHUB_PATH
```
{% endbash %}
{% powershell %}

View File

@@ -37,9 +37,11 @@ This value can include expressions and can reference the [`github`](/actions/lea
### Example of `run-name`
{% raw %}
```yaml
run-name: Deploy to ${{ inputs.deploy_target }} by @${{ github.actor }}
```
{% endraw %}
{% endif %}
@@ -88,6 +90,7 @@ If a caller workflow passes an input that is not specified in the called workflo
### Example of `on.workflow_call.inputs`
{% raw %}
```yaml
on:
  workflow_call:
@@ -106,6 +109,7 @@ jobs:
      - name: Print the input name to STDOUT
        run: echo The username is ${{ inputs.username }}
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows)."
@@ -123,6 +127,7 @@ In the example below, two outputs are defined for this reusable workflow: `workf
### Example of `on.workflow_call.outputs`
{% raw %}
```yaml
on:
  workflow_call:
@@ -135,6 +140,7 @@ on:
description: "The second job output" description: "The second job output"
value: ${{ jobs.my_job.outputs.job_output2 }} value: ${{ jobs.my_job.outputs.job_output2 }}
``` ```
{% endraw %} {% endraw %}
For information on how to reference a job output, see [`jobs.<job_id>.outputs`](#jobsjob_idoutputs). For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows)." For information on how to reference a job output, see [`jobs.<job_id>.outputs`](#jobsjob_idoutputs). For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows)."
@@ -156,6 +162,7 @@ If a caller workflow passes a secret that is not specified in the called workflo
### Example of `on.workflow_call.secrets`
{% raw %}
```yaml
on:
  workflow_call:
@@ -181,6 +188,7 @@ jobs:
    secrets:
      token: ${{ secrets.access-token }}
```
{% endraw %}
## `on.workflow_call.secrets.<secret_id>`
@@ -326,6 +334,7 @@ You can run an unlimited number of steps as long as you are within the workflow
### Example of `jobs.<job_id>.steps`
{% raw %}
```yaml
name: Greeting from Mona
@@ -345,6 +354,7 @@ jobs:
        run: |
          echo $MY_VAR $FIRST_NAME $MIDDLE_NAME $LAST_NAME.
```
{% endraw %}
## `jobs.<job_id>.steps[*].id`
@@ -388,6 +398,7 @@ Secrets cannot be directly referenced in `if:` conditionals. Instead, consider s
If a secret has not been set, the return value of an expression referencing the secret (such as {% raw %}`${{ secrets.SuperSecret }}`{% endraw %} in the example) will be an empty string.
{% raw %}
```yaml
name: Run a step if a secret has been set
on: push
@@ -402,6 +413,7 @@ jobs:
      - if: ${{ env.super_secret == '' }}
        run: echo 'This step will only run if the secret does not have a value set.'
```
{% endraw %}
For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts#context-availability)" and "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."
@@ -513,6 +525,7 @@ jobs:
      - name: My first step
        uses: docker://ghcr.io/OWNER/IMAGE_NAME
```
{% endif %}
### Example: Using a Docker public registry action
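A minimal sketch of this pattern, assuming a public Docker Hub image such as `alpine` (the image and tag are illustrative, not taken from the original):
```yaml
jobs:
  my_first_job:
    steps:
      - name: My first step
        uses: docker://alpine:3.8
```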
@@ -714,6 +727,7 @@ A `string` that defines the inputs for a Docker container. {% data variables.pro
### Example of `jobs.<job_id>.steps[*].with.args`
{% raw %}
```yaml
steps:
  - name: Explain why this job ran
@@ -722,6 +736,7 @@ steps:
      entrypoint: /bin/echo
      args: The ${{ github.event_name }} event triggered this step.
```
{% endraw %}
The `args` are used in place of the `CMD` instruction in a `Dockerfile`. If you use `CMD` in your `Dockerfile`, use the guidelines ordered by preference:
@@ -757,6 +772,7 @@ Public actions may specify expected variables in the README file. If you are set
### Example of `jobs.<job_id>.steps[*].env`
{% raw %}
```yaml
steps:
  - name: My first action
@@ -765,6 +781,7 @@ steps:
      FIRST_NAME: Mona
      LAST_NAME: Octocat
```
{% endraw %}
## `jobs.<job_id>.steps[*].continue-on-error`
@@ -840,6 +857,7 @@ Prevents a workflow run from failing when a job fails. Set to `true` to allow a
You can allow specific jobs in a job matrix to fail without failing the workflow run. For example, you could allow only an experimental job with `node` set to `15` to fail without failing the workflow run.
{% raw %}
```yaml
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.experimental }}
@@ -854,6 +872,7 @@ strategy:
        os: ubuntu-latest
        experimental: true
```
{% endraw %}
## `jobs.<job_id>.container`
@@ -927,6 +946,7 @@ The Docker image to use as the service container to run the action. The value ca
### Example of `jobs.<job_id>.services.<service_id>.credentials`
{% raw %}
```yaml
services:
  myservice1:
@@ -940,6 +960,7 @@ services:
      username: ${{ secrets.DOCKER_USER }}
      password: ${{ secrets.DOCKER_PASSWORD }}
```
{% endraw %}
## `jobs.<job_id>.services.<service_id>.env`
@@ -1026,6 +1047,7 @@ Any secrets that you pass must match the names defined in the called workflow.
### Example of `jobs.<job_id>.secrets`
{% raw %}
```yaml
jobs:
  call-workflow:
@@ -1033,6 +1055,7 @@ jobs:
    secrets:
      access-token: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
```
{% endraw %}
{% ifversion actions-inherit-secrets-reusable-workflows %}

View File

@@ -78,42 +78,55 @@ For example, you can enable any {% data variables.product.prodname_GH_advanced_s
1. Enable features for {% data variables.product.prodname_GH_advanced_security %}.
   - To enable {% data variables.product.prodname_code_scanning_caps %}, enter the following commands.
     ```shell
     ghe-config app.minio.enabled true
     ghe-config app.code-scanning.enabled true
     ```
   - To enable {% data variables.product.prodname_secret_scanning_caps %}, enter the following command.
     ```shell
     ghe-config app.secret-scanning.enabled true
     ```
   - To enable the dependency graph, enter the following {% ifversion ghes %}command{% else %}commands{% endif %}.
     {% ifversion ghes %}```shell
     ghe-config app.dependency-graph.enabled true
     ```
     {% else %}```shell
     ghe-config app.github.dependency-graph-enabled true
     ghe-config app.github.vulnerability-alerting-and-settings-enabled true
     ```{% endif %}
2. Optionally, disable features for {% data variables.product.prodname_GH_advanced_security %}.
   - To disable {% data variables.product.prodname_code_scanning %}, enter the following commands.
     ```shell
     ghe-config app.minio.enabled false
     ghe-config app.code-scanning.enabled false
     ```
   - To disable {% data variables.product.prodname_secret_scanning %}, enter the following command.
     ```shell
     ghe-config app.secret-scanning.enabled false
     ```
   - To disable the dependency graph, enter the following {% ifversion ghes %}command{% else %}commands{% endif %}.
     {% ifversion ghes %}```shell
     ghe-config app.dependency-graph.enabled false
     ```
     {% else %}```shell
     ghe-config app.github.dependency-graph-enabled false
     ghe-config app.github.vulnerability-alerting-and-settings-enabled false
     ```{% endif %}
3. Apply the configuration.
   ```shell
   ghe-config-apply
   ```

View File

@@ -40,6 +40,7 @@ Before configuring {% data variables.product.prodname_dependabot %}, install Doc
docker pull ghcr.io/dependabot/dependabot-updater-github-actions:VERSION@SHA
docker pull ghcr.io/dependabot/dependabot-updater-npm:VERSION@SHA
```
{%- endif %}
{% note %}

View File

@@ -42,6 +42,7 @@ If {% data variables.location.product_location %} uses clustering, you cannot en
1. In the administrative shell, enable the dependency graph on {% data variables.location.product_location %}:
   {% ifversion ghes %}```shell
   ghe-config app.dependency-graph.enabled true
   ```
   {% else %}```shell
   ghe-config app.github.dependency-graph-enabled true

View File

@@ -24,8 +24,10 @@ The first time that you access the {% data variables.enterprise.management_conso
## Accessing the {% data variables.enterprise.management_console %} as an unauthenticated user
1. Visit this URL in your browser, replacing `hostname` with your actual {% data variables.product.prodname_ghe_server %} hostname or IP address:
   ```shell
   http(s)://HOSTNAME/setup
   ```
{% data reusables.enterprise_management_console.type-management-console-password %}
{% data reusables.enterprise_management_console.click-continue-authentication %}

View File

@@ -65,4 +65,5 @@ Your instance validates the hostnames for proxy exclusion using the list of IANA
```shell
ghe-config noproxy.exception-tld-list "COMMA-SEPARATED-TLD-LIST"
```
{% data reusables.enterprise.apply-configuration %}

View File

@@ -30,6 +30,7 @@ We do not recommend customizing UFW as it can complicate some troubleshooting is
{% data reusables.enterprise_installation.ssh-into-instance %}
2. To view the default firewall rules, use the `sudo ufw status` command. You should see output similar to this:
   ```shell
   $ sudo ufw status
   > Status: active
@@ -67,10 +68,13 @@ We do not recommend customizing UFW as it can complicate some troubleshooting is
1. Configure a custom firewall rule.
2. Check the status of each new rule with the `status numbered` command.
   ```shell
   sudo ufw status numbered
   ```
3. To back up your custom firewall rules, use the `cp` command to move the rules to a new file.
   ```shell
   sudo cp -r /etc/ufw ~/ufw.backup
   ```
@@ -89,14 +93,19 @@ If something goes wrong after you change the firewall rules, you can reset the r
{% data reusables.enterprise_installation.ssh-into-instance %}
2. To restore the previous backup rules, copy them back to the firewall with the `cp` command.
   ```shell
   sudo cp -f ~/ufw.backup/*rules /etc/ufw
   ```
3. Restart the firewall with the `systemctl` command.
   ```shell
   sudo systemctl restart ufw
   ```
4. Confirm that the rules are back to their defaults with the `ufw status` command.
   ```shell
   $ sudo ufw status
   > Status: active

View File

@@ -33,6 +33,7 @@ $ ghe-announce -u
{% ifversion ghe-announce-dismiss %}
To allow each user to dismiss the announcement for themselves, use the `-d` flag.
```shell
# Sets a user-dismissible message that's visible to everyone
$ ghe-announce -d -s MESSAGE
@@ -41,6 +42,7 @@ $ ghe-announce -d -s MESSAGE
$ ghe-announce -u
> Removed the announcement message, which was user dismissible: MESSAGE
```
{% endif %}
You can also set an announcement banner using the enterprise settings on {% data variables.product.product_name %}. For more information, see "[AUTOTITLE](/admin/user-management/managing-users-in-your-enterprise/customizing-user-messages-for-your-enterprise#creating-a-global-announcement-banner)."
@@ -88,6 +90,7 @@ This utility cleans up a variety of caches that might potentially take up extra
```shell
ghe-cleanup-caches
```
### ghe-cleanup-settings
This utility wipes all existing {% data variables.enterprise.management_console %} settings.
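A minimal invocation, on the assumption that this utility, like `ghe-cleanup-caches`, takes no arguments:
```shell
ghe-cleanup-settings
```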
@@ -114,6 +117,7 @@ $ ghe-config core.github-hostname URL
$ ghe-config -l
# Lists all the configuration values
```
Allows you to find the universally unique identifier (UUID) of your node in `cluster.conf`.
```shell
@@ -157,6 +161,7 @@ ghe-dbconsole
This utility returns a summary of Elasticsearch indexes in CSV format.
Print an index summary with a header row to `STDOUT`:
```shell
$ ghe-es-index-status -do
> warning: parser/current is loading parser/ruby23, which recognizes
@@ -424,12 +429,14 @@ ghe-ssh-check-host-keys
```
If a leaked host key is found, the utility exits with status `1` and a message:
```shell
> One or more of your SSH host keys were found in the blacklist.
> Please reset your host keys using ghe-ssh-roll-host-keys.
```
If a leaked host key was not found, the utility exits with status `0` and a message:
```shell
> The SSH host keys were not found in the SSH host key blacklist.
> No additional steps are needed/recommended at this time.
@@ -568,6 +575,7 @@ ghe-webhook-logs -f -a YYYY-MM-DD
The date format should be `YYYY-MM-DD`, `YYYY-MM-DD HH:MM:SS`, or `YYYY-MM-DD HH:MM:SS (+/-) HH:MM`.
To show the full hook payload, result, and any exceptions for the delivery:
```shell
ghe-webhook-logs -g DELIVERY_GUID
```
@@ -639,21 +647,25 @@ By default, the command creates the tarball in _/tmp_, but you can also have it
{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}
To create a standard bundle:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -o' > cluster-support-bundle.tgz
```
To create a standard bundle including data from the last 3 hours:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}3hours {% elsif ghes < 3.9 %}'3 hours' {% endif %} -o" > support-bundle.tgz
```
To create a standard bundle including data from the last 2 days:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}2days {% elsif ghes < 3.9 %}'2 days' {% endif %} -o" > support-bundle.tgz
```
To create a standard bundle including data from the last 4 days and 8 hours:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}4days8hours {% elsif ghes < 3.9 %}'4 days 8 hours' {% endif %} -o" > support-bundle.tgz
```
@@ -665,11 +677,13 @@ ssh -p 122 admin@HOSTNAME -- ghe-cluster-support-bundle -x -o' > cluster-support
```
To send a bundle to {% data variables.contact.github_support %}:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -u'
```
To send a bundle to {% data variables.contact.github_support %} and associate the bundle with a ticket:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -t TICKET_ID'
```
@@ -683,11 +697,13 @@ ghe-dpages
```
To show a summary of repository location and health:
```shell
ghe-dpages status
```
To evacuate a {% data variables.product.prodname_pages %} storage service before evacuating a cluster node:
```shell
ghe-dpages evacuate pages-server-UUID
```
@@ -709,6 +725,7 @@ ghe-spokesctl routes
```
To evacuate storage services on a cluster node:
```shell
ghe-spokesctl server set evacuating git-server-UUID
```
@@ -983,6 +1000,7 @@ For more information, please see our guides on [migrating data to and from your
### git-import-detect
Given a URL, detect which type of source control management system is at the other end. During a manual import this is likely already known, but this can be very useful in automated scripts.
```shell
git-import-detect
```
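A usage sketch, assuming the source URL is passed as the sole argument (the URL is a placeholder):
```shell
git-import-detect https://svn.example.com/repo
```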
@@ -990,6 +1008,7 @@ git-import-detect
### git-import-hg-raw
This utility imports a Mercurial repository to this Git repository. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."
```shell
git-import-hg-raw
```
@@ -997,6 +1016,7 @@ git-import-hg-raw
### git-import-svn-raw
This utility imports Subversion history and file data into a Git branch. This is a straight copy of the tree, ignoring any trunk or branch distinction. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."
```shell
git-import-svn-raw
```
@@ -1004,6 +1024,7 @@ git-import-svn-raw
### git-import-tfs-raw
This utility imports from Team Foundation Version Control (TFVC). For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."
```shell
git-import-tfs-raw
```
@@ -1011,6 +1032,7 @@ git-import-tfs-raw
### git-import-rewrite
This utility rewrites the imported repository. This gives you a chance to rename authors and, for Subversion and TFVC, produces Git branches based on folders. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."
```shell
git-import-rewrite
```
@@ -1047,31 +1069,37 @@ By default, the command creates the tarball in _/tmp_, but you can also have it
{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}
To create a standard bundle:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -o' > support-bundle.tgz
```
To create a standard bundle including data from the last 3 hours:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}3hours {% elsif ghes < 3.9 %}'3 hours' {% endif %} -o" > support-bundle.tgz
```
To create a standard bundle including data from the last 2 days:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}2days {% elsif ghes < 3.9 %}'2 days' {% endif %} -o" > support-bundle.tgz
```
To create a standard bundle including data from the last 4 days and 8 hours:
```shell
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}4days8hours {% elsif ghes < 3.9 %}'4 days 8 hours' {% endif %} -o" > support-bundle.tgz
```
To create an extended bundle including data from the last 8 days:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -x -o' > support-bundle.tgz
```
To send a bundle to {% data variables.contact.github_support %}:
```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -u'
```
@@ -1087,11 +1115,13 @@ ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -t TICKET_ID'
This utility sends information from your appliance to {% data variables.product.prodname_enterprise %} support. You can either specify a local file, or provide a stream of up to 100MB of data via `STDIN`. The uploaded data can optionally be associated with a support ticket.
To send a file to {% data variables.contact.github_support %} and associate the file with a ticket:
```shell
ghe-support-upload -f FILE_PATH -t TICKET_ID
```
To upload data via `STDIN` and associate the data with a ticket:
```shell
ghe-repl-status -vv | ghe-support-upload -t TICKET_ID -d "Verbose Replication Status"
```
@@ -1143,11 +1173,13 @@ ssh -p 122 admin@HOSTNAME -- 'ghe-update-check'
This utility installs or verifies an upgrade package. You can also use this utility to roll back a patch release if an upgrade fails or is interrupted. For more information, see "[AUTOTITLE](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources/upgrading-github-enterprise-server)."
To verify an upgrade package:
```shell
ghe-upgrade --verify UPGRADE-PACKAGE-FILENAME
```
To install an upgrade package:
```shell
ghe-upgrade UPGRADE-PACKAGE-FILENAME
```
@@ -1161,17 +1193,20 @@ This utility manages scheduled installation of upgrade packages. You can show, c
The `ghe-upgrade-scheduler` utility is best suited for scheduling hotpatch upgrades, which do not require maintenance mode or a reboot in most cases. This utility is not practical for full package upgrades, which require an administrator to manually set maintenance mode, reboot the instance, and unset maintenance mode. For more information about the different types of upgrades, see "[AUTOTITLE](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources/upgrading-github-enterprise-server#upgrading-with-an-upgrade-package)."
To schedule a new installation for a package:
```shell
ghe-upgrade-scheduler -c "0 2 15 12 *" UPGRADE-PACKAGE-FILENAME
```
To show scheduled installations for a package:
```shell
$ ghe-upgrade-scheduler -s UPGRADE-PACKAGE-FILENAME
> 0 2 15 12 * /usr/local/bin/ghe-upgrade -y -s UPGRADE-PACKAGE-FILENAME > /data/user/common/UPGRADE-PACKAGE-FILENAME.log 2>&1
```
To remove scheduled installations for a package:
```shell
ghe-upgrade-scheduler -r UPGRADE-PACKAGE-FILENAME
```

View File

@@ -73,17 +73,20 @@ Backup snapshots are written to the disk path set by the `GHE_DATA_DIR` data dir
```
git clone https://github.com/github/backup-utils.git /path/to/target/directory/backup-utils
```
1. To change into the local repository directory, run the following command.
   ```
   cd backup-utils
   ```
{% data reusables.enterprise_backup_utilities.enterprise-backup-utils-update-repo %}
1. To copy the included `backup.config-example` file to `backup.config`, run the following command.
   ```shell
   cp backup.config-example backup.config
   ```
1. To customize your configuration, edit `backup.config` in a text editor.
1. Set the `GHE_HOSTNAME` value to your primary {% data variables.product.prodname_ghe_server %} instance's hostname or IP address, as sketched below.
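For illustration, the relevant lines of `backup.config` might look like the following; the hostname and data directory path are placeholders, not taken from the original:
```shell
GHE_HOSTNAME="github.example.com"
GHE_DATA_DIR="/data/github-backups"
```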
@@ -101,6 +104,7 @@ Backup snapshots are written to the disk path set by the `GHE_DATA_DIR` data dir
```shell
./bin/ghe-host-check
```
1. To create an initial full backup, run the following command.
   ```shell
@@ -168,11 +172,13 @@ If your backup host has internet connectivity and you previously used a compress
```
git clone https://github.com/github/backup-utils.git
```
1. To change into the cloned repository, run the following command.
   ```
   cd backup-utils
   ```
{% data reusables.enterprise_backup_utilities.enterprise-backup-utils-update-repo %}
1. To restore your backup configuration from earlier, copy your existing backup configuration file to the local repository directory. Replace the path in the command with the location of the file saved in step 2.

View File

@@ -39,9 +39,11 @@ To improve security for clients that connect to {% data variables.location.produ
```shell
ghe-config app.babeld.host-key-ed25519 true
```
1. Optionally, enter the following command to disable generation and advertisement of the Ed25519 host key.
   ```shell
   ghe-config app.babeld.host-key-ed25519 false
   ```
{% data reusables.enterprise.apply-configuration %}

View File

@@ -101,16 +101,19 @@ By default, the rate limit for {% data variables.product.prodname_actions %} is
ghe-config actions-rate-limiting.enabled true
ghe-config actions-rate-limiting.queue-runs-per-minute RUNS-PER-MINUTE
```
1. To disable the rate limit after it's been enabled, run the following command.
   ```
   ghe-config actions-rate-limiting.enabled false
   ```
1. To apply the configuration, run the following command.
   ```
   ghe-config-apply
   ```
1. Wait for the configuration run to complete.
{% endif %}

View File

@@ -49,4 +49,5 @@ For more information, see [{% data variables.product.prodname_blog %}](https://g
```shell
ghe-config app.gitauth.rsa-sha1 false
```
{% data reusables.enterprise.apply-configuration %}

View File

@@ -33,11 +33,13 @@ You can enable web commit signing, rotate the private key used for web commit si
```bash copy
ghe-config app.github.web-commit-signing-enabled true
```
1. Apply the configuration, then wait for the configuration run to complete.
```bash copy
ghe-config-apply
```
1. Create a new user on {% data variables.location.product_location %} via built-in authentication or external authentication. For more information, see "[AUTOTITLE](/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise)."
- The user's username must be the same username you used when creating the PGP key in step 1 above, for example, `web-flow`.
- The user's email address must be the same address you used when creating the PGP key.
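The key-creation step referenced here falls outside this hunk. Purely as an illustration of generating a matching PGP key (the username, email address, and key parameters are all assumptions), GnuPG can produce one like so:

```shell
# Generate a signing-only RSA key with no expiry for the web-flow user.
gpg --quick-generate-key "web-flow <web-flow@example.com>" rsa4096 sign never
```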
@@ -71,6 +73,7 @@ You can disable web commit signing for {% data variables.location.product_locati
```bash copy
ghe-config app.github.web-commit-signing-enabled false
```
1. Apply the configuration.
```bash copy


@@ -25,10 +25,13 @@ shortTitle: Troubleshoot TLS errors
If you have a Linux machine with OpenSSL installed, you can remove your passphrase.
1. Rename your original key file.
```shell
mv yourdomain.key yourdomain.key.orig
```
2. Generate a new key without a passphrase.
```shell
openssl rsa -in yourdomain.key.orig -out yourdomain.key
```
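To confirm the new key really has no passphrase, a quick check (a generic OpenSSL invocation, not taken from this article) is:

```shell
# Verifies the key's consistency; it should complete without prompting for a passphrase.
openssl rsa -in yourdomain.key -check -noout
```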
@@ -69,14 +72,19 @@ If your {% data variables.product.prodname_ghe_server %} appliance interacts wit
1. Obtain the CA's root certificate from your local certificate authority and ensure it is in PEM format.
2. Copy the file to your {% data variables.product.prodname_ghe_server %} appliance over SSH as the "admin" user on port 122.
```shell
scp -P 122 rootCA.crt admin@HOSTNAME:/home/admin
```
3. Connect to the {% data variables.product.prodname_ghe_server %} administrative shell over SSH as the "admin" user on port 122.
```shell
ssh -p 122 admin@HOSTNAME
```
4. Import the certificate into the system-wide certificate store.
```shell
ghe-ssl-ca-certificate-install -c rootCA.crt
```
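One way to spot-check the import from the appliance is to attempt a TLS handshake against the internal service whose CA you just trusted, looking for `Verify return code: 0 (ok)` in the output (a generic OpenSSL check; the hostname and port are assumptions):

```shell
# Attempt a TLS handshake, trusting only the CA certificate you just imported.
openssl s_client -connect INTERNAL-SERVICE-HOSTNAME:443 -CAfile rootCA.crt
```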


@@ -63,9 +63,11 @@ To verify your enterprise account's domain, you must have access to modify domai
{% data reusables.organizations.add-domain %}
{% data reusables.organizations.add-dns-txt-record %}
1. Wait for your DNS configuration to change, which may take up to 72 hours. You can confirm your DNS configuration has changed by running the `dig` command on the command line, replacing `ENTERPRISE-ACCOUNT` with the name of your enterprise account, and `DOMAIN-NAME` with the domain you'd like to verify. You should see your new TXT record listed in the command output.
```shell
dig _github-challenge-ENTERPRISE-ACCOUNT.DOMAIN-NAME +nostats +nocomments +nocmd TXT
```
1. After confirming your TXT record is added to your DNS, follow steps one through four above to navigate to your enterprise account's approved and verified domains.
{% data reusables.enterprise-accounts.continue-verifying-domain %}
1. Optionally, after the "Verified" badge is visible on your organizations' profiles, delete the TXT entry from the DNS record at your domain hosting service.


@@ -45,24 +45,30 @@ Then, when told to fetch `https://github.example.com/myorg/myrepo`, Git will ins
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the repository cache, run `ghe-repl-setup` again.
- If the repository cache is your only additional node, no arguments are required.
```shell
ghe-repl-setup PRIMARY-IP
```
- If you're configuring a repository cache in addition to one or more existing replicas, use the `-a` or `--add` argument.
```
ghe-repl-setup -a PRIMARY-IP
```
{% ifversion ghes < 3.6 %}
1. If you haven't already, set the datacenter name on the primary and any replica appliances, replacing DC-NAME with a datacenter name.
```
ghe-repl-node --datacenter DC-NAME
```
1. Set a `cache-location` for the repository cache, replacing CACHE-LOCATION with an alphanumeric identifier, such as the region where the cache is deployed. Also set a datacenter name for this cache; new caches will attempt to seed from another cache in the same datacenter.
```shell
ghe-repl-node --cache CACHE-LOCATION --datacenter REPLICA-DC-NAME
```
{% else %}
1. To configure the repository cache, use the `ghe-repl-node` command and include the necessary parameters.
- Set a `cache-location` for the repository cache, replacing _CACHE-LOCATION_ with an alphanumeric identifier, such as the region where the cache is deployed. The _CACHE-LOCATION_ value must not be any of the subdomains reserved for use with subdomain isolation, such as `assets` or `media`. For a list of reserved names, see "[AUTOTITLE](/admin/configuration/configuring-network-settings/enabling-subdomain-isolation#about-subdomain-isolation)."
@@ -72,11 +78,13 @@ Then, when told to fetch `https://github.example.com/myorg/myrepo`, Git will ins
```
ghe-repl-node --datacenter DC-NAME
```
- New caches will attempt to seed from another cache in the same datacenter. Set a `datacenter` for the repository cache, replacing REPLICA-DC-NAME with the name of the datacenter where you're deploying the node.
```shell
ghe-repl-node --cache CACHE-LOCATION --cache-domain EXTERNAL-CACHE-DOMAIN --datacenter REPLICA-DC-NAME
```
{% endif %}
{% data reusables.enterprise_installation.replication-command %}
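The hunk header above mentions Git transparently fetching from the cache via URL rewriting. As a client-side sketch (both hostnames are placeholders, not values from this article), that rewrite is configured with `insteadOf`:

```shell
# Redirect fetches of the primary host to the repository cache.
git config --global url."https://europe.github.example.com/".insteadOf "https://github.example.com/"
```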


@@ -112,9 +112,11 @@ We strongly recommend enabling PROXY support for both your instance and the load
{% data reusables.enterprise_installation.proxy-incompatible-with-aws-nlbs %}
- For your instance, use this command:
```shell
ghe-config 'loadbalancer.proxy-protocol' 'true' && ghe-cluster-config-apply
```
- For the load balancer, use the instructions provided by your vendor.
{% data reusables.enterprise_clustering.proxy_protocol_ports %}


@@ -95,13 +95,17 @@ If you plan to take a node offline and the node runs any of the following roles,
- Command (replace REASON FOR EVACUATION with the reason for evacuation):
{% ifversion ghe-spokes-deprecation-phase-1 %}
```shell
ghe-spokesctl server set evacuating git-server-UUID 'REASON FOR EVACUATION'
```
{% else %}
```shell
ghe-spokes server evacuate git-server-UUID 'REASON FOR EVACUATION'
```
{% endif %}
- `pages-server`:
@@ -136,13 +140,17 @@ If you plan to take a node offline and the node runs any of the following roles,
- `git-server`:
{% ifversion ghe-spokes-deprecation-phase-1 %}
```shell
ghe-spokesctl server evac-status git-server-UUID
```
{% else %}
```shell
ghe-spokes evac-status git-server-UUID
```
{% endif %}
- `pages-server`:


@@ -56,11 +56,13 @@ By default, {% data variables.product.prodname_nes %} is disabled. You can enabl
```shell copy
ghe-config app.nes.enabled
```
1. To enable {% data variables.product.prodname_nes %}, run the following command.
```shell copy
ghe-config app.nes.enabled true
```
{% data reusables.enterprise.apply-configuration %}
1. To verify that {% data variables.product.prodname_nes %} is running, from any node, run the following command.
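The verification command itself sits in the lines omitted between these hunks. Judging from the Nomad commands this article uses further down, it is presumably along these lines (an assumption, not the article's literal text):

```shell
# Show the status of the nes job in Nomad.
nomad job status nes
```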
@@ -78,11 +80,13 @@ To determine how {% data variables.product.prodname_nes %} notifies you, you can
```shell copy
nes get-node-ttl all
```
1. To set the TTL for the `fail` state, run the following command. Replace MINUTES with the number of minutes to use for failures.
```shell copy
nes set-node-ttl fail MINUTES
```
1. To set the TTL for the `warn` state, run the following command. Replace MINUTES with the number of minutes to use for warnings.
```shell copy
@@ -104,6 +108,7 @@ To manage whether {% data variables.product.prodname_nes %} can take a node and
```shell copy
nes set-node-adminaction approved HOSTNAME
```
- To revoke {% data variables.product.prodname_nes %}'s ability to take a node offline, run the following command. Replace HOSTNAME with the node's hostname.
```shell copy
@@ -127,11 +132,13 @@ After {% data variables.product.prodname_nes %} detects that a node has exceeded
```shell copy
nes get-node-adminaction HOSTNAME
```
1. If the `adminaction` state is currently set to `approved`, change the state to `none` by running the following command. Replace HOSTNAME with the hostname of the ineligible node.
```shell copy
nes set-node-adminaction none HOSTNAME
```
1. To ensure the node is in a healthy state, run the following command and confirm that the node's status is `ready`.
```shell copy
@@ -143,11 +150,13 @@ After {% data variables.product.prodname_nes %} detects that a node has exceeded
```shell copy
nomad node eligibility -enable -self
```
1. To update the node's eligibility in {% data variables.product.prodname_nes %}, run the following command. Replace HOSTNAME with the node's hostname.
```shell copy
nes set-node-eligibility eligible HOSTNAME
```
1. Wait 30 seconds, then check the cluster's health to confirm the target node is eligible by running the following command.
```shell copy
@@ -164,6 +173,7 @@ You can view logs for {% data variables.product.prodname_nes %} from any node in
```shell copy
nomad alloc logs -job nes
```
1. Alternatively, you can view logs for {% data variables.product.prodname_nes %} on the node that runs the service. The service writes logs to the systemd journal.
- To determine which node runs {% data variables.product.prodname_nes %}, run the following command.
@@ -171,6 +181,7 @@ You can view logs for {% data variables.product.prodname_nes %} from any node in
```shell copy
nomad job status "nes" | grep running | grep "${nomad_node_id}" | awk 'NR==2{ print $1 }' | xargs nomad alloc status | grep "Node Name"
```
- To view logs on the node, connect to the node via SSH, then run the following command.
```shell copy


@@ -39,6 +39,7 @@ admin@ghe-data-node-0:~$ ghe-cluster-status | grep error
> mysql-replication ghe-data-node-0: error Stopped
> mysql cluster: error
```
{% note %}
**Note:** If there are no failing tests, this command produces no output. This indicates the cluster is healthy.
@@ -55,6 +56,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
### Configuring the Nagios host
1. Generate an SSH key with a blank passphrase. Nagios uses this to authenticate to the {% data variables.product.prodname_ghe_server %} cluster.
```shell
nagiosuser@nagios:~$ ssh-keygen -t ed25519
> Generating public/private ed25519 key pair.
@@ -64,6 +66,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
> Your identification has been saved in /home/nagiosuser/.ssh/id_ed25519.
> Your public key has been saved in /home/nagiosuser/.ssh/id_ed25519.pub.
```
{% danger %}
**Security Warning:** An SSH key without a passphrase can pose a security risk if authorized for full access to a host. Limit this key's authorization to a single read-only command.
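As that warning suggests, the key should be locked down to one read-only command. A sketch of an `authorized_keys` entry that does this, using standard OpenSSH `command=` and `no-*` options (the key material and comment are placeholders):

```shell
# In the admin user's ~/.ssh/authorized_keys on the cluster node: restrict this
# key to running the status check only, with no forwarding and no TTY.
command="ghe-cluster-status -n",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... nagios-check
```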
@@ -72,12 +75,14 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
{% note %}
**Note:** If you're using a distribution of Linux that doesn't support the Ed25519 algorithm, use the command:
```shell
nagiosuser@nagios:~$ ssh-keygen -t rsa -b 4096
```
{% endnote %}
2. Copy the private key (`id_ed25519`) to the `nagios` home folder and set the appropriate ownership.
```shell
nagiosuser@nagios:~$ sudo cp .ssh/id_ed25519 /var/lib/nagios/.ssh/
nagiosuser@nagios:~$ sudo chown nagios:nagios /var/lib/nagios/.ssh/id_ed25519
@@ -95,6 +100,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
```
5. To test that the Nagios plugin can successfully execute the command, run it interactively from the Nagios host.
```shell
nagiosuser@nagios:~$ /usr/lib/nagios/plugins/check_by_ssh -l admin -p 122 -H HOSTNAME -C "ghe-cluster-status -n" -t 30
> OK - No errors detected
@@ -110,6 +116,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
command_line $USER1$/check_by_ssh -H $HOSTADDRESS$ -C "ghe-cluster-status -n" -l admin -p 122 -t 30
}
```
7. Add this command to a service definition for a node in the {% data variables.product.prodname_ghe_server %} cluster.
**Example definition**


@@ -31,6 +31,7 @@ In some cases, such as hardware failure, the underlying software that manages
```shell copy
ghe-cluster-balance status
```
1. If a job is not properly distributed, inspect the allocations by running the following command. Replace JOB with a single job or comma-delimited list of jobs.
```shell copy
@@ -71,11 +72,13 @@ You can schedule rebalancing of jobs on your cluster by setting and applying con
```shell copy
ghe-config app.cluster-rebalance.enabled true
```
1. Optionally, you can override the default schedule by defining a cron expression. For example, run the following command to balance jobs every three hours.
```shell copy
ghe-config app.cluster-rebalance.schedule '0 */3 * * *'
```
{% data reusables.enterprise.apply-configuration %}
## Further reading


@@ -25,6 +25,7 @@ topics:
1. Back up your data with [{% data variables.product.prodname_enterprise_backup_utilities %}](https://github.com/github/backup-utils#readme).
2. From the administrative shell of any node, use the `ghe-cluster-hotpatch` command to install the latest hotpatch. You can provide a URL for a hotpatch, or manually download the hotpatch and specify a local filename.
```shell
ghe-cluster-hotpatch https://HOTPATCH-URL/FILENAME.hpkg
```
@@ -39,6 +40,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
3. Schedule a maintenance window for end users of your {% data variables.product.prodname_ghe_server %} cluster, as it will be unavailable for normal use during the upgrade. Maintenance mode blocks user access and prevents data changes while the cluster upgrade is in progress.
4. On the [{% data variables.product.prodname_ghe_server %} Download Page](https://enterprise.github.com/download), copy the URL for the upgrade _.pkg_ file to the clipboard.
5. From the administrative shell of any node, use the `ghe-cluster-each` command combined with `curl` to download the release package to each node in a single step. Use the URL you copied in the previous step as an argument.
```shell
$ ghe-cluster-each -- "cd /home/admin && curl -L -O https://PACKAGE-URL.pkg"
> ghe-app-node-1: % Total % Received % Xferd Average Speed Time Time Time Current
@@ -57,6 +59,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
> ghe-data-node-3: Dload Upload Total Spent Left Speed
> 100 496M 100 496M 0 0 19.7M 0 0:00:25 0:00:25 --:--:-- 25.5M
```
6. Identify the primary MySQL node, which is defined as `mysql-master = <hostname>` in `cluster.conf`. This node will be upgraded last.
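One way to pull that entry out quickly (the `cluster.conf` path shown is its usual location on cluster nodes, stated here as an assumption):

```shell
# Print the mysql-master line from the cluster configuration file.
grep mysql-master /data/user/common/cluster.conf
```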
### Upgrading the cluster nodes
@@ -64,6 +67,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
1. Enable maintenance mode according to your scheduled window by connecting to the administrative shell of any cluster node and running `ghe-cluster-maintenance -s`.
2. **With the exception of the primary MySQL node**, connect to the administrative shell of each of the {% data variables.product.prodname_ghe_server %} nodes.
Run the `ghe-upgrade` command, providing the package file name you downloaded in Step 4 of [Preparing to upgrade](#preparing-to-upgrade):
```shell
$ ghe-upgrade PACKAGE-FILENAME.pkg
> *** verifying upgrade package signature...
@@ -74,8 +78,10 @@ Run the `ghe-upgrade` command, providing the package file name you downloaded in
> gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
> gpg: Good signature from "GitHub Enterprise (Upgrade Package Key) > <enterprise@github.com>"
```
3. The upgrade process will reboot the node once it completes. Verify that you can `ping` each node after it reboots.
4. Connect to the administrative shell of the primary MySQL node. Run the `ghe-upgrade` command, providing the package file name you downloaded in Step 4 of [Preparing to upgrade](#preparing-to-upgrade):
```shell
$ ghe-upgrade PACKAGE-FILENAME.pkg
> *** verifying upgrade package signature...
@@ -86,6 +92,7 @@ Run the `ghe-upgrade` command, providing the package file name you downloaded in
> gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
> gpg: Good signature from "GitHub Enterprise (Upgrade Package Key) > <enterprise@github.com>"
```
5. The upgrade process will reboot the primary MySQL node once it completes. Verify that you can `ping` each node after it reboots.{% ifversion ghes %}
6. Connect to the administrative shell of the primary MySQL node and run the `ghe-cluster-config-apply` command.
7. When `ghe-cluster-config-apply` is complete, check that the services are in a healthy state by running `ghe-cluster-status`.{% endif %}


@@ -23,15 +23,19 @@ shortTitle: Create HA replica
1. In a browser, navigate to the new replica appliance's IP address and upload your {% data variables.product.prodname_enterprise %} license.
{% data reusables.enterprise_installation.replica-steps %}
1. Connect to the replica appliance's IP address using SSH.
```shell
ssh -p 122 admin@REPLICA_IP
```
{% data reusables.enterprise_installation.generate-replication-key-pair %}
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the new replica, run `ghe-repl-setup` again.
```shell
ghe-repl-setup PRIMARY_IP
```
{% data reusables.enterprise_installation.replication-command %}
{% data reusables.enterprise_installation.verify-replication-channel %}
@@ -42,29 +46,39 @@ This example configuration uses a primary and two replicas, which are located in
{% data reusables.enterprise_clustering.network-latency %} If latency is more than 70 milliseconds, we recommend cache replica nodes instead. For more information, see "[AUTOTITLE](/admin/enterprise-management/caching-repositories/configuring-a-repository-cache)."
1. Create the first replica the same way you would for a standard two node configuration by running `ghe-repl-setup` on the first replica.
```shell
(replica1)$ ghe-repl-setup PRIMARY_IP
(replica1)$ ghe-repl-start
```
2. Create a second replica and use the `ghe-repl-setup --add` command. The `--add` flag prevents it from overwriting the existing replication configuration and adds the new replica to the configuration.
```shell
(replica2)$ ghe-repl-setup --add PRIMARY_IP
(replica2)$ ghe-repl-start
```
3. By default, replicas are configured to the same datacenter, and will now attempt to seed from an existing node in the same datacenter. Configure the replicas for different datacenters by setting a different value for the datacenter option. The specific values can be anything you would like as long as they are different from each other. Run the `ghe-repl-node` command on each node and specify the datacenter.
On the primary:
```shell
(primary)$ ghe-repl-node --datacenter [PRIMARY DC NAME]
```
On the first replica:
```shell
(replica1)$ ghe-repl-node --datacenter [FIRST REPLICA DC NAME]
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --datacenter [SECOND REPLICA DC NAME]
```
{% tip %}
**Tip:** You can set the `--datacenter` and `--active` options at the same time.
@@ -73,14 +87,19 @@ This example configuration uses a primary and two replicas, which are located in
4. An active replica node will store copies of the appliance data and service end user requests. An inactive node will store copies of the appliance data but will be unable to service end user requests. Enable active mode using the `--active` flag or inactive mode using the `--inactive` flag.
On the first replica:
```shell
(replica1)$ ghe-repl-node --active
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --active
```
5. To apply the configuration, use the `ghe-config-apply` command on the primary.
```shell
(primary)$ ghe-config-apply
```
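Once the configuration is applied, replication state can be confirmed with `ghe-repl-status` (shown here as a sketch; run it on each replica node):

```shell
# Expect each service's replication status to be reported as OK.
(replica1)$ ghe-repl-status
```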


@@ -25,6 +25,7 @@ The time required to failover depends on how long it takes to manually promote t
- To use the management console, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)"
- You can also use the `ghe-maintenance -s` command.
```shell
ghe-maintenance -s
```
@@ -44,6 +45,7 @@ The time required to failover depends on how long it takes to manually promote t
```
4. On the replica appliance, to stop replication and promote the replica appliance to primary status, use the `ghe-repl-promote` command. This will also automatically put the primary node in maintenance mode if it's reachable.
```shell
ghe-repl-promote
```
@@ -59,10 +61,13 @@ The time required to failover depends on how long it takes to manually promote t
7. If desired, set up replication from the new primary to existing appliances and the previous primary. For more information, see "[AUTOTITLE](/admin/enterprise-management/configuring-high-availability/about-high-availability-configuration#utilities-for-replication-management)."
8. Appliances that were part of the high availability configuration prior to the failover, and to which you do not intend to set up replication, need to be removed from the high availability configuration by UUID.
- On the former appliances, get their UUID via `cat /data/user/common/uuid`.
```shell
cat /data/user/common/uuid
```
- On the new primary, remove the UUIDs using `ghe-repl-teardown`. Replace *`UUID`* with a UUID you retrieved in the previous step.
```shell
ghe-repl-teardown -u UUID
```


@@ -28,17 +28,23 @@ You can use the former primary appliance as the new replica appliance if the fai
## Configuring a former primary appliance as a new replica
1. Connect to the former primary appliance's IP address using SSH.
```shell
ssh -p 122 admin@FORMER_PRIMARY_IP
```
1. Enable maintenance mode on the former primary appliance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
1. On the former primary appliance, run `ghe-repl-setup` with the IP address of the former replica.
```shell
ghe-repl-setup FORMER_REPLICA_IP
```
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the new primary and enable replica mode for the new replica, run `ghe-repl-setup` again.
```shell
ghe-repl-setup FORMER_REPLICA_IP
```
{% data reusables.enterprise_installation.replication-command %}


@@ -19,10 +19,13 @@ shortTitle: Remove a HA replica
1. If necessary, stop a geo-replication replica from serving user traffic by removing the Geo DNS entries for the replica.
2. On the replica where you wish to temporarily stop replication, run `ghe-repl-stop`.
```shell
ghe-repl-stop
```
3. To start replication again, run `ghe-repl-start`.
```shell
ghe-repl-start
```
@@ -31,10 +34,13 @@ shortTitle: Remove a HA replica
1. If necessary, stop a geo-replication replica from serving user traffic by removing the Geo DNS entries for the replica.
2. On the replica you wish to remove replication from, run `ghe-repl-stop`.
```shell
ghe-repl-stop
```
3. On the replica, to tear down the replication state, run `ghe-repl-teardown`.
```shell
ghe-repl-teardown
```


@@ -28,6 +28,7 @@ SNMP is a common standard for monitoring devices over a network. We strongly rec
4. In the **Community string** field, enter a new community string. If left blank, this defaults to `public`.
{% data reusables.enterprise_management_console.save-settings %}
5. Test your SNMP configuration by running the following command on a separate workstation with SNMP support in your network:
```shell
# community-string is your community string
# hostname is the IP or domain of your Enterprise instance
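# The command itself is cut off at the hunk boundary above; a typical
# net-snmp invocation for this test (an assumption, not this article's literal text) is:
snmpwalk -v 2c -c COMMUNITY-STRING -O e HOSTNAME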
@@ -87,6 +88,7 @@ Of the available MIBs for SNMP, the most useful is `HOST-RESOURCES-MIB` (1.3.6.1
| hrStorageAllocationUnits.1 | 1.3.6.1.2.1.25.2.3.1.4.1 | The size, in bytes, of an hrStorageAllocationUnit |
For example, to query for `hrMemorySize` with SNMP v3, run the following command on a separate workstation with SNMP support in your network:
```shell
# username is the unique username of your SNMP v3 user
# auth password is the authentication password
@@ -99,6 +101,7 @@ $ snmpget -v 3 -u USERNAME -l authPriv \
```
With SNMP v2c, to query for `hrMemorySize`, run the following command on a separate workstation with SNMP support in your network:
```shell
# community-string is your community string
# hostname is the IP or domain of your Enterprise instance
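# The command is truncated by the hunk; a typical net-snmp query for this OID
# (an assumption, not this article's literal text) is:
snmpget -v 2c -c COMMUNITY-STRING HOSTNAME HOST-RESOURCES-MIB::hrMemorySize.0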


@@ -37,22 +37,28 @@ As more users join {% data variables.location.product_location %}, you may need
{% data reusables.enterprise_installation.ssh-into-instance %}
3. Put the appliance in maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
4. Reboot the appliance to detect the new storage allocation:
```shell
sudo reboot
```
5. Run the `ghe-storage-extend` command to expand the `/data/user` filesystem:
```shell
ghe-storage-extend
```
6. Ensure system services are functioning correctly, then release maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
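Before releasing maintenance mode, a quick look at the filesystem (a generic check, not a step from this article) confirms the extension took effect:

```shell
# The /data/user filesystem should now reflect the larger disk.
df -h /data/user
```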
## Increasing the root partition size using a new appliance
1. Set up a new {% data variables.product.prodname_ghe_server %} instance with a larger root disk using the same version as your current appliance. For more information, see "[AUTOTITLE](/admin/installation/setting-up-a-github-enterprise-server-instance)."
2. Shut down the current appliance:
```shell
sudo poweroff
```
3. Detach the data disk from the current appliance using your virtualization platform's tools.
4. Attach the data disk to the new appliance with the larger root disk.
@@ -67,11 +73,13 @@ As more users join {% data variables.location.product_location %}, you may need
1. Attach a new disk to your {% data variables.product.prodname_ghe_server %} appliance.
1. Run the `lsblk` command to identify the new disk's device name.
1. Run the `parted` command to format the disk, substituting your device name for `/dev/xvdg`:
```shell
sudo parted /dev/xvdg mklabel msdos
sudo parted /dev/xvdg mkpart primary ext4 0% 50%
sudo parted /dev/xvdg mkpart primary ext4 50% 100%
```
1. If your appliance is configured for high-availability or geo-replication, to stop replication run the `ghe-repl-stop` command on each replica node:
```shell
@@ -83,10 +91,13 @@ As more users join {% data variables.location.product_location %}, you may need
```shell
ghe-upgrade PACKAGE-NAME.pkg -s -t /dev/xvdg1
```
1. Shut down the appliance:
```shell
sudo poweroff
```
1. In the hypervisor, remove the old root disk and attach the new root disk at the same location as the old root disk.
1. Start the appliance.
1. Ensure system services are functioning correctly, then release maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."


@@ -95,6 +95,7 @@ The following instructions are only intended for {% data variables.product.prod
```shell copy
ghe-config mysql.innodb-flush-no-fsync true
```
{% data reusables.enterprise.apply-configuration %}
#### Upgrade your instance's storage


@@ -63,6 +63,7 @@ To upgrade to the latest version of {% data variables.product.prodname_enterpris
10. On the backup host, run the `ghe-backup` command to take a final backup snapshot. This ensures that all data from the old instance is captured.
11. On the backup host, run the `ghe-restore` command you copied on the new instance's restore status screen to restore the latest snapshot.
```shell
$ ghe-restore 169.254.1.1
The authenticity of host '169.254.1.1:122' can't be established.


@@ -42,9 +42,11 @@ topics:
- Additional root storage must be available when upgrading through hotpatching, as it installs multiple versions of certain services until the upgrade is complete. Pre-flight checks will notify you if you don't have enough root disk storage.
- When upgrading through hotpatching, your instance cannot be too heavily loaded, as it may impact the hotpatching process.
- Upgrading to {% data variables.product.prodname_ghe_server %} 2.17 migrates your audit logs from Elasticsearch to MySQL. This migration also increases the amount of time and disk space it takes to restore a snapshot. Before migrating, check the number of bytes in your Elasticsearch audit log indices with this command:
``` shell
curl -s http://localhost:9201/audit_log/_stats/store | jq ._all.primaries.store.size_in_bytes
```
Use the number to estimate the amount of disk space the MySQL audit logs will need. The script also monitors your free disk space while the import is in progress. Monitoring this number is especially useful if your free disk space is close to the amount of disk space necessary for migration.
{% ifversion mysql-8-upgrade %}


@@ -145,10 +145,12 @@ If the upgrade target you're presented with is a feature release instead of a pa
1. {% data reusables.enterprise_installation.enterprise-download-upgrade-pkg %} Copy the URL for the upgrade hotpackage (_.hpkg_ file).
{% data reusables.enterprise_installation.download-package %}
1. Run the `ghe-upgrade` command using the package file name:
```shell
admin@HOSTNAME:~$ ghe-upgrade GITHUB-UPGRADE.hpkg
*** verifying upgrade package signature...
```
1. If at least one service or system component requires a reboot, the hotpatch upgrade script notifies you. For example, updates to the kernel, MySQL, or Elasticsearch may require a reboot.
### Upgrading an instance with multiple nodes using a hotpatch
@@ -194,11 +196,14 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
{% endnote %}
1. Run the `ghe-upgrade` command using the package file name:
```shell
admin@HOSTNAME:~$ ghe-upgrade GITHUB-UPGRADE.pkg
*** verifying upgrade package signature...
```
1. Confirm that you'd like to continue with the upgrade and restart after the package signature verifies. The new root filesystem writes to the secondary partition and the instance automatically restarts in maintenance mode:
```shell
*** applying update...
This package will upgrade your installation to version VERSION-NUMBER
@@ -206,6 +211,7 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
Target root partition: /dev/xvda2
Proceed with installation? [y/N]
```
{%- ifversion ghe-migrations-cli-utility %}
1. Optionally, during an upgrade to a feature release, you can monitor the status of database migrations using the `ghe-migrations` utility. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/command-line-utilities#ghe-migrations)."
{%- endif %}
@@ -214,6 +220,7 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
```shell
tail -f /data/user/common/ghe-config.log
```
{% ifversion ip-exception-list %}
1. Optionally, after the upgrade, validate the upgrade by configuring an IP exception list to allow access to a specified list of IP addresses. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode#validating-changes-in-maintenance-mode-using-the-ip-exception-list)."
{% endif %}
@@ -262,6 +269,7 @@ To upgrade an instance that comprises multiple nodes using an upgrade package, y
```
CRITICAL: git replication is behind the primary by more than 1007 repositories and/or gists
```
{% endnote %}
{%- ifversion ghes = 3.4 or ghes = 3.5 or ghes = 3.6 %}


@@ -31,6 +31,7 @@ To restore a backup of {% data variables.location.product_location %} with {% da
   ```shell copy
   ssh -p 122 admin@HOSTNAME
   ```
1. Configure the destination instance to use the same external storage service for {% data variables.product.prodname_actions %} as the source instance by entering one of the following commands.
   {% indented_data_reference reusables.actions.configure-storage-provider-platform-commands spaces=3 %}
   {% data reusables.actions.configure-storage-provider %}
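   As an illustration only, pointing the destination instance at the same Amazon S3 bucket might look like the following sketch. The `ghe-config` key names are assumptions based on recent releases, and the bucket, region, and credentials are hypothetical placeholders:

   ```shell
   # Hypothetical example: reuse the source instance's S3 bucket for Actions storage.
   # Key names are assumptions; verify them against your release's documentation.
   ghe-config secrets.actions.storage.blob-provider "s3"
   ghe-config secrets.actions.storage.s3.bucket-name "BUCKET-NAME"
   ghe-config secrets.actions.storage.s3.service-url "https://s3.REGION.amazonaws.com"
   ghe-config secrets.actions.storage.s3.access-key-id "ACCESS-KEY-ID"
   ghe-config secrets.actions.storage.s3.access-secret "SECRET-ACCESS-KEY"
   ```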
@@ -39,6 +40,7 @@ To restore a backup of {% data variables.location.product_location %} with {% da
   ```shell copy
   ghe-config app.actions.enabled true
   ```
{% data reusables.actions.apply-configuration-and-enable %}
1. After {% data variables.product.prodname_actions %} is configured and enabled, restore the rest of the data from the backup with the `ghe-restore` command, as shown in the sketch after this list. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance#restoring-a-backup)."
1. Re-register your self-hosted runners on the destination instance. For more information, see "[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/adding-self-hosted-runners)."
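A minimal sketch of the restore step referenced above, run from the backup host where Backup Utilities is installed; the hostname is a placeholder:

```shell
# Restore the most recent snapshot to the destination instance;
# -c also restores instance settings to an unconfigured appliance
ghe-restore -c HOSTNAME
```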
View File
@@ -150,6 +150,7 @@ If any of these services are at or near 100% CPU utilization, or the memory is n
     }
   }
   ```
1. Save and exit the file.
1. Run `ghe-config-apply` to apply the changes.
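   For completeness, applying the change is a single command from the administrative shell:

   ```shell
   # Apply pending configuration changes; this can take several minutes
   ghe-config-apply
   ```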
@@ -175,13 +176,17 @@ There are three ways to resolve this problem:
1. Log in to the administrative shell using SSH. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/accessing-the-administrative-shell-ssh)."
1. To remove the limitations on workflows triggered by {% data variables.product.prodname_dependabot %} on {% data variables.location.product_location %}, use the following command.
   ```shell
   ghe-config app.actions.disable-dependabot-enforcement true
   ```
1. Apply the configuration.
   ```shell
   ghe-config-apply
   ```
1. Return to {% data variables.product.prodname_ghe_server %}.
{% endif %}
@@ -204,18 +209,25 @@ To install the official bundled actions and starter workflows within a designate
1. Log in to the administrative shell using SSH. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/accessing-the-administrative-shell-ssh)."
1. To designate your organization as the location to store the bundled actions, use the `ghe-config` command, replacing `ORGANIZATION` with the name of your organization.
   ```shell
   ghe-config app.actions.actions-org ORGANIZATION
   ```
   and:
   ```shell
   ghe-config app.actions.github-org ORGANIZATION
   ```
1. To add the bundled actions to your organization, unset the SHA.
   ```shell
   ghe-config --unset 'app.actions.actions-repos-sha1sum'
   ```
1. Apply the configuration.
   ```shell
   ghe-config-apply
   ```
View File
@@ -45,6 +45,7 @@ To more accurately mirror your production environment, you can optionally copy f
  ```shell
  azcopy copy 'https://SOURCE-STORAGE-ACCOUNT-NAME.blob.core.windows.net/SAS-TOKEN' 'https://DESTINATION-STORAGE-ACCOUNT-NAME.blob.core.windows.net/' --recursive
  ```
- For Amazon S3 buckets, you can use [`aws s3 sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html). For example:
  ```shell
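  # Hedged sketch only: the documented example is cut off in this diff.
  # Bucket names are placeholders; sync mirrors source objects into the destination.
  aws s3 sync s3://SOURCE-BUCKET-NAME s3://DESTINATION-BUCKET-NAME
  ```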
View File
@@ -66,6 +66,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
   ```
   SHA1 Fingerprint=AB:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56
   ```
1. Remove the colons (`:`) from the thumbprint value, and save the value to use later.
   For example, the thumbprint for the value returned in the previous step is:
@@ -73,6 +74,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
   ```
   AB1234567890ABCDEF1234567890ABCDEF123456
   ```
1. Using the AWS CLI, create an OIDC provider for {% data variables.location.product_location_enterprise %} by entering the following command. Replace `HOSTNAME` with the public hostname for {% data variables.location.product_location_enterprise %}, and `THUMBPRINT` with the thumbprint value from the previous step.
   ```shell copy
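   # Hedged sketch only: the documented command is cut off in this diff.
   # The flags are real AWS CLI options; the URL and audience values are assumptions.
   aws iam create-open-id-connect-provider \
     --url "https://HOSTNAME/_services/token" \
     --client-id-list "HOSTNAME" \
     --thumbprint-list "THUMBPRINT"
   ```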
@@ -139,6 +141,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
     }
   ...
   ```
1. Click **Update policy**.

### 3. Configure {% data variables.product.prodname_ghe_server %} to connect to Amazon S3 using OIDC
View File
@@ -73,6 +73,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with G
     ```
     https://my-ghes-host.example.com/_services/token
     ```
   - Under "Audiences", leave **Default audience** selected, but note the identity provider URL, as it is needed later. The identity provider URL is in the format `https://iam.googleapis.com/projects/PROJECT-NUMBER/locations/global/workloadIdentityPools/POOL-NAME/providers/PROVIDER-NAME`.
   - Click **Continue**.
1. Under "Configure provider attributes":
View File
@@ -74,6 +74,7 @@ You can populate the runner tool cache by running a {% data variables.product.pr
     with:
       path: {% raw %}${{runner.tool_cache}}/tool_cache.tar.gz{% endraw %}
   ```
1. Download the tool cache artifact from the workflow run. For instructions on downloading artifacts, see "[AUTOTITLE](/actions/managing-workflow-runs/downloading-workflow-artifacts)."
1. Transfer the tool cache artifact to your self-hosted runner and extract it to the local tool cache directory. The default tool cache directory is `RUNNER_DIR/_work/_tool`. If the runner hasn't processed any jobs yet, you might need to create the `_work/_tool` directories.
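   A minimal extraction sketch, assuming the artifact was downloaded as `tool_cache.tar.gz` and `RUNNER_DIR` is the runner's installation directory:

   ```shell
   # Create the tool cache directory if it doesn't exist yet,
   # then unpack the archive produced by the workflow above
   mkdir -p RUNNER_DIR/_work/_tool
   tar -xzf tool_cache.tar.gz -C RUNNER_DIR/_work/_tool
   ```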
View File
@@ -68,12 +68,14 @@ AMIs for {% data variables.product.prodname_ghe_server %} are available in the A
### Using the AWS CLI to select an AMI

1. Using the AWS CLI, get a list of {% data variables.product.prodname_ghe_server %} images published by {% data variables.product.prodname_dotcom %}'s AWS owner IDs (`025577942450` for GovCloud, and `895557238572` for other regions). For more information, see "[describe-images](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html)" in the AWS documentation.
   ```shell
   aws ec2 describe-images \
     --owners OWNER_ID \
     --query 'sort_by(Images,&Name)[*].{Name:Name,ImageID:ImageId}' \
     --output=text
   ```
2. Take note of the AMI ID for the latest {% data variables.product.prodname_ghe_server %} image.

## Creating a security group
@@ -81,6 +83,7 @@ AMIs for {% data variables.product.prodname_ghe_server %} are available in the A
If you're setting up your AMI for the first time, you will need to create a security group and add a new security group rule for each port in the table below. For more information, see the AWS guide "[Using Security Groups](https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html)."

1. Using the AWS CLI, create a new security group. For more information, see "[create-security-group](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-security-group.html)" in the AWS documentation.
   ```shell
   aws ec2 create-security-group --group-name SECURITY_GROUP_NAME --description "SECURITY GROUP DESCRIPTION"
   ```
@@ -88,9 +91,11 @@ If you're setting up your AMI for the first time, you will need to create a secu
2. Take note of the security group ID (`sg-xxxxxxxx`) of your newly created security group.
3. Create a security group rule for each of the ports in the table below. For more information, see "[authorize-security-group-ingress](https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html)" in the AWS documentation. A worked example follows the ports table below.
   ```shell
   aws ec2 authorize-security-group-ingress --group-id SECURITY_GROUP_ID --protocol PROTOCOL --port PORT_NUMBER --cidr SOURCE_IP_RANGE
   ```
   This table identifies what each port is used for.
   {% data reusables.enterprise_installation.necessary_ports %}
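   As a hypothetical worked example, a rule allowing inbound HTTPS from anywhere would look like this; the group ID is a placeholder:

   ```shell
   # Hypothetical values: allow inbound HTTPS (port 443) from any source address
   aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
   ```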
View File
@@ -40,6 +40,7 @@ Before launching {% data variables.location.product_location %} on Azure, you'll
{% data reusables.enterprise_installation.create-ghe-instance %}
1. Find the most recent {% data variables.product.prodname_ghe_server %} appliance image. For more information about the `vm image list` command, see "[`az vm image list`](https://docs.microsoft.com/cli/azure/vm/image?view=azure-cli-latest#az_vm_image_list)" in the Microsoft documentation.
   ```shell
   az vm image list --all -f GitHub-Enterprise | grep '"urn":' | sort -V
   ```
@@ -83,6 +84,7 @@ To configure the instance, you must confirm the instance's status, upload a lice
{% data reusables.enterprise_installation.new-instance-attack-vector-warning %}
1. Before configuring the VM, you must wait for it to enter ReadyRole status. Check the status of the VM with the `vm list` command. For more information, see "[`az vm list`](https://docs.microsoft.com/cli/azure/vm?view=azure-cli-latest#az_vm_list)" in the Microsoft documentation.
   ```shell
   $ az vm list -d -g RESOURCE_GROUP -o table
   > Name     ResourceGroup    PowerState    PublicIps     Fqdns    Location    Zones
@@ -90,6 +92,7 @@ To configure the instance, you must confirm the instance's status, upload a lice
   > VM_NAME  RESOURCE_GROUP   VM running    40.76.79.202           eastus
   ```
   {% note %}
   **Note:** Azure does not automatically create an FQDNS entry for the VM. For more information, see Azure's guide on how to "[Create a fully qualified domain name in the Azure portal for a Linux VM](https://docs.microsoft.com/azure/virtual-machines/linux/portal-create-fqdn)."
View File
@@ -36,6 +36,7 @@ Before launching {% data variables.location.product_location %} on Google Cloud
## Selecting the {% data variables.product.prodname_ghe_server %} image

1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:
   ```shell
   gcloud compute images list --project github-enterprise-public --no-standard-images
   ```
@@ -47,15 +48,19 @@ Before launching {% data variables.location.product_location %} on Google Cloud
GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
   ```shell
   gcloud compute networks create NETWORK-NAME --subnet-mode auto
   ```
2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.
   ```shell
   $ gcloud compute firewall-rules create RULE-NAME \
       --network NETWORK-NAME \
       --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
   ```
   This table identifies the required ports and what each port is used for.
   {% data reusables.enterprise_installation.necessary_ports %}
@@ -71,11 +76,13 @@ In production High Availability configurations, both primary and replica applian
To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
   ```shell
   gcloud compute disks create DATA-DISK-NAME --size DATA-DISK-SIZE --type DATA-DISK-TYPE --zone ZONE
   ```
2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
   ```shell
   $ gcloud compute instances create INSTANCE-NAME \
       --machine-type n1-standard-8 \
View File
@@ -37,25 +37,35 @@ shortTitle: Install on Hyper-V
{% data reusables.enterprise_installation.create-ghe-instance %}
1. In PowerShell, create a new Generation 1 virtual machine, configure the size based on your user license count, and attach the {% data variables.product.prodname_ghe_server %} image you downloaded. For more information, see "[New-VM](https://docs.microsoft.com/powershell/module/hyper-v/new-vm?view=win10-ps)" in the Microsoft documentation.
   ```shell
   PS C:\> New-VM -Generation 1 -Name VM_NAME -MemoryStartupBytes MEMORY_SIZE -BootDevice VHD -VHDPath PATH_TO_VHD
   ```
{% data reusables.enterprise_installation.create-attached-storage-volume %} Replace `PATH_TO_DATA_DISK` with the path to the location where you create the disk. For more information, see "[New-VHD](https://docs.microsoft.com/powershell/module/hyper-v/new-vhd?view=win10-ps)" in the Microsoft documentation.
   ```shell
   PS C:\> New-VHD -Path PATH_TO_DATA_DISK -SizeBytes DISK_SIZE
   ```
3. Attach the data disk to your instance. For more information, see "[Add-VMHardDiskDrive](https://docs.microsoft.com/powershell/module/hyper-v/add-vmharddiskdrive?view=win10-ps)" in the Microsoft documentation.
   ```shell
   PS C:\> Add-VMHardDiskDrive -VMName VM_NAME -Path PATH_TO_DATA_DISK
   ```
4. Start the VM. For more information, see "[Start-VM](https://docs.microsoft.com/powershell/module/hyper-v/start-vm?view=win10-ps)" in the Microsoft documentation.
   ```shell
   PS C:\> Start-VM -Name VM_NAME
   ```
5. Get the IP address of your VM. For more information, see "[Get-VMNetworkAdapter](https://docs.microsoft.com/powershell/module/hyper-v/get-vmnetworkadapter?view=win10-ps)" in the Microsoft documentation.
   ```shell
   PS C:\> (Get-VMNetworkAdapter -VMName VM_NAME).IpAddresses
   ```
6. Copy the VM's IP address and paste it into a web browser.

## Configuring the {% data variables.product.prodname_ghe_server %} instance
View File
@@ -108,6 +108,7 @@ Optionally, if you use {% data variables.product.prodname_registry %} on your pr
     ghe-config secrets.packages.azure-container-name "AZURE CONTAINER NAME"
     ghe-config secrets.packages.azure-connection-string "CONNECTION STRING"
     ```
   - Amazon S3:
     ```shell copy
@@ -117,6 +118,7 @@ Optionally, if you use {% data variables.product.prodname_registry %} on your pr
     ghe-config secrets.packages.aws-access-key "S3 ACCESS KEY ID"
     ghe-config secrets.packages.aws-secret-key "S3 ACCESS SECRET"
     ```
1. To prepare to enable {% data variables.product.prodname_registry %} on the staging instance, enter the following command.
   ```shell copy
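   # Assumed command (the example is cut off in this diff); verify the
   # key name for your release before running
   ghe-config app.packages.enabled true
   ```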
View File
@@ -45,7 +45,9 @@ For more information, see "[Using the activity view to see changes to your repos
{% data reusables.enterprise_installation.ssh-into-instance %}
1. In the appropriate Git repository, open the audit log file:
   ```shell
   ghe-repo OWNER/REPOSITORY -c "cat audit_log"
   ```
{% endif %}
View File
@@ -101,6 +101,7 @@ For information on creating or accessing your access key ID and secret key, see
   - Add the permissions policy you created above to allow writes to the bucket.
   - Edit the trust relationship to add the `sub` field to the validation conditions, replacing `ENTERPRISE` with the name of your enterprise.
     ```
     "Condition": {
       "StringEquals": {
@@ -109,6 +110,7 @@ For information on creating or accessing your access key ID and secret key, see
       }
     }
     ```
   - Make note of the Amazon Resource Name (ARN) of the created role.
{% data reusables.enterprise.navigate-to-log-streaming-tab %}
{% data reusables.audit_log.streaming-choose-s3 %}
View File
@@ -33,6 +33,7 @@ For more information about your options, see the official [MinIO docs](https://d
1. Set up your preferred environment variables for MinIO.
   These examples use `MINIO_DIR`:
   ```shell
   export MINIO_DIR=$(pwd)/minio
   mkdir -p $MINIO_DIR
@@ -43,24 +44,29 @@ For more information about your options, see the official [MinIO docs](https://d
   ```shell
   docker pull minio/minio
   ```
   For more information, see the official "[MinIO Quickstart Guide](https://docs.min.io/docs/minio-quickstart-guide)."
3. Sign in to MinIO using your MinIO access key and secret.
   {% linux %}
   ```shell
   $ export MINIO_ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
   # this one is actually a secret, so careful
   $ export MINIO_SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
   ```
   {% endlinux %}
   {% mac %}
   ```shell
   $ export MINIO_ACCESS_KEY=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
   # this one is actually a secret, so careful
   $ export MINIO_SECRET_KEY=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
   ```
   {% endmac %}
   You can access your MinIO keys using the environment variables:
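   For example, a quick sketch of reading them back in the same shell session:

   ```shell
   # Print the generated keys; MINIO_SECRET_KEY is sensitive, so avoid logging it
   echo $MINIO_ACCESS_KEY
   echo $MINIO_SECRET_KEY
   ```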
View File
@@ -31,6 +31,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro
   FROM gliderlabs/alpine:3.3
   RUN apk add --no-cache git bash
   ```
3. From the working directory that contains `Dockerfile.alpine-3.3`, build an image:
   ```shell
@@ -43,11 +44,13 @@ You can use a Linux container management tool to build a pre-receive hook enviro
   > ---> 0250ab3be9c5
   > Successfully built 0250ab3be9c5
   ```
4. Create a container:
   ```shell
   docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
   ```
5. Export the Docker container to a `gzip` compressed `tar` file:
   ```shell
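   # Assumed continuation (the example is cut off in this diff): export the
   # container's filesystem and compress it for use as a hook environment
   docker export pre-receive.alpine-3.3 | gzip > alpine-3.3.tar.gz
   ```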
@@ -60,6 +63,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro
1. Create a Linux `chroot` environment.
2. Create a `gzip` compressed `tar` file of the `chroot` directory.
   ```shell
   cd /path/to/chroot
   tar -czf /path/to/pre-receive-environment.tar.gz .
Some files were not shown because too many files have changed in this diff.