Fix for blank lines around code fences (#38255)
@@ -75,14 +75,18 @@ You can use the `git config` command to change the email address you associate w

{% data reusables.command_line.open_the_multi_os_terminal %}
2. {% data reusables.user-settings.set_your_email_address_in_git %}

```shell
git config --global user.email "YOUR_EMAIL"
```

3. {% data reusables.user-settings.confirm_git_email_address_correct %}

```shell
$ git config --global user.email
<span class="output">email@example.com</span>
```

4. {% data reusables.user-settings.link_email_with_your_account %}

### Setting your email address for a single repository
@@ -94,12 +98,16 @@ You can change the email address associated with commits you make in a single re
{% data reusables.command_line.open_the_multi_os_terminal %}
2. Change the current working directory to the local repository where you want to configure the email address that you associate with your Git commits.
3. {% data reusables.user-settings.set_your_email_address_in_git %}

```shell
git config user.email "YOUR_EMAIL"
```

4. {% data reusables.user-settings.confirm_git_email_address_correct %}

```shell
$ git config user.email
<span class="output">email@example.com</span>
```

5. {% data reusables.user-settings.link_email_with_your_account %}

@@ -51,6 +51,7 @@ Alternatively, if you want to use the HTTPS protocol for both accounts, you can
```shell copy
git credential-osxkeychain erase https://github.com
```

{% data reusables.git.clear-stored-gcm-credentials %}
{% data reusables.git.cache-on-repository-path %}
{% data reusables.accounts.create-personal-access-tokens %}
@@ -70,6 +71,7 @@ Alternatively, if you want to use the HTTPS protocol for both accounts, you can
```shell copy
cmdkey /delete:LegacyGeneric:target=git:https://github.com
```

{% data reusables.git.cache-on-repository-path %}
{% data reusables.accounts.create-personal-access-tokens %}
{% data reusables.git.provide-credentials %}

@@ -141,6 +141,7 @@ You can use the `cache-dependency-path` parameter for cases when multiple depend
go-version: '1.17'
cache-dependency-path: subdir/go.sum
```

{% else %}

When caching is enabled, the `setup-go` action searches for the dependency file, `go.sum`, in the repository root and uses the hash of the dependency file as a part of the cache key.
@@ -162,6 +163,7 @@ Alternatively, you can use the `cache-dependency-path` parameter for cases when
cache: true
cache-dependency-path: subdir/go.sum
```

{% endif %}

If you have a custom requirement or need finer controls for caching, you can use the [`cache` action](https://github.com/marketplace/actions/cache). For more information, see "[AUTOTITLE](/actions/using-workflows/caching-dependencies-to-speed-up-workflows)."

@@ -73,6 +73,7 @@ jobs:

- `Invoke-Pester Unit.Tests.ps1 -Passthru` - Uses Pester to execute tests defined in a file called `Unit.Tests.ps1`. For example, to perform the same test described above, the `Unit.Tests.ps1` file will contain the following:

```
Describe "Check results file is present" {
It "Check results file is present" {

@@ -89,11 +89,13 @@ Alternatively, you can check a `.ruby-version` file into the root of your repos
You can add a matrix strategy to run your workflow with more than one version of Ruby. For example, you can test your code against the latest patch releases of versions 3.1, 3.0, and 2.7.

{% raw %}

```yaml
strategy:
matrix:
ruby-version: ['3.1', '3.0', '2.7']
```

{% endraw %}

Each version of Ruby specified in the `ruby-version` array creates a job that runs the same steps. The {% raw %}`${{ matrix.ruby-version }}`{% endraw %} context is used to access the current job's version. For more information about matrix strategies and contexts, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions)" and "[AUTOTITLE](/actions/learn-github-actions/contexts)."
@@ -156,12 +158,14 @@ The `setup-ruby` actions provides a method to automatically handle the caching o
To enable caching, set the following.

{% raw %}

```yaml
steps:
- uses: ruby/setup-ruby@ec02537da5712d66d4d50a0f33b7eb52773b5ed1
with:
bundler-cache: true
```

{% endraw %}

This will configure bundler to install your gems to `vendor/cache`. For each successful run of your workflow, this folder will be cached by {% data variables.product.prodname_actions %} and re-downloaded for subsequent workflow runs. A hash of your gemfile.lock and the Ruby version are used as the cache key. If you install any new gems, or change a version, the cache will be invalidated and bundler will do a fresh install.

@@ -101,6 +101,7 @@ jobs:
You can configure your job to use a single specific version of Swift, such as `5.3.3`.

{% raw %}

```yaml copy
steps:
- uses: swift-actions/setup-swift@65540b95f51493d65f5e59e97dcef9629ddf11bf
@@ -109,6 +110,7 @@ steps:
- name: Get swift version
run: swift --version # Swift 5.3.3
```

{% endraw %}

## Building and testing your code

@@ -51,6 +51,7 @@ Before you begin, you'll create a repository on {% ifversion ghae %}{% data vari
```

1. From your terminal, check in your `goodbye.sh` file.

```shell copy
git add goodbye.sh
git commit -m "Add goodbye script"
@@ -63,6 +64,7 @@ Before you begin, you'll create a repository on {% ifversion ghae %}{% data vari

{% raw %}
**action.yml**

```yaml copy
name: 'Hello World'
description: 'Greet someone'
@@ -121,6 +123,7 @@ The following workflow code uses the completed hello world action that you made
Copy the workflow code into a `.github/workflows/main.yml` file in another repository, but replace `actions/hello-world-composite-action@v1` with the repository and tag you created. You can also replace the `who-to-greet` input with your name.

**.github/workflows/main.yml**

```yaml copy
on: [push]

@@ -58,6 +58,7 @@ Before you begin, you'll need to create a {% data variables.product.prodname_dot
In your new `hello-world-docker-action` directory, create a new `Dockerfile` file. Make sure that your filename is capitalized correctly (use a capital `D` but not a capital `f`) if you're having issues. For more information, see "[AUTOTITLE](/actions/creating-actions/dockerfile-support-for-github-actions)."

**Dockerfile**

```Dockerfile copy
# Container image that runs your code
FROM alpine:3.10
@@ -75,6 +76,7 @@ Create a new `action.yml` file in the `hello-world-docker-action` directory you

{% raw %}
**action.yml**

```yaml copy
# action.yml
name: 'Hello World'
@@ -93,6 +95,7 @@ runs:
args:
- ${{ inputs.who-to-greet }}
```

{% endraw %}

This metadata defines one `who-to-greet` input and one `time` output parameter. To pass inputs to the Docker container, you should declare the input using `inputs` and pass the input in the `args` keyword. Everything you include in `args` is passed to the container, but for better discoverability for users of your action, we recommend using inputs.
@@ -110,6 +113,7 @@ Next, the script gets the current time and sets it as an output variable that ac
1. Add the following code to your `entrypoint.sh` file.

**entrypoint.sh**

```shell copy
#!/bin/sh -l

@@ -120,6 +124,7 @@ Next, the script gets the current time and sets it as an output variable that ac
{%- else %}
echo "::set-output name=time::$time"
{%- endif %}

```
If `entrypoint.sh` executes without any errors, the action's status is set to `success`. You can also explicitly set exit codes in your action's code to provide an action's status. For more information, see "[AUTOTITLE](/actions/creating-actions/setting-exit-codes-for-actions)."

@@ -153,6 +158,7 @@ In your `hello-world-docker-action` directory, create a `README.md` file that sp
- An example of how to use your action in a workflow.

**README.md**

```markdown copy
# Hello world docker action

@@ -205,6 +211,7 @@ Now you're ready to test your action out in a workflow.
The following workflow code uses the completed _hello world_ action in the public [`actions/hello-world-docker-action`](https://github.com/actions/hello-world-docker-action) repository. Copy the following workflow example code into a `.github/workflows/main.yml` file, but replace the `actions/hello-world-docker-action` with your repository and action name. You can also replace the `who-to-greet` input with your name. {% ifversion fpt or ghec %}Public actions can be used even if they're not published to {% data variables.product.prodname_marketplace %}. For more information, see "[AUTOTITLE](/actions/creating-actions/publishing-actions-in-github-marketplace#publishing-an-action)." {% endif %}

**.github/workflows/main.yml**

```yaml copy
on: [push]

@@ -228,6 +235,7 @@ jobs:
Copy the following example workflow code into a `.github/workflows/main.yml` file in your action's repository. You can also replace the `who-to-greet` input with your name. {% ifversion fpt or ghec %}This private action can't be published to {% data variables.product.prodname_marketplace %}, and can only be used in this repository.{% endif %}

**.github/workflows/main.yml**

```yaml copy
on: [push]

@@ -105,6 +105,7 @@ GitHub Actions provide context information about the webhook event, Git refs, wo
Add a new file called `index.js`, with the following code.

{% raw %}

```javascript copy
const core = require('@actions/core');
const github = require('@actions/github');
@@ -122,6 +123,7 @@ try {
core.setFailed(error.message);
}
```

{% endraw %}

If an error is thrown in the above `index.js` example, `core.setFailed(error.message);` uses the actions toolkit [`@actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) package to log a message and set a failing exit code. For more information, see "[AUTOTITLE](/actions/creating-actions/setting-exit-codes-for-actions)."
@@ -224,6 +226,7 @@ This example demonstrates how your new public action can be run from within an e
Copy the following YAML into a new file at `.github/workflows/main.yml`, and update the `uses: octocat/hello-world-javascript-action@v1.1` line with your username and the name of the public repository you created above. You can also replace the `who-to-greet` input with your name.

{% raw %}

```yaml copy
on: [push]

@@ -241,6 +244,7 @@ jobs:
- name: Get the output time
run: echo "The time was ${{ steps.hello.outputs.time }}"
```

{% endraw %}

When this workflow is triggered, the runner will download the `hello-world-javascript-action` action from your public repository and then execute it.
@@ -250,6 +254,7 @@ When this workflow is triggered, the runner will download the `hello-world-javas
Copy the workflow code into a `.github/workflows/main.yml` file in your action's repository. You can also replace the `who-to-greet` input with your name.

**.github/workflows/main.yml**

```yaml copy
on: [push]

@@ -36,6 +36,7 @@ The following script demonstrates how you can get a user-specified version as in
{% data variables.product.prodname_dotcom %} provides [`actions/toolkit`](https://github.com/actions/toolkit), which is a set of packages that helps you create actions. This example uses the [`actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) and [`actions/tool-cache`](https://github.com/actions/toolkit/tree/main/packages/tool-cache) packages.

{% raw %}

```javascript copy
const core = require('@actions/core');
const tc = require('@actions/tool-cache');
@@ -56,6 +57,7 @@ async function setup() {

module.exports = setup
```

{% endraw %}

To use this script, replace `getDownloadURL` with a function that downloads your CLI. You will also need to create an actions metadata file (`action.yml`) that accepts a `version` input and that runs this script. For full details about how to create an action, see "[AUTOTITLE](/actions/creating-actions/creating-a-javascript-action)."

@@ -115,6 +115,7 @@ outputs:
### Example: Declaring outputs for composite actions

{% raw %}

```yaml
outputs:
random-number:
@@ -131,6 +132,7 @@ runs:
{%- endif %}{% raw %}
shell: bash
```

{% endraw %}

### `outputs.<output_id>.value`
@@ -235,6 +237,7 @@ For example, this `cleanup.js` will only run on Linux-based runners:
**Optional** The command you want to run. This can be inline or a script in your action repository:

{% raw %}

```yaml
runs:
using: "composite"
@@ -242,6 +245,7 @@ runs:
- run: ${{ github.action_path }}/test/script.sh
shell: bash
```

{% endraw %}

Alternatively, you can use `$GITHUB_ACTION_PATH`:
@@ -447,6 +451,7 @@ For more information about using the `CMD` instruction with {% data variables.pr
#### Example: Defining arguments for the Docker container

{% raw %}

```yaml
runs:
using: 'docker'
@@ -456,6 +461,7 @@ runs:
- 'foo'
- 'bar'
```

{% endraw %}

## `branding`

@@ -47,6 +47,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you
aws ecr create-repository \
--repository-name MY_ECR_REPOSITORY \
--region MY_AWS_REGION

```{% endraw %}

Ensure that you use the same Amazon ECR repository name (represented here by `MY_ECR_REPOSITORY`) for the `ECR_REPOSITORY` variable in the workflow below.
@@ -65,6 +66,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you

{% raw %}```bash copy
aws ecs register-task-definition --generate-cli-skeleton

```{% endraw %}

Ensure that you set the `ECS_TASK_DEFINITION` variable in the workflow below as the path to the JSON file.

@@ -63,6 +63,7 @@ Before creating your {% data variables.product.prodname_actions %} workflow, you
--name MY_WEBAPP_NAME \
--resource-group MY_RESOURCE_GROUP \
--settings DOCKER_REGISTRY_SERVER_URL=https://ghcr.io DOCKER_REGISTRY_SERVER_USERNAME=MY_REPOSITORY_OWNER DOCKER_REGISTRY_SERVER_PASSWORD=MY_PERSONAL_ACCESS_TOKEN

```

5. Optionally, configure a deployment environment. {% data reusables.actions.about-environments %}

@@ -49,11 +49,13 @@ To create the GKE cluster, you will first need to authenticate using the `gcloud
For example:

{% raw %}

```bash copy
$ gcloud container clusters create $GKE_CLUSTER \
--project=$GKE_PROJECT \
--zone=$GKE_ZONE
```

{% endraw %}

### Enabling the APIs
@@ -61,11 +63,13 @@ $ gcloud container clusters create $GKE_CLUSTER \
Enable the Kubernetes Engine and Container Registry APIs. For example:

{% raw %}

```bash copy
$ gcloud services enable \
containerregistry.googleapis.com \
container.googleapis.com
```

{% endraw %}

### Configuring a service account and storing its credentials
@@ -74,18 +78,23 @@ This procedure demonstrates how to create the service account for your GKE integ

1. Create a new service account:
{% raw %}

```
gcloud iam service-accounts create $SA_NAME
```

{% endraw %}
1. Retrieve the email address of the service account you just created:
{% raw %}

```
gcloud iam service-accounts list
```

{% endraw %}
1. Add roles to the service account. Note: Apply more restrictive roles to suit your requirements.
{% raw %}

```
$ gcloud projects add-iam-policy-binding $GKE_PROJECT \
--member=serviceAccount:$SA_EMAIL \
@@ -97,18 +106,23 @@ This procedure demonstrates how to create the service account for your GKE integ
--member=serviceAccount:$SA_EMAIL \
--role=roles/container.clusterViewer
```

{% endraw %}
1. Download the JSON keyfile for the service account:
{% raw %}

```
gcloud iam service-accounts keys create key.json --iam-account=$SA_EMAIL
```

{% endraw %}
1. Store the service account key as a secret named `GKE_SA_KEY`:
{% raw %}

```
export GKE_SA_KEY=$(cat key.json | base64)
```

{% endraw %}
For more information about how to store a secret, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

@@ -50,6 +50,7 @@ Create secrets in your repository or organization for the following items:
```shell
base64 -i BUILD_CERTIFICATE.p12 | pbcopy
```

- The password for your Apple signing certificate.
- In this example, the secret is named `P12_PASSWORD`.

@@ -131,6 +132,7 @@ On self-hosted runners, the `$RUNNER_TEMP` directory is cleaned up at the end of
If you use self-hosted runners, you should add a final step to your workflow to help ensure that these sensitive files are deleted at the end of the job. The workflow step shown below is an example of how to do this.

{% raw %}

```yaml
- name: Clean up keychain and provisioning profile
if: ${{ always() }}
@@ -138,4 +140,5 @@ If you use self-hosted runners, you should add a final step to your workflow to
security delete-keychain $RUNNER_TEMP/app-signing.keychain-db
rm ~/Library/MobileDevice/Provisioning\ Profiles/build_pp.mobileprovision
```

{% endraw %}

@@ -68,6 +68,7 @@ Once a workflow reaches a job that references an environment that has the custom
} \
}'
```

1. Optionally, to add a status report without taking any other action to {% data variables.product.prodname_dotcom_the_website %}, send a `POST` request to `/repos/OWNER/REPO/actions/runs/RUN_ID/deployment_protection_rule`. In the request body, omit the `state`. For more information, see "[AUTOTITLE](/rest/actions/workflow-runs#review-custom-deployment-protection-rules-for-a-workflow-run)." You can post a status report on the same deployment up to 10 times. Status reports support Markdown formatting and can be up to 1024 characters long.

1. To approve or reject a request, send a `POST` request to `/repos/OWNER/REPO/actions/runs/RUN_ID/deployment_protection_rule`. In the request body, set the `state` property to either `approved` or `rejected`. For more information, see "[AUTOTITLE](/rest/actions/workflow-runs#review-custom-deployment-protection-rules-for-a-workflow-run)."

@@ -57,6 +57,7 @@ The [`azure/login`](https://github.com/Azure/login) action receives a JWT from t
The following example exchanges an OIDC ID token with Azure to receive an access token, which can then be used to access cloud resources.

{% raw %}

```yaml copy
name: Run Azure Login with OIDC
on: [push]
@@ -80,4 +81,5 @@ jobs:
az account show
az group list
```

{% endraw %}

@@ -62,6 +62,7 @@ This example has a job called `Get_OIDC_ID_token` that uses actions to request a
This action exchanges a {% data variables.product.prodname_dotcom %} OIDC token for a Google Cloud access token, using [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation).

{% raw %}

```yaml copy
name: List services in GCP
on:
@@ -89,4 +90,5 @@ jobs:
gcloud auth login --brief --cred-file="${{ steps.auth.outputs.credentials_file_path }}"
gcloud services list
```

{% endraw %}

@@ -63,6 +63,7 @@ To configure your Vault server to accept JSON Web Tokens (JWT) for authenticatio
}
EOF
```

3. Configure roles to group different policies together. If the authentication is successful, these policies are attached to the resulting Vault access token.

```sh copy

@@ -217,6 +217,7 @@ jobs:
```yaml copy
name: Node.js Tests
```

</td>
<td>

@@ -229,6 +230,7 @@ name: Node.js Tests
```yaml copy
on:
```

</td>
<td>

@@ -241,6 +243,7 @@ The `on` keyword lets you define the events that trigger when the workflow is ru
```yaml copy
workflow_dispatch:
```

</td>
<td>

@@ -253,6 +256,7 @@ Add the `workflow_dispatch` event if you want to be able to manually run this wo
```yaml copy
pull_request:
```

</td>
<td>

@@ -267,6 +271,7 @@ Add the `pull_request` event, so that the workflow runs automatically every time
branches:
- main
```

</td>
<td>

@@ -281,6 +286,7 @@ permissions:
contents: read
pull-requests: read
```

</td>
<td>

@@ -294,6 +300,7 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
concurrency:
group: {% raw %}'${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'{% endraw %}
```

</td>
<td>

@@ -306,6 +313,7 @@ Creates a concurrency group for specific events, and uses the `||` operator to d
```yaml copy
cancel-in-progress: true
```

</td>
<td>

@@ -318,6 +326,7 @@ Cancels any currently running job or workflow in the same concurrency group.
```yaml copy
jobs:
```

</td>
<td>

@@ -330,6 +339,7 @@ Groups together all the jobs that run in the workflow file.
```yaml copy
test:
```

</td>
<td>

@@ -342,6 +352,7 @@ Defines a job with the ID `test` that is stored within the `jobs` key.
```yaml copy
runs-on: {% raw %}${{ fromJSON('["ubuntu-latest", "self-hosted"]')[github.repository == 'github/docs-internal'] }}{% endraw %}
```

</td>
<td>

@@ -354,6 +365,7 @@ Configures the job to run on a {% data variables.product.prodname_dotcom %}-host
```yaml copy
timeout-minutes: 60
```

</td>
<td>

@@ -366,6 +378,7 @@ Sets the maximum number of minutes to let the job run before it is automatically
```yaml copy
strategy:
```

</td>
<td>
This section defines the build matrix for your jobs.
@@ -377,6 +390,7 @@ Sets the maximum number of minutes to let the job run before it is automatically
```yaml copy
fail-fast: false
```

</td>
<td>

@@ -400,6 +414,7 @@ Setting `fail-fast` to `false` prevents {% data variables.product.prodname_dotco
translations,
]
```

</td>
<td>

@@ -412,6 +427,7 @@ Creates a matrix named `test-group`, with an array of test groups. These values
```yaml copy
steps:
```

</td>
<td>

@@ -428,6 +444,7 @@ Groups together all the steps that will run as part of the `test` job. Each job
lfs: {% raw %}${{ matrix.test-group == 'content' }}{% endraw %}
persist-credentials: 'false'
```

</td>
<td>

@@ -468,6 +485,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
throw err
}
```

</td>
<td>

@@ -487,6 +505,7 @@ If the current repository is the `github/docs-internal` repository, this step us
path: docs-early-access
ref: {% raw %}${{ steps.check-early-access.outputs.result }}{% endraw %}
```

</td>
<td>

@@ -504,6 +523,7 @@ If the current repository is the `github/docs-internal` repository, this step ch
mv docs-early-access/data data/early-access
rm -r docs-early-access
```

</td>
<td>

@@ -517,6 +537,7 @@ If the current repository is the `github/docs-internal` repository, this step us
- name: Checkout LFS objects
run: git lfs checkout
```

</td>
<td>

@@ -535,6 +556,7 @@ This step runs a command to check out LFS objects from the repository.
# a string like `foo.js path/bar.md`
output: ' '
```

</td>
<td>

@@ -549,6 +571,7 @@ This step uses the `trilom/file-changes-action` action to gather the files chang
run: |
echo {% raw %}"${{ steps.get_diff_files.outputs.files }}" > get_diff_files.txt{% endraw %}
```

</td>
<td>

@@ -565,6 +588,7 @@ This step runs a shell command that uses an output from the previous step to cre
node-version: 16.14.x
cache: npm
```

</td>
<td>

@@ -578,6 +602,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Install dependencies
run: npm ci
```

</td>
<td>

@@ -594,6 +619,7 @@ This step runs the `npm ci` shell command to install the npm software packages f
path: .next/cache
key: {% raw %}${{ runner.os }}-nextjs-${{ hashFiles('package*.json') }}{% endraw %}
```

</td>
<td>

@@ -608,6 +634,7 @@ This step uses the `actions/cache` action to cache the Next.js build, so that th
- name: Run build script
run: npm run build
```

</td>
<td>
{% endif %}
@@ -625,6 +652,7 @@ This step runs the build script.
CHANGELOG_CACHE_FILE_PATH: tests/fixtures/changelog-feed.json
run: npm test -- {% raw %}tests/${{ matrix.test-group }}/{% endraw %}
```

</td>
<td>

@@ -133,6 +133,7 @@ jobs:
```yaml copy
name: 'Link Checker: All English'
```

</td>
<td>

@@ -145,6 +146,7 @@ name: 'Link Checker: All English'
```yaml copy
on:
```

</td>
<td>

@@ -157,6 +159,7 @@ The `on` keyword lets you define the events that trigger when the workflow is ru
```yaml copy
workflow_dispatch:
```

</td>
<td>

@@ -171,6 +174,7 @@ Add the `workflow_dispatch` event if you want to be able to manually run this wo
branches:
- main
```

</td>
<td>

@@ -183,6 +187,7 @@ Add the `push` event, so that the workflow runs automatically every time a commi
```yaml copy
pull_request:
```

</td>
<td>

@@ -197,6 +202,7 @@ permissions:
contents: read
pull-requests: read
```

</td>
<td>

@@ -207,10 +213,12 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
<td>

{% raw %}

```yaml copy
concurrency:
group: '${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'
```

{% endraw %}
</td>
<td>
@@ -224,6 +232,7 @@ Creates a concurrency group for specific events, and uses the `||` operator to d
```yaml copy
cancel-in-progress: true
```

</td>
<td>

@@ -236,6 +245,7 @@ Cancels any currently running job or workflow in the same concurrency group.
```yaml copy
jobs:
```

</td>
<td>

@@ -248,6 +258,7 @@ Groups together all the jobs that run in the workflow file.
```yaml copy
check-links:
```

</td>
<td>

@@ -258,9 +269,11 @@ Defines a job with the ID `check-links` that is stored within the `jobs` key.
<td>

{% raw %}

```yaml copy
runs-on: ${{ fromJSON('["ubuntu-latest", "self-hosted"]')[github.repository == 'github/docs-internal'] }}
```

{% endraw %}
</td>
<td>
@@ -274,6 +287,7 @@ Configures the job to run on a {% data variables.product.prodname_dotcom %}-host
```yaml copy
steps:
```

</td>
<td>

@@ -287,6 +301,7 @@ Groups together all the steps that will run as part of the `check-links` job. Ea
- name: Checkout
uses: {% data reusables.actions.action-checkout %}
```

</td>
<td>

@@ -303,6 +318,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
node-version: 16.13.x
cache: npm
```

</td>
<td>

@@ -317,6 +333,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Install
run: npm ci
```

</td>
<td>

@@ -333,6 +350,7 @@ The `run` keyword tells the job to execute a command on the runner. In this case
with:
fileOutput: 'json'
```

</td>
<td>

@@ -347,6 +365,7 @@ Uses the `trilom/file-changes-action` action to gather all the changed files. Th
- name: Show files changed
run: cat $HOME/files.json
```

</td>
<td>

@@ -367,6 +386,7 @@ Lists the contents of `files.json`. This will be visible in the workflow run's l
--verbose \
--list $HOME/files.json
```

</td>
<td>

@@ -386,6 +406,7 @@ This step uses `run` command to execute a script that is stored in the repositor
--check-images \
--level critical
```

</td>
<td>

@@ -163,6 +163,7 @@ jobs:
fi
done
```

## Understanding the example

{% data reusables.actions.example-explanation-table-intro %}
@@ -181,6 +182,7 @@ jobs:
```yaml copy
name: Check all English links
```

</td>
<td>

@@ -196,6 +198,7 @@ on:
schedule:
- cron: '40 20 * * *' # once a day at 20:40 UTC / 12:40 PST
```

</td>
<td>

@@ -213,6 +216,7 @@ permissions:
contents: read
issues: write
```

</td>
<td>

@@ -225,6 +229,7 @@ Modifies the default permissions granted to `GITHUB_TOKEN`. This will vary depen
```yaml copy
jobs:
```

</td>
<td>

@@ -238,6 +243,7 @@ Groups together all the jobs that run in the workflow file.
check_all_english_links:
name: Check all links
```

</td>
<td>

@@ -250,6 +256,7 @@ Defines a job with the ID `check_all_english_links`, and the name `Check all lin
```yaml copy
if: github.repository == 'github/docs-internal'
```

</td>
<td>

@@ -262,6 +269,7 @@ Only run the `check_all_english_links` job if the repository is named `docs-inte
```yaml copy
runs-on: ubuntu-latest
```

</td>
<td>

@@ -278,6 +286,7 @@ Configures the job to run on an Ubuntu Linux runner. This means that the job wil
REPORT_LABEL: broken link report
REPORT_REPOSITORY: github/docs-content
```

</td>
<td>

@@ -290,6 +299,7 @@ Creates custom environment variables, and redefines the built-in `GITHUB_TOKEN`
```yaml copy
steps:
```

</td>
<td>

@@ -303,6 +313,7 @@ Groups together all the steps that will run as part of the `check_all_english_li
- name: Check out repo's default branch
uses: {% data reusables.actions.action-checkout %}
```

</td>
<td>

@@ -319,6 +330,7 @@ The `uses` keyword tells the job to retrieve the action named `actions/checkout`
node-version: 16.8.x
cache: npm
```

</td>
<td>

@@ -334,6 +346,7 @@ This step uses the `actions/setup-node` action to install the specified version
- name: Run the "npm run build" command
run: npm run build
```

</td>
<td>

@@ -348,6 +361,7 @@ The `run` keyword tells the job to execute a command on the runner. In this case
run: |
script/check-english-links.js > broken_links.md
```

</td>
<td>

@@ -367,6 +381,7 @@ This `run` command executes a script that is stored in the repository at `script
run: echo "::set-output name=title::$(head -1 broken_links.md)"
{%- endif %}
```

</td>
<td>

@@ -389,6 +404,7 @@ If the `check-english-links.js` script detects broken links and returns a non-ze
repository: {% raw %}${{ env.REPORT_REPOSITORY }}{% endraw %}
labels: {% raw %}${{ env.REPORT_LABEL }}{% endraw %}
```

</td>
<td>

@@ -417,6 +433,7 @@ Uses the `peter-evans/create-issue-from-file` action to create a new {% data var

gh issue comment {% raw %}${{ env.NEW_REPORT_URL }}{% endraw %} --body "⬅️ [Previous report]($previous_report_url)"
```

</td>
<td>

@@ -437,6 +454,7 @@ Uses [`gh issue list`](https://cli.github.com/manual/gh_issue_list) to locate th
fi
done
```

</td>
<td>

@@ -458,6 +476,7 @@ If an issue from a previous run is open and assigned to someone, then use [`gh i
fi
done
```

</td>
<td>

@@ -92,6 +92,7 @@ You can manage the runner service in the Windows **Services** application, or yo
```shell
./svc.sh install
```

{% endmac %}

## Starting the service
@@ -99,19 +100,25 @@ You can manage the runner service in the Windows **Services** application, or yo
Start the service with the following command:

{% linux %}

```shell
sudo ./svc.sh start
```

{% endlinux %}
{% windows %}

```shell
Start-Service "{{ service_win_name }}"
```

{% endwindows %}
{% mac %}

```shell
./svc.sh start
```

{% endmac %}

## Checking the status of the service
@@ -119,19 +126,25 @@ Start-Service "{{ service_win_name }}"
Check the status of the service with the following command:

{% linux %}

```shell
sudo ./svc.sh status
```

{% endlinux %}
{% windows %}

```shell
Get-Service "{{ service_win_name }}"
```

{% endwindows %}
{% mac %}

```shell
./svc.sh status
```

{% endmac %}

For more information on viewing the status of your self-hosted runner, see "[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/monitoring-and-troubleshooting-self-hosted-runners)."
@@ -141,19 +154,25 @@ Get-Service "{{ service_win_name }}"
Stop the service with the following command:

{% linux %}

```shell
sudo ./svc.sh stop
```

{% endlinux %}
{% windows %}

```shell
Stop-Service "{{ service_win_name }}"
```

{% endwindows %}
{% mac %}

```shell
./svc.sh stop
```

{% endmac %}

## Uninstalling the service
@@ -162,19 +181,25 @@ Stop-Service "{{ service_win_name }}"
1. Uninstall the service with the following command:

{% linux %}

```shell
sudo ./svc.sh uninstall
```

{% endlinux %}
{% windows %}

```shell
Remove-Service "{{ service_win_name }}"
```

{% endwindows %}
{% mac %}

```shell
./svc.sh uninstall
```

{% endmac %}

{% linux %}

@@ -114,6 +114,7 @@ You can print the contents of contexts to the log for debugging. The [`toJSON` f
{% data reusables.actions.github-context-warning %}

{% raw %}

```yaml copy
name: Context testing
on: push
@@ -147,6 +148,7 @@ jobs:
MATRIX_CONTEXT: ${{ toJson(matrix) }}
run: echo '$MATRIX_CONTEXT'
```

{% endraw %}

## `github` context
@@ -312,6 +314,7 @@ This example workflow shows how the `env` context can be configured at the workf
{% data reusables.repositories.actions-env-var-note %}

{% raw %}

```yaml copy
name: Hi Mascot
on: push
@@ -334,6 +337,7 @@ jobs:
steps:
- run: echo 'Hi ${{ env.mascot }}' # Hi Tux
```

{% endraw %}

{% ifversion actions-configuration-variables %}
@@ -458,6 +462,7 @@ This example `jobs` context contains the result and outputs of a job from a reus
This example reusable workflow uses the `jobs` context to set outputs for the reusable workflow. Note how the outputs flow up from the steps, to the job, then to the `workflow_call` trigger. For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows#using-outputs-from-a-reusable-workflow)."

{% raw %}

```yaml copy
name: Reusable workflow

@@ -494,6 +499,7 @@ jobs:
run: echo "::set-output name=secondword::world"
{%- endif %}{% raw %}
```

{% endraw %}

## `steps` context
@@ -813,6 +819,7 @@ jobs:
- uses: {% data reusables.actions.action-checkout %}
- run: ./debug
```

## `inputs` context

The `inputs` context contains input properties passed to an action{% ifversion actions-unified-inputs %},{% else %} or{% endif %} to a reusable workflow{% ifversion actions-unified-inputs %}, or to a manually triggered workflow{% endif %}. {% ifversion actions-unified-inputs %}For reusable workflows, the{% else %}The{% endif %} input names and types are defined in the [`workflow_call` event configuration](/actions/using-workflows/events-that-trigger-workflows#workflow-reuse-events) of a reusable workflow, and the input values are passed from [`jobs.<job_id>.with`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idwith) in an external workflow that calls the reusable workflow. {% ifversion actions-unified-inputs %}For manually triggered workflows, the inputs are defined in the [`workflow_dispatch` event configuration](/actions/using-workflows/events-that-trigger-workflows#workflow_dispatch) of a workflow.{% endif %}
@@ -843,6 +850,7 @@ The following example contents of the `inputs` context is from a workflow that h
This example reusable workflow uses the `inputs` context to get the values of the `build_id`, `deploy_target`, and `perform_deploy` inputs that were passed to the reusable workflow from the caller workflow.

{% raw %}

```yaml copy
name: Reusable deploy workflow
on:
@@ -866,6 +874,7 @@ jobs:
- name: Deploy build to target
run: deploy --build ${{ inputs.build_id }} --target ${{ inputs.deploy_target }}
```

{% endraw %}

{% ifversion actions-unified-inputs %}
@@ -874,6 +883,7 @@ jobs:
This example workflow triggered by a `workflow_dispatch` event uses the `inputs` context to get the values of the `build_id`, `deploy_target`, and `perform_deploy` inputs that were passed to the workflow.

{% raw %}

```yaml copy
on:
workflow_dispatch:
@@ -896,5 +906,6 @@ jobs:
- name: Deploy build to target
run: deploy --build ${{ inputs.build_id }} --target ${{ inputs.deploy_target }}
```

{% endraw %}
{% endif %}

@@ -38,10 +38,12 @@ steps:
### Example setting an environment variable

{% raw %}

```yaml
env:
MY_ENV_VAR: ${{ <expression> }}
```

{% endraw %}

## Literals
@@ -184,9 +186,11 @@ Replaces values in the `string`, with the variable `replaceValueN`. Variables in
#### Example of `format`

{% raw %}

```js
format('Hello {0} {1} {2}', 'Mona', 'the', 'Octocat')
```

{% endraw %}

Returns 'Hello Mona the Octocat'.
@@ -194,9 +198,11 @@ Returns 'Hello Mona the Octocat'.
#### Example escaping braces

{% raw %}

```js
format('{{Hello {0} {1} {2}!}}', 'Mona', 'the', 'Octocat')
```

{% endraw %}

Returns '{Hello Mona the Octocat!}'.
@@ -232,6 +238,7 @@ Returns a JSON object or JSON data type for `value`. You can use this function t
This workflow sets a JSON matrix in one job, and passes it to the next job using an output and `fromJSON`.

{% raw %}

```yaml
name: build
on: push
@@ -255,6 +262,7 @@ jobs:
steps:
- run: build
```

{% endraw %}

#### Example returning a JSON data type
@@ -262,6 +270,7 @@ jobs:
This workflow uses `fromJSON` to convert environment variables from a string to a Boolean or integer.

{% raw %}

```yaml
name: print
on: push
@@ -276,6 +285,7 @@ jobs:
timeout-minutes: ${{ fromJSON(env.time) }}
run: echo ...
```

{% endraw %}

### hashFiles

@@ -52,6 +52,7 @@ To set a custom environment variable{% ifversion actions-configuration-variables
- A specific step within a job, by using [`jobs.<job_id>.steps[*].env`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsenv).

{% raw %}

```yaml copy
name: Greeting on variable day

@@ -72,6 +73,7 @@ jobs:
env:
First_Name: Mona
```

{% endraw %}

You can access `env` variable values using runner environment variables or using contexts. The example above shows three custom variables being used as environment variables in an `echo` command: `$DAY_OF_WEEK`, `$Greeting`, and `$First_Name`. The values for these variables are set, and scoped, at the workflow, job, and step level respectively. For more information on accessing variable values using contexts, see "[Using contexts to access variable values](#using-contexts-to-access-variable-values)."
@@ -222,6 +224,7 @@ In addition to runner environment variables, {% data variables.product.prodname_
Runner environment variables are always interpolated on the runner machine. However, parts of a workflow are processed by {% data variables.product.prodname_actions %} and are not sent to the runner. You cannot use environment variables in these parts of a workflow file. Instead, you can use contexts. For example, an `if` conditional, which determines whether a job or step is sent to the runner, is always processed by {% data variables.product.prodname_actions %}. You can use a context in an `if` conditional statement to access the value of a variable.

{% raw %}

```yaml copy
env:
DAY_OF_WEEK: Monday
@@ -238,6 +241,7 @@ jobs:
env:
First_Name: Mona
```

{% endraw %}

In this modification of the earlier example, we've introduced an `if` conditional. The workflow step is now only run if `DAY_OF_WEEK` is set to "Monday". We access this value from the `if` conditional statement by using the [`env` context](/actions/learn-github-actions/contexts#env-context).
@@ -343,6 +347,7 @@ We strongly recommend that actions use variables to access the filesystem rather
You can write a single workflow file that can be used for different operating systems by using the `RUNNER_OS` default environment variable and the corresponding context property <span style="white-space: nowrap;">{% raw %}`${{ runner.os }}`{% endraw %}</span>. For example, the following workflow could be run successfully if you changed the operating system from `macos-latest` to `windows-latest` without having to alter the syntax of the environment variables, which differs depending on the shell being used by the runner.

{% raw %}

```yaml copy
jobs:
if-Windows-else:
@@ -355,6 +360,7 @@ jobs:
if: runner.os != 'Windows'
run: echo "The operating system on the runner is not Windows, it's $RUNNER_OS."
```

{% endraw %}

In this example, the two `if` statements check the `os` property of the `runner` context to determine the operating system of the runner. `if` conditionals are processed by {% data variables.product.prodname_actions %}, and only steps where the check resolves as `true` are sent to the runner. Here one of the checks will always be `true` and the other `false`, so only one of these steps is sent to the runner. Once the job is sent to the runner, the step is executed and the environment variable in the `echo` command is interpolated using the appropriate syntax (`$env:NAME` for PowerShell on Windows, and `$NAME` for bash and sh on Linux and MacOS). In this example, the statement `runs-on: macos-latest` means that the second step will be run.

@@ -30,6 +30,7 @@ In the tutorial, you will first make a workflow file that uses the [`peter-evans
3. Copy the following YAML contents into your workflow file.

```yaml copy

{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}

{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

@@ -31,6 +31,7 @@ In the tutorial, you will first make a workflow file that uses the [`alex-page/g
4. Copy the following YAML contents into your workflow file.

```yaml copy

{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}

{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

@@ -30,6 +30,7 @@ In the tutorial, you will first make a workflow file that uses the [`imjohnbo/is
3. Copy the following YAML contents into your workflow file.

```yaml copy

{% indented_data_reference reusables.actions.actions-not-certified-by-github-comment spaces=4 %}

{% indented_data_reference reusables.actions.actions-use-sha-pinning-comment spaces=4 %}

@@ -107,6 +107,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Azure DevOps organization name: :organization
✔ Azure DevOps project name: :project
Environment variables successfully updated.

```
3. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to the {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

@@ -105,6 +105,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Base url of the Bamboo instance: https://bamboo.example.com
Environment variables successfully updated.
```

1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

```shell
@@ -182,6 +183,7 @@ You can use the `dry-run` command to convert a Bamboo pipeline to an equivalent
### Running a dry-run migration for a build plan

To perform a dry run of migrating your Bamboo build plan to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `:my_plan_slug` with the plan's project and plan key in the format `<projectKey>-<planKey>` (for example: `PAN-SCRIP`).

```shell
gh actions-importer dry-run bamboo build --plan-slug :my_plan_slug --output-dir tmp/dry-run
```

@@ -86,6 +86,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ CircleCI organization name: mycircleciorganization
Environment variables successfully updated.
```

1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

```shell

@@ -85,6 +85,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Private token for GitLab: ***************
✔ Base url of the GitLab instance: http://localhost
Environment variables successfully updated.

```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

@@ -81,6 +81,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Username of Jenkins user: admin
✔ Base url of the Jenkins instance: https://localhost
Environment variables successfully updated.

```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

@@ -88,6 +88,7 @@ The `configure` CLI command is used to set required credentials and options for
✔ Base url of the Travis CI instance: https://travis-ci.com
✔ Travis CI organization name: actions-importer-labs
Environment variables successfully updated.

```
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

@@ -101,6 +101,7 @@ The supported values for `--features` are:
- `ghes-<number>`, where `<number>` is the version of {% data variables.product.prodname_ghe_server %}, `3.0` or later. For example, `ghes-3.3`.

You can view the list of feature flags available in {% data variables.product.prodname_actions_importer %} by running the `list-features` command. For example:

```shell copy
gh actions-importer list-features
```

@@ -59,6 +59,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for script steps

{% raw %}

```yaml
jobs:
- job: scripts
@@ -72,11 +73,13 @@ jobs:
inputs:
script: Write-Host "This step runs in PowerShell"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for script steps

{% raw %}

```yaml
jobs:
scripts:
@@ -90,6 +93,7 @@ jobs:
- run: Write-Host "This step runs in PowerShell"
shell: powershell
```

{% endraw %}

## Differences in script error handling
@@ -109,6 +113,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax using CMD by default

{% raw %}

```yaml
jobs:
- job: run_command
@@ -117,11 +122,13 @@ jobs:
steps:
- script: echo "This step runs in CMD on Windows by default"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for specifying CMD

{% raw %}

```yaml
jobs:
run_command:
@@ -131,6 +138,7 @@ jobs:
- run: echo "This step runs in CMD on Windows explicitly"
shell: cmd
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#using-a-specific-shell)."
@@ -146,6 +154,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for conditional expressions

{% raw %}

```yaml
jobs:
- job: conditional
@@ -155,11 +164,13 @@ jobs:
- script: echo "This step runs with str equals 'ABC' and num equals 123"
condition: and(eq(variables.str, 'ABC'), eq(variables.num, 123))
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for conditional expressions

{% raw %}

```yaml
jobs:
conditional:
@@ -168,6 +179,7 @@ jobs:
- run: echo "This step runs with str equals 'ABC' and num equals 123"
if: ${{ env.str == 'ABC' && env.num == 123 }}
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/learn-github-actions/expressions)."
@@ -181,6 +193,7 @@ Below is an example of the syntax for each system. The workflows start a first j
### Azure Pipelines syntax for dependencies between jobs

{% raw %}

```yaml
jobs:
- job: initial
@@ -207,11 +220,13 @@ jobs:
steps:
- script: echo "This job will run after fanout1 and fanout2 have finished."
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for dependencies between jobs

{% raw %}

```yaml
jobs:
initial:
@@ -234,6 +249,7 @@ jobs:
steps:
- run: echo "This job will run after fanout1 and fanout2 have finished."
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds)."
@@ -247,6 +263,7 @@ Below is an example of the syntax for each system.
### Azure Pipelines syntax for tasks

{% raw %}

```yaml
jobs:
- job: run_python
@@ -259,6 +276,7 @@ jobs:
architecture: 'x64'
- script: python script.py
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for actions

@@ -87,12 +87,14 @@ Below is an example of the syntax for each system.
### CircleCI syntax for caching

{% raw %}

```yaml
- restore_cache:
keys:
- v1-npm-deps-{{ checksum "package-lock.json" }}
- v1-npm-deps-
```

{% endraw %}

### GitHub Actions syntax for caching
@@ -123,6 +125,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
### CircleCI syntax for persisting data between jobs

{% raw %}

```yaml
- persist_to_workspace:
root: workspace
@@ -134,11 +137,13 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
- attach_workspace:
at: /tmp/workspace
```

{% endraw %}

### GitHub Actions syntax for persisting data between jobs

{% raw %}

```yaml
- name: Upload math result for job 1
uses: {% data reusables.actions.action-upload-artifact %}
@@ -153,6 +158,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
with:
name: homework
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts)."
@@ -168,6 +174,7 @@ Below is an example in CircleCI and {% data variables.product.prodname_actions %
### CircleCI syntax for using databases and service containers

{% raw %}

```yaml
---
version: 2.1
@@ -218,11 +225,13 @@ workflows:
- attach_workspace:
at: /tmp/workspace
```

{% endraw %}

### GitHub Actions syntax for using databases and service containers

{% raw %}

```yaml
name: Containers

@@ -267,6 +276,7 @@ jobs:
- name: Run tests
run: bundle exec rake
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-containerized-services/about-service-containers)."
@@ -278,6 +288,7 @@ Below is a real-world example. The left shows the actual CircleCI _config.yml_ f
### Complete example for CircleCI

{% raw %}

```yaml
---
version: 2.1
@@ -359,6 +370,7 @@ workflows:
- ruby-26
- ruby-25
```

{% endraw %}

### Complete example for GitHub Actions

@@ -46,6 +46,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for jobs

{% raw %}

```yaml
job1:
variables:
@@ -53,11 +54,13 @@ job1:
script:
- echo "Run your script here"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for jobs

{% raw %}

```yaml
jobs:
job1:
@@ -65,6 +68,7 @@ jobs:
- uses: {% data reusables.actions.action-checkout %}
- run: echo "Run your script here"
```

{% endraw %}

## Runners
@@ -76,6 +80,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for runners

{% raw %}

```yaml
windows_job:
tags:
@@ -89,11 +94,13 @@ linux_job:
script:
- echo "Hello, $USER!"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for runners

{% raw %}

```yaml
windows_job:
runs-on: windows-latest
@@ -105,6 +112,7 @@ linux_job:
steps:
- run: echo "Hello, $USER!"
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on)."
@@ -118,20 +126,24 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for Docker images

{% raw %}

```yaml
my_job:
image: node:10.16-jessie
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for Docker images

{% raw %}

```yaml
jobs:
my_job:
container: node:10.16-jessie
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainer)."
@@ -145,6 +157,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for conditions and expressions

{% raw %}

```yaml
deploy_prod:
stage: deploy
@@ -153,11 +166,13 @@ deploy_prod:
rules:
- if: '$CI_COMMIT_BRANCH == "master"'
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for conditions and expressions

{% raw %}

```yaml
jobs:
deploy_prod:
@@ -166,6 +181,7 @@ jobs:
steps:
- run: echo "Deploy to production server"
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/learn-github-actions/expressions)."
@@ -179,6 +195,7 @@ Below is an example of the syntax for each system. The workflows start with two
### GitLab CI/CD syntax for dependencies between jobs

{% raw %}

```yaml
stages:
- build
@@ -205,11 +222,13 @@ deploy_ab:
script:
- echo "This job will run after test_ab is complete"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for dependencies between jobs

{% raw %}

```yaml
jobs:
build_a:
@@ -234,6 +253,7 @@ jobs:
steps:
- run: echo "This job will run after test_ab is complete"
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds)."
@@ -261,6 +281,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for caching

{% raw %}

```yaml
image: node:latest

@@ -276,6 +297,7 @@ test_async:
script:
- node ./specs/start.js ./specs/async.spec.js
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for caching
@@ -308,17 +330,20 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for artifacts

{% raw %}

```yaml
script:
artifacts:
paths:
- math-homework.txt
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for artifacts

{% raw %}

```yaml
- name: Upload math result for job 1
uses: {% data reusables.actions.action-upload-artifact %}
@@ -326,6 +351,7 @@ artifacts:
name: homework
path: math-homework.txt
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts)."
@@ -341,6 +367,7 @@ Below is an example of the syntax for each system.
### GitLab CI/CD syntax for databases and service containers

{% raw %}

```yaml
container-job:
variables:
@@ -363,11 +390,13 @@ container-job:
tags:
- docker
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for databases and service containers

{% raw %}

```yaml
jobs:
container-job:
@@ -400,6 +429,7 @@ jobs:
# The default PostgreSQL port
POSTGRES_PORT: 5432
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-containerized-services/about-service-containers)."

@@ -69,17 +69,20 @@ Below is an example comparing the syntax for each system.
#### Travis CI syntax for a matrix

{% raw %}

```yaml
matrix:
include:
- rvm: 2.5
- rvm: 2.6.3
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for a matrix

{% raw %}

```yaml
jobs:
build:
@@ -87,6 +90,7 @@ jobs:
matrix:
ruby: [2.5, 2.6.3]
```

{% endraw %}

### Targeting specific branches
@@ -98,17 +102,20 @@ Below is an example of the syntax for each system.
#### Travis CI syntax for targeting specific branches

{% raw %}

```yaml
branches:
only:
- main
- 'mona/octocat'
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for targeting specific branches

{% raw %}

```yaml
on:
push:
@@ -116,6 +123,7 @@ on:
- main
- 'mona/octocat'
```

{% endraw %}

### Checking out submodules
@@ -127,20 +135,24 @@ Below is an example of the syntax for each system.
#### Travis CI syntax for checking out submodules

{% raw %}

```yaml
git:
submodules: false
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for checking out submodules

{% raw %}

```yaml
- uses: {% data reusables.actions.action-checkout %}
with:
submodules: false
```

{% endraw %}

### Using environment variables in a matrix
@@ -232,6 +244,7 @@ Below is an example of the syntax for each system.
### Travis CI syntax for phases and steps

{% raw %}

```yaml
language: python
python:
@@ -240,11 +253,13 @@ python:
script:
- python script.py
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for steps and actions

{% raw %}

```yaml
jobs:
run_python:
@@ -256,6 +271,7 @@ jobs:
architecture: 'x64'
- run: python script.py
```

{% endraw %}

## Caching dependencies
@@ -269,10 +285,12 @@ These examples demonstrate the cache syntax for each system.
### Travis CI syntax for caching

{% raw %}

```yaml
language: node_js
cache: npm
```

{% endraw %}

### GitHub Actions syntax for caching
@@ -321,6 +339,7 @@ jobs:
##### Travis CI for building with Node.js

{% raw %}

```yaml
install:
- npm install
@@ -328,6 +347,7 @@ script:
- npm run build
- npm test
```

{% endraw %}

##### {% data variables.product.prodname_actions %} workflow for building with Node.js

@@ -50,6 +50,7 @@ Each time you create a new release, you can trigger a workflow to publish your p
You can define a new Maven repository in the publishing block of your _build.gradle_ file that points to your package repository. For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _build.gradle_ could specify a repository with the name `"OSSRH"`.

{% raw %}

```groovy copy
plugins {
...
@@ -71,6 +72,7 @@ publishing {
}
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by running the `gradle publish` command. In the deploy step, you’ll need to set environment variables for the username and password or token that you use to authenticate to the Maven repository. For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."
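
For illustration, a deploy step might look like the following sketch; the secret names `OSSRH_USERNAME` and `OSSRH_TOKEN` and the step name are assumptions rather than required values.

{% raw %}

```yaml
# Hypothetical deploy step; the secret names are placeholders.
- name: Publish package to the Maven Central Repository
  run: gradle publish
  env:
    MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
    MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
```

{% endraw %}
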
@@ -122,6 +124,7 @@ You can define a new Maven repository in the publishing block of your _build.gra
For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _build.gradle_ would look similar to the below example.

{% raw %}

```groovy copy
plugins {
...
@@ -143,6 +146,7 @@ publishing {
}
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by running the `gradle publish` command.
@@ -195,6 +199,7 @@ For example, if you deploy to the Central Repository through the OSSRH hosting p
If your organization is named "octocat" and your repository is named "hello-world", then the configuration in _build.gradle_ would look similar to the below example.

{% raw %}

```groovy copy
plugins {
...
@@ -224,6 +229,7 @@ publishing {
}
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to both the Maven Central Repository and {% data variables.product.prodname_registry %} by running the `gradle publish` command.

@@ -54,6 +54,7 @@ In this workflow, you can use the `setup-java` action. This action installs the
For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _pom.xml_ could specify a distribution management repository with the `id` of `ossrh`.

{% raw %}

```xml copy
<project ...>
...
@@ -66,6 +67,7 @@ For example, if you were deploying to the Maven Central Repository through the O
</distributionManagement>
</project>
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by specifying the repository management `id` to the `setup-java` action. You’ll also need to provide environment variables that contain the username and password to authenticate to the repository.
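
As a sketch only, the relevant steps could look like this; the Java version, distribution, and secret names are assumptions.

{% raw %}

```yaml
# Hypothetical steps; the inputs shown configure the generated settings.xml
# with the `ossrh` server ID and placeholder credential variables.
- name: Set up Java
  uses: actions/setup-java@v3
  with:
    java-version: '11'
    distribution: 'temurin'
    server-id: ossrh
    server-username: MAVEN_USERNAME
    server-password: MAVEN_PASSWORD
- name: Publish package
  run: mvn --batch-mode deploy
  env:
    MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
    MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
```

{% endraw %}
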
@@ -118,6 +120,7 @@ For a Maven-based project, you can make use of these settings by creating a dist
For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _pom.xml_ would look similar to the below example.

{% raw %}

```xml copy
<project ...>
...
@@ -130,6 +133,7 @@ For example, if your organization is named "octocat" and your repository is name
</distributionManagement>
</project>
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by making use of the automatically generated _settings.xml_.

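As a minimal sketch (the step name is an assumption), such a publish step only needs the workflow's `GITHUB_TOKEN`:

{% raw %}

```yaml
# Hypothetical publish step; the generated settings.xml supplies the registry credentials.
- name: Publish to GitHub Packages
  run: mvn --batch-mode deploy
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

{% endraw %}
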
@@ -51,6 +51,7 @@ The following example shows you how {% data variables.product.prodname_actions %
ls {% raw %}${{ github.workspace }}{% endraw %}
- run: echo "🍏 This job's status is {% raw %}${{ job.status }}{% endraw %}."
```

1. Scroll to the bottom of the page and select **Create a new branch for this commit and start a pull request**. Then, to create a pull request, click **Propose new file**.

![Screenshot of the "Commit new file" area of the page, with the "Create a new branch" option selected.](/assets/images/help/repository/actions-quickstart-commit-new-file.png)

@@ -232,6 +232,7 @@ You can check which access policies are being applied to a secret in your organi
To provide an action with a secret as an input or environment variable, you can use the `secrets` context to access secrets you've created in your repository. For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts)" and "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions)."

{% raw %}

```yaml
steps:
- name: Hello world action
@@ -240,6 +241,7 @@ steps:
env: # Or as an environment variable
super_secret: ${{ secrets.SuperSecret }}
```

{% endraw %}

Secrets cannot be directly referenced in `if:` conditionals. Instead, consider setting secrets as job-level environment variables, then referencing the environment variables to conditionally run steps in the job. For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts#context-availability)" and [`jobs.<job_id>.steps[*].if`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsif).
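
A minimal sketch of that pattern, assuming a secret named `MY_SECRET` that is not defined in this document:

{% raw %}

```yaml
jobs:
  check-secret:
    runs-on: ubuntu-latest
    env:
      MY_SECRET: ${{ secrets.MY_SECRET }}
    steps:
      # This step runs only when the job-level environment variable is non-empty.
      - name: Run only when the secret is set
        if: ${{ env.MY_SECRET != '' }}
        run: echo "The secret is available to this job."
```

{% endraw %}
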
@@ -253,6 +255,7 @@ If you must pass secrets within a command line, then enclose them within the pro
### Example using Bash

{% raw %}

```yaml
steps:
- shell: bash
@@ -261,11 +264,13 @@ steps:
run: |
example-command "$SUPER_SECRET"
```

{% endraw %}

### Example using PowerShell

{% raw %}

```yaml
steps:
- shell: pwsh
@@ -274,11 +279,13 @@ steps:
run: |
example-command "$env:SUPER_SECRET"
```

{% endraw %}

### Example using Cmd.exe

{% raw %}

```yaml
steps:
- shell: cmd
@@ -287,6 +294,7 @@ steps:
run: |
example-command "%SUPER_SECRET%"
```

{% endraw %}

## Limits for secrets

@@ -75,6 +75,7 @@ The following sections explain how you can help mitigate the risk of script inje
A script injection attack can occur directly within a workflow's inline script. In the following example, an action uses an expression to test the validity of a pull request title, but also adds the risk of script injection:

{% raw %}

```
- name: Check PR title
run: |
@@ -87,6 +88,7 @@ A script injection attack can occur directly within a workflow's inline script.
exit 1
fi
```

{% endraw %}

This example is vulnerable to script injection because the `run` command executes within a temporary shell script on the runner. Before the shell script is run, the expressions inside {% raw %}`${{ }}`{% endraw %} are evaluated and then substituted with the resulting values, which can make it vulnerable to shell command injection.
@@ -113,11 +115,13 @@ There are a number of different approaches available to help you mitigate the ri
The recommended approach is to create an action that processes the context value as an argument. This approach is not vulnerable to the injection attack, as the context value is not used to generate a shell script, but is instead passed to the action as an argument:

{% raw %}

```
uses: fakeaction/checktitle@v3
with:
title: ${{ github.event.pull_request.title }}
```

{% endraw %}

### Using an intermediate environment variable
@@ -127,6 +131,7 @@ For inline scripts, the preferred approach to handling untrusted input is to set
The following example uses Bash to process the `github.event.pull_request.title` value as an environment variable:

{% raw %}

```
- name: Check PR title
env:
@@ -140,6 +145,7 @@ The following example uses Bash to process the `github.event.pull_request.title`
exit 1
fi
```

{% endraw %}

In this example, the attempted script injection is unsuccessful, which is reflected by the following lines in the log:
@@ -244,11 +250,13 @@ Workflows triggered using the `pull_request` event have read-only permissions an
- For a custom action, the risk can vary depending on how a program is using the secret it obtained from the argument:

{% raw %}

```
uses: fakeaction/publish@v3
with:
key: ${{ secrets.PUBLISH_KEY }}
```

{% endraw %}

Although {% data variables.product.prodname_actions %} scrubs secrets from memory that are not referenced in the workflow (or an included action), the `GITHUB_TOKEN` and any referenced secrets can be harvested by a determined attacker.

@@ -51,6 +51,7 @@ You can use the `services` keyword to create service containers that are part of
This example creates a service called `redis` in a job called `container-job`. The Docker host in this example is the `node:16-bullseye` container.

{% raw %}

```yaml copy
name: Redis container example
on: push
@@ -70,6 +71,7 @@ jobs:
# Docker Hub image
image: redis
```

{% endraw %}

## Mapping Docker host and service container ports
@@ -93,6 +95,7 @@ When you specify the Docker host port but not the container port, the container
This example maps the service container `redis` port 6379 to the Docker host port 6379.

{% raw %}

```yaml copy
name: Redis Service Example
on: push
@@ -114,6 +117,7 @@ jobs:
# Opens tcp port 6379 on the host and service container
- 6379:6379
```

{% endraw %}

## Further reading

@@ -73,6 +73,7 @@ jobs:
The following example demonstrates how to use [Chocolatey](https://community.chocolatey.org/packages) to install the {% data variables.product.prodname_dotcom %} CLI as part of a job.

{% raw %}

```yaml
name: Build on Windows
on: push
@@ -83,4 +84,5 @@ jobs:
- run: choco install gh
- run: gh version
```

{% endraw %}

@@ -62,6 +62,7 @@ If your workflows use sensitive data, such as passwords or certificates, you can
This example job demonstrates how to reference an existing secret as an environment variable, and send it as a parameter to an example command.

{% raw %}

```yaml
jobs:
example-job:
@@ -73,6 +74,7 @@ jobs:
run: |
example-command "$super_secret"
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

@@ -80,17 +80,20 @@ You cannot change the contents of an existing cache. Instead, you can create a n
~/.gradle/caches
~/.gradle/wrapper
```

- You can specify either directories or single files, and glob patterns are supported.
- You can specify absolute paths, or paths relative to the workspace directory.
- `restore-keys`: **Optional** A string containing alternative restore keys, with each restore key placed on a new line. If no cache hit occurs for `key`, these restore keys are used sequentially in the order provided to find and restore a cache. For example:

{% raw %}

```yaml
restore-keys: |
npm-feature-${{ hashFiles('package-lock.json') }}
npm-feature-
npm-
```

{% endraw %}

- `enableCrossOsArchive`: **Optional** A boolean value that when enabled, allows Windows runners to save or restore caches independent of the operating system the cache was created on. If this parameter is not set, it defaults to `false`. For more information, see [Cross OS cache](https://github.com/actions/cache/blob/main/tips-and-workarounds.md#cross-os-cache) in the Actions Cache documentation.
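
For illustration, a cache step using this parameter might look like the following sketch; the path and key shown are assumptions.

{% raw %}

```yaml
# Hypothetical cache step with cross-OS archives enabled.
- uses: actions/cache@v3
  with:
    path: path/to/dependencies
    key: cross-os-cache-${{ hashFiles('**/package-lock.json') }}
    enableCrossOsArchive: true
```

{% endraw %}
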
@@ -165,9 +168,11 @@ Using expressions to create a `key` allows you to automatically create a new cac
For example, you can create a `key` using an expression that calculates the hash of an npm `package-lock.json` file. So, when the dependencies that make up the `package-lock.json` file change, the cache key changes and a new cache is automatically created.

{% raw %}

```yaml
npm-${{ hashFiles('package-lock.json') }}
```

{% endraw %}

{% data variables.product.prodname_dotcom %} evaluates the expression `hash "package-lock.json"` to derive the final `key`.
@@ -200,23 +205,27 @@ Cache version is a way to stamp a cache with metadata of the `path` and the comp
### Example using multiple restore keys

{% raw %}

```yaml
restore-keys: |
npm-feature-${{ hashFiles('package-lock.json') }}
npm-feature-
npm-
```

{% endraw %}

The runner evaluates the expressions, which resolve to these `restore-keys`:

{% raw %}

```yaml
restore-keys: |
npm-feature-d5ea0750
npm-feature-
npm-
```

{% endraw %}

The restore key `npm-feature-` matches any key that starts with the string `npm-feature-`. For example, both of the keys `npm-feature-fd3052de` and `npm-feature-a9b253ff` match the restore key. The cache with the most recent creation date would be used. The keys in this example are searched in the following order:

@@ -68,7 +68,9 @@ This procedure demonstrates how to create a starter workflow and metadata file.
- name: Run a one-line script
run: echo Hello from Octo Organization
```

4. Create a metadata file inside the `workflow-templates` directory. The metadata file must have the same name as the workflow file, but instead of the `.yml` extension, it must be appended with `.properties.json`. For example, this file named `octo-organization-ci.properties.json` contains the metadata for a workflow file named `octo-organization-ci.yml`:

```json copy
{
"name": "Octo Organization Workflow",
@@ -84,6 +86,7 @@ This procedure demonstrates how to create a starter workflow and metadata file.
]
}
```

- `name` - **Required.** The name of the workflow. This is displayed in the list of available workflows.
- `description` - **Required.** The description of the workflow. This is displayed in the list of available workflows.
- `iconName` - **Optional.** Specifies an icon for the workflow that is displayed in the list of workflows. `iconName` can be one of the following types:

@@ -1046,11 +1046,13 @@ on:
**Note**: When pushing multi-architecture container images, this event occurs once per manifest, so you might observe your workflow triggering multiple times. To mitigate this, and only run your workflow job for the event that contains the actual image tag information, use a conditional:

{% raw %}

```yaml
jobs:
job_name:
if: ${{ github.event.registry_package.package_version.container_metadata.tag.name != '' }}
```

{% endraw %}

{% endnote %}

@@ -110,6 +110,7 @@ You can define inputs and secrets, which can be passed from the caller workflow

1. In the reusable workflow, use the `inputs` and `secrets` keywords to define inputs or secrets that will be passed from a caller workflow.
{% raw %}

```yaml
on:
workflow_call:
@@ -121,6 +122,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
envPAT:
required: true
```

{% endraw %}
For details of the syntax for defining inputs and secrets, see [`on.workflow_call.inputs`](/actions/using-workflows/workflow-syntax-for-github-actions#onworkflow_callinputs) and [`on.workflow_call.secrets`](/actions/using-workflows/workflow-syntax-for-github-actions#onworkflow_callsecrets).
{% ifversion actions-inherit-secrets-reusable-workflows %}
@@ -136,6 +138,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
{%- endif %}

{% raw %}

```yaml
jobs:
reusable_workflow_job:
@@ -147,6 +150,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
repo-token: ${{ secrets.envPAT }}
configuration-path: ${{ inputs.config-path }}
```

{% endraw %}
In the example above, `envPAT` is an environment secret that's been added to the `production` environment. This environment is therefore referenced within the job.

@@ -165,6 +169,7 @@ You can define inputs and secrets, which can be passed from the caller workflow
This reusable workflow file named `workflow-B.yml` (we'll refer to this later in the [example caller workflow](#example-caller-workflow)) takes an input string and a secret from the caller workflow and uses them in an action.

{% raw %}

```yaml copy
name: Reusable workflow example

@@ -187,6 +192,7 @@ jobs:
repo-token: ${{ secrets.token }}
configuration-path: ${{ inputs.config-path }}
```

{% endraw %}

## Calling a reusable workflow
@@ -217,6 +223,7 @@ A matrix strategy lets you use variables in a single job definition to automatic
This example job below calls a reusable workflow and references the matrix context by defining the variable `target` with the values `[dev, stage, prod]`. It will run three jobs, one for each value in the variable.

{% raw %}

```yaml copy
jobs:
ReuseableMatrixJobForDeployment:
@@ -227,6 +234,7 @@ jobs:
with:
target: ${{ matrix.target }}
```

{% endraw %}
{% endif %}

@@ -265,6 +273,7 @@ When you call a reusable workflow, you can only use the following keywords in th
This workflow file calls two workflow files. The second of these, `workflow-B.yml` (shown in the [example reusable workflow](#example-reusable-workflow)), is passed an input (`config-path`) and a secret (`token`).

{% raw %}

```yaml copy
name: Call a reusable workflow

@@ -287,6 +296,7 @@ jobs:
secrets:
token: ${{ secrets.GITHUB_TOKEN }}
```

{% endraw %}

{% ifversion nested-reusable-workflow %}
@@ -297,6 +307,7 @@ You can connect a maximum of four levels of workflows - that is, the top-level c
From within a reusable workflow you can call another reusable workflow.

{% raw %}

```yaml copy
name: Reusable workflow

@@ -307,6 +318,7 @@ jobs:
call-another-reusable:
uses: octo-org/example-repo/.github/workflows/another-reusable.yml@v1
```

{% endraw %}

### Passing secrets to nested workflows
@@ -316,6 +328,7 @@ You can use `jobs.<job_id>.secrets` in a calling workflow to pass named secrets
In the following example, workflow A passes all of its secrets to workflow B, by using the `inherit` keyword, but workflow B only passes one secret to workflow C. Any of the other secrets passed to workflow B are not available to workflow C.

{% raw %}

```yaml
jobs:
workflowA-calls-workflowB:
@@ -330,6 +343,7 @@ jobs:
secrets:
envPAT: ${{ secrets.envPAT }} # pass just this secret
```

{% endraw %}

### Access and permissions
@@ -351,6 +365,7 @@ That means if the last successful completing reusable workflow sets an empty str
The following reusable workflow has a single job containing two steps. In each of these steps we set a single word as the output: "hello" and "world." In the `outputs` section of the job, we map these step outputs to job outputs called: `output1` and `output2`. In the `on.workflow_call.outputs` section we then define two outputs for the workflow itself, one called `firstword` which we map to `output1`, and one called `secondword` which we map to `output2`.

{% raw %}

```yaml copy
name: Reusable workflow

@@ -387,11 +402,13 @@ jobs:
run: echo "::set-output name=secondword::world"
{%- endif %}{% raw %}
```

{% endraw %}

We can now use the outputs in the caller workflow, in the same way you would use the outputs from a job within the same workflow. We reference the outputs using the names defined at the workflow level in the reusable workflow: `firstword` and `secondword`. In this workflow, `job1` calls the reusable workflow and `job2` prints the outputs from the reusable workflow ("hello world") to standard output in the workflow log.

{% raw %}

```yaml copy
name: Call a reusable workflow and use its outputs

@@ -408,6 +425,7 @@ jobs:
steps:
- run: echo ${{ needs.job1.outputs.firstword }} ${{ needs.job1.outputs.secondword }}
```

{% endraw %}

For more information on using job outputs, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idoutputs)."

@@ -256,6 +256,7 @@ Creates a warning message and prints the message to the log. {% data reusables.a
```bash copy
echo "::warning file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```

{% endbash %}

{% powershell %}
@@ -524,6 +525,7 @@ jobs:
echo "::add-mask::$RETRIEVED_SECRET"
echo "We retrieved our masked secret: $RETRIEVED_SECRET"
```

{% endbash %}

{% powershell %}
@@ -563,6 +565,7 @@ jobs:
echo "::add-mask::$Retrieved_Secret"
echo "We retrieved our masked secret: $Retrieved_Secret"
```

{% endpowershell %}

## Stopping and starting workflow commands
@@ -603,6 +606,7 @@ jobs:
echo "::$stopMarker::"
echo '::warning:: This is a warning again, because stop-commands has been turned off.'
```

{% endbash %}

{% powershell %}
@@ -715,6 +719,7 @@ This example uses JavaScript to run the `save-state` command. The resulting envi
```javascript copy
console.log('::save-state name=processID::12345')
```

{% endif %}

The `STATE_processID` variable is then exclusively available to the cleanup script running under the `main` action. This example runs in `main` and uses JavaScript to display the value assigned to the `STATE_processID` environment variable:
@@ -888,6 +893,7 @@ Sets a step's output parameter. Note that the step will need an `id` to be defin
```bash copy
echo "{name}={value}" >> "$GITHUB_OUTPUT"
```

{% endbash %}

{% powershell %}
@@ -1087,6 +1093,7 @@ Prepends a directory to the system `PATH` variable and automatically makes it av
```bash copy
echo "{path}" >> $GITHUB_PATH
```

{% endbash %}

{% powershell %}

@@ -37,9 +37,11 @@ This value can include expressions and can reference the [`github`](/actions/lea
### Example of `run-name`

{% raw %}

```yaml
run-name: Deploy to ${{ inputs.deploy_target }} by @${{ github.actor }}
```

{% endraw %}
{% endif %}

@@ -88,6 +90,7 @@ If a caller workflow passes an input that is not specified in the called workflo
### Example of `on.workflow_call.inputs`

{% raw %}

```yaml
on:
workflow_call:
@@ -106,6 +109,7 @@ jobs:
- name: Print the input name to STDOUT
run: echo The username is ${{ inputs.username }}
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows)."
@@ -123,6 +127,7 @@ In the example below, two outputs are defined for this reusable workflow: `workf
### Example of `on.workflow_call.outputs`

{% raw %}

```yaml
on:
workflow_call:
@@ -135,6 +140,7 @@ on:
description: "The second job output"
value: ${{ jobs.my_job.outputs.job_output2 }}
```

{% endraw %}

For information on how to reference a job output, see [`jobs.<job_id>.outputs`](#jobsjob_idoutputs). For more information, see "[AUTOTITLE](/actions/using-workflows/reusing-workflows)."
@@ -156,6 +162,7 @@ If a caller workflow passes a secret that is not specified in the called workflo
### Example of `on.workflow_call.secrets`

{% raw %}

```yaml
on:
workflow_call:
@@ -181,6 +188,7 @@ jobs:
secrets:
token: ${{ secrets.access-token }}
```

{% endraw %}

## `on.workflow_call.secrets.<secret_id>`
@@ -326,6 +334,7 @@ You can run an unlimited number of steps as long as you are within the workflow
### Example of `jobs.<job_id>.steps`

{% raw %}

```yaml
name: Greeting from Mona

@@ -345,6 +354,7 @@ jobs:
run: |
echo $MY_VAR $FIRST_NAME $MIDDLE_NAME $LAST_NAME.
```

{% endraw %}

## `jobs.<job_id>.steps[*].id`
@@ -388,6 +398,7 @@ Secrets cannot be directly referenced in `if:` conditionals. Instead, consider s
If a secret has not been set, the return value of an expression referencing the secret (such as {% raw %}`${{ secrets.SuperSecret }}`{% endraw %} in the example) will be an empty string.

{% raw %}

```yaml
name: Run a step if a secret has been set
on: push
@@ -402,6 +413,7 @@ jobs:
- if: ${{ env.super_secret == '' }}
run: echo 'This step will only run if the secret does not have a value set.'
```

{% endraw %}

For more information, see "[AUTOTITLE](/actions/learn-github-actions/contexts#context-availability)" and "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."
@@ -513,6 +525,7 @@ jobs:
- name: My first step
uses: docker://ghcr.io/OWNER/IMAGE_NAME
```

{% endif %}
### Example: Using a Docker public registry action

@@ -714,6 +727,7 @@ A `string` that defines the inputs for a Docker container. {% data variables.pro
### Example of `jobs.<job_id>.steps[*].with.args`

{% raw %}

```yaml
steps:
- name: Explain why this job ran
@@ -722,6 +736,7 @@ steps:
entrypoint: /bin/echo
args: The ${{ github.event_name }} event triggered this step.
```

{% endraw %}

The `args` are used in place of the `CMD` instruction in a `Dockerfile`. If you use `CMD` in your `Dockerfile`, use the guidelines ordered by preference:
@@ -757,6 +772,7 @@ Public actions may specify expected variables in the README file. If you are set
### Example of `jobs.<job_id>.steps[*].env`

{% raw %}

```yaml
steps:
- name: My first action
@@ -765,6 +781,7 @@ steps:
FIRST_NAME: Mona
LAST_NAME: Octocat
```

{% endraw %}

## `jobs.<job_id>.steps[*].continue-on-error`
@@ -840,6 +857,7 @@ Prevents a workflow run from failing when a job fails. Set to `true` to allow a
You can allow specific jobs in a job matrix to fail without failing the workflow run. For example, if you wanted to only allow an experimental job with `node` set to `15` to fail without failing the workflow run.

{% raw %}

```yaml
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.experimental }}
@@ -854,6 +872,7 @@ strategy:
os: ubuntu-latest
experimental: true
```

{% endraw %}

## `jobs.<job_id>.container`
@@ -927,6 +946,7 @@ The Docker image to use as the service container to run the action. The value ca
### Example of `jobs.<job_id>.services.<service_id>.credentials`

{% raw %}

```yaml
services:
myservice1:
@@ -940,6 +960,7 @@ services:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
```

{% endraw %}

## `jobs.<job_id>.services.<service_id>.env`
@@ -1026,6 +1047,7 @@ Any secrets that you pass must match the names defined in the called workflow.
### Example of `jobs.<job_id>.secrets`

{% raw %}

```yaml
jobs:
call-workflow:
@@ -1033,6 +1055,7 @@ jobs:
secrets:
access-token: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
```

{% endraw %}

{% ifversion actions-inherit-secrets-reusable-workflows %}

@@ -78,42 +78,55 @@ For example, you can enable any {% data variables.product.prodname_GH_advanced_s
1. Enable features for {% data variables.product.prodname_GH_advanced_security %}.

- To enable {% data variables.product.prodname_code_scanning_caps %}, enter the following commands.

```shell
ghe-config app.minio.enabled true
ghe-config app.code-scanning.enabled true
```

- To enable {% data variables.product.prodname_secret_scanning_caps %}, enter the following command.

```shell
ghe-config app.secret-scanning.enabled true
```

- To enable the dependency graph, enter the following {% ifversion ghes %}command{% else %}commands{% endif %}.
{% ifversion ghes %}```shell
ghe-config app.dependency-graph.enabled true

```
{% else %}```shell
ghe-config app.github.dependency-graph-enabled true
ghe-config app.github.vulnerability-alerting-and-settings-enabled true
```{% endif %}

2. Optionally, disable features for {% data variables.product.prodname_GH_advanced_security %}.

- To disable {% data variables.product.prodname_code_scanning %}, enter the following commands.

```shell
ghe-config app.minio.enabled false
ghe-config app.code-scanning.enabled false
```

- To disable {% data variables.product.prodname_secret_scanning %}, enter the following command.

```shell
ghe-config app.secret-scanning.enabled false
```

- To disable the dependency graph, enter the following {% ifversion ghes %}command{% else %}commands{% endif %}.
{% ifversion ghes %}```shell
ghe-config app.dependency-graph.enabled false

```
{% else %}```shell
ghe-config app.github.dependency-graph-enabled false
ghe-config app.github.vulnerability-alerting-and-settings-enabled false
```{% endif %}

3. Apply the configuration.

```shell
ghe-config-apply
```

@@ -40,6 +40,7 @@ Before configuring {% data variables.product.prodname_dependabot %}, install Doc
docker pull ghcr.io/dependabot/dependabot-updater-github-actions:VERSION@SHA
docker pull ghcr.io/dependabot/dependabot-updater-npm:VERSION@SHA
```

{%- endif %}

{% note %}

@@ -42,6 +42,7 @@ If {% data variables.location.product_location %} uses clustering, you cannot en
1. In the administrative shell, enable the dependency graph on {% data variables.location.product_location %}:
{% ifversion ghes %}```shell
ghe-config app.dependency-graph.enabled true

```
{% else %}```shell
ghe-config app.github.dependency-graph-enabled true

@@ -24,8 +24,10 @@ The first time that you access the {% data variables.enterprise.management_conso
## Accessing the {% data variables.enterprise.management_console %} as an unauthenticated user

1. Visit this URL in your browser, replacing `HOSTNAME` with your actual {% data variables.product.prodname_ghe_server %} hostname or IP address:

```shell
http(s)://HOSTNAME/setup
```

{% data reusables.enterprise_management_console.type-management-console-password %}
{% data reusables.enterprise_management_console.click-continue-authentication %}

@@ -65,4 +65,5 @@ Your instance validates the hostnames for proxy exclusion using the list of IANA
```shell
ghe-config noproxy.exception-tld-list "COMMA-SEPARATED-TLD-LIST"
```

{% data reusables.enterprise.apply-configuration %}

@@ -30,6 +30,7 @@ We do not recommend customizing UFW as it can complicate some troubleshooting is

{% data reusables.enterprise_installation.ssh-into-instance %}
2. To view the default firewall rules, use the `sudo ufw status` command. You should see output similar to this:

```shell
$ sudo ufw status
> Status: active
@@ -67,10 +68,13 @@ We do not recommend customizing UFW as it can complicate some troubleshooting is

1. Configure a custom firewall rule.
2. Check the status of each new rule with the `status numbered` command.

```shell
sudo ufw status numbered
```

3. To back up your custom firewall rules, use the `cp` command to move the rules to a new file.

```shell
sudo cp -r /etc/ufw ~/ufw.backup
```
@@ -89,14 +93,19 @@ If something goes wrong after you change the firewall rules, you can reset the r

{% data reusables.enterprise_installation.ssh-into-instance %}
2. To restore the previous backup rules, copy them back to the firewall with the `cp` command.

```shell
sudo cp -f ~/ufw.backup/*rules /etc/ufw
```

3. Restart the firewall with the `systemctl` command.

```shell
sudo systemctl restart ufw
```

4. Confirm that the rules are back to their defaults with the `ufw status` command.

```shell
$ sudo ufw status
> Status: active

@@ -33,6 +33,7 @@ $ ghe-announce -u

{% ifversion ghe-announce-dismiss %}
To allow each user to dismiss the announcement for themselves, use the `-d` flag.

```shell
# Sets a user-dismissible message that's visible to everyone
$ ghe-announce -d -s MESSAGE
@@ -41,6 +42,7 @@ $ ghe-announce -d -s MESSAGE
$ ghe-announce -u
> Removed the announcement message, which was user dismissible: MESSAGE
```

{% endif %}

You can also set an announcement banner using the enterprise settings on {% data variables.product.product_name %}. For more information, see "[AUTOTITLE](/admin/user-management/managing-users-in-your-enterprise/customizing-user-messages-for-your-enterprise#creating-a-global-announcement-banner)."
@@ -88,6 +90,7 @@ This utility cleans up a variety of caches that might potentially take up extra
```shell
ghe-cleanup-caches
```

### ghe-cleanup-settings

This utility wipes all existing {% data variables.enterprise.management_console %} settings.
@@ -114,6 +117,7 @@ $ ghe-config core.github-hostname URL
$ ghe-config -l
# Lists all the configuration values
```

Allows you to find the universally unique identifier (UUID) of your node in `cluster.conf`.

```shell
@@ -157,6 +161,7 @@ ghe-dbconsole
This utility returns a summary of Elasticsearch indexes in CSV format.

Print an index summary with a header row to `STDOUT`:

```shell
$ ghe-es-index-status -do
> warning: parser/current is loading parser/ruby23, which recognizes
@@ -424,12 +429,14 @@ ghe-ssh-check-host-keys
```

If a leaked host key is found, the utility exits with status `1` and a message:

```shell
> One or more of your SSH host keys were found in the blacklist.
> Please reset your host keys using ghe-ssh-roll-host-keys.
```

If a leaked host key was not found, the utility exits with status `0` and a message:

```shell
> The SSH host keys were not found in the SSH host key blacklist.
> No additional steps are needed/recommended at this time.
@@ -568,6 +575,7 @@ ghe-webhook-logs -f -a YYYY-MM-DD
The date format should be `YYYY-MM-DD`, `YYYY-MM-DD HH:MM:SS`, or `YYYY-MM-DD HH:MM:SS (+/-) HH:M`.

To show the full hook payload, result, and any exceptions for the delivery:

```shell
ghe-webhook-logs -g DELIVERY_GUID
```
@@ -639,21 +647,25 @@ By default, the command creates the tarball in _/tmp_, but you can also have it
{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}

To create a standard bundle:

```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -o' > cluster-support-bundle.tgz
```

To create a standard bundle including data from the last 3 hours:

```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}3hours {% elsif ghes < 3.9 %}'3 hours' {% endif %} -o" > support-bundle.tgz
```

To create a standard bundle including data from the last 2 days:

```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}2days {% elsif ghes < 3.9 %}'2 days' {% endif %} -o" > support-bundle.tgz
```

To create a standard bundle including data from the last 4 days and 8 hours:

```shell
ssh -p 122 admin@HOSTNAME -- "ghe-cluster-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}4days8hours {% elsif ghes < 3.9 %}'4 days 8 hours' {% endif %} -o" > support-bundle.tgz
```
@@ -665,11 +677,13 @@ ssh -p 122 admin@HOSTNAME -- ghe-cluster-support-bundle -x -o' > cluster-support
```

To send a bundle to {% data variables.contact.github_support %}:

```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -u'
```

To send a bundle to {% data variables.contact.github_support %} and associate the bundle with a ticket:

```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-cluster-support-bundle -t TICKET_ID'
```
@@ -683,11 +697,13 @@ ghe-dpages
```

To show a summary of repository location and health:

```shell
ghe-dpages status
```

To evacuate a {% data variables.product.prodname_pages %} storage service before evacuating a cluster node:

```shell
ghe-dpages evacuate pages-server-UUID
```
@@ -709,6 +725,7 @@ ghe-spokesctl routes
```

To evacuate storage services on a cluster node:

```shell
ghe-spokesctl server set evacuating git-server-UUID
```
@@ -983,6 +1000,7 @@ For more information, please see our guides on [migrating data to and from your
### git-import-detect

Given a URL, detect which type of source control management system is at the other end. During a manual import this is likely already known, but this can be very useful in automated scripts.

```shell
git-import-detect
```
@@ -990,6 +1008,7 @@ git-import-detect
### git-import-hg-raw

This utility imports a Mercurial repository to this Git repository. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."

```shell
git-import-hg-raw
```
@@ -997,6 +1016,7 @@ git-import-hg-raw
### git-import-svn-raw

This utility imports Subversion history and file data into a Git branch. This is a straight copy of the tree, ignoring any trunk or branch distinction. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."

```shell
git-import-svn-raw
```
@@ -1004,6 +1024,7 @@ git-import-svn-raw
### git-import-tfs-raw

This utility imports from Team Foundation Version Control (TFVC). For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."

```shell
|
||||
git-import-tfs-raw
|
||||
```
|
||||
@@ -1011,6 +1032,7 @@ git-import-tfs-raw
|
||||
### git-import-rewrite
|
||||
|
||||
This utility rewrites the imported repository. This gives you a chance to rename authors and, for Subversion and TFVC, produces Git branches based on folders. For more information, see "[AUTOTITLE](/migrations/importing-source-code/using-the-command-line-to-import-source-code/importing-from-other-version-control-systems-with-the-administrative-shell)."
|
||||
|
||||
```shell
|
||||
git-import-rewrite
|
||||
```
|
||||
@@ -1047,31 +1069,37 @@ By default, the command creates the tarball in _/tmp_, but you can also have it
|
||||
{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}
|
||||
|
||||
To create a standard bundle:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -o' > support-bundle.tgz
|
||||
```
|
||||
|
||||
To create a standard bundle including data from the last 3 hours:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}3hours {% elsif ghes < 3.9 %}'3 hours' {% endif %} -o" > support-bundle.tgz
|
||||
```
|
||||
|
||||
To create a standard bundle including data from the last 2 days:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}2days {% elsif ghes < 3.9 %}'2 days' {% endif %} -o" > support-bundle.tgz
|
||||
```
|
||||
|
||||
To create a standard bundle including data from the last 4 days and 8 hours:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- "ghe-support-bundle -p {% ifversion bundle-cli-syntax-no-quotes %}4days8hours {% elsif ghes < 3.9 %}'4 days 8 hours' {% endif %} -o" > support-bundle.tgz
|
||||
```
|
||||
|
||||
To create an extended bundle including data from the last 8 days:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -x -o' > support-bundle.tgz
|
||||
```
|
||||
|
||||
To send a bundle to {% data variables.contact.github_support %}:
|
||||
|
||||
```shell
|
||||
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -u'
|
||||
```
|
||||
@@ -1087,11 +1115,13 @@ ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -t TICKET_ID'
|
||||
This utility sends information from your appliance to {% data variables.product.prodname_enterprise %} support. You can either specify a local file, or provide a stream of up to 100MB of data via `STDIN`. The uploaded data can optionally be associated with a support ticket.

To send a file to {% data variables.contact.github_support %} and associate the file with a ticket:

```shell
ghe-support-upload -f FILE_PATH -t TICKET_ID
```

To upload data via `STDIN` and associate the data with a ticket:

```shell
ghe-repl-status -vv | ghe-support-upload -t TICKET_ID -d "Verbose Replication Status"
```
@@ -1143,11 +1173,13 @@ ssh -p 122 admin@HOSTNAME -- 'ghe-update-check'
This utility installs or verifies an upgrade package. You can also use this utility to roll back a patch release if an upgrade fails or is interrupted. For more information, see "[AUTOTITLE](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources/upgrading-github-enterprise-server)."

To verify an upgrade package:

```shell
ghe-upgrade --verify UPGRADE-PACKAGE-FILENAME
```

To install an upgrade package:

```shell
ghe-upgrade UPGRADE-PACKAGE-FILENAME
```
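
The rollback path mentioned above can be sketched as follows. This is an illustration rather than a verbatim excerpt: the `--allow-patch-rollback` switch is assumed here, so confirm it against the reference documentation for your release before relying on it.

```shell
# Assumed rollback switch; verify it exists on your GHES release.
ghe-upgrade --allow-patch-rollback EARLIER-RELEASE-UPGRADE-PACKAGE.pkg
```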
@@ -1161,17 +1193,20 @@ This utility manages scheduled installation of upgrade packages. You can show, c
The `ghe-upgrade-scheduler` utility is best suited for scheduling hotpatch upgrades, which do not require maintenance mode or a reboot in most cases. This utility is not practical for full package upgrades, which require an administrator to manually set maintenance mode, reboot the instance, and unset maintenance mode. For more information about the different types of upgrades, see "[AUTOTITLE](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources/upgrading-github-enterprise-server#upgrading-with-an-upgrade-package)."

To schedule a new installation for a package:

```shell
ghe-upgrade-scheduler -c "0 2 15 12 *" UPGRADE-PACKAGE-FILENAME
```

To show scheduled installations for a package:

```shell
$ ghe-upgrade-scheduler -s UPGRADE-PACKAGE-FILENAME
> 0 2 15 12 * /usr/local/bin/ghe-upgrade -y -s UPGRADE-PACKAGE-FILENAME > /data/user/common/UPGRADE-PACKAGE-FILENAME.log 2>&1
```

To remove scheduled installations for a package:

```shell
ghe-upgrade-scheduler -r UPGRADE-PACKAGE-FILENAME
```

@@ -73,17 +73,20 @@ Backup snapshots are written to the disk path set by the `GHE_DATA_DIR` data dir
```
git clone https://github.com/github/backup-utils.git /path/to/target/directory/backup-utils
```

1. To change into the local repository directory, run the following command.

```
cd backup-utils
```

{% data reusables.enterprise_backup_utilities.enterprise-backup-utils-update-repo %}
1. To copy the included `backup.config-example` file to `backup.config`, run the following command.

```shell
cp backup.config-example backup.config
```

1. To customize your configuration, edit `backup.config` in a text editor.
1. Set the `GHE_HOSTNAME` value to your primary {% data variables.product.prodname_ghe_server %} instance's hostname or IP address.

@@ -101,6 +104,7 @@ Backup snapshots are written to the disk path set by the `GHE_DATA_DIR` data dir
```shell
./bin/ghe-host-check
```

1. To create an initial full backup, run the following command.

```shell
@@ -168,11 +172,13 @@ If your backup host has internet connectivity and you previously used a compress
```
git clone https://github.com/github/backup-utils.git
```

1. To change into the cloned repository, run the following command.

```
cd backup-utils
```

{% data reusables.enterprise_backup_utilities.enterprise-backup-utils-update-repo %}
1. To restore your backup configuration from earlier, copy your existing backup configuration file to the local repository directory. Replace the path in the command with the location of the file saved in step 2.

@@ -39,9 +39,11 @@ To improve security for clients that connect to {% data variables.location.produ
```shell
ghe-config app.babeld.host-key-ed25519 true
```

1. Optionally, enter the following command to disable generation and advertisement of the Ed25519 host key.

```shell
ghe-config app.babeld.host-key-ed25519 false
```

{% data reusables.enterprise.apply-configuration %}

@@ -101,16 +101,19 @@ By default, the rate limit for {% data variables.product.prodname_actions %} is
ghe-config actions-rate-limiting.enabled true
ghe-config actions-rate-limiting.queue-runs-per-minute RUNS-PER-MINUTE
```

1. To disable the rate limit after it's been enabled, run the following command.

```
ghe-config actions-rate-limiting.enabled false
```

1. To apply the configuration, run the following command.

```
ghe-config-apply
```

1. Wait for the configuration run to complete.

{% endif %}

@@ -49,4 +49,5 @@ For more information, see [{% data variables.product.prodname_blog %}](https://g
```shell
ghe-config app.gitauth.rsa-sha1 false
```

{% data reusables.enterprise.apply-configuration %}

@@ -33,11 +33,13 @@ You can enable web commit signing, rotate the private key used for web commit si
```bash copy
ghe-config app.github.web-commit-signing-enabled true
```

1. Apply the configuration, then wait for the configuration run to complete.

```bash copy
ghe-config-apply
```

1. Create a new user on {% data variables.location.product_location %} via built-in authentication or external authentication. For more information, see "[AUTOTITLE](/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise)."
- The user's username must be the same username you used when creating the PGP key in step 1 above, for example, `web-flow`.
- The user's email address must be the same address you used when creating the PGP key.
@@ -71,6 +73,7 @@ You can disable web commit signing for {% data variables.location.product_locati
```bash copy
ghe-config app.github.web-commit-signing-enabled false
```

1. Apply the configuration.

```bash copy
@@ -25,10 +25,13 @@ shortTitle: Troubleshoot TLS errors
If you have a Linux machine with OpenSSL installed, you can remove your passphrase.

1. Rename your original key file.

```shell
mv yourdomain.key yourdomain.key.orig
```

2. Generate a new key without a passphrase.

```shell
openssl rsa -in yourdomain.key.orig -out yourdomain.key
```
@@ -69,14 +72,19 @@ If your {% data variables.product.prodname_ghe_server %} appliance interacts wit

1. Obtain the CA's root certificate from your local certificate authority and ensure it is in PEM format.
2. Copy the file to your {% data variables.product.prodname_ghe_server %} appliance over SSH as the "admin" user on port 122.

```shell
scp -P 122 rootCA.crt admin@HOSTNAME:/home/admin
```

3. Connect to the {% data variables.product.prodname_ghe_server %} administrative shell over SSH as the "admin" user on port 122.

```shell
ssh -p 122 admin@HOSTNAME
```

4. Import the certificate into the system-wide certificate store.

```shell
ghe-ssl-ca-certificate-install -c rootCA.crt
```

@@ -63,9 +63,11 @@ To verify your enterprise account's domain, you must have access to modify domai
{% data reusables.organizations.add-domain %}
{% data reusables.organizations.add-dns-txt-record %}
1. Wait for your DNS configuration to change, which may take up to 72 hours. You can confirm your DNS configuration has changed by running the `dig` command on the command line, replacing `ENTERPRISE-ACCOUNT` with the name of your enterprise account, and `DOMAIN-NAME` with the domain you'd like to verify. You should see your new TXT record listed in the command output.

```shell
dig _github-challenge-ENTERPRISE-ACCOUNT.DOMAIN-NAME +nostats +nocomments +nocmd TXT
```

1. After confirming your TXT record is added to your DNS, follow steps one through four above to navigate to your enterprise account's approved and verified domains.
{% data reusables.enterprise-accounts.continue-verifying-domain %}
1. Optionally, after the "Verified" badge is visible on your organizations' profiles, delete the TXT entry from the DNS record at your domain hosting service.

@@ -45,24 +45,30 @@ Then, when told to fetch `https://github.example.com/myorg/myrepo`, Git will ins
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the repository cache, run `ghe-repl-setup` again.
- If the repository cache is your only additional node, no arguments are required.

```shell
ghe-repl-setup PRIMARY-IP
```

- If you're configuring a repository cache in addition to one or more existing replicas, use the `-a` or `--add` argument.

```
ghe-repl-setup -a PRIMARY-IP
```

{% ifversion ghes < 3.6 %}
1. If you haven't already, set the datacenter name on the primary and any replica appliances, replacing DC-NAME with a datacenter name.

```
ghe-repl-node --datacenter DC-NAME
```
1. Set a `cache-location` for the repository cache, replacing CACHE-LOCATION with an alphanumeric identifier, such as the region where the cache is deployed. Also set a datacenter name for this cache; new caches will attempt to seed from another cache in the same datacenter.

```shell
ghe-repl-node --cache CACHE-LOCATION --datacenter REPLICA-DC-NAME
```

{% else %}
1. To configure the repository cache, use the `ghe-repl-node` command and include the necessary parameters.
- Set a `cache-location` for the repository cache, replacing _CACHE-LOCATION_ with an alphanumeric identifier, such as the region where the cache is deployed. The _CACHE-LOCATION_ value must not be any of the subdomains reserved for use with subdomain isolation, such as `assets` or `media`. For a list of reserved names, see "[AUTOTITLE](/admin/configuration/configuring-network-settings/enabling-subdomain-isolation#about-subdomain-isolation)."
@@ -72,11 +78,13 @@ Then, when told to fetch `https://github.example.com/myorg/myrepo`, Git will ins
```
ghe-repl-node --datacenter DC-NAME
```
- New caches will attempt to seed from another cache in the same datacenter. Set a `datacenter` for the repository cache, replacing REPLICA-DC-NAME with the name of the datacenter where you're deploying the node.

```shell
ghe-repl-node --cache CACHE-LOCATION --cache-domain EXTERNAL-CACHE-DOMAIN --datacenter REPLICA-DC-NAME
```

{% endif %}

{% data reusables.enterprise_installation.replication-command %}

@@ -112,9 +112,11 @@ We strongly recommend enabling PROXY support for both your instance and the load
{% data reusables.enterprise_installation.proxy-incompatible-with-aws-nlbs %}

- For your instance, use this command:

```shell
ghe-config 'loadbalancer.proxy-protocol' 'true' && ghe-cluster-config-apply
```

- For the load balancer, use the instructions provided by your vendor.

{% data reusables.enterprise_clustering.proxy_protocol_ports %}

@@ -95,13 +95,17 @@ If you plan to take a node offline and the node runs any of the following roles,
- Command (replace REASON FOR EVACUATION with the reason for evacuation):

{% ifversion ghe-spokes-deprecation-phase-1 %}

```shell
ghe-spokesctl server set evacuating git-server-UUID 'REASON FOR EVACUATION'
```

{% else %}

```shell
ghe-spokes server evacuate git-server-UUID 'REASON FOR EVACUATION'
```

{% endif %}
- `pages-server`:

@@ -136,13 +140,17 @@ If you plan to take a node offline and the node runs any of the following roles,
- `git-server`:

{% ifversion ghe-spokes-deprecation-phase-1 %}

```shell
ghe-spokesctl server evac-status git-server-UUID
```

{% else %}

```shell
ghe-spokes evac-status git-server-UUID
```

{% endif %}
- `pages-server`:

@@ -56,11 +56,13 @@ By default, {% data variables.product.prodname_nes %} is disabled. You can enabl
```shell copy
ghe-config app.nes.enabled
```

1. To enable {% data variables.product.prodname_nes %}, run the following command.

```shell copy
ghe-config app.nes.enabled true
```

{% data reusables.enterprise.apply-configuration %}
1. To verify that {% data variables.product.prodname_nes %} is running, from any node, run the following command.

@@ -78,11 +80,13 @@ To determine how {% data variables.product.prodname_nes %} notifies you, you can
```shell copy
nes get-node-ttl all
```

1. To set the TTL for the `fail` state, run the following command. Replace MINUTES with the number of minutes to use for failures.

```shell copy
nes set-node-ttl fail MINUTES
```

1. To set the TTL for the `warn` state, run the following command. Replace MINUTES with the number of minutes to use for warnings.

```shell copy
@@ -104,6 +108,7 @@ To manage whether {% data variables.product.prodname_nes %} can take a node and
```shell copy
nes set-node-adminaction approved HOSTNAME
```

- To revoke {% data variables.product.prodname_nes %}'s ability to take a node offline, run the following command. Replace HOSTNAME with the node's hostname.

```shell copy
@@ -127,11 +132,13 @@ After {% data variables.product.prodname_nes %} detects that a node has exceeded
```shell copy
nes get-node-adminaction HOSTNAME
```

1. If the `adminaction` state is currently set to `approved`, change the state to `none` by running the following command. Replace HOSTNAME with the hostname of the ineligible node.

```shell copy
nes set-node-adminaction none HOSTNAME
```

1. To ensure the node is in a healthy state, run the following command and confirm that the node's status is `ready`.

```shell copy
@@ -143,11 +150,13 @@ After {% data variables.product.prodname_nes %} detects that a node has exceeded
```shell copy
nomad node eligibility -enable -self
```

1. To update the node's eligibility in {% data variables.product.prodname_nes %}, run the following command. Replace HOSTNAME with the node's hostname.

```shell copy
nes set-node-eligibility eligible HOSTNAME
```

1. Wait 30 seconds, then check the cluster's health to confirm the target node is eligible by running the following command.

```shell copy
@@ -164,6 +173,7 @@ You can view logs for {% data variables.product.prodname_nes %} from any node in
```shell copy
nomad alloc logs -job nes
```

1. Alternatively, you can view logs for {% data variables.product.prodname_nes %} on the node that runs the service. The service writes logs to the systemd journal.

- To determine which node runs {% data variables.product.prodname_nes %}, run the following command.
@@ -171,6 +181,7 @@ You can view logs for {% data variables.product.prodname_nes %} from any node in
```shell copy
nomad job status "nes" | grep running | grep "${nomad_node_id}" | awk 'NR==2{ print $1 }' | xargs nomad alloc status | grep "Node Name"
```

- To view logs on the node, connect to the node via SSH, then run the following command.

```shell copy

@@ -39,6 +39,7 @@ admin@ghe-data-node-0:~$ ghe-cluster-status | grep error
> mysql-replication ghe-data-node-0: error Stopped
> mysql cluster: error
```

{% note %}

**Note:** If there are no failing tests, this command produces no output. This indicates the cluster is healthy.
@@ -55,6 +56,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables

### Configuring the Nagios host
1. Generate an SSH key with a blank passphrase. Nagios uses this to authenticate to the {% data variables.product.prodname_ghe_server %} cluster.

```shell
nagiosuser@nagios:~$ ssh-keygen -t ed25519
> Generating public/private ed25519 key pair.
@@ -64,6 +66,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
> Your identification has been saved in /home/nagiosuser/.ssh/id_ed25519.
> Your public key has been saved in /home/nagiosuser/.ssh/id_ed25519.pub.
```

{% danger %}

**Security Warning:** An SSH key without a passphrase can pose a security risk if authorized for full access to a host. Limit this key's authorization to a single read-only command.
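
One conventional way to apply that restriction is an OpenSSH `command=` prefix on the key's entry in the `nagios` user's `~/.ssh/authorized_keys` file on the cluster node. This is a sketch under stated assumptions: the option names are standard OpenSSH, but the command path, key material, and comment are illustrative placeholders.

```shell
# Restrict the key to one read-only check; the path, key, and comment are placeholders.
command="/usr/local/bin/ghe-cluster-status -n",no-pty,no-port-forwarding ssh-ed25519 AAAA...PUBLIC-KEY... nagios-check
```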
@@ -72,12 +75,14 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
{% note %}

**Note:** If you're using a distribution of Linux that doesn't support the Ed25519 algorithm, use the command:

```shell
nagiosuser@nagios:~$ ssh-keygen -t rsa -b 4096
```

{% endnote %}
2. Copy the private key (`id_ed25519`) to the `nagios` home folder and set the appropriate ownership.

```shell
nagiosuser@nagios:~$ sudo cp .ssh/id_ed25519 /var/lib/nagios/.ssh/
nagiosuser@nagios:~$ sudo chown nagios:nagios /var/lib/nagios/.ssh/id_ed25519
@@ -95,6 +100,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
```

5. To test that the Nagios plugin can successfully execute the command, run it interactively from the Nagios host.

```shell
nagiosuser@nagios:~$ /usr/lib/nagios/plugins/check_by_ssh -l admin -p 122 -H HOSTNAME -C "ghe-cluster-status -n" -t 30
> OK - No errors detected
@@ -110,6 +116,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables
command_line $USER1$/check_by_ssh -H $HOSTADDRESS$ -C "ghe-cluster-status -n" -l admin -p 122 -t 30
}
```

7. Add this command to a service definition for a node in the {% data variables.product.prodname_ghe_server %} cluster.

**Example definition**

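A minimal sketch of such a service definition follows. The `check_ssh_ghe_cluster` command name and `ghe-data-node-0` host are illustrative placeholders, not values from the original; match them to the command definition you created in the previous step.

```
define service {
    use                   generic-service
    host_name             ghe-data-node-0
    service_description   GitHub Enterprise Server cluster status
    check_command         check_ssh_ghe_cluster
}
```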
@@ -31,6 +31,7 @@ In some cases, such as hardware failure, the underlying software that manages the
```shell copy
ghe-cluster-balance status
```

1. If a job is not properly distributed, inspect the allocations by running the following command. Replace JOB with a single job or comma-delimited list of jobs.

```shell copy
@@ -71,11 +72,13 @@ You can schedule rebalancing of jobs on your cluster by setting and applying con
```shell copy
ghe-config app.cluster-rebalance.enabled true
```

1. Optionally, you can override the default schedule by defining a cron expression. For example, run the following command to balance jobs every three hours.

```shell copy
ghe-config app.cluster-rebalance.schedule '0 */3 * * *'
```

{% data reusables.enterprise.apply-configuration %}

## Further reading

@@ -25,6 +25,7 @@ topics:

1. Back up your data with [{% data variables.product.prodname_enterprise_backup_utilities %}](https://github.com/github/backup-utils#readme).
2. From the administrative shell of any node, use the `ghe-cluster-hotpatch` command to install the latest hotpatch. You can provide a URL for a hotpatch, or manually download the hotpatch and specify a local filename.

```shell
ghe-cluster-hotpatch https://HOTPATCH-URL/FILENAME.hpkg
```
@@ -39,6 +40,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
3. Schedule a maintenance window for end users of your {% data variables.product.prodname_ghe_server %} cluster, as it will be unavailable for normal use during the upgrade. Maintenance mode blocks user access and prevents data changes while the cluster upgrade is in progress.
4. On the [{% data variables.product.prodname_ghe_server %} Download Page](https://enterprise.github.com/download), copy the URL for the upgrade _.pkg_ file to the clipboard.
5. From the administrative shell of any node, use the `ghe-cluster-each` command combined with `curl` to download the release package to each node in a single step. Use the URL you copied in the previous step as an argument.

```shell
$ ghe-cluster-each -- "cd /home/admin && curl -L -O https://PACKAGE-URL.pkg"
> ghe-app-node-1: % Total % Received % Xferd Average Speed Time Time Time Current
@@ -57,6 +59,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
> ghe-data-node-3: Dload Upload Total Spent Left Speed
> 100 496M 100 496M 0 0 19.7M 0 0:00:25 0:00:25 --:--:-- 25.5M
```

6. Identify the primary MySQL node, which is defined as `mysql-master = <hostname>` in `cluster.conf`. This node will be upgraded last.

### Upgrading the cluster nodes
@@ -64,6 +67,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve
1. Enable maintenance mode according to your scheduled window by connecting to the administrative shell of any cluster node and running `ghe-cluster-maintenance -s`.
2. **With the exception of the primary MySQL node**, connect to the administrative shell of each of the {% data variables.product.prodname_ghe_server %} nodes.
Run the `ghe-upgrade` command, providing the package file name you downloaded in Step 4 of [Preparing to upgrade](#preparing-to-upgrade):

```shell
$ ghe-upgrade PACKAGE-FILENAME.pkg
> *** verifying upgrade package signature...
@@ -74,8 +78,10 @@ Run the `ghe-upgrade` command, providing the package file name you downloaded in
> gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
> gpg: Good signature from "GitHub Enterprise (Upgrade Package Key) > <enterprise@github.com>"
```

3. The upgrade process will reboot the node once it completes. Verify that you can `ping` each node after it reboots.
4. Connect to the administrative shell of the primary MySQL node. Run the `ghe-upgrade` command, providing the package file name you downloaded in Step 4 of [Preparing to upgrade](#preparing-to-upgrade):

```shell
$ ghe-upgrade PACKAGE-FILENAME.pkg
> *** verifying upgrade package signature...
@@ -86,6 +92,7 @@ Run the `ghe-upgrade` command, providing the package file name you downloaded in
> gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
> gpg: Good signature from "GitHub Enterprise (Upgrade Package Key) > <enterprise@github.com>"
```

5. The upgrade process will reboot the primary MySQL node once it completes. Verify that you can `ping` each node after it reboots.{% ifversion ghes %}
6. Connect to the administrative shell of the primary MySQL node and run the `ghe-cluster-config-apply` command.
7. When `ghe-cluster-config-apply` is complete, check that the services are in a healthy state by running `ghe-cluster-status`.{% endif %}

@@ -23,15 +23,19 @@ shortTitle: Create HA replica
1. In a browser, navigate to the new replica appliance's IP address and upload your {% data variables.product.prodname_enterprise %} license.
{% data reusables.enterprise_installation.replica-steps %}
1. Connect to the replica appliance's IP address using SSH.

```shell
ssh -p 122 admin@REPLICA_IP
```

{% data reusables.enterprise_installation.generate-replication-key-pair %}
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the new replica, run `ghe-repl-setup` again.

```shell
ghe-repl-setup PRIMARY_IP
```

{% data reusables.enterprise_installation.replication-command %}
{% data reusables.enterprise_installation.verify-replication-channel %}

@@ -42,29 +46,39 @@ This example configuration uses a primary and two replicas, which are located in
{% data reusables.enterprise_clustering.network-latency %} If latency is more than 70 milliseconds, we recommend cache replica nodes instead. For more information, see "[AUTOTITLE](/admin/enterprise-management/caching-repositories/configuring-a-repository-cache)."

1. Create the first replica the same way you would for a standard two-node configuration by running `ghe-repl-setup` on the first replica.

```shell
(replica1)$ ghe-repl-setup PRIMARY_IP
(replica1)$ ghe-repl-start
```

2. Create a second replica and use the `ghe-repl-setup --add` command. The `--add` flag prevents it from overwriting the existing replication configuration and adds the new replica to the configuration.

```shell
(replica2)$ ghe-repl-setup --add PRIMARY_IP
(replica2)$ ghe-repl-start
```

3. By default, replicas are configured to the same datacenter, and will now attempt to seed from an existing node in the same datacenter. Configure the replicas for different datacenters by setting a different value for the datacenter option. The specific values can be anything you would like as long as they are different from each other. Run the `ghe-repl-node` command on each node and specify the datacenter.

On the primary:

```shell
(primary)$ ghe-repl-node --datacenter [PRIMARY DC NAME]
```

On the first replica:

```shell
(replica1)$ ghe-repl-node --datacenter [FIRST REPLICA DC NAME]
```

On the second replica:

```shell
(replica2)$ ghe-repl-node --datacenter [SECOND REPLICA DC NAME]
```

{% tip %}

**Tip:** You can set the `--datacenter` and `--active` options at the same time.
@@ -73,14 +87,19 @@ This example configuration uses a primary and two replicas, which are located in
4. An active replica node will store copies of the appliance data and service end user requests. An inactive node will store copies of the appliance data but will be unable to service end user requests. Enable active mode using the `--active` flag or inactive mode using the `--inactive` flag.

On the first replica:

```shell
(replica1)$ ghe-repl-node --active
```

On the second replica:

```shell
(replica2)$ ghe-repl-node --active
```

5. To apply the configuration, use the `ghe-config-apply` command on the primary.

```shell
(primary)$ ghe-config-apply
```

@@ -25,6 +25,7 @@ The time required to failover depends on how long it takes to manually promote t
- To use the management console, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)"

- You can also use the `ghe-maintenance -s` command.

```shell
ghe-maintenance -s
```
@@ -44,6 +45,7 @@ The time required to failover depends on how long it takes to manually promote t
```

4. On the replica appliance, to stop replication and promote the replica appliance to primary status, use the `ghe-repl-promote` command. This will also automatically put the primary node in maintenance mode if it’s reachable.

```shell
ghe-repl-promote
```
@@ -59,10 +61,13 @@ The time required to failover depends on how long it takes to manually promote t
7. If desired, set up replication from the new primary to existing appliances and the previous primary. For more information, see "[AUTOTITLE](/admin/enterprise-management/configuring-high-availability/about-high-availability-configuration#utilities-for-replication-management)."
8. Appliances that were part of the high availability configuration prior to the failover, and to which you do not intend to set up replication, need to be removed from the high availability configuration by UUID.
- On the former appliances, get their UUID via `cat /data/user/common/uuid`.

```shell
cat /data/user/common/uuid
```

- On the new primary, remove the UUIDs using `ghe-repl-teardown`. Replace *`UUID`* with a UUID you retrieved in the previous step.

```shell
ghe-repl-teardown -u UUID
```

@@ -28,17 +28,23 @@ You can use the former primary appliance as the new replica appliance if the fai
## Configuring a former primary appliance as a new replica

1. Connect to the former primary appliance's IP address using SSH.

```shell
ssh -p 122 admin@FORMER_PRIMARY_IP
```

1. Enable maintenance mode on the former primary appliance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
1. On the former primary appliance, run `ghe-repl-setup` with the IP address of the former replica.

```shell
ghe-repl-setup FORMER_REPLICA_IP
```

{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the new primary and enable replica mode for the new replica, run `ghe-repl-setup` again.

```shell
ghe-repl-setup FORMER_REPLICA_IP
```

{% data reusables.enterprise_installation.replication-command %}

@@ -19,10 +19,13 @@ shortTitle: Remove a HA replica

1. If necessary, stop a geo-replication replica from serving user traffic by removing the Geo DNS entries for the replica.
2. On the replica where you wish to temporarily stop replication, run `ghe-repl-stop`.

```shell
ghe-repl-stop
```

3. To start replication again, run `ghe-repl-start`.

```shell
ghe-repl-start
```
@@ -31,10 +34,13 @@ shortTitle: Remove a HA replica

1. If necessary, stop a geo-replication replica from serving user traffic by removing the Geo DNS entries for the replica.
2. On the replica you wish to remove replication from, run `ghe-repl-stop`.

```shell
ghe-repl-stop
```

3. On the replica, to tear down the replication state, run `ghe-repl-teardown`.

```shell
ghe-repl-teardown
```

@@ -28,6 +28,7 @@ SNMP is a common standard for monitoring devices over a network. We strongly rec
4. In the **Community string** field, enter a new community string. If left blank, this defaults to `public`.
{% data reusables.enterprise_management_console.save-settings %}
5. Test your SNMP configuration by running the following command on a separate workstation with SNMP support in your network:

```shell
# community-string is your community string
# hostname is the IP or domain of your Enterprise instance
@@ -87,6 +88,7 @@ Of the available MIBs for SNMP, the most useful is `HOST-RESOURCES-MIB` (1.3.6.1
| hrStorageAllocationUnits.1 | 1.3.6.1.2.1.25.2.3.1.4.1 | The size, in bytes, of an hrStorageAllocationUnit |

For example, to query for `hrMemorySize` with SNMP v3, run the following command on a separate workstation with SNMP support in your network:

```shell
# username is the unique username of your SNMP v3 user
# auth password is the authentication password
@@ -99,6 +101,7 @@ $ snmpget -v 3 -u USERNAME -l authPriv \
```

With SNMP v2c, to query for `hrMemorySize`, run the following command on a separate workstation with SNMP support in your network:

```shell
# community-string is your community string
# hostname is the IP or domain of your Enterprise instance

@@ -37,22 +37,28 @@ As more users join {% data variables.location.product_location %}, you may need
{% data reusables.enterprise_installation.ssh-into-instance %}
3. Put the appliance in maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."
4. Reboot the appliance to detect the new storage allocation:

```shell
sudo reboot
```

5. Run the `ghe-storage-extend` command to expand the `/data/user` filesystem:

```shell
ghe-storage-extend
```

6. Ensure system services are functioning correctly, then release maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."

## Increasing the root partition size using a new appliance

1. Set up a new {% data variables.product.prodname_ghe_server %} instance with a larger root disk using the same version as your current appliance. For more information, see "[AUTOTITLE](/admin/installation/setting-up-a-github-enterprise-server-instance)."
2. Shut down the current appliance:

```shell
sudo poweroff
```

3. Detach the data disk from the current appliance using your virtualization platform's tools.
4. Attach the data disk to the new appliance with the larger root disk.

@@ -67,11 +73,13 @@ As more users join {% data variables.location.product_location %}, you may need
1. Attach a new disk to your {% data variables.product.prodname_ghe_server %} appliance.
1. Run the `lsblk` command to identify the new disk's device name.
1. Run the `parted` command to format the disk, substituting your device name for `/dev/xvdg`:

```shell
sudo parted /dev/xvdg mklabel msdos
sudo parted /dev/xvdg mkpart primary ext4 0% 50%
sudo parted /dev/xvdg mkpart primary ext4 50% 100%
```

1. If your appliance is configured for high-availability or geo-replication, to stop replication run the `ghe-repl-stop` command on each replica node:

```shell
@@ -83,10 +91,13 @@ As more users join {% data variables.location.product_location %}, you may need
```shell
ghe-upgrade PACKAGE-NAME.pkg -s -t /dev/xvdg1
```

1. Shut down the appliance:

```shell
sudo poweroff
```

1. In the hypervisor, remove the old root disk and attach the new root disk at the same location as the old root disk.
1. Start the appliance.
1. Ensure system services are functioning correctly, then release maintenance mode. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."

@@ -95,6 +95,7 @@ The following instructions are only intended for {% data variables.product.prod
```shell copy
ghe-config mysql.innodb-flush-no-fsync true
```

{% data reusables.enterprise.apply-configuration %}

#### Upgrade your instance's storage

@@ -63,6 +63,7 @@ To upgrade to the latest version of {% data variables.product.prodname_enterpris

10. On the backup host, run the `ghe-backup` command to take a final backup snapshot. This ensures that all data from the old instance is captured.
11. On the backup host, run the `ghe-restore` command you copied on the new instance's restore status screen to restore the latest snapshot.

```shell
$ ghe-restore 169.254.1.1
The authenticity of host '169.254.1.1:122' can't be established.

@@ -42,9 +42,11 @@ topics:
- Additional root storage must be available when upgrading through hotpatching, as it installs multiple versions of certain services until the upgrade is complete. Pre-flight checks will notify you if you don't have enough root disk storage.
- When upgrading through hotpatching, your instance cannot be too heavily loaded, as it may impact the hotpatching process.
- Upgrading to {% data variables.product.prodname_ghe_server %} 2.17 migrates your audit logs from Elasticsearch to MySQL. This migration also increases the amount of time and disk space it takes to restore a snapshot. Before migrating, check the number of bytes in your Elasticsearch audit log indices with this command:

```shell
curl -s http://localhost:9201/audit_log/_stats/store | jq ._all.primaries.store.size_in_bytes
```

Use the number to estimate the amount of disk space the MySQL audit logs will need. The script also monitors your free disk space while the import is in progress. Monitoring this number is especially useful if your free disk space is close to the amount of disk space necessary for migration.

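To turn the raw byte count into a human-readable estimate, you can pipe the same query through GNU `numfmt`. This is a generic shell sketch, not part of the original instructions:

```shell
# Report the audit log index size as a human-readable figure (for example, 2.1G).
curl -s http://localhost:9201/audit_log/_stats/store |
  jq ._all.primaries.store.size_in_bytes |
  numfmt --to=iec
```
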
{% ifversion mysql-8-upgrade %}

@@ -145,10 +145,12 @@ If the upgrade target you're presented with is a feature release instead of a pa
1. {% data reusables.enterprise_installation.enterprise-download-upgrade-pkg %} Copy the URL for the upgrade hotpackage (_.hpkg_ file).
{% data reusables.enterprise_installation.download-package %}
1. Run the `ghe-upgrade` command using the package file name:

```shell
admin@HOSTNAME:~$ ghe-upgrade GITHUB-UPGRADE.hpkg
*** verifying upgrade package signature...
```

1. If at least one service or system component requires a reboot, the hotpatch upgrade script notifies you. For example, updates to the kernel, MySQL, or Elasticsearch may require a reboot.

### Upgrading an instance with multiple nodes using a hotpatch
@@ -194,11 +196,14 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
{% endnote %}

1. Run the `ghe-upgrade` command using the package file name:

```shell
admin@HOSTNAME:~$ ghe-upgrade GITHUB-UPGRADE.pkg
*** verifying upgrade package signature...
```

1. Confirm that you'd like to continue with the upgrade and restart after the package signature verifies. The new root filesystem writes to the secondary partition and the instance automatically restarts in maintenance mode:

```shell
*** applying update...
This package will upgrade your installation to version VERSION-NUMBER
@@ -206,6 +211,7 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
Target root partition: /dev/xvda2
Proceed with installation? [y/N]
```

{%- ifversion ghe-migrations-cli-utility %}
1. Optionally, during an upgrade to a feature release, you can monitor the status of database migrations using the `ghe-migrations` utility. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/command-line-utilities#ghe-migrations)."
{%- endif %}
@@ -214,6 +220,7 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea
```shell
tail -f /data/user/common/ghe-config.log
```

{% ifversion ip-exception-list %}
1. Optionally, after the upgrade, validate the upgrade by configuring an IP exception list to allow access to a specified list of IP addresses. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode#validating-changes-in-maintenance-mode-using-the-ip-exception-list)."
{% endif %}
@@ -262,6 +269,7 @@ To upgrade an instance that comprises multiple nodes using an upgrade package, y
```
CRITICAL: git replication is behind the primary by more than 1007 repositories and/or gists
```

{% endnote %}

{%- ifversion ghes = 3.4 or ghes = 3.5 or ghes = 3.6 %}

@@ -31,6 +31,7 @@ To restore a backup of {% data variables.location.product_location %} with {% da
```shell copy
ssh -p 122 admin@HOSTNAME
```

1. Configure the destination instance to use the same external storage service for {% data variables.product.prodname_actions %} as the source instance by entering one of the following commands.
{% indented_data_reference reusables.actions.configure-storage-provider-platform-commands spaces=3 %}
{% data reusables.actions.configure-storage-provider %}
@@ -39,6 +40,7 @@ To restore a backup of {% data variables.location.product_location %} with {% da
```shell copy
ghe-config app.actions.enabled true
```

{% data reusables.actions.apply-configuration-and-enable %}
1. After {% data variables.product.prodname_actions %} is configured and enabled, to restore the rest of the data from the backup, use the `ghe-restore` command. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance#restoring-a-backup)."
1. Re-register your self-hosted runners on the destination instance. For more information, see "[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/adding-self-hosted-runners)."

@@ -150,6 +150,7 @@ If any of these services are at or near 100% CPU utilization, or the memory is n
}
}
```

1. Save and exit the file.
1. Run `ghe-config-apply` to apply the changes.

@@ -175,13 +176,17 @@ There are three ways to resolve this problem:

1. Log in to the administrative shell using SSH. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/accessing-the-administrative-shell-ssh)."
1. To remove the limitations on workflows triggered by {% data variables.product.prodname_dependabot %} on {% data variables.location.product_location %}, use the following command.

```shell
ghe-config app.actions.disable-dependabot-enforcement true
```

1. Apply the configuration.

```shell
ghe-config-apply
```

1. Return to {% data variables.product.prodname_ghe_server %}.

{% endif %}
@@ -204,18 +209,25 @@ To install the official bundled actions and starter workflows within a designate

1. Log in to the administrative shell using SSH. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/accessing-the-administrative-shell-ssh)."
1. To designate your organization as the location to store the bundled actions, use the `ghe-config` command, replacing `ORGANIZATION` with the name of your organization.

```shell
ghe-config app.actions.actions-org ORGANIZATION
```

and:

```shell
ghe-config app.actions.github-org ORGANIZATION
```

1. To add the bundled actions to your organization, unset the SHA.

```shell
ghe-config --unset 'app.actions.actions-repos-sha1sum'
```

1. Apply the configuration.

```shell
ghe-config-apply
```

@@ -45,6 +45,7 @@ To more accurately mirror your production environment, you can optionally copy f
```shell
azcopy copy 'https://SOURCE-STORAGE-ACCOUNT-NAME.blob.core.windows.net/SAS-TOKEN' 'https://DESTINATION-STORAGE-ACCOUNT-NAME.blob.core.windows.net/' --recursive
```

- For Amazon S3 buckets, you can use [`aws s3 sync`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html). For example:

```shell

@@ -66,6 +66,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
```
SHA1 Fingerprint=AB:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56
```

1. Remove the colons (`:`) from the thumbprint value, and save the value to use later.

For example, the thumbprint for the value returned in the previous step is:
@@ -73,6 +74,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
```
AB1234567890ABCDEF1234567890ABCDEF123456
```

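One way to strip the colons is with `tr`. This is a generic shell sketch rather than part of the original steps; the fingerprint shown is the example value from above:

```shell
# Remove the colons from the OpenSSL fingerprint output.
echo "AB:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56" | tr -d ':'
```
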
1. Using the AWS CLI, use the following command to create an OIDC provider for {% data variables.location.product_location_enterprise %}. Replace `HOSTNAME` with the public hostname for {% data variables.location.product_location_enterprise %}, and `THUMBPRINT` with the thumbprint value from the previous step.

```shell copy
@@ -139,6 +141,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with a
}
...
```

1. Click **Update policy**.

### 3. Configure {% data variables.product.prodname_ghe_server %} to connect to Amazon S3 using OIDC

@@ -73,6 +73,7 @@ To configure {% data variables.product.prodname_ghe_server %} to use OIDC with G
```
https://my-ghes-host.example.com/_services/token
```

- Under "Audiences", leave **Default audience** selected, but note the identity provider URL, as it is needed later. The identity provider URL is in the format `https://iam.googleapis.com/projects/PROJECT-NUMBER/locations/global/workloadIdentityPools/POOL-NAME/providers/PROVIDER-NAME`.
- Click **Continue**.
1. Under "Configure provider attributes":

@@ -74,6 +74,7 @@ You can populate the runner tool cache by running a {% data variables.product.pr
with:
path: {% raw %}${{runner.tool_cache}}/tool_cache.tar.gz{% endraw %}
```

1. Download the tool cache artifact from the workflow run. For instructions on downloading artifacts, see "[AUTOTITLE](/actions/managing-workflow-runs/downloading-workflow-artifacts)."
1. Transfer the tool cache artifact to your self-hosted runner and extract it to the local tool cache directory. The default tool cache directory is `RUNNER_DIR/_work/_tool`. If the runner hasn't processed any jobs yet, you might need to create the `_work/_tool` directories. A minimal transfer sketch follows this list.

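A rough sketch of that transfer, assuming the artifact was downloaded as `tool_cache.tar.gz` and substituting your own runner host, user, and `RUNNER_DIR` (all placeholders):

```shell
# Copy the artifact to the runner, then unpack it into the tool cache directory.
scp tool_cache.tar.gz USER@RUNNER-HOST:/tmp/
ssh USER@RUNNER-HOST 'mkdir -p RUNNER_DIR/_work/_tool && tar -xzf /tmp/tool_cache.tar.gz -C RUNNER_DIR/_work/_tool'
```
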
@@ -68,12 +68,14 @@ AMIs for {% data variables.product.prodname_ghe_server %} are available in the A
### Using the AWS CLI to select an AMI

1. Using the AWS CLI, get a list of {% data variables.product.prodname_ghe_server %} images published by {% data variables.product.prodname_dotcom %}'s AWS owner IDs (`025577942450` for GovCloud, and `895557238572` for other regions). For more information, see "[describe-images](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html)" in the AWS documentation.

```shell
aws ec2 describe-images \
--owners OWNER_ID \
--query 'sort_by(Images,&Name)[*].{Name:Name,ImageID:ImageId}' \
--output=text
```

2. Take note of the AMI ID for the latest {% data variables.product.prodname_ghe_server %} image.

## Creating a security group
@@ -81,6 +83,7 @@ AMIs for {% data variables.product.prodname_ghe_server %} are available in the A
If you're setting up your AMI for the first time, you will need to create a security group and add a new security group rule for each port in the table below. For more information, see the AWS guide "[Using Security Groups](https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-sg.html)."

1. Using the AWS CLI, create a new security group. For more information, see "[create-security-group](https://docs.aws.amazon.com/cli/latest/reference/ec2/create-security-group.html)" in the AWS documentation.

```shell
aws ec2 create-security-group --group-name SECURITY_GROUP_NAME --description "SECURITY GROUP DESCRIPTION"
```
@@ -88,9 +91,11 @@ If you're setting up your AMI for the first time, you will need to create a secu
2. Take note of the security group ID (`sg-xxxxxxxx`) of your newly created security group.

3. Create a security group rule for each of the ports in the table below. For more information, see "[authorize-security-group-ingress](https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html)" in the AWS documentation.

```shell
aws ec2 authorize-security-group-ingress --group-id SECURITY_GROUP_ID --protocol PROTOCOL --port PORT_NUMBER --cidr SOURCE_IP_RANGE
```

This table identifies what each port is used for.

{% data reusables.enterprise_installation.necessary_ports %}

@@ -40,6 +40,7 @@ Before launching {% data variables.location.product_location %} on Azure, you'll
{% data reusables.enterprise_installation.create-ghe-instance %}

1. Find the most recent {% data variables.product.prodname_ghe_server %} appliance image. For more information about the `vm image list` command, see "[`az vm image list`](https://docs.microsoft.com/cli/azure/vm/image?view=azure-cli-latest#az_vm_image_list)" in the Microsoft documentation.

```shell
az vm image list --all -f GitHub-Enterprise | grep '"urn":' | sort -V
```
@@ -83,6 +84,7 @@ To configure the instance, you must confirm the instance's status, upload a lice
{% data reusables.enterprise_installation.new-instance-attack-vector-warning %}

1. Before configuring the VM, you must wait for it to enter ReadyRole status. Check the status of the VM with the `vm list` command. For more information, see "[`az vm list`](https://docs.microsoft.com/cli/azure/vm?view=azure-cli-latest#az_vm_list)" in the Microsoft documentation.

```shell
$ az vm list -d -g RESOURCE_GROUP -o table
> Name ResourceGroup PowerState PublicIps Fqdns Location Zones
@@ -90,6 +92,7 @@ To configure the instance, you must confirm the instance's status, upload a lice
> VM_NAME RESOURCE_GROUP VM running 40.76.79.202 eastus

```

{% note %}

**Note:** Azure does not automatically create a FQDNS entry for the VM. For more information, see Azure's guide on how to "[Create a fully qualified domain name in the Azure portal for a Linux VM](https://docs.microsoft.com/azure/virtual-machines/linux/portal-create-fqdn)."

@@ -36,6 +36,7 @@ Before launching {% data variables.location.product_location %} on Google Cloud
## Selecting the {% data variables.product.prodname_ghe_server %} image

1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:

```shell
gcloud compute images list --project github-enterprise-public --no-standard-images
```
@@ -47,15 +48,19 @@ Before launching {% data variables.location.product_location %} on Google Cloud
GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.

```shell
gcloud compute networks create NETWORK-NAME --subnet-mode auto
```

2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.

```shell
$ gcloud compute firewall-rules create RULE-NAME \
--network NETWORK-NAME \
--allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
```

This table identifies the required ports and what each port is used for.

{% data reusables.enterprise_installation.necessary_ports %}
@@ -71,11 +76,13 @@ In production High Availability configurations, both primary and replica applian
To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.

```shell
gcloud compute disks create DATA-DISK-NAME --size DATA-DISK-SIZE --type DATA-DISK-TYPE --zone ZONE
```

2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.

```shell
$ gcloud compute instances create INSTANCE-NAME \
--machine-type n1-standard-8 \

@@ -37,25 +37,35 @@ shortTitle: Install on Hyper-V

{% data reusables.enterprise_installation.create-ghe-instance %}

1. In PowerShell, create a new Generation 1 virtual machine, configure the size based on your user license count, and attach the {% data variables.product.prodname_ghe_server %} image you downloaded. For more information, see "[New-VM](https://docs.microsoft.com/powershell/module/hyper-v/new-vm?view=win10-ps)" in the Microsoft documentation.

```shell
PS C:\> New-VM -Generation 1 -Name VM_NAME -MemoryStartupBytes MEMORY_SIZE -BootDevice VHD -VHDPath PATH_TO_VHD
```

{% data reusables.enterprise_installation.create-attached-storage-volume %} Replace `PATH_TO_DATA_DISK` with the path to the location where you create the disk. For more information, see "[New-VHD](https://docs.microsoft.com/powershell/module/hyper-v/new-vhd?view=win10-ps)" in the Microsoft documentation.

```shell
PS C:\> New-VHD -Path PATH_TO_DATA_DISK -SizeBytes DISK_SIZE
```

3. Attach the data disk to your instance. For more information, see "[Add-VMHardDiskDrive](https://docs.microsoft.com/powershell/module/hyper-v/add-vmharddiskdrive?view=win10-ps)" in the Microsoft documentation.

```shell
PS C:\> Add-VMHardDiskDrive -VMName VM_NAME -Path PATH_TO_DATA_DISK
```

4. Start the VM. For more information, see "[Start-VM](https://docs.microsoft.com/powershell/module/hyper-v/start-vm?view=win10-ps)" in the Microsoft documentation.

```shell
PS C:\> Start-VM -Name VM_NAME
```

5. Get the IP address of your VM. For more information, see "[Get-VMNetworkAdapter](https://docs.microsoft.com/powershell/module/hyper-v/get-vmnetworkadapter?view=win10-ps)" in the Microsoft documentation.

```shell
PS C:\> (Get-VMNetworkAdapter -VMName VM_NAME).IpAddresses
```

6. Copy the VM's IP address and paste it into a web browser.
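If you script these steps, a small convenience sketch (purely illustrative, reusing the same VM_NAME placeholder) can pick out the VM's first IPv4 address and open it directly:

```shell
PS C:\> # Hypothetical helper: take the first IPv4 address the VM reports
PS C:\> # and open the instance in the default browser.
PS C:\> $ip = (Get-VMNetworkAdapter -VMName VM_NAME).IpAddresses | Where-Object { $_ -match '^\d{1,3}(\.\d{1,3}){3}$' } | Select-Object -First 1
PS C:\> Start-Process "https://$ip"
```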

## Configuring the {% data variables.product.prodname_ghe_server %} instance
@@ -108,6 +108,7 @@ Optionally, if you use {% data variables.product.prodname_registry %} on your pr

ghe-config secrets.packages.azure-container-name "AZURE CONTAINER NAME"
ghe-config secrets.packages.azure-connection-string "CONNECTION STRING"
```

- Amazon S3:

```shell copy
@@ -117,6 +118,7 @@ Optionally, if you use {% data variables.product.prodname_registry %} on your pr

ghe-config secrets.packages.aws-access-key "S3 ACCESS KEY ID"
ghe-config secrets.packages.aws-secret-key "S3 ACCESS SECRET"
```
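
As an optional sanity check (not in the published steps), `ghe-config` can also read a value back when given just the key, so you can confirm a secret was stored:

```shell
# Print back one of the values set above; the read syntax is
# assumed from ghe-config's get/set behavior.
ghe-config secrets.packages.aws-access-key
```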

1. To prepare to enable {% data variables.product.prodname_registry %} on the staging instance, enter the following command.

```shell copy
@@ -45,7 +45,9 @@ For more information, see "[Using the activity view to see changes to your repos

{% data reusables.enterprise_installation.ssh-into-instance %}
1. In the appropriate Git repository, open the audit log file:

```shell
ghe-repo OWNER/REPOSITORY -c "cat audit_log"
```
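
The same pattern works with other commands run inside the repository; for example, a variation (same placeholders, not from the original steps) that shows only the most recent entries:

```shell
# Show only the last 20 lines of the repository's audit log.
ghe-repo OWNER/REPOSITORY -c "tail -n 20 audit_log"
```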

{% endif %}
@@ -101,6 +101,7 @@ For information on creating or accessing your access key ID and secret key, see

- Add the permissions policy you created above to allow writes to the bucket.
- Edit the trust relationship to add the `sub` field to the validation conditions, replacing `ENTERPRISE` with the name of your enterprise.

```
"Condition": {
  "StringEquals": {
@@ -109,6 +110,7 @@ For information on creating or accessing your access key ID and secret key, see
  }
}
```

- Make note of the Amazon Resource Name (ARN) of the created role. One way to look it up from the command line is sketched below.
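
This is a convenience only, assuming the AWS CLI is configured and using a hypothetical ROLE-NAME placeholder:

```shell
# Print the ARN of the role created above (ROLE-NAME is hypothetical).
aws iam get-role --role-name ROLE-NAME --query 'Role.Arn' --output text
```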

{% data reusables.enterprise.navigate-to-log-streaming-tab %}
{% data reusables.audit_log.streaming-choose-s3 %}
@@ -33,6 +33,7 @@ For more information about your options, see the official [MinIO docs](https://d
1. Set up your preferred environment variables for MinIO.

These examples use `MINIO_DIR`:

```shell
export MINIO_DIR=$(pwd)/minio
mkdir -p $MINIO_DIR
@@ -43,24 +44,29 @@ For more information about your options, see the official [MinIO docs](https://d

```shell
docker pull minio/minio
```

For more information, see the official "[MinIO Quickstart Guide](https://docs.min.io/docs/minio-quickstart-guide)."

3. Sign in to MinIO using your MinIO access key and secret.

{% linux %}

```shell
$ export MINIO_ACCESS_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
# this one is actually a secret, so careful
$ export MINIO_SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
```

{% endlinux %}

{% mac %}

```shell
$ export MINIO_ACCESS_KEY=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
# this one is actually a secret, so careful
$ export MINIO_SECRET_KEY=$(cat /dev/urandom | LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
```

{% endmac %}
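
As an aside, if `openssl` is available, a shorter alternative (not from the MinIO docs) generates keys without the pipeline or the `LC_CTYPE` workaround:

```shell
# Hex output is shell-safe on both Linux and macOS.
export MINIO_ACCESS_KEY=$(openssl rand -hex 16)
export MINIO_SECRET_KEY=$(openssl rand -hex 16)
```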

You can access your MinIO keys using the environment variables:
@@ -31,6 +31,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro

FROM gliderlabs/alpine:3.3
RUN apk add --no-cache git bash
```

3. From the working directory that contains `Dockerfile.alpine-3.3`, build an image:

```shell
@@ -43,11 +44,13 @@ You can use a Linux container management tool to build a pre-receive hook enviro

> ---> 0250ab3be9c5
> Successfully built 0250ab3be9c5
```

4. Create a container:

```shell
docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
```
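
Optionally, before exporting, you can confirm the stopped container exists; this check is not part of the original steps:

```shell
# docker ps -a includes containers that are created but not running.
docker ps -a --filter "name=pre-receive.alpine-3.3"
```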

5. Export the Docker container to a `gzip` compressed `tar` file:

```shell
@@ -60,6 +63,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro

1. Create a Linux `chroot` environment.
2. Create a `gzip` compressed `tar` file of the `chroot` directory.

```shell
cd /path/to/chroot
tar -czf /path/to/pre-receive-environment.tar.gz .
```