
Merge branch 'main' into main

This commit is contained in:
Courtney Wilson
2022-10-20 13:28:03 -05:00
committed by GitHub
81 changed files with 1320 additions and 1106 deletions

View File

@@ -277,7 +277,7 @@ async function handleGetSearchResultsError(req, res, error, options) {
// where you might not have a HATSTACK_URL configured.
if (reports) await Promise.all(reports)
}
res.status(500).json({ error: error.message })
}
// Alias for the latest version

View File

@@ -12,6 +12,8 @@ versions:
In addition to the [standard {% data variables.product.prodname_dotcom %}-hosted runners](/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources), {% data variables.product.prodname_dotcom %} also offers customers on {% data variables.product.prodname_team %} and {% data variables.product.prodname_ghe_cloud %} plans a range of {% data variables.actions.hosted_runner %}s with more RAM and CPU. These runners are hosted by {% data variables.product.prodname_dotcom %} and have the runner application and other tools preinstalled.
When {% data variables.actions.hosted_runner %}s are enabled for your organization, a default runner group is automatically created for you with a set of four pre-configured {% data variables.actions.hosted_runner %}s.
When you add a {% data variables.actions.hosted_runner %} to an organization, you are defining a type of machine from a selection of available hardware specifications and operating system images. {% data variables.product.prodname_dotcom %} will then create multiple instances of this runner that scale up and down to match the job demands of your organization, based on the autoscaling limits you define.
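Once a {% data variables.actions.hosted_runner %} exists in your organization, a workflow job targets it by the name you gave the runner in the `runs-on` key. The following is a hypothetical sketch; `ubuntu-20.04-16core` is a placeholder for whatever name you assign when you create the runner.

```yaml
# Hypothetical job targeting a larger runner by its assigned name.
jobs:
  build:
    runs-on: ubuntu-20.04-16core
    steps:
      - uses: actions/checkout@v3
      - run: ./build-and-test.sh
```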
## Machine specs for {% data variables.actions.hosted_runner %}s

View File

@@ -1,7 +1,7 @@
---
title: Caching dependencies to speed up workflows
shortTitle: Cache dependencies
intro: 'To make your workflows faster and more efficient, you can create and use caches for dependencies and other commonly reused files.'
redirect_from:
- /github/automating-your-workflow-with-github-actions/caching-dependencies-to-speed-up-workflows
- /actions/automating-your-workflow-with-github-actions/caching-dependencies-to-speed-up-workflows
@@ -14,65 +14,60 @@ type: tutorial
topics:
- Workflows
miniTocMaxHeadingLevel: 3
---
## About caching workflow dependencies
Workflow runs often reuse the same outputs or downloaded dependencies from one run to another. For example, package and dependency management tools such as Maven, Gradle, npm, and Yarn keep a local cache of downloaded dependencies.
{% ifversion fpt or ghec %} Jobs on {% data variables.product.prodname_dotcom %}-hosted runners start in a clean runner image and must download dependencies each time, causing increased network utilization, longer runtime, and increased cost. {% endif %}To help speed up the time it takes to recreate files like dependencies, {% data variables.product.prodname_dotcom %} can cache files you frequently use in workflows.
To cache dependencies for a job, you can use {% data variables.product.prodname_dotcom %}'s [`cache` action](https://github.com/actions/cache). The action creates and restores a cache identified by a unique key. Alternatively, if you are caching the package managers listed below, using their respective setup-* actions requires minimal configuration and will create and restore dependency caches for you.
| Package managers | setup-* action for caching |
|---|---|
| npm, Yarn, pnpm | [setup-node](https://github.com/actions/setup-node#caching-global-packages-data) |
| pip, pipenv, Poetry | [setup-python](https://github.com/actions/setup-python#caching-packages-dependencies) |
| Gradle, Maven | [setup-java](https://github.com/actions/setup-java#caching-packages-dependencies) |
| RubyGems | [setup-ruby](https://github.com/ruby/setup-ruby#caching-bundle-install-automatically) |
| Go `go.sum` | [setup-go](https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs) |
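For example, the following is a minimal sketch of the setup-* route, assuming a Node.js project with a `package-lock.json` at the repository root. With the `cache` input set, `setup-node` creates and restores the npm dependency cache for you.

```yaml
# Minimal sketch: let setup-node manage the dependency cache.
# The cache key is derived from package-lock.json automatically.
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-node@v3
    with:
      node-version: 16
      cache: 'npm'    # also accepts 'yarn' or 'pnpm'
  - run: npm ci
```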
{% warning %}
**Warning**: {% ifversion fpt or ghec %}Be mindful of the following when using caching with {% data variables.product.prodname_actions %}:
* {% endif %}We recommend that you don't store any sensitive information in the cache. For example, sensitive information can include access tokens or login credentials stored in a file in the cache path. Also, command line interface (CLI) programs like `docker login` can save access credentials in a configuration file. Anyone with read access can create a pull request on a repository and access the contents of a cache. Forks of a repository can also create pull requests on the base branch and access caches on the base branch.
{%- ifversion fpt or ghec %}
* When using self-hosted runners, caches from workflow runs are stored on {% data variables.product.company_short %}-owned cloud storage. A customer-owned storage solution is only available with {% data variables.product.prodname_ghe_server %}.
{%- endif %}
{% endwarning %}
{% data reusables.actions.comparing-artifacts-caching %}
For more information on workflow run artifacts, see "[Persisting workflow data using artifacts](/github/automating-your-workflow-with-github-actions/persisting-workflow-data-using-artifacts)."
## Restrictions for accessing a cache
A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch (usually `main`). For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch `feature-b` has the base branch `feature-a`, a workflow triggered on `feature-b` would have access to caches created in the default branch (`main`), `feature-a`, and `feature-b`.
Access restrictions provide cache isolation and security by creating a logical boundary between different branches or tags. For example, a cache created for the branch `feature-a` (with the base `main`) would not be accessible to a pull request for the branch `feature-c` (with the base `main`). On similar lines, a cache created for the tag `release-a` (from the base `main`) would not be accessible to a workflow triggered for the tag `release-b` (with the base `main`).
Multiple workflows within a repository share cache entries. A cache created for a branch within a workflow can be accessed and restored from another workflow for the same repository and branch.
## Using the `cache` action
The [`cache` action](https://github.com/actions/cache) will attempt to restore a cache based on the `key` you provide. When the action finds a cache, the action restores the cached files to the `path` you configure.
If there is no exact match, the action automatically creates a new cache if the job completes successfully. The new cache will use the `key` you provided and contains the files you specify in `path`.
You can optionally provide a list of `restore-keys` to use when the `key` doesn't match an existing cache. A list of `restore-keys` is useful when you are restoring a cache from another branch because `restore-keys` can partially match cache keys. For more information about matching `restore-keys`, see "[Matching a cache key](#matching-a-cache-key)."
### Input parameters for the `cache` action
- `key`: **Required** The key created when saving a cache and the key used to search for a cache. It can be any combination of variables, context values, static strings, and functions. Keys have a maximum length of 512 characters, and keys longer than the maximum length will cause the action to fail.
- `path`: **Required** The path(s) on the runner to cache or restore.
- You can specify a single path, or you can add multiple paths on separate lines. For example:
```
- name: Cache Gradle packages
@@ -82,9 +77,9 @@ ms.locfileid: '147710308'
~/.gradle/caches
~/.gradle/wrapper
```
- You can specify either directories or single files, and glob patterns are supported.
- You can specify absolute paths, or paths relative to the workspace directory.
- `restore-keys`: **Optional** A string containing alternative restore keys, with each restore key placed on a new line. If no cache hit occurs for `key`, these restore keys are used sequentially in the order provided to find and restore a cache. For example:
{% raw %}
```yaml
@@ -95,13 +90,13 @@ ms.locfileid: '147710308'
```
{% endraw %}
### Output parameters for the `cache` action
- `cache-hit`: A boolean value to indicate an exact match was found for the key.
### Example using the `cache` action
This example creates a new cache when the packages in `package-lock.json` file change, or when the runner's operating system changes. The cache key uses contexts and expressions to generate a key that includes the runner's operating system and a SHA-256 hash of the `package-lock.json` file.
```yaml{:copy}
name: Caching with npm
@@ -141,27 +136,27 @@ jobs:
run: npm test
```
When `key` matches an existing cache, it's called a _cache hit_, and the action restores the cached files to the `path` directory.
When `key` doesn't match an existing cache, it's called a _cache miss_, and a new cache is automatically created if the job completes successfully.
When a cache miss occurs, the action also searches your specified `restore-keys` for any matches:
1. If you provide `restore-keys`, the `cache` action sequentially searches for any caches that match the list of `restore-keys`.
- When there is an exact match, the action restores the files in the cache to the `path` directory.
- If there are no exact matches, the action searches for partial matches of the restore keys. When the action finds a partial match, the most recent cache is restored to the `path` directory.
1. The `cache` action completes and the next step in the job runs.
1. If the job completes successfully, the action automatically creates a new cache with the contents of the `path` directory.
For a more detailed explanation of the cache matching process, see "[Matching a cache key](#matching-a-cache-key)." Once a cache is created, its contents cannot be changed, but you can create a new cache with a new key.
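For example, one common pattern (not part of the workflow above) is to include a manual revision segment in the key; bumping it forces a brand-new cache even when the hashed files are unchanged:

{% raw %}
```yaml
key: ${{ runner.os }}-npm-v2-${{ hashFiles('package-lock.json') }}  # bump "v2" to start a fresh cache
```
{% endraw %}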
### Using contexts to create cache keys
A cache key can include any of the contexts, functions, literals, and operators supported by {% data variables.product.prodname_actions %}. For more information, see "[Contexts](/actions/learn-github-actions/contexts)" and "[Expressions](/actions/learn-github-actions/expressions)."
Using expressions to create a `key` allows you to automatically create a new cache when dependencies change.
For example, you can create a `key` using an expression that calculates the hash of an npm `package-lock.json` file. So, when the dependencies that make up the `package-lock.json` file change, the cache key changes and a new cache is automatically created.
{% raw %}
```yaml
@@ -169,17 +164,17 @@ npm-${{ hashFiles('package-lock.json') }}
```
{% endraw %}
{% data variables.product.prodname_dotcom %} evaluates the expression `hash "package-lock.json"` to derive the final `key`.
```yaml
npm-d5ea0750
```
### Using the output of the `cache` action
You can use the output of the `cache` action to do something based on whether a cache hit or miss occurred. When an exact match is found for a cache for the specified `key`, the `cache-hit` output is set to `true`.
In the example workflow above, there is a step that lists the state of the Node modules if a cache miss occurred:
```yaml
- if: {% raw %}${{ steps.cache-npm.outputs.cache-hit != 'true' }}{% endraw %}
@@ -188,13 +183,13 @@ npm-d5ea0750
run: npm list
```
## Matching a cache key
The `cache` action first searches for cache hits for `key` and `restore-keys` in the branch containing the workflow run. If there are no hits in the current branch, the `cache` action searches for `key` and `restore-keys` in the parent branch and upstream branches.
`restore-keys` allows you to specify a list of alternate restore keys to use when there is a cache miss on `key`. You can create multiple restore keys ordered from the most specific to least specific. The `cache` action searches the `restore-keys` in sequential order. When a key doesn't match directly, the action searches for keys prefixed with the restore key. If there are multiple partial matches for a restore key, the action returns the most recently created cache.
### Example using multiple restore keys
{% raw %}
```yaml
@@ -205,7 +200,7 @@ restore-keys: |
```
{% endraw %}
The runner evaluates the expressions, which resolve to these `restore-keys`:
{% raw %}
```yaml
@@ -216,13 +211,13 @@ restore-keys: |
```
{% endraw %}
The restore key `npm-feature-` matches any key that starts with the string `npm-feature-`. For example, both of the keys `npm-feature-fd3052de` and `npm-feature-a9b253ff` match the restore key. The cache with the most recent creation date would be used. The keys in this example are searched in the following order:
1. **`npm-feature-d5ea0750`** matches a specific hash.
1. **`npm-feature-`** matches cache keys prefixed with `npm-feature-`.
1. **`npm-`** matches any keys prefixed with `npm-`.
#### Example of search priority
```yaml
key:
@@ -232,30 +227,81 @@ restore-keys: |
npm-
```
For example, if a pull request contains a `feature` branch and targets the default branch (`main`), the action searches for `key` and `restore-keys` in the following order:
1. Key `npm-feature-d5ea0750` in the `feature` branch
1. Key `npm-feature-` in the `feature` branch
1. Key `npm-` in the `feature` branch
1. Key `npm-feature-d5ea0750` in the `main` branch
1. Key `npm-feature-` in the `main` branch
1. Key `npm-` in the `main` branch
## Usage limits and eviction policy
{% data variables.product.prodname_dotcom %} will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited{% ifversion actions-cache-policy-apis %}. By default, the limit is 10 GB per repository, but this limit might be different depending on policies set by your enterprise owners or repository administrators.{% else %} to 10 GB.{% endif %}
{% data reusables.actions.cache-eviction-process %} {% ifversion actions-cache-ui %}The cache eviction process may cause cache thrashing, where caches are created and deleted at a high frequency. To reduce this, you can review the caches for a repository and take corrective steps, such as removing caching from specific workflows. For more information, see "[Managing caches](#managing-caches)."{% endif %}{% ifversion actions-cache-admin-ui %} You can also increase the cache size limit for a repository. For more information, see "[Managing {% data variables.product.prodname_actions %} settings for a repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
{% elsif actions-cache-policy-apis %}
For information on changing the policies for the repository cache size limit, see "[Enforcing policies for {% data variables.product.prodname_actions %} in your enterprise](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise#enforcing-a-policy-for-cache-storage-in-your-enterprise)" and "[Managing {% data variables.product.prodname_actions %} settings for a repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
{% endif %}
{% ifversion actions-cache-management %}
## Managing caches
{% ifversion actions-cache-ui %}
To manage caches created from your workflows, you can:
- View a list of all cache entries for a repository.
- Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
- Delete cache entries from a repository.
- Monitor aggregate cache usage for repositories and organizations.
There are multiple ways to manage caches for your repositories:
- Using the {% data variables.product.prodname_dotcom %} web interface, as shown below.
- Using the REST API. For more information, see the "[{% data variables.product.prodname_actions %} Cache](/rest/actions/cache)" REST API documentation.
- Installing a {% data variables.product.prodname_cli %} extension to manage your caches from the command line. For more information, see the [gh-actions-cache](https://github.com/actions/gh-actions-cache) extension.
{% else %}
You can use the {% data variables.product.product_name %} REST API to manage your caches. {% ifversion actions-cache-list-delete-apis %}You can use the API to list and delete cache entries, and see your cache usage.{% elsif actions-cache-management %}At present, you can use the API to see your cache usage, with more functionality expected in future updates.{% endif %} For more information, see the "[{% data variables.product.prodname_actions %} Cache](/rest/actions/cache)" REST API documentation.
You can also install a {% data variables.product.prodname_cli %} extension to manage your caches from the command line. For more information about the extension, see [the extension documentation](https://github.com/actions/gh-actions-cache#readme). For more information about {% data variables.product.prodname_cli %} extensions, see "[Using GitHub CLI extensions](/github-cli/github-cli/using-github-cli-extensions)."
{% endif %}
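For example, where the cache endpoints are available for your plan and product version, the following {% data variables.product.prodname_cli %} calls are a rough sketch of checking usage, listing entries, and deleting an entry by key; `OWNER/REPO` is a placeholder for your repository.

```shell
# Hypothetical examples; replace OWNER/REPO with your repository.
gh api /repos/OWNER/REPO/actions/cache/usage     # total cache size for the repository
gh api /repos/OWNER/REPO/actions/caches          # list cache entries
gh api --method DELETE "/repos/OWNER/REPO/actions/caches?key=npm-feature-d5ea0750"   # delete by key
```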
{% ifversion actions-cache-ui %}
### Viewing cache entries
You can use the web interface to view a list of cache entries for a repository. In the cache list, you can see how much disk space each cache is using, when the cache was created, and when the cache was last used.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.actions-cache-list %}
1. Review the list of cache entries for the repository.
* To search for cache entries used for a specific branch, click the **Branch** dropdown menu and select a branch. The cache list will display all of the caches used for the selected branch.
* To search for cache entries with a specific cache key, use the syntax `key: key-name` in the **Filter caches** field. The cache list will display caches from all branches where the key was used.
![Screenshot of the list of cache entries](/assets/images/help/repository/actions-cache-entry-list.png)
### Deleting cache entries
Users with `write` access to a repository can use the {% data variables.product.prodname_dotcom %} web interface to delete cache entries.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.actions-cache-list %}
1. To the right of the cache entry you want to delete, click {% octicon "trash" aria-label="The trash icon" %}.
![Screenshot of the list of cache entries](/assets/images/help/repository/actions-cache-delete.png)
{% endif %}
{% endif %}

View File

@@ -1,6 +1,6 @@
---
title: Cluster network configuration
intro: '{% data variables.product.prodname_ghe_server %} clustering relies on proper DNS name resolution, load balancing, and communication between nodes to operate properly.'
redirect_from:
- /enterprise/admin/clustering/cluster-network-configuration
- /enterprise/admin/enterprise-management/cluster-network-configuration
@@ -14,68 +14,62 @@ topics:
- Infrastructure
- Networking
shortTitle: Configure a cluster network
---
## Network considerations
The simplest network design for clustering is to place the nodes on a single LAN. If a cluster must span subnetworks, we do not recommend configuring any firewall rules between the networks. The latency between nodes should be less than 1 millisecond.
{% data reusables.enterprise_clustering.network-latency %}
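As a rough sanity check, you can measure latency and port reachability between two nodes from the administrative shell. This is only a hypothetical suggestion: the hostname below is a placeholder, and `ping` and `nc` are assumed to be available on your nodes. The port used is one of the cluster communication ports listed below.

```shell
$ ping -c 10 ghe-data-node-1      # round-trip time between nodes should stay under 1 millisecond
$ nc -zv ghe-data-node-1 8301     # confirm a cluster communication port (here, Consul) is reachable
```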
### Application ports for end users
Application ports provide web application and Git access for end users.
| Port | Description | Encrypted |
| :------------- | :------------- | :------------- |
| 22/TCP | Git over SSH | Yes |
| 25/TCP | SMTP | Requires STARTTLS |
| 80/TCP | HTTP | No<br>(When SSL is enabled this port redirects to HTTPS) |
| 443/TCP | HTTPS | Yes |
| 9418/TCP | Simple Git protocol port<br>(Disabled in private mode) | No |
### Administrative ports
Administrative ports are not required for basic application use by end users.
| Port | Description | Encrypted |
| :------------- | :------------- | :------------- |
| ICMP | ICMP Ping | No |
| 122/TCP | Administrative SSH | Yes |
| 161/UDP | SNMP | No |
| 8080/TCP | Management Console HTTP | No<br>(When SSL is enabled this port redirects to HTTPS) |
| 8443/TCP | Management Console HTTPS | Yes |
### Cluster communication ports
If a network level firewall is in place between nodes, these ports will need to be accessible. The communication between nodes is not encrypted. These ports should not be accessible externally.
| Port | Description |
| :------------- | :------------- |
| 1336/TCP | Internal API |
| 3033/TCP | Internal SVN access |
| 3037/TCP | Internal SVN access |
| 3306/TCP | MySQL |
| 4486/TCP | Governor access |
| 5115/TCP | Storage backend |
| 5208/TCP | Internal SVN access |
| 6379/TCP | Redis |
| 8001/TCP | Grafana |
| 8090/TCP | Internal GPG access |
| 8149/TCP | GitRPC file server access |
| 8300/TCP | Consul |
| 8301/TCP | Consul |
| 8302/TCP | Consul |
| 9000/TCP | Git Daemon |
| 9102/TCP | Pages file server |
| 9105/TCP | LFS server |
| 9200/TCP | Elasticsearch |
| 9203/TCP | Semantic code service |
| 9300/TCP | Elasticsearch |
| 11211/TCP | Memcache |
| 161/UDP | SNMP |
@@ -84,42 +78,42 @@ ms.locfileid: '145112765'
| 8302/UDP | Consul |
| 25827/UDP | Collectd |
## Configuring a load balancer
We recommend an external TCP-based load balancer that supports the PROXY protocol to distribute traffic across nodes. Consider these load balancer configurations:
- TCP ports (shown below) should be forwarded to nodes running the `web-server` service. These are the only nodes that serve external client requests.
- Sticky sessions shouldn't be enabled.
{% data reusables.enterprise_installation.terminating-tls %}
## Handling client connection information
Because client connections to the cluster come from the load balancer, the client IP address can be lost. To properly capture the client connection information, additional consideration is required.
{% data reusables.enterprise_clustering.proxy_preference %}
{% data reusables.enterprise_clustering.proxy_xff_firewall_warning %}
### Enabling PROXY support on {% data variables.product.prodname_ghe_server %}
We strongly recommend enabling PROXY support for both your instance and the load balancer.
{% data reusables.enterprise_installation.proxy-incompatible-with-aws-nlbs %}
- For your instance, use this command:
```shell
$ ghe-config 'loadbalancer.proxy-protocol' 'true' && ghe-cluster-config-apply
```
- For the load balancer, use the instructions provided by your vendor.
{% data reusables.enterprise_clustering.proxy_protocol_ports %}
### Enabling X-Forwarded-For support on {% data variables.product.prodname_ghe_server %}
{% data reusables.enterprise_clustering.x-forwarded-for %}
To enable the `X-Forwarded-For` header, use this command:
```shell
$ ghe-config 'loadbalancer.http-forward' 'true' && ghe-cluster-config-apply
@@ -127,11 +121,12 @@ $ ghe-config 'loadbalancer.http-forward' 'true' && ghe-cluster-config-apply
{% data reusables.enterprise_clustering.without_proxy_protocol_ports %}
### Configuring Health Checks
Health checks allow a load balancer to stop sending traffic to a node that is not responding if a pre-configured check fails on that node. If a cluster node fails, health checks paired with redundant nodes provide high availability.
{% data reusables.enterprise_clustering.health_checks %}
{% data reusables.enterprise_site_admin_settings.maintenance-mode-status %}
## DNS Requirements
{% data reusables.enterprise_clustering.load_balancer_dns %}

View File

@@ -1,6 +1,6 @@
---
title: Configuring high availability replication for a cluster
intro: 'You can configure a passive replica of your entire {% data variables.product.prodname_ghe_server %} cluster in a different location, allowing your cluster to fail over to redundant nodes.'
miniTocMaxHeadingLevel: 3
redirect_from:
- /enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster
@@ -14,86 +14,80 @@ topics:
- High availability
- Infrastructure
shortTitle: Configure HA replication
---
## About high availability replication for clusters
You can configure a cluster deployment of {% data variables.product.prodname_ghe_server %} for high availability, where an identical set of passive nodes sync with the nodes in your active cluster. If hardware or software failures affect the datacenter with your active cluster, you can manually fail over to the replica nodes and continue processing user requests, minimizing the impact of the outage.
In high availability mode, each active node syncs regularly with a corresponding passive node. The passive node runs in standby and does not serve applications or process user requests.
We recommend configuring high availability as a part of a comprehensive disaster recovery plan for {% data variables.product.prodname_ghe_server %}. We also recommend performing regular backups. For more information, see "[Configuring backups on your appliance](/enterprise/admin/configuration/configuring-backups-on-your-appliance)."
## Prerequisites
### Hardware and software
For each existing node in your active cluster, you'll need to provision a second virtual machine with identical hardware resources. For example, if your cluster has 11 nodes and each node has 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage, you must provision 11 new virtual machines that each have 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage.
On each new virtual machine, install the same version of {% data variables.product.prodname_ghe_server %} that runs on the nodes in your active cluster. You don't need to upload a license or perform any additional configuration. For more information, see "[Setting up a {% data variables.product.prodname_ghe_server %} instance](/enterprise/admin/installation/setting-up-a-github-enterprise-server-instance)."
{% note %}
**Note**: The nodes that you intend to use for high availability replication should be standalone {% data variables.product.prodname_ghe_server %} instances. Don't initialize the passive nodes as a second cluster.
{% endnote %}
### Network
You must assign a static IP address to each new node that you provision, and you must configure a load balancer to accept connections and direct them to the nodes in your cluster's front-end tier.
{% data reusables.enterprise_clustering.network-latency %} For more information about network connectivity between nodes in the passive cluster, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration)."
## Creating a high availability replica for a cluster
- [Assigning active nodes to the primary datacenter](#assigning-active-nodes-to-the-primary-datacenter)
- [Adding passive nodes to the cluster configuration file](#adding-passive-nodes-to-the-cluster-configuration-file)
- [Example configuration](#example-configuration)
### Assigning active nodes to the primary datacenter
Before you define a secondary datacenter for your passive nodes, ensure that you assign your active nodes to the primary datacenter.
{% data reusables.enterprise_clustering.ssh-to-a-node %}
{% data reusables.enterprise_clustering.open-configuration-file %}
3. Note the name of your cluster's primary datacenter. The `[cluster]` section at the top of the cluster configuration file defines the primary datacenter's name, using the `primary-datacenter` key-value pair. By default, the primary datacenter for your cluster is named `default`.
```shell
[cluster]
mysql-master = HOSTNAME
redis-master = HOSTNAME
<strong>primary-datacenter = default</strong>
```
- Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
```
datacenter = default
```
When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}
```shell
[cluster "<em>HOSTNAME</em>"]
[cluster "HOSTNAME"]
<strong>datacenter = default</strong>
hostname = HOSTNAME
ipv4 = IP-ADDRESS
...
...
```
{% note %}
**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
```
consul-datacenter = primary
@@ -105,123 +99,123 @@ High Availability モードでは、各アクティブノードは対応する
{% data reusables.enterprise_clustering.configuration-finished %}
After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished assigning your nodes to the cluster's primary datacenter.
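As an optional spot check (a hypothetical step, not part of the official procedure), you can list the datacenter assignments straight from the configuration file before moving on.

```shell
# Lists primary-datacenter plus each node's datacenter and consul-datacenter
# values; at this point they should all agree.
git config -f /data/user/common/cluster.conf --get-regexp 'datacenter'
```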
### Adding passive nodes to the cluster configuration file
To configure high availability, you must define a corresponding passive node for every active node in your cluster. The following instructions create a new cluster configuration that defines both active and passive nodes. You will:
- Create a copy of the active cluster configuration file.
- Edit the copy to define passive nodes that correspond to the active nodes, adding the IP addresses of the new virtual machines that you provisioned.
- Merge the modified copy of the cluster configuration back into your active configuration.
- Apply the new configuration to start replication.
For an example configuration, see "[Example configuration](#example-configuration)."
1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."
{% note %}
**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
{% endnote %}
{% data reusables.enterprise_clustering.ssh-to-a-node %}
3. Back up your existing cluster configuration.
```
cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
```
4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
```
grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
```
5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
```
git config -f ~/cluster-passive.conf --remove-section cluster
```
6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
```shell
sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
```
7. Decide on a pattern for the passive nodes' hostnames.
{% warning %}
**Warning**: Hostnames for passive nodes must be unique and differ from the hostname for the corresponding active node.
{% endwarning %}
8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.
```shell
sudo vim ~/cluster-passive.conf
```
9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
- Change the quoted hostname in the section heading and the value for `hostname` within the section to the passive node's hostname, per the pattern you chose in step 7 above.
- Add a new key named `ipv4`, and set the value to the passive node's static IPv4 address.
- Add a new key-value pair, `replica = enabled`.
```shell
[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
[cluster "NEW PASSIVE NODE HOSTNAME"]
...
hostname = NEW PASSIVE NODE HOSTNAME
ipv4 = NEW PASSIVE NODE IPV4 ADDRESS
<strong>replica = enabled</strong>
...
...
```
10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
```shell
cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
```
11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA-MYSQL-PRIMARY-HOSTNAME
git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA-REDIS-PRIMARY-HOSTNAME
```
{% warning %}
**Warning**: Review your cluster configuration file before proceeding.
- In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
- In each section for an active node named <code>[cluster "ACTIVE NODE HOSTNAME"]</code>, double-check the following key-value pairs.
- `datacenter` should match the value of `primary-datacenter` in the top-level `[cluster]` section.
- `consul-datacenter` should match the value of `datacenter`, which should be the same as the value for `primary-datacenter` in the top-level `[cluster]` section.
- Ensure that for each active node, the configuration has **one** corresponding section for **one** passive node with the same roles. In each section for a passive node, double-check each key-value pair.
- `datacenter` should match all other passive nodes.
- `consul-datacenter` should match all other passive nodes.
- `hostname` should match the hostname in the section heading.
- `ipv4` should match the node's unique, static IPv4 address.
- `replica` should be configured as `enabled`.
- Take the opportunity to remove sections for offline nodes that are no longer in use.
To review an example configuration, see "[Example configuration](#example-configuration)."
{% endwarning %}
13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}
```shell
ghe-cluster-config-init
```
14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.
```shell
Finished cluster initialization
@@ -231,33 +225,33 @@ High Availability を設定するには、クラスタ内のすべてのアク
{% data reusables.enterprise_clustering.configuration-finished %}
17. Configure a load balancer that will accept connections from users if you fail over to the passive nodes. For more information, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration#configuring-a-load-balancer)."
You've finished configuring high availability replication for the nodes in your cluster. Each active node begins replicating configuration and data to its corresponding passive node, and you can direct traffic to the load balancer for the secondary datacenter in the event of a failure. For more information about failing over, see "[Initiating a failover to your replica cluster](/enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster)."
### Example configuration
The top-level `[cluster]` configuration should look like the following example.
```shell
[cluster]
mysql-master = HOSTNAME-OF-ACTIVE-MYSQL-MASTER
redis-master = HOSTNAME-OF-ACTIVE-REDIS-MASTER
primary-datacenter = PRIMARY-DATACENTER-NAME
mysql-master-replica = HOSTNAME-OF-PASSIVE-MYSQL-MASTER
redis-master-replica = HOSTNAME-OF-PASSIVE-REDIS-MASTER
mysql-auto-failover = false
...
```
The configuration for an active node in your cluster's storage tier should look like the following example.
```shell
...
[cluster "<em>UNIQUE ACTIVE NODE HOSTNAME</em>"]
[cluster "UNIQUE ACTIVE NODE HOSTNAME"]
datacenter = default
hostname = UNIQUE-ACTIVE-NODE-HOSTNAME
ipv4 = IPV4-ADDRESS
consul-datacenter = default
consul-server = true
git-server = true
@@ -268,26 +262,26 @@ High Availability を設定するには、クラスタ内のすべてのアク
memcache-server = true
metrics-server = true
storage-server = true
vpn = IPV4 ADDRESS SET AUTOMATICALLY
uuid = UUID SET AUTOMATICALLY
wireguard-pubkey = PUBLIC KEY SET AUTOMATICALLY
...
```
The configuration for the corresponding passive node in the storage tier should look like the following example.
- Important differences from the corresponding active node are **bold**.
- {% data variables.product.prodname_ghe_server %} assigns values for `vpn`, `uuid`, and `wireguard-pubkey` automatically, so you shouldn't define the values for passive nodes that you will initialize.
- The server roles, defined by `*-server` keys, match the corresponding active node.
```shell
...
<strong>[cluster "<em>UNIQUE PASSIVE NODE HOSTNAME</em>"]</strong>
<strong>[cluster "UNIQUE PASSIVE NODE HOSTNAME"]</strong>
<strong>replica = enabled</strong>
<strong>ipv4 = IPV4 ADDRESS OF NEW VM WITH IDENTICAL RESOURCES</strong>
<strong>datacenter = SECONDARY DATACENTER NAME</strong>
<strong>hostname = UNIQUE PASSIVE NODE HOSTNAME</strong>
<strong>consul-datacenter = SECONDARY DATACENTER NAME</strong>
consul-server = true
git-server = true
pages-server = true
@@ -297,73 +291,73 @@ High Availability を設定するには、クラスタ内のすべてのアク
memcache-server = true
metrics-server = true
storage-server = true
<strong>vpn = DO NOT DEFINE</strong>
<strong>uuid = DO NOT DEFINE</strong>
<strong>wireguard-pubkey = DO NOT DEFINE</strong>
...
```
## Monitoring replication between active and passive cluster nodes
Initial replication between the active and passive nodes in your cluster takes time. The amount of time depends on the amount of data to replicate and the activity levels for {% data variables.product.prodname_ghe_server %}.
You can monitor the progress on any node in the cluster, using command-line tools available via the {% data variables.product.prodname_ghe_server %} administrative shell. For more information about the administrative shell, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh)."
- Monitor replication of databases:
```
/usr/local/share/enterprise/ghe-cluster-status-mysql
```
- Monitor replication of repository and Gist data:
```
ghe-spokes status
```
- Monitor replication of attachment and LFS data:
```
ghe-storage replication-status
```
- Monitor replication of Pages data:
```
ghe-dpages replication-status
```
You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
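For example, a minimal sketch of checking overall cluster health from any node over the administrative shell (the hostname is an example):

```shell
$ ssh -p 122 admin@ghe-node-1.example.com
$ ghe-cluster-status
```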
## Reconfiguring high availability replication after a failover
After you fail over from the cluster's active nodes to the cluster's passive nodes, you can reconfigure high availability replication in two ways.
### Provisioning and configuring new passive nodes
After a failover, you can reconfigure high availability in two ways. The method you choose will depend on the reason that you failed over, and the state of the original active nodes.
1. Provision and configure a new set of passive nodes for each of the new active nodes in your secondary datacenter.
2. Use the old active nodes as the new passive nodes.
The process for reconfiguring high availability is identical to the initial configuration of high availability. For more information, see "[Creating a high availability replica for a cluster](#creating-a-high-availability-replica-for-a-cluster)."
## Disabling high availability replication for a cluster
You can stop replication to the passive nodes for your cluster deployment of {% data variables.product.prodname_ghe_server %}.
{% data reusables.enterprise_clustering.ssh-to-a-node %}
{% data reusables.enterprise_clustering.open-configuration-file %}
3. In the top-level `[cluster]` section, delete the `redis-master-replica` and `mysql-master-replica` key-value pairs.
4. Delete each section for a passive node. For passive nodes, `replica` is configured as `enabled`.
{% data reusables.enterprise_clustering.apply-configuration %}
{% data reusables.enterprise_clustering.configuration-finished %}
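For reference, the lines that steps 3 and 4 remove look like the following sketch (hostnames are examples):

```shell
[cluster]
mysql-master-replica = HOSTNAME-OF-PASSIVE-MYSQL-MASTER
redis-master-replica = HOSTNAME-OF-PASSIVE-REDIS-MASTER
...
[cluster "UNIQUE PASSIVE NODE HOSTNAME"]
  replica = enabled
  ...
```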
After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished disabling high availability replication.

View File

@@ -1,6 +1,6 @@
---
title: Creating a high availability replica
intro: 'In an active/passive configuration, the replica appliance is a redundant copy of the primary appliance. If the primary appliance fails, high availability mode allows the replica to act as the primary appliance, allowing minimal service disruption.'
redirect_from:
- /enterprise/admin/installation/creating-a-high-availability-replica
- /enterprise/admin/enterprise-management/creating-a-high-availability-replica
@@ -13,94 +13,92 @@ topics:
- High availability
- Infrastructure
shortTitle: Create HA replica
---
{% data reusables.enterprise_installation.replica-limit %}
## Creating a high availability replica
1. Set up a new {% data variables.product.prodname_ghe_server %} appliance on your desired platform. The replica appliance should mirror the primary appliance's CPU, RAM, and storage settings. We recommend that you install the replica appliance in an independent environment. The underlying hardware, software, and network components should be isolated from those of the primary appliance. If you are using a cloud provider, use a separate region or zone. For more information, see ["Setting up a {% data variables.product.prodname_ghe_server %} instance"](/enterprise/admin/guides/installation/setting-up-a-github-enterprise-server-instance).
1. Ensure that the new appliance can communicate with all other appliances in this high availability environment over ports 122/TCP and 1194/UDP. For more information, see "[Network ports](/admin/configuration/configuring-network-settings/network-ports#administrative-ports)."
1. In a browser, navigate to the new replica appliance's IP address and upload your {% data variables.product.prodname_enterprise %} license.
{% data reusables.enterprise_installation.replica-steps %}
1. Connect to the replica appliance's IP address using SSH.
```shell
$ ssh -p 122 admin@REPLICA_IP
```
{% data reusables.enterprise_installation.generate-replication-key-pair %}
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the new replica, run `ghe-repl-setup` again.
```shell
$ ghe-repl-setup PRIMARY_IP
```
{% data reusables.enterprise_installation.replication-command %}
{% data reusables.enterprise_installation.verify-replication-channel %}
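Taken together, the commands you run on the replica appliance look like this sketch (the IP addresses are examples):

```shell
$ ssh -p 122 admin@203.0.113.5    # the replica's IP address
$ ghe-repl-setup 192.0.2.10       # the primary's IP address
$ ghe-repl-start                  # begin replication
$ ghe-repl-status                 # verify the replication channel
```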
## Creating geo-replication replicas
This example configuration uses a primary and two replicas, which are located in three different geographic regions. While the three nodes can be in different networks, all nodes are required to be reachable from all the other nodes. At the minimum, the required administrative ports should be open to all the other nodes. For more information about the port requirements, see "[Network Ports](/enterprise/admin/guides/installation/network-ports/#administrative-ports)."
{% data reusables.enterprise_clustering.network-latency %}{% ifversion ghes > 3.2 %} If latency is more than 70 milliseconds, we recommend cache replica nodes instead. For more information, see "[Configuring a repository cache](/admin/enterprise-management/caching-repositories/configuring-a-repository-cache)."{% endif %}
1. Create the first replica the same way you would for a standard two node configuration by running `ghe-repl-setup` on the first replica.
```shell
(replica1)$ ghe-repl-setup PRIMARY_IP
(replica1)$ ghe-repl-start
```
2. Create a second replica and use the `ghe-repl-setup --add` command. The `--add` flag prevents it from overwriting the existing replication configuration and adds the new replica to the configuration.
```shell
(replica2)$ ghe-repl-setup --add PRIMARY_IP
(replica2)$ ghe-repl-start
```
3. By default, replicas are configured to the same datacenter, and will now attempt to seed from an existing node in the same datacenter. Configure the replicas for different datacenters by setting a different value for the datacenter option. The specific values can be anything you would like as long as they are different from each other. Run the `ghe-repl-node` command on each node and specify the datacenter.
On the primary:
```shell
(primary)$ ghe-repl-node --datacenter [PRIMARY DC NAME]
```
On the first replica:
```shell
(replica1)$ ghe-repl-node --datacenter [FIRST REPLICA DC NAME]
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --datacenter [SECOND REPLICA DC NAME]
```
{% tip %}
**Tip:** You can set the `--datacenter` and `--active` options at the same time.
{% endtip %}
4. An active replica node will store copies of the appliance data and service end user requests. An inactive node will store copies of the appliance data but will be unable to service end user requests. Enable active mode using the `--active` flag or inactive mode using the `--inactive` flag.
On the first replica:
```shell
(replica1)$ ghe-repl-node --active
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --active
```
5. To apply the configuration, use the `ghe-config-apply` command on the primary.
```shell
(primary)$ ghe-config-apply
```
## Configuring DNS for geo-replication
Configure Geo DNS using the IP addresses of the primary and replica nodes. You can also create a DNS CNAME for the primary node (e.g. `primary.github.example.com`) to access the primary node via SSH or to back it up via `backup-utils`.
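For illustration, a hypothetical set of DNS records might look like the following sketch; the names and addresses are examples only, and a geo-aware DNS provider would typically return only the record for the node nearest to the requesting client.

```
github.example.com.          IN A      192.0.2.10       ; primary
github.example.com.          IN A      198.51.100.10    ; replica1
github.example.com.          IN A      203.0.113.10     ; replica2
primary.github.example.com.  IN CNAME  ghe-primary.dc1.example.net.   ; stable name for SSH access and backup-utils
```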
For testing, you can add entries to the local workstation's `hosts` file (for example, `/etc/hosts`). These example entries will resolve requests for `HOSTNAME` to `replica2`. You can target specific hosts by commenting out different lines.
```
# <primary IP> HOSTNAME
# <replica1 IP> HOSTNAME
<replica2 IP> HOSTNAME
```
## Further reading
- "[高可用性構成について](/enterprise/admin/guides/installation/about-high-availability-configuration)"
- [Utilities for replication management](/enterprise/admin/guides/installation/about-high-availability-configuration/#utilities-for-replication-management)」 (レプリケーション管理のユーティリティ)
- "[geo レプリケーションについて](/enterprise/admin/guides/installation/about-geo-replication/)"
- "[About high availability configuration](/enterprise/admin/guides/installation/about-high-availability-configuration)"
- "[Utilities for replication management](/enterprise/admin/guides/installation/about-high-availability-configuration/#utilities-for-replication-management)"
- "[About geo-replication](/enterprise/admin/guides/installation/about-geo-replication/)"

View File

@@ -36,7 +36,9 @@ When you use external authentication, {% data variables.location.product_locatio
If you use an enterprise with {% data variables.product.prodname_emus %}, members of your enterprise authenticate to access {% data variables.product.prodname_dotcom %} through your SAML identity provider (IdP). For more information, see "[About {% data variables.product.prodname_emus %}](/admin/identity-and-access-management/using-enterprise-managed-users-and-saml-for-iam/about-enterprise-managed-users)" and "[About authentication for your enterprise](/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise#authentication-methods-for-github-enterprise-server)."
{% data variables.product.product_name %} automatically creates a username for each person when their user account is provisioned via SCIM, by normalizing an identifier provided by your IdP. If multiple identifiers are normalized into the same username, a username conflict occurs, and only the first user account is created. {% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %} You can resolve username conflicts by making a change in your IdP so that the normalized usernames will be unique.
{% data variables.product.prodname_dotcom %} automatically creates a username for each person when their user account is provisioned via SCIM, by normalizing an identifier provided by your IdP, then adding an underscore and short code. If multiple identifiers are normalized into the same username, a username conflict occurs, and only the first user account is created. You can resolve username problems by making a change in your IdP so that the normalized usernames will be unique and within the 39-character limit.
{% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %}
{% elsif ghae %}
@@ -62,7 +64,7 @@ These rules may result in your IdP providing the same _IDP-USERNAME_ for multipl
- `bob@fabrikam.com`
- `bob#EXT#fabrikamcom@contoso.com`
This will cause a username conflict, and only the first user will be provisioned. For more information, see "[Resolving username conflicts](#resolving-username-conflicts)."
This will cause a username conflict, and only the first user will be provisioned. For more information, see "[Resolving username problems](#resolving-username-problems)."
{% endif %}
Usernames{% ifversion ghec %}, including underscore and short code,{% endif %} must not exceed 39 characters.
@@ -83,7 +85,7 @@ When you configure SAML authentication, {% data variables.product.product_name %
1. Usernames created from email addresses are created from the normalized characters that precede the `@` character.
1. If multiple accounts are normalized into the same {% data variables.product.product_name %} username, only the first user account is created. Subsequent users with the same username won't be able to sign in. {% ifversion ghec %}For more information, see "[Resolving username conflicts](#resolving-username-conflicts)."{% endif %}
1. If multiple accounts are normalized into the same {% data variables.product.product_name %} username, only the first user account is created. Subsequent users with the same username won't be able to sign in. {% ifversion ghec %}For more information, see "[Resolving username problems](#resolving-username-problems)."{% endif %}
### Examples of username normalization
@@ -121,11 +123,16 @@ When you configure SAML authentication, {% data variables.product.product_name %
{% endif %}
{% ifversion ghec %}
## Resolving username conflicts
## Resolving username problems
When a new user is being provisioned, if the user's normalized username conflicts with an existing user in the enterprise, the provisioning attempt will fail with a `409` error.
When a new user is being provisioned, if the username is longer than 39 characters (including underscore and short code), or conflicts with an existing user in the enterprise, the provisioning attempt will fail with a `409` error.
To resolve this problem, you must make a change in your IdP so that the normalized usernames will be unique. If you cannot change the identifier that's being normalized, you can change the attribute mapping for the `userName` attribute. If you change the attribute mapping, usernames of existing {% data variables.enterprise.prodname_managed_users %} will be updated, but nothing else about the accounts will change, including activity history.
To resolve this problem, you must make one of the following changes in your IdP so that all normalized usernames will be within the character limit and unique.
- Change the `userName` attribute value for individual users that are causing problems
- Change the `userName` attribute mapping for all users
- Configure a custom `userName` attribute for all users
When you change the attribute mapping, usernames of existing {% data variables.enterprise.prodname_managed_users %} will be updated, but nothing else about the accounts will change, including activity history.
{% note %}
@@ -133,9 +140,9 @@ To resolve this problem, you must make a change in your IdP so that the normaliz
{% endnote %}
### Resolving username conflicts with Azure AD
### Resolving username problems with Azure AD
To resolve username conflicts in Azure AD, either modify the User Principal Name value for the conflicting user or modify the attribute mapping for the `userName` attribute. If you modify the attribute mapping, you can choose an existing attribute or use an expression to ensure that all provisioned users have a unique normalized alias.
To resolve username problems in Azure AD, either modify the User Principal Name value for the conflicting user or modify the attribute mapping for the `userName` attribute. If you modify the attribute mapping, you can choose an existing attribute or use an expression to ensure that all provisioned users have a unique normalized alias.
1. In Azure AD, open the {% data variables.product.prodname_emu_idp_application %} application.
1. In the left sidebar, click **Provisioning**.
@@ -146,9 +153,9 @@ To resolve username conflicts in Azure AD, either modify the User Principal Name
- To map an existing attribute in Azure AD to the `userName` attribute in {% data variables.product.prodname_dotcom %}, click your desired attribute field. Then, save and wait for a provisioning cycle to occur within about 40 minutes.
- To use an expression instead of an existing attribute, change the Mapping type to "Expression", then add a custom expression that will make this value unique for all users. For example, you could use `[FIRST NAME]-[LAST NAME]-[EMPLOYEE ID]`. For more information, see [Reference for writing expressions for attribute mappings in Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/functions-for-customizing-application-data) in Microsoft Docs.
### Resolving username conflicts with Okta
### Resolving username problems with Okta
To resolve username conflicts in Okta, update the attribute mapping settings for the {% data variables.product.prodname_emu_idp_application %} application.
To resolve username problems in Okta, update the attribute mapping settings for the {% data variables.product.prodname_emu_idp_application %} application.
1. In Okta, open the {% data variables.product.prodname_emu_idp_application %} application.
1. Click **Sign On**.

View File

@@ -136,7 +136,9 @@ By default, when an unauthenticated user attempts to access an enterprise that u
{% data variables.product.product_name %} automatically creates a username for each person by normalizing an identifier provided by your IdP. For more information, see "[Username considerations for external authentication](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication)."
A conflict may occur when provisioning users if the unique parts of the identifier provided by your IdP are removed during normalization. {% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %} If you're unable to provision a user due to a username conflict, you should modify the username provided by your IdP. For more information, see "[Resolving username conflicts](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication#resolving-username-conflicts)."
A conflict may occur when provisioning users if the unique parts of the identifier provided by your IdP are removed during normalization. If you're unable to provision a user due to a username conflict, you should modify the username provided by your IdP. For more information, see "[Resolving username problems](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication#resolving-username-problems)."
{% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %}
The profile name and email address of a {% data variables.enterprise.prodname_managed_user %} is also provided by the IdP. {% data variables.enterprise.prodname_managed_users_caps %} cannot change their profile name or email address on {% data variables.product.prodname_dotcom %}, and the IdP can only provide a single email address.

View File

@@ -168,9 +168,19 @@ By default, when you create a new enterprise, workflows are not allowed to creat
{% data reusables.actions.cache-default-size %} {% data reusables.actions.cache-eviction-process %}
However, you can set an enterprise policy to customize both the default total cache size for each repository, as well as the maximum total cache size allowed for a repository. For example, you might want the default total cache size for each repository to be 5 GB, but also allow repository administrators to configure a total cache size up to 15 GB if necessary.
However, you can set an enterprise policy to customize both the default total cache size for each repository, as well as the maximum total cache size allowed for a repository. For example, you might want the default total cache size for each repository to be 5 GB, but also allow {% ifversion actions-cache-admin-ui %}organization owners and{% endif %} repository administrators to configure a total cache size up to 15 GB if necessary.
People with admin access to a repository can set a total cache size for their repository up to the maximum cache size allowed by the enterprise policy setting.
{% ifversion actions-cache-admin-ui %}Organization owners can set a lower total cache size that applies to each repository in their organization. {% endif %}People with admin access to a repository can set a total cache size for their repository up to the maximum cache size allowed by the enterprise {% ifversion actions-cache-admin-ui %}or organization{% endif %} policy setting.
{% ifversion actions-cache-admin-ui %}
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.policies-tab %}
{% data reusables.enterprise-accounts.actions-tab %}
1. In the "Artifact, cache and log settings" section, under **Maximum cache size limit**, enter a value, then click **Save** to apply the setting.
1. In the "Artifact, cache and log settings" section, under **Default cache size limit**, enter a value, then click **Save** to apply the setting.
{% else %}
The policy settings for {% data variables.product.prodname_actions %} cache storage can currently only be modified using the REST API:
@@ -180,3 +190,5 @@ The policy settings for {% data variables.product.prodname_actions %} cache stor
{% data reusables.actions.cache-no-org-policy %}
{% endif %}
{% endif %}

View File

@@ -125,7 +125,7 @@ Before adding a new SSH key to the ssh-agent to manage your keys, you should hav
* Open your `~/.ssh/config` file, then modify the file to contain the following lines. If your SSH key file has a different name or path than the example code, modify the filename or path to match your current setup.
```
Host *
Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_{% ifversion ghae %}ecdsa{% else %}ed25519{% endif %}
@@ -137,10 +137,10 @@ Before adding a new SSH key to the ssh-agent to manage your keys, you should hav
- If you chose not to add a passphrase to your key, you should omit the `UseKeychain` line.
- If you see a `Bad configuration option: usekeychain` error, add an additional line to the configuration's `Host *` section.
- If you see a `Bad configuration option: usekeychain` error, add an additional line to the configuration's `Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}` section.
```
Host *
Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}
IgnoreUnknown UseKeychain
```
{% endnote %}

View File

@@ -35,8 +35,6 @@ When you create a {% data variables.product.pat_generic %}, we recommend that yo
If a valid OAuth token, {% data variables.product.prodname_github_app %} token, or {% data variables.product.pat_generic %} is pushed to a public repository or public gist, the token will be automatically revoked.
OAuth tokens and personal {% data variables.product.pat_v1_plural %} pushed to public repositories and public gists will only be revoked if the token has scopes.{% ifversion pat-v2 %} {% data variables.product.pat_v2_caps %}s will always be revoked.{% endif %}
{% endif %}
{% ifversion fpt or ghec %}

View File

@@ -860,7 +860,7 @@ registries:
The `npm-registry` type supports username and password, or token.
When using username and password, your `.npmrc`'s auth token may contain a `base64` encoded `_password`; however, the password referenced in your {% data variables.product.prodname_dependabot %} configuration file must be the original (unencoded) password.
{% raw %}
```yaml
@@ -882,6 +882,8 @@ registries:
token: ${{secrets.MY_GITHUB_PERSONAL_TOKEN}}
```
{% endraw %}
{% ifversion dependabot-yarn-v3-update %}
For security reasons, {% data variables.product.prodname_dependabot %} does not set environment variables. Yarn (v2 and later) requires that any accessed environment variables are set. When accessing environment variables in your `.yarnrc.yml` file, you should provide a fallback value such as {% raw %}`${ENV_VAR-fallback}`{% endraw %} or {% raw %}`${ENV_VAR:-fallback}`{% endraw %}. For more information, see [Yarnrc files](https://yarnpkg.com/configuration/yarnrc) in the Yarn documentation.{% endif %}
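{% ifversion dependabot-yarn-v3-update %}
For example, a minimal `.yarnrc.yml` sketch that provides a fallback value (the registry URL and variable name are illustrative):

```yaml
npmRegistries:
  "https://npm.pkg.github.com":
    npmAuthToken: "${NPM_TOKEN-}"
```
{% endif %}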
### `nuget-feed`

View File

@@ -1,6 +1,6 @@
---
title: Protecting pushes with secret scanning
intro: 'You can use {% data variables.product.prodname_secret_scanning %} to prevent supported secrets from being pushed into your {% ifversion secret-scanning-enterprise-level %}enterprise,{% endif %} organization{% ifversion secret-scanning-enterprise-level %},{% endif %} or repository by enabling push protection.'
product: '{% data reusables.gated-features.secret-scanning %}'
miniTocMaxHeadingLevel: 3
versions:
@@ -14,118 +14,127 @@ topics:
- Alerts
- Repositories
shortTitle: Enable push protection
---
{% data reusables.secret-scanning.beta %}
{% data reusables.secret-scanning.enterprise-enable-secret-scanning %}
{% data reusables.secret-scanning.push-protection-beta %}
## About push protection for secrets
Up to now, {% data variables.product.prodname_secret_scanning_GHAS %} checks for secrets _after_ a push and alerts users to exposed secrets. {% data reusables.secret-scanning.push-protection-overview %}
If a contributor bypasses a push protection block for a secret, {% data variables.product.prodname_dotcom %}:
- generates an alert.
- creates an alert in the "Security" tab of the repository.
- adds the bypass event to the audit log.{% ifversion secret-scanning-push-protection-email %}
- sends an email alert to organization owners, security managers, and repository administrators, with a link to the related secret and the reason why it was allowed.{% endif %}
For information on the secrets and service providers supported for push protection, see "[{% data variables.product.prodname_secret_scanning_caps %} patterns](/code-security/secret-scanning/secret-scanning-patterns#supported-secrets-for-push-protection)."
## Enabling {% data variables.product.prodname_secret_scanning %} as a push protection
For you to use {% data variables.product.prodname_secret_scanning %} as a push protection, the {% ifversion secret-scanning-enterprise-level %}enterprise,{% endif %} organization{% ifversion secret-scanning-enterprise-level %},{% endif %} or repository needs to have both {% data variables.product.prodname_GH_advanced_security %} and {% data variables.product.prodname_secret_scanning %} enabled. For more information, see {% ifversion secret-scanning-enterprise-level %}"[Managing security and analysis settings for your enterprise](/admin/code-security/managing-github-advanced-security-for-your-enterprise/managing-github-advanced-security-features-for-your-enterprise),"{% endif %} "[Managing security and analysis settings for your organization](/organizations/keeping-your-organization-secure/managing-security-and-analysis-settings-for-your-organization)," "[Managing security and analysis settings for your repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository)," and "[About {% data variables.product.prodname_GH_advanced_security %}](/get-started/learning-about-github/about-github-advanced-security)."
Organization owners, security managers, and repository administrators can enable push protection for {% data variables.product.prodname_secret_scanning %} via the UI and API. For more information, see "[Repositories](/rest/reference/repos#update-a-repository)" and expand the "Properties of the `security_and_analysis` object" section in the REST API documentation.
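As an illustration, the following sketch enables push protection for a single repository through the REST API; the owner, repository, and token are placeholders, and the authoritative request format is described in the REST documentation linked above.

```shell
curl -X PATCH \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR-TOKEN" \
  https://api.github.com/repos/OWNER/REPO \
  -d '{"security_and_analysis":{"secret_scanning_push_protection":{"status":"enabled"}}}'
```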
{% ifversion secret-scanning-enterprise-level %}
### Enabling {% data variables.product.prodname_secret_scanning %} as a push protection for your enterprise
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}
1. In the left sidebar, click **Code security and analysis**.
{% data reusables.advanced-security.secret-scanning-push-protection-enterprise %}
{% endif %}
### Enabling {% data variables.product.prodname_secret_scanning %} as a push protection for an organization
{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.org_settings %}
{% data reusables.organizations.security-and-analysis %}
{% data reusables.repositories.navigate-to-ghas-settings %}
{% data reusables.advanced-security.secret-scanning-push-protection-org %}
### Enabling {% data variables.product.prodname_secret_scanning %} as a push protection for a repository
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.sidebar-settings %}
{% data reusables.repositories.navigate-to-code-security-and-analysis %}
{% data reusables.repositories.navigate-to-ghas-settings %}
{% data reusables.advanced-security.secret-scanning-push-protection-repo %}
## Using secret scanning as a push protection from the command line
{% data reusables.secret-scanning.push-protection-command-line-choice %}
Up to five detected secrets will be displayed at a time on the command line. If a particular secret has already been detected in the repository and an alert already exists, {% data variables.product.prodname_dotcom %} will not block that secret.
{% ifversion push-protection-custom-link-orgs %}
Organization admins can provide a custom link that will be displayed when a push is blocked. This custom link can contain organization-specific resources and advice, such as directions on using a recommended secrets vault or who to contact for questions relating to the blocked secret.
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
![Screenshot showing that a push is blocked when a user attempts to push a secret to a repository](/assets/images/help/repository/secret-scanning-push-protection-with-custom-link.png)
{% else %}
![Screenshot showing that a push is blocked when a user attempts to push a secret to a repository](/assets/images/help/repository/secret-scanning-push-protection-with-link.png)
{% endif %}
{% data reusables.secret-scanning.push-protection-remove-secret %} For more information about remediating blocked secrets, see "[Pushing a branch blocked by push protection](/code-security/secret-scanning/pushing-a-branch-blocked-by-push-protection#resolving-a-blocked-push-on-the-command-line)."
If you confirm a secret is real and that you intend to fix it later, you should aim to remediate the secret as soon as possible. For example, you might revoke the secret and remove the secret from the repository's commit history. Real secrets that have been exposed must be revoked to avoid unauthorized access. You might consider first rotating the secret before revoking it. For more information, see "[Removing sensitive data from a repository](/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository)."
{% data reusables.secret-scanning.push-protection-multiple-branch-note %}
### Allowing a blocked secret to be pushed
If {% data variables.product.prodname_dotcom %} blocks a secret that you believe is safe to push, you can allow the secret and specify the reason why it should be allowed.
{% data reusables.secret-scanning.push-protection-allow-secrets-alerts %}
{% data reusables.secret-scanning.push-protection-allow-email %}
1. Visit the URL returned by {% data variables.product.prodname_dotcom %} when your push was blocked.
![Screenshot showing form with options for unblocking the push of a secret](/assets/images/help/repository/secret-scanning-unblock-form.png)
{% data reusables.secret-scanning.push-protection-choose-allow-secret-options %}
1. Click **Allow me to push this secret**.
2. Reattempt the push on the command line within three hours. If you have not pushed within three hours, you will need to repeat this process.
{% ifversion secret-scanning-push-protection-web-ui %}
## Using secret scanning as a push protection from the web UI
{% data reusables.secret-scanning.push-protection-web-ui-choice %}
{% data variables.product.prodname_dotcom %} will only display one detected secret at a time in the web UI. If a particular secret has already been detected in the repository and an alert already exists, {% data variables.product.prodname_dotcom %} will not block that secret.
{% ifversion push-protection-custom-link-orgs %}
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
Organization admins can provide a custom link that will be displayed when a push is blocked. This custom link can contain resources and advice specific to your organization. For example, the custom link can point to a README file with information about the organization's secret vault, which teams and individuals to escalate questions to, or the organization's approved policy for working with secrets and rewriting commit history.
{% endif %}
You can remove the secret from the file using the web UI. Once you remove the secret, the banner at the top of the page will change and tell you that you can now commit your changes.
![Screenshot showing commit in web ui allowed after secret fixed](/assets/images/help/repository/secret-scanning-push-protection-web-ui-commit-allowed.png)
### Bypassing push protection for a secret
{% data reusables.secret-scanning.push-protection-remove-secret %} For more information about remediating blocked secrets, see "[Pushing a branch blocked by push protection](/code-security/secret-scanning/pushing-a-branch-blocked-by-push-protection#resolving-a-blocked-push-in-the-web-ui)."
If you confirm a secret is real and that you intend to fix it later, you should aim to remediate the secret as soon as possible. For more information, see "[Removing sensitive data from a repository](/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository)."
If {% data variables.product.prodname_dotcom %} blocks a secret that you believe is safe to push, you can allow the secret and specify the reason why it should be allowed.
{% data reusables.secret-scanning.push-protection-allow-secrets-alerts %}
{% data reusables.secret-scanning.push-protection-allow-email %}
If you confirm a secret is real and that you intend to fix it later, you should aim to remediate the secret as soon as possible.
1. In the banner that appeared at the top of the page when {% data variables.product.prodname_dotcom %} blocked your commit, click **Bypass protection**.
{% data reusables.secret-scanning.push-protection-choose-allow-secret-options %}
![Screenshot showing form with options for unblocking the push of a secret](/assets/images/help/repository/secret-scanning-push-protection-web-ui-allow-secret-options.png)
1. Click **Allow secret**.
{% endif %}
{% endif %}

View File

@@ -1,6 +1,6 @@
---
title: Pushing a branch blocked by push protection
intro: 'The push protection feature of {% data variables.product.prodname_secret_scanning %} proactively protects you against leaked secrets in your repositories. You can resolve blocked pushes and, once the detected secret is removed, you can push changes to your working branch from the command line or the web UI.'
product: '{% data reusables.gated-features.secret-scanning %}'
miniTocMaxHeadingLevel: 3
versions:
@@ -12,58 +12,51 @@ topics:
- Alerts
- Repositories
shortTitle: Push a blocked branch
---
## About push protection for {% data variables.product.prodname_secret_scanning %}
The push protection feature of {% data variables.product.prodname_secret_scanning %} helps to prevent security leaks by scanning for secrets before you push changes to your repository. {% data reusables.secret-scanning.push-protection-overview %} For information on the secrets and service providers supported for push protection, see "[{% data variables.product.prodname_secret_scanning_caps %} patterns](/code-security/secret-scanning/secret-scanning-patterns#supported-secrets-for-push-protection)."
{% data reusables.secret-scanning.push-protection-remove-secret %}
{% tip %}
**Tip**
If {% data variables.product.prodname_dotcom %} blocks a secret that you believe is safe to push, you can allow the secret and specify the reason why it should be allowed. For more information about bypassing push protection for a secret, see "[Allowing a blocked secret to be pushed](/code-security/secret-scanning/protecting-pushes-with-secret-scanning#allowing-a-blocked-secret-to-be-pushed)" and "[Bypassing push protection for a secret](/code-security/secret-scanning/protecting-pushes-with-secret-scanning#bypassing-push-protection-for-a-secret)" for the command line and the web UI, respectively.
{% endtip %}
{% ifversion push-protection-custom-link-orgs %}
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
Organization admins can provide a custom link that will be included in the message from {% data variables.product.product_name %} when your push is blocked. This custom link can contain resources and advice specific to your organization and its policies.
{% endif %}
## Resolving a blocked push on the command line
{% data reusables.secret-scanning.push-protection-command-line-choice %}
{% data reusables.secret-scanning.push-protection-multiple-branch-note %}
If the blocked secret was introduced by the latest commit on your branch, you can follow the guidance below.
1. Remove the secret from your code.
1. Commit the changes, by using `git commit --amend`.
1. Push your changes with `git push`.
You can also remove the secret if the secret appears in an earlier commit in the Git history.
1. Use `git log` to determine which commit surfaced in the push error came first in history.
1. Start an interactive rebase with `git rebase -i <commit-id>~1`. `<commit-id>` is the id of the commit from step 1.
1. Identify your commit to edit by changing `pick` to `edit` on the first line of the text that appears in the editor.
1. Remove the secret from your code.
1. Commit the change with `git commit --amend`.
1. Run `git rebase --continue` to finish the rebase.
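As a concrete sketch of this sequence, assuming the reported commit is `a1b2c3d` and the secret lives in `config/settings.yml` (both values are examples):

```shell
git log --oneline                  # find the earliest commit reported in the push error
git rebase -i a1b2c3d~1            # start the interactive rebase; change "pick" to "edit" for a1b2c3d
# ...remove the secret from config/settings.yml, then:
git add config/settings.yml
git commit --amend
git rebase --continue
git push                           # the rewritten branch no longer contains the secret
```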
## Resolving a blocked commit in the web UI
{% data reusables.secret-scanning.push-protection-web-ui-choice %}
To resolve a blocked commit in the web UI, you need to remove the secret from the file, or use the **Bypass protection** dropdown to allow the secret. For more information about bypassing push protection from the web UI, see "[Protecting pushes with secret scanning](/code-security/secret-scanning/protecting-pushes-with-secret-scanning#bypassing-push-protection-for-a-secret)."
If you confirm a secret is real, you need to remove the secret from the file. Once you remove the secret, the banner at the top of the page will change and tell you that you can now commit your changes.

View File

@@ -11,6 +11,7 @@ topics:
- Codespaces
children:
- /personalizing-github-codespaces-for-your-account
- /renaming-a-codespace
- /changing-the-machine-type-for-your-codespace
- /setting-your-default-editor-for-github-codespaces
- /setting-your-default-region-for-github-codespaces

View File

@@ -59,6 +59,8 @@ In the example `postCreate.sh` file below, the contents of the `config` director
ln -sf $PWD/.devcontainer/config $HOME/config && set +x
```
For more information, see "[Introduction to dev containers](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers#applying-configuration-changes-to-a-codespace)."
## Stopping a codespace
{% data reusables.codespaces.stopping-a-codespace %} For more information, see "[Stopping and starting a codespace](/codespaces/developing-in-codespaces/stopping-and-starting-a-codespace)."

View File

@@ -16,7 +16,6 @@ children:
- /using-source-control-in-your-codespace
- /using-github-codespaces-for-pull-requests
- /stopping-and-starting-a-codespace
- /renaming-a-codespace
- /forwarding-ports-in-your-codespace
- /default-environment-variables-for-your-codespace
- /connecting-to-a-private-network

View File

@@ -1,61 +0,0 @@
---
title: Renaming a codespace
intro: 'You can use {% data variables.product.prodname_cli %} to change the display name of your codespace to a name of your choice.'
product: '{% data reusables.gated-features.codespaces %}'
versions:
fpt: '*'
ghec: '*'
type: how_to
topics:
- Codespaces
- Fundamentals
- Developer
shortTitle: Rename a codespace
---
## About renaming a codespace
Each codespace is assigned an automatically generated display name. If you have multiple codespaces, the display name helps you to differentiate between them (for example, `literate space parakeet`). You can change the display name for your codespace.
To find the display name of a codespace:
- On {% data variables.product.product_name %}, view your list of codespaces at https://github.com/codespaces.
![Screenshot of the list of codespaces on GitHub](/assets/images/help/codespaces/codespaces-list-display-name.png)
- In the {% data variables.product.prodname_vscode %} desktop application, or the {% data variables.product.prodname_vscode_shortname %} web client, click Remote Explorer. The display name appears below the repository name, for example `symmetrical space telegram` in the following screenshot.
![Screenshot of the Remote Explorer in VS Code](/assets/images/help/codespaces/codespaces-remote-explorer.png)
{% indented_data_reference reusables.codespaces.remote-explorer spaces=2 %}
- In a terminal window on your local computer, use the following {% data variables.product.prodname_cli %} command: `gh codespace list`
### Permanent codespace names
In addition to the display name, a permanent name is also assigned to a codespace when you create it. The name is a combination of your {% data variables.product.company_short %} handle, the repository name, and some random characters, for example `octocat-myrepo-gmc7`. You can't change this name.
To find the permanent name of a codespace:
* On {% data variables.product.product_name %}, the permanent name is shown in a pop-up when you hover over the **Open in browser** option at https://github.com/codespaces.
![Screenshot of the codespace name displayed on hover](/assets/images/help/codespaces/find-codespace-name-github.png)
* In a codespace, use the following command in the terminal: `echo $CODESPACE_NAME`
* In a terminal window on your local computer, use the following {% data variables.product.prodname_cli %} command: `gh codespace list`
## Renaming a codespace
Changing the display name of a codespace can be useful if you have several codespaces that you will be using over an extended period. An appropriate name helps you identify a codespace that you use for a particular purpose. You can change the display name for your codespace by using {% data variables.product.prodname_cli %}.
To rename a codespace, use the `gh codespace edit` subcommand:
```shell
gh codespace edit -c <em>permanent name of the codespace</em> -d <em>new display name</em>
```
In this example, replace `permanent name of the codespace` with the permanent name of the codespace. Replace `new display name` with the display name you want to use.
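For example, using the permanent name shown earlier, a rename might look like this (the display name here is only an illustration):

```shell
# Rename the codespace "octocat-myrepo-gmc7" to a more descriptive display name.
gh codespace edit -c octocat-myrepo-gmc7 -d "API refactor workspace"
```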

View File

@@ -6,6 +6,7 @@ product: '{% data reusables.gated-features.codespaces %}'
miniTocMaxHeadingLevel: 3
versions:
fpt: '*'
ghec: '*'
type: how_to
topics:
- Codespaces
@@ -24,6 +25,7 @@ You can work with {% data variables.product.prodname_github_codespaces %} in the
- [Create a new codespace](#create-a-new-codespace)
- [Stop a codespace](#stop-a-codespace)
- [Delete a codespace](#delete-a-codespace)
- [Rename a codespace](#rename-a-codespace)
- [SSH into a codespace](#ssh-into-a-codespace)
- [Open a codespace in {% data variables.product.prodname_vscode %}](#open-a-codespace-in--data-variablesproductprodname_vscode-)
- [Open a codespace in JupyterLab](#open-a-codespace-in-jupyterlab)
@@ -74,6 +76,8 @@ gh codespace list
The list includes the unique name of each codespace, which you can use in other `gh codespace` commands.
An asterisk at the end of the branch name for a codespace indicates that there are uncommitted or unpushed changes in that codespace.
### Create a new codespace
```shell
@@ -98,6 +102,14 @@ gh codespace delete -c CODESPACE-NAME
For more information, see "[Deleting a codespace](/codespaces/developing-in-codespaces/deleting-a-codespace)."
### Rename a codespace
```shell
gh codespace edit -c CODESPACE-NAME -d DISPLAY-NAME
```
For more information, see "[Renaming a codespace](/codespaces/customizing-your-codespace/renaming-a-codespace)."
### SSH into a codespace
To run commands on the remote codespace machine, from your terminal, you can SSH into the codespace.
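A minimal sketch of opening an SSH session, assuming the codespace name comes from `gh codespace list`:

```shell
# Open an interactive SSH session in the codespace from your local terminal.
gh codespace ssh -c CODESPACE-NAME
```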
@@ -215,4 +227,4 @@ You can use the {% data variables.product.prodname_cli %} extension to create a
gh codespace edit -m <em>machine-type-name</em>
```
For more information, see the "{% data variables.product.prodname_cli %}" tab of "[Changing the machine type for your codespace](/codespaces/customizing-your-codespace/changing-the-machine-type-for-your-codespace)."
For more information, see the "{% data variables.product.prodname_cli %}" tab of "[Changing the machine type for your codespace](/codespaces/customizing-your-codespace/changing-the-machine-type-for-your-codespace)."

View File

@@ -35,7 +35,7 @@ When you create a codespace, a [shallow clone](https://github.blog/2020-12-21-ge
### Step 2: Container is created
{% data variables.product.prodname_github_codespaces %} uses a container as the development environment. This container is created based on the configurations that you can define in a `devcontainer.json` file and/or Dockerfile in your repository. If you don't [configure a container](/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project), {% data variables.product.prodname_github_codespaces %} uses a [default image](/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project#using-the-default-configuration), which has many languages and runtimes available. For information on what the default image contains, see the [`vscode-dev-containers`](https://github.com/microsoft/vscode-dev-containers/tree/main/containers/codespaces-linux) repository.
{% data variables.product.prodname_github_codespaces %} uses a container as the development environment. This container is created based on the configurations that you can define in a `devcontainer.json` file and/or Dockerfile in your repository. If you don't specify a custom Docker image in your configuration, {% data variables.product.prodname_codespaces %} uses a default image, which has many languages and runtimes available. For information, see "[Introduction to dev containers](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers#using-the-default-dev-container-configuration)." For details of what the default image contains, see the [`vscode-dev-containers`](https://github.com/microsoft/vscode-dev-containers/tree/main/containers/codespaces-linux) repository.
{% note %}

View File

@@ -92,11 +92,10 @@ Within a codespace, you have access to the {% data variables.product.prodname_vs
1. In the left sidebar, click the Extensions icon.
1. In the search bar, enter `fairyfloss` and install the fairyfloss extension.
1. In the search bar, type `fairyfloss` and click **Install**.
![Add an extension](/assets/images/help/codespaces/add-extension.png)
1. Click **Install in Codespaces**.
1. Apply the `fairyfloss` theme by selecting it from the list.
![Select the fairyfloss theme](/assets/images/help/codespaces/fairyfloss.png)

View File

@@ -44,7 +44,8 @@ includeGuides:
- /codespaces/managing-codespaces-for-your-organization/managing-billing-for-codespaces-in-your-organization
- /codespaces/managing-codespaces-for-your-organization/managing-encrypted-secrets-for-your-repository-and-organization-for-codespaces
- /codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types
- /codespaces/managing-codespaces-for-your-organization/retricting-the-idle-timeout-period
- /codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces.md
- /codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period
- /codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces
- /codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports
- /codespaces/managing-codespaces-for-your-organization/reviewing-your-organizations-audit-logs-for-codespaces

View File

@@ -16,6 +16,7 @@ children:
- /managing-repository-access-for-your-organizations-codespaces
- /reviewing-your-organizations-audit-logs-for-github-codespaces
- /restricting-access-to-machine-types
- /restricting-the-base-image-for-codespaces
- /restricting-the-visibility-of-forwarded-ports
- /restricting-the-idle-timeout-period
- /restricting-the-retention-period-for-codespaces

View File

@@ -14,7 +14,9 @@ topics:
## Overview
Typically, when you create a codespace you are offered a choice of specifications for the machine that will run your codespace. You can choose the machine type that best suits your needs. For more information, see "[Creating a codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)." If you pay for using {% data variables.product.prodname_github_codespaces %} then your choice of machine type will affect how much you are billed. For more information about pricing, see "[About billing for {% data variables.product.prodname_github_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces)."
Typically, when you create a codespace you are offered a choice of specifications for the machine that will run your codespace. You can choose the machine type that best suits your needs. For more information, see "[Creating a codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)."
If you pay for using {% data variables.product.prodname_github_codespaces %} then your choice of machine type will affect how much you are billed. The compute cost for a codespace is proportional to the number of processor cores in the machine type you choose. For example, the compute cost of using a codespace for an hour on a 16-core machine is eight times greater than on a 2-core machine. For more information about pricing, see "[About billing for {% data variables.product.prodname_github_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces)."
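Put another way, because compute cost scales linearly with core count, the cost ratio between two machine types is simply the ratio of their core counts (here $r$ is a placeholder per-core hourly rate, not an actual price):

$$
\frac{\text{cost per hour (16-core)}}{\text{cost per hour (2-core)}} = \frac{16\,r}{2\,r} = 8
$$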
As an organization owner, you may want to configure constraints on the types of machine that are available. For example, if the work in your organization doesn't require significant compute power or storage space, you can remove the highly resourced machines from the list of options that people can choose from. You do this by defining one or more policies in the {% data variables.product.prodname_github_codespaces %} settings for your organization.
@@ -52,21 +54,29 @@ If you add an organization-wide policy, you should set it to the largest choice
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Machine types**.
![Add a constraint for machine types](/assets/images/help/codespaces/add-constraint-dropdown.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint, then clear the selection of any machine types that you don't want to be available.
![Edit the machine type constraint](/assets/images/help/codespaces/edit-machine-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-machine-constraint.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are billable to your organization. The machine type constraint is also applied to existing codespaces when someone attempts to restart a stopped codespace or reconnect to an active codespace.
## Editing a policy
You can edit an existing policy. For example, you may want to add or remove constraints to or from a policy.
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the available machine types](#adding-a-policy-to-limit-the-available-machine-types)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Machine types" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -74,7 +84,7 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the available machine types](#adding-a-policy-to-limit-the-available-machine-types)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
## Further reading

View File

@@ -49,21 +49,25 @@ If you add an organization-wide policy with a timeout constraint, you should set
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Maximum idle timeout**.
![Add a constraint for idle timeout](/assets/images/help/codespaces/add-constraint-dropdown-timeout.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-timeout.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the timeout constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
1. Enter the maximum number of minutes codespaces can remain inactive before they time out, then click **Save**.
![Set the maximum timeout in minutes](/assets/images/help/codespaces/maximum-minutes-timeout.png)
![Screenshot of setting the maximum timeout in minutes](/assets/images/help/codespaces/maximum-minutes-timeout.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are created, and to existing codespaces the next time they are started.
The policy will be applied to all new codespaces that are billable to your organization. The timeout constraint is also applied to existing codespaces the next time they are started.
## Editing a policy
@@ -71,6 +75,7 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum idle timeout period](#adding-a-policy-to-set-a-maximum-idle-timeout-period)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Maximum idle timeout" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -78,4 +83,4 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum idle timeout period](#adding-a-policy-to-set-a-maximum-idle-timeout-period)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -35,15 +35,15 @@ If you add an organization-wide policy with a retention constraint, you should s
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Retention period**.
![Add a constraint for retention periods](/assets/images/help/codespaces/add-constraint-dropdown-retention.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-retention.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the timeout constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
1. Enter the maximum number of days codespaces can remain stopped before they are automatically deleted, then click **Save**.
![Set the retention period in days](/assets/images/help/codespaces/maximum-days-retention.png)
![Screenshot of setting the retention period in days](/assets/images/help/codespaces/maximum-days-retention.png)
{% note %}
@@ -55,10 +55,14 @@ If you add an organization-wide policy with a retention constraint, you should s
{% endnote %}
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," and "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are created.
The policy will be applied to all new codespaces that are billable to your organization. The retention period constraint is only applied on codespace creation.
## Editing a policy
@@ -68,6 +72,7 @@ The retention period constraint is only applied to codespaces when they are crea
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum codespace retention period](#adding-a-policy-to-set-a-maximum-codespace-retention-period)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Retention period" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -77,4 +82,4 @@ You can delete a policy at any time. Deleting a policy has no effect on existing
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum codespace retention period](#adding-a-policy-to-set-a-maximum-codespace-retention-period)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -45,25 +45,33 @@ If you add an organization-wide policy, you should set it to the most lenient vi
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Port visibility**.
![Add a constraint for port visibility](/assets/images/help/codespaces/add-constraint-dropdown-ports.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-ports.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the port visibility constraint](/assets/images/help/codespaces/edit-port-visibility-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-port-visibility-constraint.png)
1. Clear the selection of the port visibility options (**Org** or **Public**) that you don't want to be available.
![Choose the port visibility options](/assets/images/help/codespaces/choose-port-visibility-options.png)
![Screenshot of clearing a port visibility option](/assets/images/help/codespaces/choose-port-visibility-options.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are billable to your organization. The port visibility constraint is also applied to existing codespaces the next time they are started.
## Editing a policy
You can edit an existing policy. For example, you may want to add or remove constraints to or from a policy.
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the port visibility options](#adding-a-policy-to-limit-the-port-visibility-options)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Port visibility" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -71,4 +79,4 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the port visibility options](#adding-a-policy-to-limit-the-port-visibility-options)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -93,10 +93,10 @@ You can use secrets in a codespace after the codespace is built and is running.
* When launching an application from the integrated terminal or ssh session.
* Within a dev container lifecycle script that is run after the codespace is running. For more information about dev container lifecycle scripts, see the documentation on containers.dev: [Specification](https://containers.dev/implementors/json_reference/#lifecycle-scripts).
Codespace secrets cannot be used during:
Codespace secrets cannot be used:
* Codespace build time (that is, within a Dockerfile or custom entry point).
* Within a dev container feature. For more information, see the `features` attribute in the documentation on containers.dev: [Specification](https://containers.dev/implementors/json_reference/#general-properties).
* During codespace build time (that is, within a Dockerfile or custom entry point).
* Within a dev container feature. For more information, see the `features` property in the [dev containers specification](https://containers.dev/implementors/json_reference/#general-properties) on containers.dev.
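As a minimal illustration of the first case above, a secret is exposed to the integrated terminal as an environment variable once the codespace is running. `MY_API_TOKEN` and the URL below are hypothetical names used only for this sketch:

```shell
# In the codespace's integrated terminal, a Codespaces secret is available as an
# environment variable. Pass it to a command rather than echoing it to the terminal.
curl -H "Authorization: Bearer $MY_API_TOKEN" https://api.example.com/data
```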
## Further reading

View File

@@ -65,7 +65,7 @@ The Dockerfile for a dev container is typically located in the `.devcontainer` f
{% note %}
**Note**: As an alternative to using a Dockerfile you can use the `image` property in the `devcontainer.json` file to refer directly to an existing image you want to use. If neither a Dockerfile nor an image is found then the default container image is used. For more information, see "[Using the default dev container configuration](#using-the-default-dev-container-configuration)."
**Note**: As an alternative to using a Dockerfile you can use the `image` property in the `devcontainer.json` file to refer directly to an existing image you want to use. The image you specify here must be allowed by any organization image policy that has been set. For more information, see "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)." If neither a Dockerfile nor an image is found then the default container image is used. For more information, see "[Using the default dev container configuration](#using-the-default-dev-container-configuration)."
{% endnote %}

View File

@@ -104,14 +104,12 @@ The newly added `devcontainer.json` file defines a few properties that are descr
// "ASPNETCORE_Kestrel__Certificates__Default__Path": "/home/vscode/.aspnet/https/aspnetapp.pfx",
// },
//
// 3. Do one of the following depending on your scenario:
// * When using GitHub Codespaces and/or Remote - Containers:
// 1. Start the container
// 2. Drag ~/.aspnet/https/aspnetapp.pfx into the root of the file explorer
// 3. Open a terminal in VS Code and run "mkdir -p /home/vscode/.aspnet/https && mv aspnetapp.pfx /home/vscode/.aspnet/https"
// 3. Start the container.
//
// 4. Drag ~/.aspnet/https/aspnetapp.pfx into the root of the file explorer.
//
// 5. Open a terminal in VS Code and run "mkdir -p /home/vscode/.aspnet/https && mv aspnetapp.pfx /home/vscode/.aspnet/https".
//
// * If only using Remote - Containers with a local container, uncomment this line instead:
// "mounts": [ "source=${env:HOME}${env:USERPROFILE}/.aspnet/https,target=/home/vscode/.aspnet/https,type=bind" ],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "dotnet restore",

View File

@@ -32,7 +32,7 @@ This guide shows you how to set up your Java project in {% data variables.produc
If you dont see this option, {% data variables.product.prodname_github_codespaces %} isn't available for your project. See [Access to {% data variables.product.prodname_github_codespaces %}](/codespaces/developing-in-codespaces/creating-a-codespace#access-to-github-codespaces) for more information.
When you create a codespace, your project is created on a remote VM that is dedicated to you. By default, the container for your codespace has many languages and runtimes including Java, nvm, npm, and Yarn. It also includes a common set of tools like git, wget, rsync, openssh, and nano.
When you create a codespace, your project is created on a remote VM that is dedicated to you. By default, the container for your codespace has many languages and runtimes including Java, nvm, npm, and Yarn. It also includes a set of commonly used tools such as git, wget, rsync, openssh, and nano.
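If you want to confirm which tools are present in your own codespace, a quick check from the integrated terminal might look like the following (the exact versions reported will depend on the image in use):

```shell
# Print the versions of a few of the preinstalled tools in the codespace.
java -version
node --version
npm --version
git --version
```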
{% data reusables.codespaces.customize-vcpus-and-ram %}

View File

@@ -154,3 +154,37 @@ By default, when you create a new organization, workflows are not allowed to {%
1. Click **Save** to apply the settings.
{% endif %}
{% ifversion actions-cache-org-ui %}
## Managing {% data variables.product.prodname_actions %} cache storage for your organization
Organization administrators can view {% ifversion actions-cache-admin-ui %}and manage {% endif %}{% data variables.product.prodname_actions %} cache storage for all repositories in the organization.
### Viewing {% data variables.product.prodname_actions %} cache storage by repository
For each repository in your organization, you can see how much cache storage a repository is using, the number of active caches, and if a repository is near the total cache size limit. For more information about the cache usage and eviction process, see "[Caching dependencies to speed up workflows](/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy)."
{% data reusables.profile.access_profile %}
{% data reusables.profile.access_org %}
{% data reusables.profile.org_settings %}
1. In the left sidebar, click {% octicon "play" aria-label="The {% data variables.product.prodname_actions %} icon" %} **Actions**, then click **Caches**.
1. Review the list of repositories for information about their {% data variables.product.prodname_actions %} caches. You can click on a repository name to see more detail about the repository's caches.
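If you prefer the command line, you can retrieve similar information with {% data variables.product.prodname_cli %}. This is a sketch that assumes the Actions cache usage endpoints of the REST API are available on your plan and version:

```shell
# Total Actions cache usage across an organization (replace ORG with your organization).
gh api /orgs/ORG/actions/cache/usage

# Cache usage for a single repository (replace OWNER/REPO accordingly).
gh api /repos/OWNER/REPO/actions/cache/usage
```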
{% ifversion actions-cache-admin-ui %}
### Configuring {% data variables.product.prodname_actions %} cache storage for your organization
{% data reusables.actions.cache-default-size %}
You can configure the size limit for {% data variables.product.prodname_actions %} caches that will apply to each repository in your organization. The cache size limit for an organization cannot exceed the cache size limit set in the enterprise policy. Repository admins will be able to set a smaller limit in their repositories.
{% data reusables.profile.access_profile %}
{% data reusables.profile.access_org %}
{% data reusables.profile.org_settings %}
{% data reusables.organizations.settings-sidebar-actions-general %}
{% data reusables.actions.change-cache-size-limit %}
{% endif %}
{% endif %}

View File

@@ -1,6 +1,6 @@
---
title: About custom domains and GitHub Pages
intro: '{% data variables.product.prodname_pages %} supports using custom domains, or changing the root of your site''s URL from the default, like `octocat.github.io`, to any domain you own.'
redirect_from:
- /articles/about-custom-domains-for-github-pages-sites
- /articles/about-supported-custom-domains
@@ -14,62 +14,58 @@ versions:
topics:
- Pages
shortTitle: Custom domains in GitHub Pages
ms.openlocfilehash: a2c5ae3df0e2dd6248db6e03fd7c64e973b14f3d
ms.sourcegitcommit: 47bd0e48c7dba1dde49baff60bc1eddc91ab10c5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/05/2022
ms.locfileid: '145140372'
---
## Supported custom domains
{% data variables.product.prodname_pages %} works with two types of domains: subdomains and apex domains. For a list of unsupported custom domains, see "[Troubleshooting custom domains and {% data variables.product.prodname_pages %}](/articles/troubleshooting-custom-domains-and-github-pages/#custom-domain-names-that-are-unsupported)."
| Supported custom domain type | Example |
|---|---|
| `www` subdomain | `www.example.com` |
| Custom subdomain | `blog.example.com` |
| Apex domain | `example.com` |
You can set up either or both of apex and `www` subdomain configurations for your site. For more information on apex domains, see "[Using an apex domain for your {% data variables.product.prodname_pages %} site](#using-an-apex-domain-for-your-github-pages-site)."
We recommend always using a `www` subdomain, even if you also use an apex domain. When you create a new site with an apex domain, we automatically attempt to secure the `www` subdomain for use when serving your site's content, but you need to make the DNS changes to use the `www` subdomain. If you configure a `www` subdomain, we automatically attempt to secure the associated apex domain. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site)."
After you configure a custom domain for a user or organization site, the custom domain will replace the `<user>.github.io` or `<organization>.github.io` portion of the URL for any project sites owned by the account that do not have a custom domain configured. For example, if the custom domain for your user site is `www.octocat.com`, and you have a project site with no custom domain configured that is published from a repository called `octo-project`, the {% data variables.product.prodname_pages %} site for that repository will be available at `www.octocat.com/octo-project`.
For more information about each type of site and handling custom domains, see "[Types of {% data variables.product.prodname_pages %} sites](/pages/getting-started-with-github-pages/about-github-pages#types-of-github-pages-sites)."
## Using a subdomain for your {% data variables.product.prodname_pages %} site
A subdomain is the part of a URL before the root domain. You can configure your subdomain as `www` or as a distinct section of your site, like `blog.example.com`.
Subdomains are configured with a `CNAME` record through your DNS provider. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site#configuring-a-subdomain)."
### `www` subdomains
A `www` subdomain is the most commonly used type of subdomain. For example, `www.example.com` includes a `www` subdomain.
`www` subdomains are the most stable type of custom domain because `www` subdomains are not affected by changes to the IP addresses of {% data variables.product.product_name %}'s servers.
### Custom subdomains
A custom subdomain is a type of subdomain that doesn't use the standard `www` variant. Custom subdomains are mostly used when you want two distinct sections of your site. For example, you can create a site called `blog.example.com` and customize that section independently from `www.example.com`.
## Using an apex domain for your {% data variables.product.prodname_pages %} site
An apex domain is a custom domain that does not contain a subdomain, such as `example.com`. Apex domains are also known as base, bare, naked, root apex, or zone apex domains.
An apex domain is configured with an `A`, `ALIAS`, or `ANAME` record through your DNS provider. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site#configuring-an-apex-domain)."
{% data reusables.pages.www-and-apex-domain-recommendation %} For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/github/working-with-github-pages/managing-a-custom-domain-for-your-github-pages-site/#configuring-a-subdomain)."
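Once your DNS provider is configured, you can sanity-check the records from a terminal. A minimal sketch using `dig`, with `example.com` standing in for your own domain:

```shell
# Check the CNAME record configured for a www (or custom) subdomain.
dig www.example.com +nostats +nocomments +nocmd

# Check the A record(s) configured for an apex domain.
dig example.com +noall +answer -t A
```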
## Securing the custom domain for your {% data variables.product.prodname_pages %} site
{% data reusables.pages.secure-your-domain %} For more information, see "[Verifying your custom domain for {% data variables.product.prodname_pages %}](/pages/configuring-a-custom-domain-for-your-github-pages-site/verifying-your-custom-domain-for-github-pages)" and "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site)."
There are a couple of reasons your site might be automatically disabled.
- If you downgrade from {% data variables.product.prodname_pro %} to {% data variables.product.prodname_free_user %}, any {% data variables.product.prodname_pages %} sites that are currently published from private repositories in your account will be unpublished. For more information, see "[Downgrading your {% data variables.product.prodname_dotcom %} billing plan](/articles/downgrading-your-github-billing-plan)."
- If you transfer a private repository to a personal account that is using {% data variables.product.prodname_free_user %}, the repository will lose access to the {% data variables.product.prodname_pages %} feature, and the currently published {% data variables.product.prodname_pages %} site will be unpublished. For more information, see "[Transferring a repository](/articles/transferring-a-repository)."
## Further reading
- "[Troubleshooting custom domains and {% data variables.product.prodname_pages %}](/articles/troubleshooting-custom-domains-and-github-pages)"

View File

@@ -54,6 +54,9 @@ For each branch protection rule, you can choose to enable or disable the followi
{%- ifversion required-deployments %}
- [Require deployments to succeed before merging](#require-deployments-to-succeed-before-merging)
{%- endif %}
{%- ifversion lock-branch %}
- [Lock branch](#lock-branch)
{%- endif %}
{% ifversion bypass-branch-protections %}- [Do not allow bypassing the above settings](#do-not-allow-bypassing-the-above-settings){% else %}- [Include administrators](#include-administrators){% endif %}
- [Restrict who can push to matching branches](#restrict-who-can-push-to-matching-branches)
- [Allow force pushes](#allow-force-pushes)
@@ -84,6 +87,10 @@ Optionally, you can restrict the ability to dismiss pull request reviews to spec
Optionally, you can choose to require reviews from code owners. If you do, any pull request that affects code with a code owner must be approved by that code owner before the pull request can be merged into the protected branch.
{% ifversion last-pusher-require-approval %}
Optionally, you can require approvals from someone other than the last person to push to a branch before a pull request can be merged. This ensures more than one person sees pull requests in their final state before they are merged into a protected branch. If you enable this feature, the most recent user to push their changes will need an approval regardless of the required approvals branch protection. Users who have already reviewed a pull request can reapprove after the most recent push to meet this requirement.
{% endif %}
### Require status checks before merging
Required status checks ensure that all required CI tests are passing before collaborators can make changes to a protected branch. Required status checks can be checks or statuses. For more information, see "[About status checks](/github/collaborating-with-issues-and-pull-requests/about-status-checks)."
@@ -151,6 +158,13 @@ Before you can require a linear commit history, your repository must allow squas
You can require that changes are successfully deployed to specific environments before a branch can be merged. For example, you can use this rule to ensure that changes are successfully deployed to a staging environment before the changes merge to your default branch.
{% ifversion lock-branch %}
### Lock branch
Locking a branch ensures that no commits can be made to the branch.
By default, a forked repository does not support syncing from its upstream repository. You can enable **Allow fork syncing** to pull changes from the upstream repository while preventing other contributions to the fork's branch.
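For example, with fork syncing allowed, someone working in a fork could still pull upstream changes into the locked branch. A sketch using {% data variables.product.prodname_cli %}, where the repository and branch names are placeholders:

```shell
# Sync the fork's branch from its upstream (parent) repository.
# OWNER/FORK and BRANCH are placeholders for your own fork and branch name.
gh repo sync OWNER/FORK --branch BRANCH
```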
{% endif %}
{% ifversion bypass-branch-protections %}### Do not allow bypassing the above settings{% else %}
### Include administrators{% endif %}

View File

@@ -73,6 +73,10 @@ When you create a branch rule, the branch you specify doesn't have to exist yet
{% endif %}
- Optionally, if the repository is part of an organization, select **Restrict who can dismiss pull request reviews**. Then, search for and select the actors who are allowed to dismiss pull request reviews. For more information, see "[Dismissing a pull request review](/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/dismissing-a-pull-request-review)."
![Restrict who can dismiss pull request reviews checkbox]{% ifversion integration-branch-protection-exceptions %}(/assets/images/help/repository/PR-review-required-dismissals-with-apps.png){% else %}(/assets/images/help/repository/PR-review-required-dismissals.png){% endif %}
{% ifversion last-pusher-require-approval %}
- Optionally, to require someone other than the last person to push to a branch to approve a pull request prior to merging, select **Require approval from someone other than the last pusher**. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-pull-request-reviews-before-merging)."
![Require review from someone other than the last pusher](/assets/images/help/repository/last-pusher-review-required.png)
{% endif %}
1. Optionally, enable required status checks. For more information, see "[About status checks](/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks)."
- Select **Require status checks to pass before merging**.
![Required status checks option](/assets/images/help/repository/required-status-checks.png)
@@ -99,6 +103,12 @@ When you create a branch rule, the branch you specify doesn't have to exist yet
1. Optionally, to choose which environments the changes must be successfully deployed to before merging, select **Require deployments to succeed before merging**, then select the environments.
![Require successful deployment option](/assets/images/help/repository/require-successful-deployment.png)
{%- endif %}
{% ifversion lock-branch %}
1. Optionally, select **Lock branch** to make the branch read-only.
![Screenshot of the checkbox to lock a branch](/assets/images/help/repository/lock-branch.png)
- Optionally, to allow fork syncing, select **Allow fork syncing**.
![Screenshot of the checkbox to allow fork syncing](/assets/images/help/repository/lock-branch-forksync.png)
{%- endif %}
1. Optionally, select {% ifversion bypass-branch-protections %}**Do not allow bypassing the above settings**.
![Do not allow bypassing the above settings checkbox](/assets/images/help/repository/do-not-allow-bypassing-the-above-settings.png){% else %}**Apply the rules above to administrators**.
![Apply the rules above to administrators checkbox](/assets/images/help/repository/include-admins-protected-branches.png){% endif %}

View File

@@ -97,7 +97,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- run: 'echo "No build required" '
- run: 'echo "No build required"'
```
Now the checks will always pass whenever someone sends a pull request that doesn't change the files listed under `paths` in the first workflow.

View File

@@ -185,7 +185,16 @@ You can also define a custom retention period for a specific artifact created by
{% data reusables.actions.cache-default-size %} However, these default sizes might be different if an enterprise owner has changed them. {% data reusables.actions.cache-eviction-process %}
You can set a total cache storage size for your repository up to the maximum size allowed by the enterprise policy setting.
You can set a total cache storage size for your repository up to the maximum size allowed by the {% ifversion actions-cache-admin-ui %}organization or{% endif %} enterprise policy setting{% ifversion actions-cache-admin-ui %}s{% endif %}.
{% ifversion actions-cache-admin-ui %}
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.sidebar-settings %}
{% data reusables.repositories.settings-sidebar-actions-general %}
{% data reusables.actions.change-cache-size-limit %}
{% else %}
The repository settings for {% data variables.product.prodname_actions %} cache storage can currently only be modified using the REST API:
@@ -195,3 +204,5 @@ The repository settings for {% data variables.product.prodname_actions %} cache
{% data reusables.actions.cache-no-org-policy %}
{% endif %}
{% endif %}

View File

@@ -2,6 +2,5 @@
{% ifversion ghec %}![Screenshot showing how to enable push protection for {% data variables.product.prodname_secret_scanning %} for an organization](/assets/images/help/organizations/secret-scanning-enable-push-protection-org.png){% elsif ghes > 3.4 or ghae > 3.4 %} ![Screenshot showing how to enable push protection for {% data variables.product.prodname_secret_scanning %} for an organization](/assets/images/help/organizations/secret-scanning-enable-push-protection-org-ghes.png){% endif %}
1. Optionally, click "Automatically enable for repositories added to {% data variables.product.prodname_secret_scanning %}."{% ifversion push-protection-custom-link-orgs %}
1. Optionally, to include a custom link in the message that members will see when they attempt to push a secret, select **Add a resource link in the CLI and web UI when a commit is blocked**, then type a URL, and click **Save link**.
{% ifversion push-protection-custom-link-orgs-beta %}{% indented_data_reference reusables.advanced-security.custom-link-beta spaces=3 %}{% endif %}
![Screenshot showing checkbox and text field for enabling a custom link](/assets/images/help/organizations/secret-scanning-custom-link.png){% endif %}

View File

@@ -1,13 +1,5 @@
---
ms.openlocfilehash: 5f71b486e450ec53e4f144c7cabd87e1e7e8a257
ms.sourcegitcommit: 478f2931167988096ae6478a257f492ecaa11794
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/09/2022
ms.locfileid: "147717638"
---
{% note %}
**Note**: Codespace policies only apply to codespaces for which your organization will be billed. If an individual user creates a codespace for a repository in your organization, and the organization is not billed, then the codespace will not be bound by these policies. For information on how to choose who can create codespaces that are billed to your organization, see "[Enabling {% data variables.product.prodname_github_codespaces %} for your organization](/codespaces/managing-codespaces-for-your-organization/enabling-github-codespaces-for-your-organization#choose-who-can-create-codespaces-that-are-billed-to-your-organization)."
{% endnote %}

View File

@@ -1,11 +1,14 @@
1. In the "Change policy target" area, click the dropdown button.
1. Choose either **All repositories** or **Selected repositories** to determine which repositories this policy will apply to.
1. If you chose **Selected repositories**:
1. Click outside of the dialog box to close it.
1. By default the policy is set to apply to all repositories, if you want it to apply only to some of the repositories in your organization, click **All repositories** and then click **Selected repositories** in the dropdown menu.
![Screenshot of choosing 'Selected repositories'](/assets/images/help/codespaces/selected-repositories.png)
With **Selected repositories** selected:
1. Click {% octicon "gear" aria-label="The settings icon" %}.
![Edit the settings for the policy](/assets/images/help/codespaces/policy-edit.png)
![Screenshot of the gear icon for editing the settings](/assets/images/help/codespaces/policy-edit.png)
2. Select the repositories you want this policy to apply to.
3. At the bottom of the repository list, click **Select repositories**.
![Select repositories for this policy](/assets/images/help/codespaces/policy-select-repos.png)
![Screenshot of selected repositories for this policy](/assets/images/help/codespaces/policy-select-repos.png)

View File

@@ -1,18 +1,10 @@
---
ms.openlocfilehash: 073c21c1480e0f9f699687c730aef2bb670654e7
ms.sourcegitcommit: 47bd0e48c7dba1dde49baff60bc1eddc91ab10c5
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 09/05/2022
ms.locfileid: "146689022"
---
The following table shows, for each package manager:
- The YAML value to use in the *dependabot.yml* file
- The supported versions of the package manager
- Whether dependencies in private {% data variables.product.prodname_dotcom %} repositories or registries are supported
- Whether vendored dependencies are supported
Package manager | YAML value | Supported versions | Private repositories | Private registries | Vendoring
---------------|------------------|------------------|:---:|:---:|:---:
Bundler | `bundler` | v1, v2 | | **✓** | **✓** |
Cargo | `cargo` | v1 | **✓** | **✓** | |
@@ -20,36 +12,42 @@ Composer | `composer` | v1, v2 | **✓** | **✓** | |
Docker | `docker` | v1 | **✓** | **✓** | |
Hex | `mix` | v1 | | **✓** | |
elm-package | `elm` | v0.19 | **✓** | **✓** | |
git submodule | `gitsubmodule` | N/A (no version) | **✓** | **✓** | |
GitHub Actions | `github-actions` | N/A (no version) | **✓** | **✓** | |
Go modules | `gomod` | v1 | **✓** | **✓** | **✓** |
Gradle | `gradle` | N/A (no version)<sup>[1]</sup> | **✓** | **✓** | |
Maven | `maven` | N/A (no version)<sup>[2]</sup> | **✓** | **✓** | |
npm | `npm` | v6, v7, v8 | **✓** | **✓** | |
NuGet | `nuget` | <= 4.8<sup>[3]</sup> | **✓** | **✓** | |
pip | `pip` | v21.1.2 | | **✓** | |
pipenv | `pip` | <= 2021-05-29 | | **✓** | |
pip-compile | `pip` | 6.1.0 | | **✓** | |
poetry | `pip` | v1 | | **✓** | |{% ifversion fpt or ghec or ghes > 3.4 %}
pub | `pub` | v2 <sup>[4]</sup> | | | |{% endif %}
Terraform | `terraform` | >= 0.13<= 1.2.x | **✓** | **✓** | |
yarn | `npm` | v1 | **✓** | **✓** | |
Terraform | `terraform` | >= 0.13, <= 1.2.x | **✓** | **✓** | |
{% ifversion dependabot-yarn-v3-update %}yarn | `npm` | v1, v2, v3 | **✓** | **✓** | **✓**<sup>[5]</sup> |{% else %}yarn | `npm` | v1 | **✓** | **✓** | |
{% endif %}
{% tip %}
**Tip:** For package managers such as `pipenv` and `poetry`, you need to use the `pip` YAML value. For example, if you use `poetry` to manage your Python dependencies and want {% data variables.product.prodname_dependabot %} to monitor your dependency manifest file for new versions, use `package-ecosystem: "pip"` in your *dependabot.yml* file.
{% endtip %}
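For instance, a poetry-managed project would still declare the `pip` ecosystem. A minimal sketch of such a configuration, written here as a shell heredoc purely for illustration (the weekly schedule is an arbitrary choice):

```shell
# Create a minimal dependabot.yml that watches Python dependencies managed by poetry.
mkdir -p .github
cat > .github/dependabot.yml <<'EOF'
version: 2
updates:
  - package-ecosystem: "pip"   # used for pip, pipenv, and poetry projects alike
    directory: "/"             # location of pyproject.toml / requirements files
    schedule:
      interval: "weekly"
EOF
```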
[1] {% data variables.product.prodname_dependabot %} doesn't run Gradle but supports updates to the following files: `build.gradle`, `build.gradle.kts` (for Kotlin projects), and files included via the `apply` declaration that have `dependencies` in the filename. Note that `apply` does not support `apply to`, recursion, or advanced syntaxes (for example, Kotlin's `apply` with `mapOf`, filenames defined by property).
[2] {% data variables.product.prodname_dependabot %} doesn't run Maven but supports updates to `pom.xml` files.
[3] {% data variables.product.prodname_dependabot %} doesn't run the NuGet CLI but does support most features up until version 4.8.
{% ifversion fpt or ghec or ghes > 3.4 %} [4] {% ifversion ghes = 3.5 %}`pub` support is currently in beta. Any known limitations are subject to change. Note that {% data variables.product.prodname_dependabot %}:
- Doesn't support updating git dependencies for `pub`.
- Won't perform an update when the version that it tries to update to is ignored, even if an earlier version is available.
{% ifversion fpt or ghec or ghes > 3.4 %}
[4] {% ifversion ghes = 3.5 %}`pub` support is currently in beta. Any known limitations are subject to change. Note that {% data variables.product.prodname_dependabot %}:
- Doesn't support updating git dependencies for `pub`.
- Won't perform an update when the version that it tries to update to is ignored, even if an earlier version is available.
`pub`_dependabot.yml_ ファイルを構成する方法の詳細については、「[ベータ レベルのエコシステムのサポートを有効にする](/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#enable-beta-ecosystems)」を参照してください。
{%- else %}{% data variables.product.prodname_dependabot %} は、以前のバージョンが使用可能な場合でも、更新を試みるバージョンが無視されているときは `pub` の更新を実行しません。{% endif %} {% endif %}
For information about configuring your _dependabot.yml_ file for `pub`, see "[Enabling support for beta-level ecosystems](/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#enable-beta-ecosystems)."
{%- else %}{% data variables.product.prodname_dependabot %} won't perform an update for `pub` when the version that it tries to update to is ignored, even if an earlier version is available.{% endif %}
{% endif %}
{% ifversion dependabot-yarn-v3-update %}
[5] Dependabot supports vendored dependencies for v2 onwards.{% endif %}

View File

@@ -89,6 +89,7 @@ translations/ja-JP/content/codespaces/customizing-your-codespace/personalizing-c
translations/ja-JP/content/codespaces/customizing-your-codespace/setting-your-default-editor-for-codespaces.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/customizing-your-codespace/setting-your-default-region-for-codespaces.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/customizing-your-codespace/setting-your-timeout-period-for-codespaces.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/developing-in-codespaces/renaming-a-codespace.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/developing-in-codespaces/using-codespaces-for-pull-requests.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/developing-in-codespaces/using-codespaces-in-visual-studio-code.md,file deleted because it no longer exists in main
translations/ja-JP/content/codespaces/developing-in-codespaces/using-codespaces-with-github-cli.md,file deleted because it no longer exists in main
@@ -353,6 +354,7 @@ translations/ja-JP/content/actions/security-guides/encrypted-secrets.md,renderin
translations/ja-JP/content/actions/security-guides/security-hardening-for-github-actions.md,rendering error
translations/ja-JP/content/actions/using-github-hosted-runners/using-larger-runners.md,rendering error
translations/ja-JP/content/actions/using-workflows/about-workflows.md,rendering error
translations/ja-JP/content/actions/using-workflows/caching-dependencies-to-speed-up-workflows.md,broken liquid tags
translations/ja-JP/content/actions/using-workflows/creating-starter-workflows-for-your-organization.md,rendering error
translations/ja-JP/content/actions/using-workflows/events-that-trigger-workflows.md,rendering error
translations/ja-JP/content/actions/using-workflows/reusing-workflows.md,rendering error
@@ -403,6 +405,9 @@ translations/ja-JP/content/admin/configuration/configuring-your-enterprise/initi
translations/ja-JP/content/admin/configuration/configuring-your-enterprise/managing-github-mobile-for-your-enterprise.md,rendering error
translations/ja-JP/content/admin/configuration/configuring-your-enterprise/site-admin-dashboard.md,broken liquid tags
translations/ja-JP/content/admin/configuration/configuring-your-enterprise/troubleshooting-tls-errors.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/configuring-clustering/cluster-network-configuration.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/configuring-high-availability/creating-a-high-availability-replica.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/monitoring-your-appliance/accessing-the-monitor-dashboard.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/monitoring-your-appliance/configuring-collectd.md,broken liquid tags
translations/ja-JP/content/admin/enterprise-management/monitoring-your-appliance/generating-a-health-check-for-your-enterprise.md,broken liquid tags
@@ -600,6 +605,8 @@ translations/ja-JP/content/code-security/secret-scanning/about-secret-scanning.m
translations/ja-JP/content/code-security/secret-scanning/configuring-secret-scanning-for-your-repositories.md,rendering error
translations/ja-JP/content/code-security/secret-scanning/defining-custom-patterns-for-secret-scanning.md,rendering error
translations/ja-JP/content/code-security/secret-scanning/managing-alerts-from-secret-scanning.md,rendering error
translations/ja-JP/content/code-security/secret-scanning/protecting-pushes-with-secret-scanning.md,broken liquid tags
translations/ja-JP/content/code-security/secret-scanning/pushing-a-branch-blocked-by-push-protection.md,broken liquid tags
translations/ja-JP/content/code-security/security-overview/about-the-security-overview.md,rendering error
translations/ja-JP/content/code-security/security-overview/filtering-alerts-in-the-security-overview.md,rendering error
translations/ja-JP/content/code-security/security-overview/viewing-the-security-overview.md,rendering error
@@ -796,6 +803,7 @@ translations/ja-JP/content/packages/working-with-a-github-packages-registry/work
translations/ja-JP/content/packages/working-with-a-github-packages-registry/working-with-the-npm-registry.md,rendering error
translations/ja-JP/content/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry.md,rendering error
translations/ja-JP/content/packages/working-with-a-github-packages-registry/working-with-the-rubygems-registry.md,rendering error
translations/ja-JP/content/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages.md,broken liquid tags
translations/ja-JP/content/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site.md,rendering error
translations/ja-JP/content/pages/getting-started-with-github-pages/about-github-pages.md,broken liquid tags
translations/ja-JP/content/pages/index.md,broken liquid tags
@@ -1000,6 +1008,7 @@ translations/ja-JP/data/reusables/code-scanning/enterprise-enable-code-scanning-
translations/ja-JP/data/reusables/code-scanning/enterprise-enable-code-scanning.md,broken liquid tags
translations/ja-JP/data/reusables/code-scanning/what-is-codeql-cli.md,broken liquid tags
translations/ja-JP/data/reusables/codespaces/codespaces-api-beta-note.md,broken liquid tags
translations/ja-JP/data/reusables/codespaces/codespaces-org-policies-note.md,broken liquid tags
translations/ja-JP/data/reusables/codespaces/codespaces-policy-targets.md,rendering error
translations/ja-JP/data/reusables/codespaces/codespaces-spending-limit-requirement.md,broken liquid tags
translations/ja-JP/data/reusables/codespaces/creating-a-codespace-in-vscode.md,broken liquid tags
@@ -1018,6 +1027,7 @@ translations/ja-JP/data/reusables/dependabot/default-dependencies-allow-ignore.m
translations/ja-JP/data/reusables/dependabot/dependabot-alerts-filters.md,rendering error
translations/ja-JP/data/reusables/dependabot/enabling-disabling-dependency-graph-private-repo.md,rendering error
translations/ja-JP/data/reusables/dependabot/enterprise-enable-dependabot.md,rendering error
translations/ja-JP/data/reusables/dependabot/supported-package-managers.md,broken liquid tags
translations/ja-JP/data/reusables/desktop/get-an-account.md,rendering error
translations/ja-JP/data/reusables/discussions/enabling-or-disabling-github-discussions-for-your-organization.md,broken liquid tags
translations/ja-JP/data/reusables/discussions/navigate-to-repo-or-org.md,broken liquid tags

View File

@@ -82,6 +82,7 @@ translations/pt-BR/content/codespaces/customizing-your-codespace/personalizing-c
translations/pt-BR/content/codespaces/customizing-your-codespace/setting-your-default-editor-for-codespaces.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/customizing-your-codespace/setting-your-default-region-for-codespaces.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/customizing-your-codespace/setting-your-timeout-period-for-codespaces.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/developing-in-codespaces/renaming-a-codespace.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/developing-in-codespaces/using-codespaces-for-pull-requests.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/developing-in-codespaces/using-codespaces-in-visual-studio-code.md,file deleted because it no longer exists in main
translations/pt-BR/content/codespaces/developing-in-codespaces/using-codespaces-with-github-cli.md,file deleted because it no longer exists in main
@@ -340,6 +341,7 @@ translations/pt-BR/content/actions/security-guides/security-hardening-for-github
translations/pt-BR/content/actions/using-github-hosted-runners/about-github-hosted-runners.md,rendering error
translations/pt-BR/content/actions/using-github-hosted-runners/using-larger-runners.md,rendering error
translations/pt-BR/content/actions/using-workflows/about-workflows.md,rendering error
translations/pt-BR/content/actions/using-workflows/caching-dependencies-to-speed-up-workflows.md,broken liquid tags
translations/pt-BR/content/actions/using-workflows/creating-starter-workflows-for-your-organization.md,rendering error
translations/pt-BR/content/actions/using-workflows/events-that-trigger-workflows.md,rendering error
translations/pt-BR/content/actions/using-workflows/reusing-workflows.md,rendering error
@@ -390,6 +392,9 @@ translations/pt-BR/content/admin/configuration/configuring-your-enterprise/manag
translations/pt-BR/content/admin/configuration/configuring-your-enterprise/restricting-network-traffic-to-your-enterprise.md,broken liquid tags
translations/pt-BR/content/admin/configuration/configuring-your-enterprise/site-admin-dashboard.md,broken liquid tags
translations/pt-BR/content/admin/configuration/configuring-your-enterprise/troubleshooting-tls-errors.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/configuring-clustering/cluster-network-configuration.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/configuring-clustering/configuring-high-availability-replication-for-a-cluster.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/configuring-high-availability/creating-a-high-availability-replica.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/monitoring-your-appliance/accessing-the-monitor-dashboard.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/monitoring-your-appliance/configuring-collectd.md,broken liquid tags
translations/pt-BR/content/admin/enterprise-management/monitoring-your-appliance/generating-a-health-check-for-your-enterprise.md,broken liquid tags
@@ -619,7 +624,6 @@ translations/pt-BR/content/codespaces/developing-in-codespaces/creating-a-codesp
translations/pt-BR/content/codespaces/developing-in-codespaces/deleting-a-codespace.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/developing-in-a-codespace.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/forwarding-ports-in-your-codespace.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/renaming-a-codespace.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/using-github-codespaces-for-pull-requests.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/using-github-codespaces-in-visual-studio-code.md,broken liquid tags
translations/pt-BR/content/codespaces/developing-in-codespaces/using-github-codespaces-with-github-cli.md,broken liquid tags
@@ -794,6 +798,7 @@ translations/pt-BR/content/packages/working-with-a-github-packages-registry/work
translations/pt-BR/content/packages/working-with-a-github-packages-registry/working-with-the-npm-registry.md,rendering error
translations/pt-BR/content/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry.md,rendering error
translations/pt-BR/content/packages/working-with-a-github-packages-registry/working-with-the-rubygems-registry.md,rendering error
translations/pt-BR/content/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages.md,broken liquid tags
translations/pt-BR/content/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site.md,rendering error
translations/pt-BR/content/pages/getting-started-with-github-pages/about-github-pages.md,broken liquid tags
translations/pt-BR/content/pages/getting-started-with-github-pages/creating-a-github-pages-site.md,rendering error
@@ -1025,6 +1030,7 @@ translations/pt-BR/data/reusables/dependabot/beta-security-and-version-updates.m
translations/pt-BR/data/reusables/dependabot/default-dependencies-allow-ignore.md,broken liquid tags
translations/pt-BR/data/reusables/dependabot/enabling-disabling-dependency-graph-private-repo.md,rendering error
translations/pt-BR/data/reusables/dependabot/enterprise-enable-dependabot.md,rendering error
translations/pt-BR/data/reusables/dependabot/supported-package-managers.md,broken liquid tags
translations/pt-BR/data/reusables/desktop/get-an-account.md,broken liquid tags
translations/pt-BR/data/reusables/discussions/enabling-or-disabling-github-discussions-for-your-organization.md,broken liquid tags
translations/pt-BR/data/reusables/discussions/navigate-to-repo-or-org.md,broken liquid tags

View File

@@ -12,6 +12,8 @@ versions:
In addition to the [standard {% data variables.product.prodname_dotcom %}-hosted runners](/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources), {% data variables.product.prodname_dotcom %} also offers customers on {% data variables.product.prodname_team %} and {% data variables.product.prodname_ghe_cloud %} plans a range of {% data variables.actions.hosted_runner %}s with more RAM and CPU. These runners are hosted by {% data variables.product.prodname_dotcom %} and have the runner application and other tools preinstalled.
When {% data variables.actions.hosted_runner %}s are enabled for your organization, a default runner group is automatically created for you with a set of four pre-configured {% data variables.actions.hosted_runner %}s.
When you add a {% data variables.actions.hosted_runner %} to an organization, you are defining a type of machine from a selection of available hardware specifications and operating system images. {% data variables.product.prodname_dotcom %} will then create multiple instances of this runner that scale up and down to match the job demands of your organization, based on the autoscaling limits you define.
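As a hedged sketch of how a workflow job would target one of these runners, the label below (`my-ubuntu-larger-runner`) is an assumed name standing in for whatever label you assign when you add the runner to your organization.

```yaml
# Hypothetical workflow job targeting a larger hosted runner.
# "my-ubuntu-larger-runner" is an assumed label; use the name you gave
# the runner when you added it to your organization.
name: Build on larger runner
on: push
jobs:
  build:
    runs-on: my-ubuntu-larger-runner
    steps:
      - uses: actions/checkout@v3
      - run: echo "Running on a larger hosted runner"
```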
## Machine specs for {% data variables.actions.hosted_runner %}s

View File

@@ -1,7 +1,7 @@
---
title: Caching dependencies to speed up workflows
shortTitle: Caching dependencies
intro: 'To make your workflows faster and more efficient, you can create and use caches for dependencies and other commonly reused files.'
title: Caching dependencies to speed up workflows
shortTitle: Cache dependencies
intro: 'To make your workflows faster and more efficient, you can create and use caches for dependencies and other commonly reused files.'
redirect_from:
- /github/automating-your-workflow-with-github-actions/caching-dependencies-to-speed-up-workflows
- /actions/automating-your-workflow-with-github-actions/caching-dependencies-to-speed-up-workflows
@@ -14,24 +14,19 @@ type: tutorial
topics:
- Workflows
miniTocMaxHeadingLevel: 3
ms.openlocfilehash: 558d5f186ce75d9ace6f6c6be63e2e3eaeff3230
ms.sourcegitcommit: b0323777cfe4324a09552d0ea268d1afacc3da37
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/17/2022
ms.locfileid: '147580668'
---
## <a name="about-caching-workflow-dependencies"></a>About caching workflow dependencies
Workflow runs often reuse the same outputs or downloaded dependencies from one run to another. For example, package and dependency management tools such as Maven, Gradle, npm, and Yarn keep a local cache of downloaded dependencies.
## About caching workflow dependencies
{% ifversion fpt or ghec %} Jobs on {% data variables.product.prodname_dotcom %}-hosted runners start in a clean runner image and must download dependencies each time, causing increased network utilization, longer runtime, and increased cost. {% endif %}To help speed up the time it takes to recreate files like dependencies, {% data variables.product.prodname_dotcom %} can cache files you frequently use in workflows.
Workflow runs often reuse the same outputs or downloaded dependencies from one run to another. For example, package and dependency management tools such as Maven, Gradle, npm, and Yarn keep a local cache of downloaded dependencies.
To cache dependencies for a job, you can use {% data variables.product.prodname_dotcom %}'s [`cache` action](https://github.com/actions/cache). The action creates and restores a cache identified by a unique key. Alternatively, if you are caching the package managers listed below, using their respective setup-* actions requires minimal configuration and will create and restore dependency caches for you.
{% ifversion fpt or ghec %} Jobs on {% data variables.product.prodname_dotcom %}-hosted runners start in a clean runner image and must download dependencies each time, causing increased network utilization, longer runtime, and increased cost. {% endif %}To help speed up the time it takes to recreate files like dependencies, {% data variables.product.prodname_dotcom %} can cache files you frequently use in workflows.
| Package managers | setup-* action for caching |
To cache dependencies for a job, you can use {% data variables.product.prodname_dotcom %}'s [`cache` action](https://github.com/actions/cache). The action creates and restores a cache identified by a unique key. Alternatively, if you are caching the package managers listed below, using their respective setup-* actions requires minimal configuration and will create and restore dependency caches for you.
| Package managers | setup-* action for caching |
|---|---|
| npm, YARN, pnpm | [setup-node](https://github.com/actions/setup-node#caching-global-packages-data) |
| npm, Yarn, pnpm | [setup-node](https://github.com/actions/setup-node#caching-global-packages-data) |
| pip, pipenv, Poetry | [setup-python](https://github.com/actions/setup-python#caching-packages-dependencies) |
| Gradle, Maven | [setup-java](https://github.com/actions/setup-java#caching-packages-dependencies) |
| RubyGems | [setup-ruby](https://github.com/ruby/setup-ruby#caching-bundle-install-automatically) |
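For instance, a minimal sketch of the setup-action route, assuming an npm project and the caching input documented for `actions/setup-node`, could look like this:

```yaml
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-node@v3
    with:
      node-version: '16'
      cache: 'npm'   # setup-node creates and restores the npm dependency cache for you
  - run: npm ci
  - run: npm test
```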
@@ -39,40 +34,40 @@ Para armazenar em cache as dependências de um trabalho, você pode usar a [aç
{% warning %}
**Warning**: {% ifversion fpt or ghec %}Be mindful of the following when using caching with {% data variables.product.prodname_actions %}:
**Warning**: {% ifversion fpt or ghec %}Be mindful of the following when using caching with {% data variables.product.prodname_actions %}:
* {% endif %}We recommend that you don't store any sensitive information in the cache. For example, sensitive information can include access tokens or login credentials stored in a file in the cache path. Also, command line interface (CLI) programs like `docker login` can save access credentials in a configuration file. Anyone with read access can create a pull request on a repository and access the contents of a cache. Forks of a repository can also create pull requests on the base branch and access caches on the base branch.
* {% endif %}We recommend that you don't store any sensitive information in the cache. For example, sensitive information can include access tokens or login credentials stored in a file in the cache path. Also, command line interface (CLI) programs like `docker login` can save access credentials in a configuration file. Anyone with read access can create a pull request on a repository and access the contents of a cache. Forks of a repository can also create pull requests on the base branch and access caches on the base branch.
{%- ifversion fpt or ghec %}
* When using self-hosted runners, caches from workflow runs are stored on {% data variables.product.company_short %}-owned cloud storage. A customer-owned storage solution is only available with {% data variables.product.prodname_ghe_server %}.
* When using self-hosted runners, caches from workflow runs are stored on {% data variables.product.company_short %}-owned cloud storage. A customer-owned storage solution is only available with {% data variables.product.prodname_ghe_server %}.
{%- endif %}
{% endwarning %}
{% data reusables.actions.comparing-artifacts-caching %}
For more information on workflow run artifacts, see "[Persisting workflow data using artifacts](/github/automating-your-workflow-with-github-actions/persisting-workflow-data-using-artifacts)."
For more information on workflow run artifacts, see "[Persisting workflow data using artifacts](/github/automating-your-workflow-with-github-actions/persisting-workflow-data-using-artifacts)."
## <a name="restrictions-for-accessing-a-cache"></a>Restrictions for accessing a cache
## Restrictions for accessing a cache
A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch (usually `main`). For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch `feature-b` has the base branch `feature-a`, a workflow triggered on `feature-b` would have access to caches created in the default branch (`main`), `feature-a`, and `feature-b`.
A workflow can access and restore a cache created in the current branch, the base branch (including base branches of forked repositories), or the default branch (usually `main`). For example, a cache created on the default branch would be accessible from any pull request. Also, if the branch `feature-b` has the base branch `feature-a`, a workflow triggered on `feature-b` would have access to caches created in the default branch (`main`), `feature-a`, and `feature-b`.
Access restrictions provide cache isolation and security by creating a logical boundary between different branches. For example, a cache created for the branch `feature-a` (with the base `main`) would not be accessible to a pull request for the branch `feature-c` (with the base `main`).
Access restrictions provide cache isolation and security by creating a logical boundary between different branches or tags. For example, a cache created for the branch `feature-a` (with the base `main`) would not be accessible to a pull request for the branch `feature-c` (with the base `main`). On similar lines, a cache created for the tag `release-a` (from the base `main`) would not be accessible to a workflow triggered for the tag `release-b` (with the base `main`).
Multiple workflows within a repository share cache entries. A cache created for a branch within a workflow can be accessed and restored from another workflow for the same repository and branch.
Multiple workflows within a repository share cache entries. A cache created for a branch within a workflow can be accessed and restored from another workflow for the same repository and branch.
## <a name="using-the-cache-action"></a>Using the `cache` action
## Using the `cache` action
The [`cache` action](https://github.com/actions/cache) will attempt to restore a cache based on the `key` you provide. When the action finds a cache, the action restores the cached files to the `path` you configure.
The [`cache` action](https://github.com/actions/cache) will attempt to restore a cache based on the `key` you provide. When the action finds a cache, the action restores the cached files to the `path` you configure.
If there is no exact match, the action automatically creates a new cache if the job completes successfully. The new cache will use the `key` you provided and contains the files you specify in `path`.
If there is no exact match, the action automatically creates a new cache if the job completes successfully. The new cache will use the `key` you provided and contains the files you specify in `path`.
You can optionally provide a list of `restore-keys` to use when the `key` doesn't match an existing cache. A list of `restore-keys` is useful when you are restoring a cache from another branch because `restore-keys` can partially match cache keys. For more information about matching `restore-keys`, see "[Matching a cache key](#matching-a-cache-key)."
You can optionally provide a list of `restore-keys` to use when the `key` doesn't match an existing cache. A list of `restore-keys` is useful when you are restoring a cache from another branch because `restore-keys` can partially match cache keys. For more information about matching `restore-keys`, see "[Matching a cache key](#matching-a-cache-key)."
### <a name="input-parameters-for-the-cache-action"></a>Input parameters for the `cache` action
### Input parameters for the `cache` action
- `key`: **Required** The key created when saving a cache and the key used to search for a cache. It can be any combination of variables, context values, static strings, and functions. Keys have a maximum length of 512 characters, and keys longer than the maximum length will cause the action to fail.
- `path`: **Required** The path(s) on the runner to cache or restore.
- You can specify a single path, or you can add multiple paths on separate lines. For example:
- `key`: **Required** The key created when saving a cache and the key used to search for a cache. It can be any combination of variables, context values, static strings, and functions. Keys have a maximum length of 512 characters, and keys longer than the maximum length will cause the action to fail.
- `path`: **Required** The path(s) on the runner to cache or restore.
- You can specify a single path, or you can add multiple paths on separate lines. For example:
```
- name: Cache Gradle packages
@@ -82,9 +77,9 @@ Opcionalmente, você pode fornecer uma lista de `restore-keys` a serem usadas qu
~/.gradle/caches
~/.gradle/wrapper
```
- You can specify either directories or single files, and glob patterns are supported.
- You can specify absolute paths, or paths relative to the workspace directory.
- `restore-keys`: **Optional** A string containing alternative restore keys, with each restore key placed on a new line. If no cache hit occurs for `key`, these restore keys are used sequentially in the order provided to find and restore a cache. For example:
- You can specify either directories or single files, and glob patterns are supported.
- You can specify absolute paths, or paths relative to the workspace directory.
- `restore-keys`: **Optional** A string containing alternative restore keys, with each restore key placed on a new line. If no cache hit occurs for `key`, these restore keys are used sequentially in the order provided to find and restore a cache. For example:
{% raw %}
```yaml
@@ -95,13 +90,13 @@ Opcionalmente, você pode fornecer uma lista de `restore-keys` a serem usadas qu
```
{% endraw %}
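Putting these input parameters together, a sketch of a single `cache` step for an npm project (the path and key names below are illustrative assumptions, not values required by the action) might look like the following:

{% raw %}
```yaml
- name: Cache node modules
  uses: actions/cache@v3
  with:
    # npm's cache directory on Linux runners (illustrative path)
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```
{% endraw %}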
### <a name="output-parameters-for-the-cache-action"></a>Output parameters for the `cache` action
### Output parameters for the `cache` action
- `cache-hit`: A boolean value to indicate an exact match was found for the key.
- `cache-hit`: A boolean value to indicate an exact match was found for the key.
### <a name="example-using-the-cache-action"></a>Example using the `cache` action
### Example using the `cache` action
This example creates a new cache when the packages in the `package-lock.json` file change, or when the runner's operating system changes. The cache key uses contexts and expressions to generate a key that includes the runner's operating system and a SHA-256 hash of the `package-lock.json` file.
This example creates a new cache when the packages in `package-lock.json` file change, or when the runner's operating system changes. The cache key uses contexts and expressions to generate a key that includes the runner's operating system and a SHA-256 hash of the `package-lock.json` file.
```yaml{:copy}
name: Caching with npm
@@ -141,27 +136,27 @@ jobs:
run: npm test
```
When `key` matches an existing cache, it's called a _cache hit_, and the action restores the cached files to the `path` directory.
When `key` matches an existing cache, it's called a _cache hit_, and the action restores the cached files to the `path` directory.
When `key` doesn't match an existing cache, it's called a _cache miss_, and a new cache is automatically created if the job completes successfully.
When `key` doesn't match an existing cache, it's called a _cache miss_, and a new cache is automatically created if the job completes successfully.
When a cache miss occurs, the action also searches your specified `restore-keys` for any matches:
When a cache miss occurs, the action also searches your specified `restore-keys` for any matches:
1. If you provide `restore-keys`, the `cache` action sequentially searches for any caches that match the list of `restore-keys`.
- When there is an exact match, the action restores the files in the cache to the `path` directory.
- If there are no exact matches, the action searches for partial matches of the restore keys. When the action finds a partial match, the most recent cache is restored to the `path` directory.
1. The `cache` action completes and the next step in the job runs.
1. If the job completes successfully, the action automatically creates a new cache with the contents of the `path` directory.
1. If you provide `restore-keys`, the `cache` action sequentially searches for any caches that match the list of `restore-keys`.
- When there is an exact match, the action restores the files in the cache to the `path` directory.
- If there are no exact matches, the action searches for partial matches of the restore keys. When the action finds a partial match, the most recent cache is restored to the `path` directory.
1. The `cache` action completes and the next step in the job runs.
1. If the job completes successfully, the action automatically creates a new cache with the contents of the `path` directory.
For a more detailed explanation of the cache matching process, see "[Matching a cache key](#matching-a-cache-key)." Once you create a cache, you cannot change the contents of an existing cache, but you can create a new cache with a new key.
For a more detailed explanation of the cache matching process, see "[Matching a cache key](#matching-a-cache-key)." Once you create a cache, you cannot change the contents of an existing cache but you can create a new cache with a new key.
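One way to apply that last point, sketched below as a common pattern rather than something prescribed by this article, is to embed a manual version token in the key so that bumping the token stops matching old caches and forces a fresh one to be created:

{% raw %}
```yaml
- uses: actions/cache@v3
  with:
    path: ~/.npm
    # Bump "v2" to "v3" to stop matching existing caches and create a new one.
    key: ${{ runner.os }}-npm-v2-${{ hashFiles('package-lock.json') }}
```
{% endraw %}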
### <a name="using-contexts-to-create-cache-keys"></a>Using contexts to create cache keys
### Using contexts to create cache keys
A cache key can include any of the contexts, functions, literals, and operators supported by {% data variables.product.prodname_actions %}. For more information, see "[Contexts](/actions/learn-github-actions/contexts)" and "[Expressions](/actions/learn-github-actions/expressions)."
A cache key can include any of the contexts, functions, literals, and operators supported by {% data variables.product.prodname_actions %}. For more information, see "[Contexts](/actions/learn-github-actions/contexts)" and "[Expressions](/actions/learn-github-actions/expressions)."
Using expressions to create a `key` allows you to automatically create a new cache when dependencies change.
Using expressions to create a `key` allows you to automatically create a new cache when dependencies change.
For example, you can create a `key` using an expression that calculates the hash of an npm `package-lock.json` file. So, when the dependencies that make up the `package-lock.json` file change, the cache key changes and a new cache is automatically created.
For example, you can create a `key` using an expression that calculates the hash of an npm `package-lock.json` file. So, when the dependencies that make up the `package-lock.json` file change, the cache key changes and a new cache is automatically created.
{% raw %}
```yaml
@@ -169,17 +164,17 @@ npm-${{ hashFiles('package-lock.json') }}
```
{% endraw %}
{% data variables.product.prodname_dotcom %} evaluates the expression `hash "package-lock.json"` to derive the final `key`.
{% data variables.product.prodname_dotcom %} evaluates the expression `hash "package-lock.json"` to derive the final `key`.
```yaml
npm-d5ea0750
```
### <a name="using-the-output-of-the-cache-action"></a>Using the output of the `cache` action
### Using the output of the `cache` action
You can use the output of the `cache` action to do something based on whether a cache hit or miss occurred. When an exact match is found for a cache for the specified `key`, the `cache-hit` output is set to `true`.
You can use the output of the `cache` action to do something based on whether a cache hit or miss occurred. When an exact match is found for a cache for the specified `key`, the `cache-hit` output is set to `true`.
In the example workflow above, there is a step that lists the state of the Node modules if a cache miss occurred:
In the example workflow above, there is a step that lists the state of the Node modules if a cache miss occurred:
```yaml
- if: {% raw %}${{ steps.cache-npm.outputs.cache-hit != 'true' }}{% endraw %}
@@ -188,13 +183,13 @@ No exemplo de fluxo de trabalho acima, há uma etapa que lista o estado dos mód
run: npm list
```
## <a name="matching-a-cache-key"></a>Matching a cache key
## Matching a cache key
The `cache` action first searches for cache hits for `key` and `restore-keys` in the branch containing the workflow run. If there are no hits in the current branch, the `cache` action searches for `key` and `restore-keys` in the parent branch and upstream branches.
The `cache` action first searches for cache hits for `key` and `restore-keys` in the branch containing the workflow run. If there are no hits in the current branch, the `cache` action searches for `key` and `restore-keys` in the parent branch and upstream branches.
`restore-keys` allows you to specify a list of alternate restore keys to use when there is a cache miss on `key`. You can create multiple restore keys ordered from the most specific to least specific. The `cache` action searches the `restore-keys` in sequential order. When a key doesn't match directly, the action searches for keys prefixed with the restore key. If there are multiple partial matches for a restore key, the action returns the most recently created cache.
`restore-keys` allows you to specify a list of alternate restore keys to use when there is a cache miss on `key`. You can create multiple restore keys ordered from the most specific to least specific. The `cache` action searches the `restore-keys` in sequential order. When a key doesn't match directly, the action searches for keys prefixed with the restore key. If there are multiple partial matches for a restore key, the action returns the most recently created cache.
### <a name="example-using-multiple-restore-keys"></a>Example using multiple restore keys
### Example using multiple restore keys
{% raw %}
```yaml
@@ -205,7 +200,7 @@ restore-keys: |
```
{% endraw %}
The runner evaluates the expressions, which resolve to these `restore-keys`:
The runner evaluates the expressions, which resolve to these `restore-keys`:
{% raw %}
```yaml
@@ -216,13 +211,13 @@ restore-keys: |
```
{% endraw %}
The restore key `npm-feature-` matches any key that starts with the string `npm-feature-`. For example, both of the keys `npm-feature-fd3052de` and `npm-feature-a9b253ff` match the restore key. The cache with the most recent creation date would be used. The keys in this example are searched in the following order:
The restore key `npm-feature-` matches any key that starts with the string `npm-feature-`. For example, both of the keys `npm-feature-fd3052de` and `npm-feature-a9b253ff` match the restore key. The cache with the most recent creation date would be used. The keys in this example are searched in the following order:
1. **`npm-feature-d5ea0750`** matches a specific hash.
1. **`npm-feature-`** matches cache keys prefixed with `npm-feature-`.
1. **`npm-`** matches any keys prefixed with `npm-`.
1. **`npm-feature-d5ea0750`** matches a specific hash.
1. **`npm-feature-`** matches cache keys prefixed with `npm-feature-`.
1. **`npm-`** matches any keys prefixed with `npm-`.
#### <a name="example-of-search-priority"></a>Example of search priority
#### Example of search priority
```yaml
key:
@@ -232,30 +227,81 @@ restore-keys: |
npm-
```
For example, if a pull request contains a `feature` branch and targets the default branch (`main`), the action searches for `key` and `restore-keys` in the following order:
For example, if a pull request contains a `feature` branch and targets the default branch (`main`), the action searches for `key` and `restore-keys` in the following order:
1. Key `npm-feature-d5ea0750` in the `feature` branch
1. Key `npm-feature-` in the `feature` branch
1. Key `npm-` in the `feature` branch
1. Key `npm-feature-d5ea0750` in the `main` branch
1. Key `npm-feature-` in the `main` branch
1. Key `npm-` in the `main` branch
1. Key `npm-feature-d5ea0750` in the `feature` branch
1. Key `npm-feature-` in the `feature` branch
1. Key `npm-` in the `feature` branch
1. Key `npm-feature-d5ea0750` in the `main` branch
1. Key `npm-feature-` in the `main` branch
1. Key `npm-` in the `main` branch
## <a name="usage-limits-and-eviction-policy"></a>Usage limits and eviction policy
## Usage limits and eviction policy
{% data variables.product.prodname_dotcom %} will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited{% ifversion actions-cache-policy-apis %}. By default, the limit is 10 GB per repository, but this limit might be different depending on policies set by your enterprise owners or repository administrators.{% else %} to 10 GB.{% endif %}
{% data variables.product.prodname_dotcom %} will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited{% ifversion actions-cache-policy-apis %}. By default, the limit is 10 GB per repository, but this limit might be different depending on policies set by your enterprise owners or repository administrators.{% else %} to 10 GB.{% endif %}
{% data reusables.actions.cache-eviction-process %}
{% data reusables.actions.cache-eviction-process %} {% ifversion actions-cache-ui %}The cache eviction process may cause cache thrashing, where caches are created and deleted at a high frequency. To reduce this, you can review the caches for a repository and take corrective steps, such as removing caching from specific workflows. For more information, see "[Managing caches](#managing-caches)."{% endif %}{% ifversion actions-cache-admin-ui %} You can also increase the cache size limit for a repository. For more information, see "[Managing {% data variables.product.prodname_actions %} settings for a repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
{% elsif actions-cache-policy-apis %}
For information on changing the policies for the repository cache size limit, see "[Enforcing policies for {% data variables.product.prodname_actions %} in your enterprise](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise#enforcing-a-policy-for-cache-storage-in-your-enterprise)" and "[Managing {% data variables.product.prodname_actions %} settings for a repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
{% ifversion actions-cache-policy-apis %} For information on changing the policies for the repository cache size limit, see "[Enforcing policies for {% data variables.product.prodname_actions %} in your enterprise](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-github-actions-in-your-enterprise#enforcing-a-policy-for-cache-storage-in-your-enterprise)" and "[Managing {% data variables.product.prodname_actions %} settings for a repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-cache-storage-for-a-repository)."
{% endif %}
{% ifversion actions-cache-management %}
## <a name="managing-caches"></a>Managing caches
## Managing caches
You can use the {% data variables.product.product_name %} REST API to manage your caches. {% ifversion actions-cache-list-delete-apis %}You can use the API to list and delete cache entries, and see your cache usage. {% elsif actions-cache-management %}At present, you can use the API to see your cache usage, with more functionality expected in future updates. {% endif %} For more information, see the "[{% data variables.product.prodname_actions %} Cache](/rest/actions/cache)" REST API documentation.
{% ifversion actions-cache-ui %}
You can also install a {% data variables.product.prodname_cli %} extension to manage your caches from the command line. For more information about the extension, see [the extension documentation](https://github.com/actions/gh-actions-cache#readme). For more information about {% data variables.product.prodname_cli %} extensions, see "[Using GitHub CLI extensions](/github-cli/github-cli/using-github-cli-extensions)."
To manage caches created from your workflows, you can:
- View a list of all cache entries for a repository.
- Filter and sort the list of caches using specific metadata such as cache size, creation time, or last accessed time.
- Delete cache entries from a repository.
- Monitor aggregate cache usage for repositories and organizations.
There are multiple ways to manage caches for your repositories:
- Using the {% data variables.product.prodname_dotcom %} web interface, as shown below.
- Using the REST API. For more information, see the "[{% data variables.product.prodname_actions %} Cache](/rest/actions/cache)" REST API documentation.
- Installing a {% data variables.product.prodname_cli %} extension to manage your caches from the command line. For more information, see the [gh-actions-cache](https://github.com/actions/gh-actions-cache) extension.
{% else %}
You can use the {% data variables.product.product_name %} REST API to manage your caches. {% ifversion actions-cache-list-delete-apis %}You can use the API to list and delete cache entries, and see your cache usage.{% elsif actions-cache-management %}At present, you can use the API to see your cache usage, with more functionality expected in future updates.{% endif %} For more information, see the "[{% data variables.product.prodname_actions %} Cache](/rest/actions/cache)" REST API documentation.
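For example, here is a minimal sketch of checking a repository's cache usage with `curl`. `OWNER`, `REPO`, and the token are placeholders, and the `api.github.com` hostname applies to GitHub.com; adjust the URL for your own environment.

```shell
# Get the approximate Actions cache usage for a repository (sketch; replace the
# placeholders, and point the URL at your own API endpoint if you are not on GitHub.com).
$ curl \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR-TOKEN" \
  https://api.github.com/repos/OWNER/REPO/actions/cache/usage
```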
You can also install a {% data variables.product.prodname_cli %} extension to manage your caches from the command line. For more information about the extension, see [the extension documentation](https://github.com/actions/gh-actions-cache#readme). For more information about {% data variables.product.prodname_cli %} extensions, see "[Using GitHub CLI extensions](/github-cli/github-cli/using-github-cli-extensions)."
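As a rough sketch, the extension can be installed and used as shown below. The exact subcommands and flags may differ between extension versions, and `OWNER/REPO` and `CACHE-KEY` are placeholders, so treat this as illustrative and check the extension documentation for the current syntax.

```shell
# Install the gh-actions-cache extension (one-time setup).
$ gh extension install actions/gh-actions-cache

# List the caches for a repository.
$ gh actions-cache list -R OWNER/REPO

# Delete a cache entry by key.
$ gh actions-cache delete CACHE-KEY -R OWNER/REPO
```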
{% endif %}
{% ifversion actions-cache-ui %}
### Viewing cache entries
You can use the web interface to view a list of cache entries for a repository. In the cache list, you can see how much disk space each cache is using, when the cache was created, and when the cache was last used.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.actions-cache-list %}
1. Review the list of cache entries for the repository.
* To search for cache entries used for a specific branch, click the **Branch** dropdown menu and select a branch. The cache list will display all of the caches used for the selected branch.
* To search for cache entries with a specific cache key, use the syntax `key: key-name` in the **Filter caches** field. The cache list will display caches from all branches where the key was used.
![Screenshot of the list of cache entries](/assets/images/help/repository/actions-cache-entry-list.png)
### Deleting cache entries
Users with `write` access to a repository can use the {% data variables.product.prodname_dotcom %} web interface to delete cache entries.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.actions-cache-list %}
1. To the right of the cache entry you want to delete, click {% octicon "trash" aria-label="The trash icon" %}.
![Screenshot of the list of cache entries](/assets/images/help/repository/actions-cache-delete.png)
{% endif %}
{% endif %}

View File

@@ -1,6 +1,6 @@
---
title: Cluster network configuration
intro: '{% data variables.product.prodname_ghe_server %} clustering relies on proper DNS name resolution, load balancing, and communication between nodes to operate properly.'
redirect_from:
- /enterprise/admin/clustering/cluster-network-configuration
- /enterprise/admin/enterprise-management/cluster-network-configuration
@@ -14,112 +14,106 @@ topics:
- Infrastructure
- Networking
shortTitle: Configure a cluster network
---
## Network considerations
The simplest network design for clustering is to place the nodes on a single LAN. If a cluster must span subnetworks, we do not recommend configuring any firewall rules between the networks. The latency between nodes should be less than 1 millisecond.
{% data reusables.enterprise_clustering.network-latency %}
### Application ports for end users
Application ports provide web application and Git access for end users.
| Port | Description | Encrypted |
| :------------- | :------------- | :------------- |
| 22/TCP | Git over SSH | Yes |
| 25/TCP | SMTP | Requires STARTTLS |
| 80/TCP | HTTP | No<br>(When SSL is enabled this port redirects to HTTPS) |
| 443/TCP | HTTPS | Yes |
| 9418/TCP | Simple Git protocol port<br>(Disabled in private mode) | No |
### Administrative ports
Administrative ports are not required for basic application use by end users.
| Port | Description | Encrypted |
| :------------- | :------------- | :------------- |
| ICMP | ICMP Ping | No |
| 122/TCP | Administrative SSH | Yes |
| 161/UDP | SNMP | No |
| 8080/TCP | Management Console HTTP | No<br>(When SSL is enabled this port redirects to HTTPS) |
| 8443/TCP | Management Console HTTPS | Yes |
### Cluster communication ports
If a network level firewall is in place between nodes, these ports will need to be accessible. The communication between nodes is not encrypted. These ports should not be accessible externally.
| Port | Description |
| :------------- | :------------- |
| 1336/TCP | Internal API |
| 3033/TCP | Internal SVN access |
| 3037/TCP | Internal SVN access |
| 3306/TCP | MySQL |
| 4486/TCP | Governor access |
| 5115/TCP | Storage backend |
| 5208/TCP | Internal SVN access |
| 6379/TCP | Redis |
| 8001/TCP | Grafana |
| 8090/TCP | Internal GPG access |
| 8149/TCP | GitRPC file server access |
| 8300/TCP | Consul |
| 8301/TCP | Consul |
| 8302/TCP | Consul |
| 9000/TCP | Git Daemon |
| 9102/TCP | Pages file server |
| 9105/TCP | LFS server |
| 9200/TCP | Elasticsearch |
| 9203/TCP | Semantic code service |
| 9300/TCP | Elasticsearch |
| 11211/TCP | Memcache |
| 161/UDP | SNMP |
| 8125/UDP | Statsd |
| 8301/UDP | Consul |
| 8302/UDP | Consul |
| 25827/UDP | Collectd |
## Configuring a load balancer
We recommend an external TCP-based load balancer that supports the PROXY protocol to distribute traffic across nodes. Consider these load balancer configurations:
- TCP ports (shown below) should be forwarded to nodes running the `web-server` service. These are the only nodes that serve external client requests.
- Sticky sessions shouldn't be enabled.
{% data reusables.enterprise_installation.terminating-tls %}
## Handling client connection information
Because client connections to the cluster come from the load balancer, the client IP address can be lost. To properly capture the client connection information, additional consideration is required.
{% data reusables.enterprise_clustering.proxy_preference %}
{% data reusables.enterprise_clustering.proxy_xff_firewall_warning %}
### Enabling PROXY support on {% data variables.product.prodname_ghe_server %}
We strongly recommend enabling PROXY support for both your instance and the load balancer.
{% data reusables.enterprise_installation.proxy-incompatible-with-aws-nlbs %}
- For your instance, use this command:
```shell
$ ghe-config 'loadbalancer.proxy-protocol' 'true' && ghe-cluster-config-apply
```
- For the load balancer, use the instructions provided by your vendor.
{% data reusables.enterprise_clustering.proxy_protocol_ports %}
### Enabling X-Forwarded-For support on {% data variables.product.prodname_ghe_server %}
{% data reusables.enterprise_clustering.x-forwarded-for %}
To enable the `X-Forwarded-For` header, use this command:
```shell
$ ghe-config 'loadbalancer.http-forward' 'true' && ghe-cluster-config-apply
@@ -127,11 +121,12 @@ $ ghe-config 'loadbalancer.http-forward' 'true' && ghe-cluster-config-apply
{% data reusables.enterprise_clustering.without_proxy_protocol_ports %}
### Configuring Health Checks
Health checks allow a load balancer to stop sending traffic to a node that is not responding if a pre-configured check fails on that node. If a cluster node fails, health checks paired with redundant nodes provides high availability.
{% data reusables.enterprise_clustering.health_checks %}
{% data reusables.enterprise_site_admin_settings.maintenance-mode-status %}
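The following is a minimal sketch of such a probe, assuming the node's `/status` URL is used as the health check target and that `HOSTNAME` is a placeholder for the node being checked.

```shell
# Probe a node's status endpoint and print only the HTTP response code (sketch).
# A 200 response indicates the node is serving requests; treat any other code
# (for example, while the node is in maintenance mode) as unhealthy and remove
# the node from the load balancer pool.
$ curl -s -o /dev/null -w "%{http_code}\n" https://HOSTNAME/status
```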
## DNS Requirements
{% data reusables.enterprise_clustering.load_balancer_dns %}

View File

@@ -1,6 +1,6 @@
---
title: Configuring high availability replication for a cluster
intro: 'You can configure a passive replica of your entire {% data variables.product.prodname_ghe_server %} cluster in a different location, allowing your cluster to fail over to redundant nodes.'
miniTocMaxHeadingLevel: 3
redirect_from:
- /enterprise/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster
@@ -14,86 +14,80 @@ topics:
- High availability
- Infrastructure
shortTitle: Configure HA replication
---
## About high availability replication for clusters
You can configure a cluster deployment of {% data variables.product.prodname_ghe_server %} for high availability, where an identical set of passive nodes sync with the nodes in your active cluster. If hardware or software failures affect the datacenter with your active cluster, you can manually fail over to the replica nodes and continue processing user requests, minimizing the impact of the outage.
In high availability mode, each active node syncs regularly with a corresponding passive node. The passive node runs in standby and does not serve applications or process user requests.
We recommend configuring high availability as a part of a comprehensive disaster recovery plan for {% data variables.product.prodname_ghe_server %}. We also recommend performing regular backups. For more information, see "[Configuring backups on your appliance](/enterprise/admin/configuration/configuring-backups-on-your-appliance)."
## Prerequisites
### Hardware and software
For each existing node in your active cluster, you'll need to provision a second virtual machine with identical hardware resources. For example, if your cluster has 11 nodes and each node has 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage, you must provision 11 new virtual machines that each have 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage.
On each new virtual machine, install the same version of {% data variables.product.prodname_ghe_server %} that runs on the nodes in your active cluster. You don't need to upload a license or perform any additional configuration. For more information, see "[Setting up a {% data variables.product.prodname_ghe_server %} instance](/enterprise/admin/installation/setting-up-a-github-enterprise-server-instance)."
{% note %}
**Note**: The nodes that you intend to use for high availability replication should be standalone {% data variables.product.prodname_ghe_server %} instances. Don't initialize the passive nodes as a second cluster.
{% endnote %}
### Network
You must assign a static IP address to each new node that you provision, and you must configure a load balancer to accept connections and direct them to the nodes in your cluster's front-end tier.
{% data reusables.enterprise_clustering.network-latency %} For more information about network connectivity between nodes in the passive cluster, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration)."
## Creating a high availability replica for a cluster
- [Assigning active nodes to the primary datacenter](#assigning-active-nodes-to-the-primary-datacenter)
- [Adding passive nodes to the cluster configuration file](#adding-passive-nodes-to-the-cluster-configuration-file)
- [Example configuration](#example-configuration)
### Assigning active nodes to the primary datacenter
Before you define a secondary datacenter for your passive nodes, ensure that you assign your active nodes to the primary datacenter.
{% data reusables.enterprise_clustering.ssh-to-a-node %}
{% data reusables.enterprise_clustering.open-configuration-file %}
3. Note the name of your cluster's primary datacenter. The `[cluster]` section at the top of the cluster configuration file defines the primary datacenter's name, using the `primary-datacenter` key-value pair. By default, the primary datacenter for your cluster is named `default`.
```shell
[cluster]
mysql-master = HOSTNAME
redis-master = HOSTNAME
<strong>primary-datacenter = default</strong>
```
- Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
```
datacenter = default
```
When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}
```shell
[cluster "<em>HOSTNAME</em>"]
[cluster "HOSTNAME"]
<strong>datacenter = default</strong>
hostname = HOSTNAME
ipv4 = IP-ADDRESS
...
...
```
{% note %}
**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
```
consul-datacenter = primary
@@ -105,123 +99,123 @@ Antes de definir um centro de dados secundário para seus nós passivos, certifi
{% data reusables.enterprise_clustering.configuration-finished %}
After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished assigning your nodes to the cluster's primary datacenter.
### Adding passive nodes to the cluster configuration file
To configure high availability, you must define a corresponding passive node for every active node in your cluster. The following instructions create a new cluster configuration that defines both active and passive nodes. You will:
- Create a copy of the active cluster configuration file.
- Edit the copy to define passive nodes that correspond to the active nodes, adding the IP addresses of the new virtual machines that you provisioned.
- Merge the modified copy of the cluster configuration back into your active configuration.
- Apply the new configuration to start replication.
For an example configuration, see "[Example configuration](#example-configuration)."
1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."
{% note %}
**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
{% endnote %}
{% data reusables.enterprise_clustering.ssh-to-a-node %}
3. Back up your existing cluster configuration.
```
cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
```
4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
```
grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
```
5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
```
git config -f ~/cluster-passive.conf --remove-section cluster
```
6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
```shell
sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
```
7. Decide on a pattern for the passive nodes' hostnames.
{% warning %}
**Warning**: Hostnames for passive nodes must be unique and differ from the hostname for the corresponding active node.
{% endwarning %}
8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.
```shell
sudo vim ~/cluster-passive.conf
```
9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
- Change the quoted hostname in the section heading and the value for `hostname` within the section to the passive node's hostname, per the pattern you chose in step 7 above.
- Add a new key named `ipv4`, and set the value to the passive node's static IPv4 address.
- Add a new key-value pair, `replica = enabled`.
```shell
[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
[cluster "NEW PASSIVE NODE HOSTNAME"]
...
hostname = NEW PASSIVE NODE HOSTNAME
ipv4 = NEW PASSIVE NODE IPV4 ADDRESS
<strong>replica = enabled</strong>
...
...
```
10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
```shell
cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
```
11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
```shell
git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA-MYSQL-PRIMARY-HOSTNAME
git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA-REDIS-PRIMARY-HOSTNAME
```
{% warning %}
**Warning**: Review your cluster configuration file before proceeding.
- In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
- In each section for an active node named <code>[cluster "ACTIVE NODE HOSTNAME"]</code>, double-check the following key-value pairs.
- `datacenter` should match the value of `primary-datacenter` in the top-level `[cluster]` section.
- `consul-datacenter` should match the value of `datacenter`, which should be the same as the value for `primary-datacenter` in the top-level `[cluster]` section.
- Ensure that for each active node, the configuration has **one** corresponding section for **one** passive node with the same roles. In each section for a passive node, double-check each key-value pair.
- `datacenter` should match all other passive nodes.
- `consul-datacenter` should match all other passive nodes.
- `hostname` should match the hostname in the section heading.
- `ipv4` should match the node's unique, static IPv4 address.
- `replica` should be configured as `enabled`.
- Take the opportunity to remove sections for offline nodes that are no longer in use.
To review an example configuration, see "[Example configuration](#example-configuration)."
{% endwarning %}
13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}
```shell
ghe-cluster-config-init
```
14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.
```shell
Finished cluster initialization
@@ -231,33 +225,33 @@ Para ver um exemplo de configuração, confira "[Exemplo de configuração](#exa
{% data reusables.enterprise_clustering.configuration-finished %}
17. Configure a load balancer that will accept connections from users if you fail over to the passive nodes. For more information, see "[Cluster network configuration](/enterprise/admin/enterprise-management/cluster-network-configuration#configuring-a-load-balancer)."
You've finished configuring high availability replication for the nodes in your cluster. Each active node begins replicating configuration and data to its corresponding passive node, and you can direct traffic to the load balancer for the secondary datacenter in the event of a failure. For more information about failing over, see "[Initiating a failover to your replica cluster](/enterprise/admin/enterprise-management/initiating-a-failover-to-your-replica-cluster)."
### Example configuration
The top-level `[cluster]` configuration should look like the following example.
```shell
[cluster]
mysql-master = HOSTNAME-OF-ACTIVE-MYSQL-MASTER
redis-master = HOSTNAME-OF-ACTIVE-REDIS-MASTER
primary-datacenter = PRIMARY-DATACENTER-NAME
mysql-master-replica = HOSTNAME-OF-PASSIVE-MYSQL-MASTER
redis-master-replica = HOSTNAME-OF-PASSIVE-REDIS-MASTER
mysql-auto-failover = false
...
```
The configuration for an active node in your cluster's storage tier should look like the following example.
```shell
...
[cluster "<em>UNIQUE ACTIVE NODE HOSTNAME</em>"]
[cluster "UNIQUE ACTIVE NODE HOSTNAME"]
datacenter = default
hostname = UNIQUE-ACTIVE-NODE-HOSTNAME
ipv4 = IPV4-ADDRESS
consul-datacenter = default
consul-server = true
git-server = true
@@ -268,26 +262,26 @@ A configuração para um nó ativo no nível de armazenamento do seu grupo deve
memcache-server = true
metrics-server = true
storage-server = true
vpn = IPV4 ADDRESS SET AUTOMATICALLY
uuid = UUID SET AUTOMATICALLY
wireguard-pubkey = PUBLIC KEY SET AUTOMATICALLY
...
```
The configuration for the corresponding passive node in the storage tier should look like the following example.
- Important differences from the corresponding active node are **bold**.
- {% data variables.product.prodname_ghe_server %} assigns values for `vpn`, `uuid`, and `wireguard-pubkey` automatically, so you shouldn't define the values for passive nodes that you will initialize.
- The server roles, defined by `*-server` keys, match the corresponding active node.
```shell
...
<strong>[cluster "<em>UNIQUE PASSIVE NODE HOSTNAME</em>"]</strong>
<strong>[cluster "UNIQUE PASSIVE NODE HOSTNAME"]</strong>
<strong>replica = enabled</strong>
<strong>ipv4 = IPV4 ADDRESS OF NEW VM WITH IDENTICAL RESOURCES</strong>
<strong>datacenter = SECONDARY DATACENTER NAME</strong>
<strong>hostname = UNIQUE PASSIVE NODE HOSTNAME</strong>
<strong>consul-datacenter = SECONDARY DATACENTER NAME</strong>
consul-server = true
git-server = true
pages-server = true
@@ -297,73 +291,73 @@ A configuração para o nó passivo correspondente no nível de armazenamento de
memcache-server = true
metrics-server = true
storage-server = true
<strong>vpn = DO NOT DEFINE</strong>
<strong>uuid = DO NOT DEFINE</strong>
<strong>wireguard-pubkey = DO NOT DEFINE</strong>
...
```
## Monitoring replication between active and passive cluster nodes
Initial replication between the active and passive nodes in your cluster takes time. The amount of time depends on the amount of data to replicate and the activity levels for {% data variables.product.prodname_ghe_server %}.
You can monitor the progress on any node in the cluster, using command-line tools available via the {% data variables.product.prodname_ghe_server %} administrative shell. For more information about the administrative shell, see "[Accessing the administrative shell (SSH)](/enterprise/admin/configuration/accessing-the-administrative-shell-ssh)."
- Monitor replication of databases:
```
/usr/local/share/enterprise/ghe-cluster-status-mysql
```
- Monitor replication of repository and Gist data:
```
ghe-spokes status
```
- Monitor replication of attachment and LFS data:
```
ghe-storage replication-status
```
- Monitor replication of Pages data:
```
ghe-dpages replication-status
```
You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
## Reconfiguring high availability replication after a failover
After you fail over from the cluster's active nodes to the cluster's passive nodes, you can reconfigure high availability replication in two ways.
### Provisioning and configuring new passive nodes
After a failover, you can reconfigure high availability in two ways. The method you choose will depend on the reason that you failed over, and the state of the original active nodes.
1. Provision and configure a new set of passive nodes for each of the new active nodes in your secondary datacenter.
2. Use the old active nodes as the new passive nodes.
The process for reconfiguring high availability is identical to the initial configuration of high availability. For more information, see "[Creating a high availability replica for a cluster](#creating-a-high-availability-replica-for-a-cluster)."
## Disabling high availability replication for a cluster
You can stop replication to the passive nodes for your cluster deployment of {% data variables.product.prodname_ghe_server %}.
{% data reusables.enterprise_clustering.ssh-to-a-node %}
{% data reusables.enterprise_clustering.open-configuration-file %}
3. In the top-level `[cluster]` section, delete the `redis-master-replica`, and `mysql-master-replica` key-value pairs.
4. Delete each section for a passive node. For passive nodes, `replica` is configured as `enabled`.
{% data reusables.enterprise_clustering.apply-configuration %}
{% data reusables.enterprise_clustering.configuration-finished %}
After {% data variables.product.prodname_ghe_server %} returns you to the prompt, you've finished disabling high availability replication.

View File

@@ -1,6 +1,6 @@
---
title: Creating a high availability replica
intro: 'In an active/passive configuration, the replica appliance is a redundant copy of the primary appliance. If the primary appliance fails, high availability mode allows the replica to act as the primary appliance, allowing minimal service disruption.'
redirect_from:
- /enterprise/admin/installation/creating-a-high-availability-replica
- /enterprise/admin/enterprise-management/creating-a-high-availability-replica
@@ -13,94 +13,92 @@ topics:
- High availability
- Infrastructure
shortTitle: Create HA replica
---
{% data reusables.enterprise_installation.replica-limit %}
## <a name="creating-a-high-availability-replica"></a>Criar réplica de alta disponibilidade
## Creating a high availability replica
1. Set up a new {% data variables.product.prodname_ghe_server %} appliance on your desired platform. The replica appliance should mirror the primary appliance's CPU, RAM, and storage settings. We recommend that you install the replica appliance in an independent environment. The underlying hardware, software, and network components should be isolated from those of the primary appliance. If you are using a cloud provider, use a separate region or zone. For more information, see ["Setting up a {% data variables.product.prodname_ghe_server %} instance"](/enterprise/admin/guides/installation/setting-up-a-github-enterprise-server-instance).
1. Ensure that the new appliance can communicate with all other appliances in this high availability environment over ports 122/TCP and 1194/UDP. For more information, see "[Network ports](/admin/configuration/configuring-network-settings/network-ports#administrative-ports)."
1. In a browser, navigate to the new replica appliance's IP address and upload your {% data variables.product.prodname_enterprise %} license.
{% data reusables.enterprise_installation.replica-steps %}
1. Connect to the replica appliance's IP address using SSH.
```shell
$ ssh -p 122 admin@REPLICA_IP
```
{% data reusables.enterprise_installation.generate-replication-key-pair %}
{% data reusables.enterprise_installation.add-ssh-key-to-primary %}
1. To verify the connection to the primary and enable replica mode for the new replica, run `ghe-repl-setup` again.
```shell
$ ghe-repl-setup PRIMARY_IP
```
{% data reusables.enterprise_installation.replication-command %}
{% data reusables.enterprise_installation.verify-replication-channel %}
## <a name="creating-geo-replication-replicas"></a>Criar réplicas com replicação geográfica
## Creating geo-replication replicas
This example configuration uses a primary and two replicas, which are located in three different geographic regions. While the three nodes can be in different networks, all nodes are required to be reachable from all the other nodes. At the minimum, the required administrative ports should be open to all the other nodes. For more information about the port requirements, see "[Network Ports](/enterprise/admin/guides/installation/network-ports/#administrative-ports)."
{% data reusables.enterprise_clustering.network-latency %}{% ifversion ghes > 3.2 %} If latency is more than 70 milliseconds, we recommend cache replica nodes instead. For more information, see "[Configuring a repository cache](/admin/enterprise-management/caching-repositories/configuring-a-repository-cache)."{% endif %}
1. Create the first replica the same way you would for a standard two node configuration by running `ghe-repl-setup` on the first replica.
```shell
(replica1)$ ghe-repl-setup PRIMARY_IP
(replica1)$ ghe-repl-start
```
2. Create a second replica and use the `ghe-repl-setup --add` command. The `--add` flag prevents it from overwriting the existing replication configuration and adds the new replica to the configuration.
```shell
(replica2)$ ghe-repl-setup --add PRIMARY_IP
(replica2)$ ghe-repl-start
```
3. By default, replicas are configured to the same datacenter, and will now attempt to seed from an existing node in the same datacenter. Configure the replicas for different datacenters by setting a different value for the datacenter option. The specific values can be anything you would like as long as they are different from each other. Run the `ghe-repl-node` command on each node and specify the datacenter.
On the primary:
```shell
(primary)$ ghe-repl-node --datacenter [PRIMARY DC NAME]
```
On the first replica:
```shell
(replica1)$ ghe-repl-node --datacenter [FIRST REPLICA DC NAME]
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --datacenter [SECOND REPLICA DC NAME]
```
{% tip %}
**Tip:** You can set the `--datacenter` and `--active` options at the same time.
{% endtip %}
4. An active replica node will store copies of the appliance data and service end user requests. An inactive node will store copies of the appliance data but will be unable to service end user requests. Enable active mode using the `--active` flag or inactive mode using the `--inactive` flag.
On the first replica:
```shell
(replica1)$ ghe-repl-node --active
```
On the second replica:
```shell
(replica2)$ ghe-repl-node --active
```
5. To apply the configuration, use the `ghe-config-apply` command on the primary.
```shell
(primary)$ ghe-config-apply
```
## <a name="configuring-dns-for-geo-replication"></a>Configurar DNS de localização geográfica
## Configuring DNS for geo-replication
Configure Geo DNS using the IP addresses of the primary and replica nodes. You can also create a DNS CNAME for the primary node (e.g. `primary.github.example.com`) to access the primary node via SSH or to back it up via `backup-utils`.
For testing, you can add entries to the local workstation's `hosts` file (for example, `/etc/hosts`). These example entries will resolve requests for `HOSTNAME` to `replica2`. You can target specific hosts by commenting out different lines.
```
# <primary IP> HOSTNAME
# <replica1 IP> HOSTNAME
<replica2 IP> HOSTNAME
```
## <a name="further-reading"></a>Leitura adicional
## Further reading
- "[Sobre a configuração de alta disponibilidade](/enterprise/admin/guides/installation/about-high-availability-configuration)"
- "[Utilitários para o gerenciamento de replicações](/enterprise/admin/guides/installation/about-high-availability-configuration/#utilities-for-replication-management)"
- "[Sobre a replicação geográfica](/enterprise/admin/guides/installation/about-geo-replication/)"
- "[About high availability configuration](/enterprise/admin/guides/installation/about-high-availability-configuration)"
- "[Utilities for replication management](/enterprise/admin/guides/installation/about-high-availability-configuration/#utilities-for-replication-management)"
- "[About geo-replication](/enterprise/admin/guides/installation/about-geo-replication/)"

View File

@@ -36,7 +36,9 @@ When you use external authentication, {% data variables.location.product_locatio
If you use an enterprise with {% data variables.product.prodname_emus %}, members of your enterprise authenticate to access {% data variables.product.prodname_dotcom %} through your SAML identity provider (IdP). For more information, see "[About {% data variables.product.prodname_emus %}](/admin/identity-and-access-management/using-enterprise-managed-users-and-saml-for-iam/about-enterprise-managed-users)" and "[About authentication for your enterprise](/admin/identity-and-access-management/managing-iam-for-your-enterprise/about-authentication-for-your-enterprise#authentication-methods-for-github-enterprise-server)."
{% data variables.product.product_name %} automatically creates a username for each person when their user account is provisioned via SCIM, by normalizing an identifier provided by your IdP. If multiple identifiers are normalized into the same username, a username conflict occurs, and only the first user account is created. {% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %} You can resolve username conflicts by making a change in your IdP so that the normalized usernames will be unique.
{% data variables.product.prodname_dotcom %} automatically creates a username for each person when their user account is provisioned via SCIM, by normalizing an identifier provided by your IdP, then adding an underscore and short code. If multiple identifiers are normalized into the same username, a username conflict occurs, and only the first user account is created. You can resolve username problems by making a change in your IdP so that the normalized usernames will be unique and within the 39-character limit.
{% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %}
{% elsif ghae %}
@@ -62,7 +64,7 @@ These rules may result in your IdP providing the same _IDP-USERNAME_ for multipl
- `bob@fabrikam.com`
- `bob#EXT#fabrikamcom@contoso.com`
This will cause a username conflict, and only the first user will be provisioned. For more information, see "[Resolving username conflicts](#resolving-username-conflicts)."
This will cause a username conflict, and only the first user will be provisioned. For more information, see "[Resolving username problems](#resolving-username-problems)."
{% endif %}
Usernames{% ifversion ghec %}, including underscore and short code,{% endif %} must not exceed 39 characters.
@@ -83,7 +85,7 @@ When you configure SAML authentication, {% data variables.product.product_name %
1. Usernames created from email addresses are created from the normalized characters that precede the `@` character.
1. If multiple accounts are normalized into the same {% data variables.product.product_name %} username, only the first user account is created. Subsequent users with the same username won't be able to sign in. {% ifversion ghec %}For more information, see "[Resolving username conflicts](#resolving-username-conflicts)."{% endif %}
1. If multiple accounts are normalized into the same {% data variables.product.product_name %} username, only the first user account is created. Subsequent users with the same username won't be able to sign in. {% ifversion ghec %}For more information, see "[Resolving username problems](#resolving-username-problems)."{% endif %}
### Examples of username normalization
@@ -121,11 +123,16 @@ When you configure SAML authentication, {% data variables.product.product_name %
{% endif %}
{% ifversion ghec %}
## Resolving username conflicts
## Resolving username problems
When a new user is being provisioned, if the user's normalized username conflicts with an existing user in the enterprise, the provisioning attempt will fail with a `409` error.
When a new user is being provisioned, if the username is longer than 39 characters (including underscore and short code), or conflicts with an existing user in the enterprise, the provisioning attempt will fail with a `409` error.
To resolve this problem, you must make a change in your IdP so that the normalized usernames will be unique. If you cannot change the identifier that's being normalized, you can change the attribute mapping for the `userName` attribute. If you change the attribute mapping, usernames of existing {% data variables.enterprise.prodname_managed_users %} will be updated, but nothing else about the accounts will change, including activity history.
To resolve this problem, you must make one of the following changes in your IdP so that all normalized usernames will be within the character limit and unique.
- Change the `userName` attribute value for individual users that are causing problems
- Change the `userName` attribute mapping for all users
- Configure a custom `userName` attribute for all users
When you change the attribute mapping, usernames of existing {% data variables.enterprise.prodname_managed_users %} will be updated, but nothing else about the accounts will change, including activity history.
{% note %}
@@ -133,9 +140,9 @@ To resolve this problem, you must make a change in your IdP so that the normaliz
{% endnote %}
### Resolving username conflicts with Azure AD
### Resolving username problems with Azure AD
To resolve username conflicts in Azure AD, either modify the User Principal Name value for the conflicting user or modify the attribute mapping for the `userName` attribute. If you modify the attribute mapping, you can choose an existing attribute or use an expression to ensure that all provisioned users have a unique normalized alias.
To resolve username problems in Azure AD, either modify the User Principal Name value for the conflicting user or modify the attribute mapping for the `userName` attribute. If you modify the attribute mapping, you can choose an existing attribute or use an expression to ensure that all provisioned users have a unique normalized alias.
1. In Azure AD, open the {% data variables.product.prodname_emu_idp_application %} application.
1. In the left sidebar, click **Provisioning**.
@@ -146,9 +153,9 @@ To resolve username conflicts in Azure AD, either modify the User Principal Name
- To map an existing attribute in Azure AD to the `userName` attribute in {% data variables.product.prodname_dotcom %}, click your desired attribute field. Then, save and wait for a provisioning cycle to occur within about 40 minutes.
- To use an expression instead of an existing attribute, change the Mapping type to "Expression", then add a custom expression that will make this value unique for all users. For example, you could use `[FIRST NAME]-[LAST NAME]-[EMPLOYEE ID]`. For more information, see [Reference for writing expressions for attribute mappings in Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/functions-for-customizing-application-data) in Microsoft Docs.
### Resolving username conflicts with Okta
### Resolving username problems with Okta
To resolve username conflicts in Okta, update the attribute mapping settings for the {% data variables.product.prodname_emu_idp_application %} application.
To resolve username problems in Okta, update the attribute mapping settings for the {% data variables.product.prodname_emu_idp_application %} application.
1. In Okta, open the {% data variables.product.prodname_emu_idp_application %} application.
1. Click **Sign On**.

View File

@@ -136,7 +136,9 @@ By default, when an unauthenticated user attempts to access an enterprise that u
{% data variables.product.product_name %} automatically creates a username for each person by normalizing an identifier provided by your IdP. For more information, see "[Username considerations for external authentication](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication)."
A conflict may occur when provisioning users if the unique parts of the identifier provided by your IdP are removed during normalization. {% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %} If you're unable to provision a user due to a username conflict, you should modify the username provided by your IdP. For more information, see "[Resolving username conflicts](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication#resolving-username-conflicts)."
A conflict may occur when provisioning users if the unique parts of the identifier provided by your IdP are removed during normalization. If you're unable to provision a user due to a username conflict, you should modify the username provided by your IdP. For more information, see "[Resolving username problems](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication#resolving-username-problems)."
{% data reusables.enterprise-accounts.emu-only-emails-within-the-enterprise-can-conflict %}
The profile name and email address of a {% data variables.enterprise.prodname_managed_user %} is also provided by the IdP. {% data variables.enterprise.prodname_managed_users_caps %} cannot change their profile name or email address on {% data variables.product.prodname_dotcom %}, and the IdP can only provide a single email address.

View File

@@ -168,9 +168,19 @@ By default, when you create a new enterprise, workflows are not allowed to creat
{% data reusables.actions.cache-default-size %} {% data reusables.actions.cache-eviction-process %}
However, you can set an enterprise policy to customize both the default total cache size for each repository, as well as the maximum total cache size allowed for a repository. For example, you might want the default total cache size for each repository to be 5 GB, but also allow repository administrators to configure a total cache size up to 15 GB if necessary.
However, you can set an enterprise policy to customize both the default total cache size for each repository, as well as the maximum total cache size allowed for a repository. For example, you might want the default total cache size for each repository to be 5 GB, but also allow {% ifversion actions-cache-admin-ui %}organization owners and{% endif %} repository administrators to configure a total cache size up to 15 GB if necessary.
People with admin access to a repository can set a total cache size for their repository up to the maximum cache size allowed by the enterprise policy setting.
{% ifversion actions-cache-admin-ui %}Organization owners can set a lower total cache size that applies to each repository in their organization. {% endif %}People with admin access to a repository can set a total cache size for their repository up to the maximum cache size allowed by the enterprise {% ifversion actions-cache-admin-ui %}or organization{% endif %} policy setting.
{% ifversion actions-cache-admin-ui %}
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.policies-tab %}
{% data reusables.enterprise-accounts.actions-tab %}
1. In the "Artifact, cache and log settings" section, under **Maximum cache size limit**, enter a value, then click **Save** to apply the setting.
1. In the "Artifact, cache and log settings" section, under **Default cache size limit**, enter a value, then click **Save** to apply the setting.
{% else %}
The policy settings for {% data variables.product.prodname_actions %} cache storage can currently only be modified using the REST API:
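As a sketch, the values from the example above (a 5 GB default and a 15 GB maximum) could be applied with `curl`. The endpoint path and field names shown here are assumptions; confirm them against the REST API reference before use:

```shell
# Hypothetical sketch: set the default and maximum Actions cache sizes (in GB)
# for repositories in an enterprise. ENTERPRISE is a placeholder, and the
# field names are assumptions; on GitHub Enterprise Server, replace
# api.github.com with your instance's API URL.
curl -X PATCH \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/enterprises/ENTERPRISE/actions/cache/usage-policy \
  -d '{"repo_cache_size_limit_in_gb": 5, "max_repo_cache_size_limit_in_gb": 15}'
```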
@@ -180,3 +190,5 @@ The policy settings for {% data variables.product.prodname_actions %} cache stor
{% data reusables.actions.cache-no-org-policy %}
{% endif %}
{% endif %}

View File

@@ -125,7 +125,7 @@ Before adding a new SSH key to the ssh-agent to manage your keys, you should hav
* Open your `~/.ssh/config` file, then modify the file to contain the following lines. If your SSH key file has a different name or path than the example code, modify the filename or path to match your current setup.
```
Host *
Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_{% ifversion ghae %}ecdsa{% else %}ed25519{% endif %}
@@ -137,10 +137,10 @@ Before adding a new SSH key to the ssh-agent to manage your keys, you should hav
- If you chose not to add a passphrase to your key, you should omit the `UseKeychain` line.
- If you see a `Bad configuration option: usekeychain` error, add an additional line to the configuration's `Host *` section.
- If you see a `Bad configuration option: usekeychain` error, add an additional line to the configuration's `Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}` section.
```
Host *
Host *.{% ifversion ghes or ghae %}HOSTNAME{% else %}github.com{% endif %}
IgnoreUnknown UseKeychain
```
{% endnote %}

View File

@@ -35,8 +35,6 @@ When you create a {% data variables.product.pat_generic %}, we recommend that yo
If a valid OAuth token, {% data variables.product.prodname_github_app %} token, or {% data variables.product.pat_generic %} is pushed to a public repository or public gist, the token will be automatically revoked.
OAuth tokens and personal {% data variables.product.pat_v1_plural %} pushed to public repositories and public gists will only be revoked if the token has scopes.{% ifversion pat-v2 %} {% data variables.product.pat_v2_caps %}s will always be revoked.{% endif %}
{% endif %}
{% ifversion fpt or ghec %}

View File

@@ -860,7 +860,7 @@ registries:
The `npm-registry` type supports username and password, or token.
When using username and password, your `.npmrc`'s auth token may contain a `base64` encoded `_password`; however, the password referenced in your {% data variables.product.prodname_dependabot %} configuration file must be the original (unencoded) password.
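If your `.npmrc` only contains the encoded value, you can recover the original password before storing it as a {% data variables.product.prodname_dependabot %} secret. A quick sketch follows; `BASE64-PASSWORD` is a placeholder for the encoded string:

```shell
# Decode the base64-encoded _password from .npmrc to get the original password.
echo 'BASE64-PASSWORD' | base64 --decode
```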
{% raw %}
```yaml
@@ -882,6 +882,8 @@ registries:
token: ${{secrets.MY_GITHUB_PERSONAL_TOKEN}}
```
{% endraw %}
{% ifversion dependabot-yarn-v3-update %}
For security reasons, {% data variables.product.prodname_dependabot %} does not set environment variables. Yarn (v2 and later) requires that any accessed environment variables are set. When accessing environment variables in your `.yarnrc.yml` file, you should provide a fallback value such as {% raw %}`${ENV_VAR-fallback}`{% endraw %} or {% raw %}`${ENV_VAR:-fallback}`{% endraw %}. For more information, see [Yarnrc files](https://yarnpkg.com/configuration/yarnrc) in the Yarn documentation.{% endif %}
### `nuget-feed`

View File

@@ -1,6 +1,6 @@
---
title: Protecting pushes with secret scanning
intro: 'You can use {% data variables.product.prodname_secret_scanning %} to prevent supported secrets from being pushed into your organization or repository by enabling push protection.'
intro: 'You can use {% data variables.product.prodname_secret_scanning %} to prevent supported secrets from being pushed into your {% ifversion secret-scanning-enterprise-level %}enterprise,{% endif %} organization{% ifversion secret-scanning-enterprise-level %},{% endif %} or repository by enabling push protection.'
product: '{% data reusables.gated-features.secret-scanning %}'
miniTocMaxHeadingLevel: 3
versions:
@@ -34,10 +34,18 @@ For information on the secrets and service providers supported for push protecti
## Enabling {% data variables.product.prodname_secret_scanning %} as a push protection
For you to use {% data variables.product.prodname_secret_scanning %} as a push protection, the organization or repository needs to have both {% data variables.product.prodname_GH_advanced_security %} and {% data variables.product.prodname_secret_scanning %} enabled. For more information, see "[Managing security and analysis settings for your organization](/organizations/keeping-your-organization-secure/managing-security-and-analysis-settings-for-your-organization)," "[Managing security and analysis settings for your repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository)," and "[About {% data variables.product.prodname_GH_advanced_security %}](/get-started/learning-about-github/about-github-advanced-security)."
For you to use {% data variables.product.prodname_secret_scanning %} as a push protection, the {% ifversion secret-scanning-enterprise-level %}enterprise,{% endif %} organization{% ifversion secret-scanning-enterprise-level %},{% endif %} or repository needs to have both {% data variables.product.prodname_GH_advanced_security %} and {% data variables.product.prodname_secret_scanning %} enabled. For more information, see {% ifversion secret-scanning-enterprise-level %}"[Managing security and analysis settings for your enterprise](/admin/code-security/managing-github-advanced-security-for-your-enterprise/managing-github-advanced-security-features-for-your-enterprise),"{% endif %} "[Managing security and analysis settings for your organization](/organizations/keeping-your-organization-secure/managing-security-and-analysis-settings-for-your-organization)," "[Managing security and analysis settings for your repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository)," and "[About {% data variables.product.prodname_GH_advanced_security %}](/get-started/learning-about-github/about-github-advanced-security)."
Organization owners, security managers, and repository administrators can enable push protection for {% data variables.product.prodname_secret_scanning %} via the UI and API. For more information, see "[Repositories](/rest/reference/repos#update-a-repository)" and expand the "Properties of the `security_and_analysis` object" section in the REST API documentation.
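For example, a minimal sketch using `curl` and the "Update a repository" endpoint; `OWNER` and `REPO` are placeholders, and the token needs admin access to the repository:

```shell
# Enable secret scanning push protection for a single repository.
curl -X PATCH \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  https://api.github.com/repos/OWNER/REPO \
  -d '{"security_and_analysis": {"secret_scanning_push_protection": {"status": "enabled"}}}'
```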
{% ifversion secret-scanning-enterprise-level %}
### Enabling {% data variables.product.prodname_secret_scanning %} as a push protection for your enterprise
{% data reusables.enterprise-accounts.access-enterprise %}
{% data reusables.enterprise-accounts.settings-tab %}
1. In the left sidebar, click **Code security and analysis**.
{% data reusables.advanced-security.secret-scanning-push-protection-enterprise %}
{% endif %}
### Enabling {% data variables.product.prodname_secret_scanning %} as a push protection for an organization
{% data reusables.organizations.navigate-to-org %}
@@ -64,8 +72,6 @@ Up to five detected secrets will be displayed at a time on the command line. If
Organization admins can provide a custom link that will be displayed when a push is blocked. This custom link can contain organization-specific resources and advice, such as directions on using a recommended secrets vault or who to contact for questions relating to the blocked secret.
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
![Screenshot showing that a push is blocked when a user attempts to push a secret to a repository](/assets/images/help/repository/secret-scanning-push-protection-with-custom-link.png)
{% else %}
@@ -104,9 +110,6 @@ If {% data variables.product.prodname_dotcom %} blocks a secret that you believe
{% ifversion push-protection-custom-link-orgs %}
Organization admins can provide a custom link that will be displayed when a push is blocked. This custom link can contain resources and advice specific to your organization. For example, the custom link can point to a README file with information about the organization's secret vault, which teams and individuals to escalate questions to, or the organization's approved policy for working with secrets and rewriting commit history.
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
{% endif %}
You can remove the secret from the file using the web UI. Once you remove the secret, the banner at the top of the page will change and tell you that you can now commit your changes.

View File

@@ -30,9 +30,6 @@ If {% data variables.product.prodname_dotcom %} blocks a secret that you believe
{% ifversion push-protection-custom-link-orgs %}
Organization admins can provide a custom link that will be included in the message from {% data variables.product.product_name %} when your push is blocked. This custom link can contain resources and advice specific to your organization and its policies.
{% ifversion push-protection-custom-link-orgs-beta %}{% data reusables.advanced-security.custom-link-beta %}{% endif %}
{% endif %}
## Resolving a blocked push on the command line

View File

@@ -11,6 +11,7 @@ topics:
- Codespaces
children:
- /personalizing-github-codespaces-for-your-account
- /renaming-a-codespace
- /changing-the-machine-type-for-your-codespace
- /setting-your-default-editor-for-github-codespaces
- /setting-your-default-region-for-github-codespaces

View File

@@ -59,6 +59,8 @@ In the example `postCreate.sh` file below, the contents of the `config` director
ln -sf $PWD/.devcontainer/config $HOME/config && set +x
```
For more information, see "[Introduction to dev containers](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers#applying-configuration-changes-to-a-codespace)."
## Stopping a codespace
{% data reusables.codespaces.stopping-a-codespace %} For more information, see "[Stopping and starting a codespace](/codespaces/developing-in-codespaces/stopping-and-starting-a-codespace)."

View File

@@ -16,7 +16,6 @@ children:
- /using-source-control-in-your-codespace
- /using-github-codespaces-for-pull-requests
- /stopping-and-starting-a-codespace
- /renaming-a-codespace
- /forwarding-ports-in-your-codespace
- /default-environment-variables-for-your-codespace
- /connecting-to-a-private-network

View File

@@ -1,56 +0,0 @@
---
title: Renaming a codespace
intro: You can use the {% data variables.product.prodname_cli %} to change the codespace display name to one of your choice.
product: '{% data reusables.gated-features.codespaces %}'
versions:
fpt: '*'
ghec: '*'
type: how_to
topics:
- Codespaces
- Fundamentals
- Developer
shortTitle: Rename a codespace
---
## About renaming a codespace
Each codespace is assigned an auto-generated display name. If you have multiple codespaces, the display name helps you to differentiate between codespaces. For example: `literate space parakeet`. You can change the display name for your codespace.
To find the display name of a codespace:
- On {% data variables.product.product_name %}, view your list of codespaces at https://github.com/codespaces.
![Screenshot of the list of codespaces in GitHub](/assets/images/help/codespaces/codespaces-list-display-name.png)
- In the {% data variables.product.prodname_vscode %} desktop application, or the {% data variables.product.prodname_vscode_shortname %} web client, click the Remote Explorer. The display name is shown below the repository name. For example: `symmetrical space telegram` in the screenshot below.
![Screenshot of the Remote Explorer in VS Code](/assets/images/help/codespaces/codespaces-remote-explorer.png)
{% indented_data_reference reusables.codespaces.remote-explorer spaces=2 %}
- In a terminal window on your local machine, use this {% data variables.product.prodname_cli %} command: `gh codespace list`.
### Permanent codespace names
In addition to the display name, when you create a codespace, a permanent name is also assigned to the codespace. The name is a combination of your {% data variables.product.company_short %} handle, the repository name, and some random characters. For example: `octocat-myrepo-gmc7`. You can't change this name.
To find the permanent name of a codespace:
* On {% data variables.product.product_name %}, the permanent name is shown in a pop-up when you hover over the **Open in browser** option on https://github.com/codespaces.
![Screenshot of the codespace name shown on hover over](/assets/images/help/codespaces/find-codespace-name-github.png)
* In a codespace, use this command in the terminal: `echo $CODESPACE_NAME`.
* In a terminal window on your local machine, use this {% data variables.product.prodname_cli %} command: `gh codespace list`.
## Renaming a codespace
Changing the display name of a codespace can be useful if you have multiple codespaces that you will be using for an extended period. An appropriate name helps you identify a codespace that you use for a particular purpose. You can change the display name for your codespace by using the {% data variables.product.prodname_cli %}.
To rename a codespace, use the `gh codespace edit` subcommand:
```shell
gh codespace edit -c PERMANENT-NAME-OF-CODESPACE -d NEW-DISPLAY-NAME
```
In this example, replace `PERMANENT-NAME-OF-CODESPACE` with the permanent name of the codespace. Replace `NEW-DISPLAY-NAME` with the desired display name.
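For example, using the illustrative permanent name from earlier in this article:

```shell
gh codespace edit -c octocat-myrepo-gmc7 -d "octo-project scratch space"
```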

View File

@@ -6,6 +6,7 @@ product: '{% data reusables.gated-features.codespaces %}'
miniTocMaxHeadingLevel: 3
versions:
fpt: '*'
ghec: '*'
type: how_to
topics:
- Codespaces
@@ -24,6 +25,7 @@ You can work with {% data variables.product.prodname_github_codespaces %} in the
- [Create a new codespace](#create-a-new-codespace)
- [Stop a codespace](#stop-a-codespace)
- [Delete a codespace](#delete-a-codespace)
- [Rename a codespace](#rename-a-codespace)
- [SSH into a codespace](#ssh-into-a-codespace)
- [Open a codespace in {% data variables.product.prodname_vscode %}](#open-a-codespace-in--data-variablesproductprodname_vscode-)
- [Open a codespace in JupyterLab](#open-a-codespace-in-jupyterlab)
@@ -74,6 +76,8 @@ gh codespace list
The list includes the unique name of each codespace, which you can use in other `gh codespace` commands.
An asterisk at the end of the branch name for a codespace indicates that there are uncommitted or unpushed changes in that codespace.
### Create a new codespace
```shell
@@ -98,6 +102,14 @@ gh codespace delete -c CODESPACE-NAME
For more information, see "[Deleting a codespace](/codespaces/developing-in-codespaces/deleting-a-codespace)."
### Rename a codespace
```shell
gh codespace edit -c CODESPACE-NAME -d DISPLAY-NAME
```
For more information, see "[Renaming a codespace](/codespaces/customizing-your-codespace/renaming-a-codespace)."
### SSH into a codespace
To run commands on the remote codespace machine, from your terminal, you can SSH into the codespace.
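For example, a minimal sketch; `CODESPACE-NAME` is a placeholder for the codespace's permanent name:

```shell
gh codespace ssh -c CODESPACE-NAME
```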
@@ -215,4 +227,4 @@ You can use the {% data variables.product.prodname_cli %} extension to create a
gh codespace edit -m MACHINE-TYPE-NAME
```
For more information, see the "{% data variables.product.prodname_cli %}" tab of "[Changing the machine type for your codespace](/codespaces/customizing-your-codespace/changing-the-machine-type-for-your-codespace)."

View File

@@ -35,7 +35,7 @@ When you create a codespace, a [shallow clone](https://github.blog/2020-12-21-ge
### Step 2: Container is created
{% data variables.product.prodname_github_codespaces %} uses a container as the development environment. This container is created based on the configurations that you can define in a `devcontainer.json` file and/or Dockerfile in your repository. If you don't [configure a container](/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project), {% data variables.product.prodname_github_codespaces %} uses a [default image](/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project#using-the-default-configuration), which has many languages and runtimes available. For information on what the default image contains, see the [`vscode-dev-containers`](https://github.com/microsoft/vscode-dev-containers/tree/main/containers/codespaces-linux) repository.
{% data variables.product.prodname_github_codespaces %} uses a container as the development environment. This container is created based on the configurations that you can define in a `devcontainer.json` file and/or Dockerfile in your repository. If you don't specify a custom Docker image in your configuration, {% data variables.product.prodname_codespaces %} uses a default image, which has many languages and runtimes available. For information, see "[Introduction to dev containers](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers#using-the-default-dev-container-configuration)." For details of what the default image contains, see the [`vscode-dev-containers`](https://github.com/microsoft/vscode-dev-containers/tree/main/containers/codespaces-linux) repository.
{% note %}

View File

@@ -92,11 +92,10 @@ Within a codespace, you have access to the {% data variables.product.prodname_vs
1. In the left sidebar, click the Extensions icon.
1. In the search bar, enter `fairyfloss` and install the fairyfloss extension.
1. In the search bar, type `fairyfloss` and click **Install**.
![Add an extension](/assets/images/help/codespaces/add-extension.png)
1. Click **Install in Codespaces**.
1. Select the `fairyfloss` theme by selecting it from the list.
![Select the fairyfloss theme](/assets/images/help/codespaces/fairyfloss.png)

View File

@@ -44,7 +44,8 @@ includeGuides:
- /codespaces/managing-codespaces-for-your-organization/managing-billing-for-codespaces-in-your-organization
- /codespaces/managing-codespaces-for-your-organization/managing-encrypted-secrets-for-your-repository-and-organization-for-codespaces
- /codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types
- /codespaces/managing-codespaces-for-your-organization/retricting-the-idle-timeout-period
- /codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces
- /codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period
- /codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces
- /codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports
- /codespaces/managing-codespaces-for-your-organization/reviewing-your-organizations-audit-logs-for-codespaces

View File

@@ -16,6 +16,7 @@ children:
- /managing-repository-access-for-your-organizations-codespaces
- /reviewing-your-organizations-audit-logs-for-github-codespaces
- /restricting-access-to-machine-types
- /restricting-the-base-image-for-codespaces
- /restricting-the-visibility-of-forwarded-ports
- /restricting-the-idle-timeout-period
- /restricting-the-retention-period-for-codespaces

View File

@@ -14,7 +14,9 @@ topics:
## Overview
Typically, when you create a codespace you are offered a choice of specifications for the machine that will run your codespace. You can choose the machine type that best suits your needs. For more information, see "[Creating a codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)." If you pay for using {% data variables.product.prodname_github_codespaces %} then your choice of machine type will affect how much you are billed. For more information about pricing, see "[About billing for {% data variables.product.prodname_github_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces)."
Typically, when you create a codespace you are offered a choice of specifications for the machine that will run your codespace. You can choose the machine type that best suits your needs. For more information, see "[Creating a codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)."
If you pay for using {% data variables.product.prodname_github_codespaces %} then your choice of machine type will affect how much you are billed. The compute cost for a codespace is proportional to the number of processor cores in the machine type you choose. For example, the compute cost of using a codespace for an hour on a 16-core machine is eight times that of a 2-core machine. For more information about pricing, see "[About billing for {% data variables.product.prodname_github_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces)."
As an organization owner, you may want to configure constraints on the types of machine that are available. For example, if the work in your organization doesn't require significant compute power or storage space, you can remove the highly resourced machines from the list of options that people can choose from. You do this by defining one or more policies in the {% data variables.product.prodname_github_codespaces %} settings for your organization.
@@ -52,21 +54,29 @@ If you add an organization-wide policy, you should set it to the largest choice
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Machine types**.
![Add a constraint for machine types](/assets/images/help/codespaces/add-constraint-dropdown.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint, then clear the selection of any machine types that you don't want to be available.
![Edit the machine type constraint](/assets/images/help/codespaces/edit-machine-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-machine-constraint.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are billable to your organization. The machine type constraint is also applied to existing codespaces when someone attempts to restart a stopped codespace or reconnect to an active codespace.
## Editing a policy
You can edit an existing policy. For example, you may want to add or remove constraints to or from a policy.
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the available machine types](#adding-a-policy-to-limit-the-available-machine-types)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Machine types" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -74,7 +84,7 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the available machine types](#adding-a-policy-to-limit-the-available-machine-types)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
## Further reading

View File

@@ -49,21 +49,25 @@ If you add an organization-wide policy with a timeout constraint, you should set
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Maximum idle timeout**.
![Add a constraint for idle timeout](/assets/images/help/codespaces/add-constraint-dropdown-timeout.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-timeout.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the timeout constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
1. Enter the maximum number of minutes codespaces can remain inactive before they time out, then click **Save**.
![Set the maximum timeout in minutes](/assets/images/help/codespaces/maximum-minutes-timeout.png)
![Screenshot of setting the maximum timeout in minutes](/assets/images/help/codespaces/maximum-minutes-timeout.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are created, and to existing codespaces the next time they are started.
The policy will be applied to all new codespaces that are billable to your organization. The timeout constraint is also applied to existing codespaces the next time they are started.
## Editing a policy
@@ -71,6 +75,7 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum idle timeout period](#adding-a-policy-to-set-a-maximum-idle-timeout-period)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Maximum idle timeout" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -78,4 +83,4 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum idle timeout period](#adding-a-policy-to-set-a-maximum-idle-timeout-period)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -35,15 +35,15 @@ If you add an organization-wide policy with a retention constraint, you should s
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Retention period**.
![Add a constraint for retention periods](/assets/images/help/codespaces/add-constraint-dropdown-retention.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-retention.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the timeout constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-timeout-constraint.png)
1. Enter the maximum number of days codespaces can remain stopped before they are automatically deleted, then click **Save**.
![Set the retention period in days](/assets/images/help/codespaces/maximum-days-retention.png)
![Screenshot of setting the retention period in days](/assets/images/help/codespaces/maximum-days-retention.png)
{% note %}
@@ -55,10 +55,14 @@ If you add an organization-wide policy with a retention constraint, you should s
{% endnote %}
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)," and "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the visibility of forwarded ports](/codespaces/managing-codespaces-for-your-organization/restricting-the-visibility-of-forwarded-ports)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are created.
The policy will be applied to all new codespaces that are billable to your organization. The retention period constraint is only applied on codespace creation.
## Editing a policy
@@ -68,6 +72,7 @@ The retention period constraint is only applied to codespaces when they are crea
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum codespace retention period](#adding-a-policy-to-set-a-maximum-codespace-retention-period)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Retention period" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -77,4 +82,4 @@ You can delete a policy at any time. Deleting a policy has no effect on existing
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to set a maximum codespace retention period](#adding-a-policy-to-set-a-maximum-codespace-retention-period)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -45,25 +45,33 @@ If you add an organization-wide policy, you should set it to the most lenient vi
{% data reusables.codespaces.codespaces-org-policies %}
1. Click **Add constraint** and choose **Port visibility**.
![Add a constraint for port visibility](/assets/images/help/codespaces/add-constraint-dropdown-ports.png)
![Screenshot of the 'Add constraint' dropdown menu](/assets/images/help/codespaces/add-constraint-dropdown-ports.png)
1. Click {% octicon "pencil" aria-label="The edit icon" %} to edit the constraint.
![Edit the port visibility constraint](/assets/images/help/codespaces/edit-port-visibility-constraint.png)
![Screenshot of the pencil icon for editing the constraint](/assets/images/help/codespaces/edit-port-visibility-constraint.png)
1. Clear the selection of the port visibility options (**Org** or **Public**) that you don't want to be available.
![Choose the port visibility options](/assets/images/help/codespaces/choose-port-visibility-options.png)
![Screenshot of clearing a port visibility option](/assets/images/help/codespaces/choose-port-visibility-options.png)
{% data reusables.codespaces.codespaces-policy-targets %}
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)," "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)," and "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)."
1. If you want to add another constraint to the policy, click **Add constraint** and choose another constraint. For information about other constraints, see:
* "[Restricting access to machine types](/codespaces/managing-codespaces-for-your-organization/restricting-access-to-machine-types)"
* "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)"
* "[Restricting the idle timeout period](/codespaces/managing-codespaces-for-your-organization/restricting-the-idle-timeout-period)"
* "[Restricting the retention period for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-retention-period-for-codespaces)"
1. After you've finished adding constraints to your policy, click **Save**.
The policy will be applied to all new codespaces that are billable to your organization. The port visibility constraint is also applied to existing codespaces the next time they are started.
## Editing a policy
You can edit an existing policy. For example, you may want to add or remove constraints to or from a policy.
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the port visibility options](#adding-a-policy-to-limit-the-port-visibility-options)."
1. Click the name of the policy you want to edit.
1. Click the pencil icon ({% octicon "pencil" aria-label="The edit icon" %}) beside the "Port visibility" constraint.
1. Make the required changes then click **Save**.
## Deleting a policy
@@ -71,4 +79,4 @@ You can edit an existing policy. For example, you may want to add or remove cons
1. Display the "Codespace policies" page. For more information, see "[Adding a policy to limit the port visibility options](#adding-a-policy-to-limit-the-port-visibility-options)."
1. Click the delete button to the right of the policy you want to delete.
![The delete button for a policy](/assets/images/help/codespaces/policy-delete.png)
![Screenshot of the delete button for a policy](/assets/images/help/codespaces/policy-delete.png)

View File

@@ -93,10 +93,10 @@ You can use secrets in a codespace after the codespace is built and is running.
* When launching an application from the integrated terminal or ssh session.
* Within a dev container lifecycle script that is run after the codespace is running. For more information about dev container lifecycle scripts, see the documentation on containers.dev: [Specification](https://containers.dev/implementors/json_reference/#lifecycle-scripts).
Codespace secrets cannot be used during:
Codespace secrets cannot be used:
* Codespace build time (that is, within a Dockerfile or custom entry point).
* Within a dev container feature. For more information, see the `features` attribute in the documentation on containers.dev: [Specification](https://containers.dev/implementors/json_reference/#general-properties).
* During codespace build time (that is, within a Dockerfile or custom entry point).
* Within a dev container feature. For more information, see the `features` property in the [dev containers specification](https://containers.dev/implementors/json_reference/#general-properties) on containers.dev.
## Further reading

View File

@@ -65,7 +65,7 @@ The Dockerfile for a dev container is typically located in the `.devcontainer` f
{% note %}
**Note**: As an alternative to using a Dockerfile you can use the `image` property in the `devcontainer.json` file to refer directly to an existing image you want to use. If neither a Dockerfile nor an image is found then the default container image is used. For more information, see "[Using the default dev container configuration](#using-the-default-dev-container-configuration)."
**Note**: As an alternative to using a Dockerfile you can use the `image` property in the `devcontainer.json` file to refer directly to an existing image you want to use. The image you specify here must be allowed by any organization image policy that has been set. For more information, see "[Restricting the base image for codespaces](/codespaces/managing-codespaces-for-your-organization/restricting-the-base-image-for-codespaces)." If neither a Dockerfile nor an image is found then the default container image is used. For more information, see "[Using the default dev container configuration](#using-the-default-dev-container-configuration)."
{% endnote %}

View File

@@ -104,14 +104,12 @@ The newly added `devcontainer.json` file defines a few properties that are descr
// "ASPNETCORE_Kestrel__Certificates__Default__Path": "/home/vscode/.aspnet/https/aspnetapp.pfx",
// },
//
// 3. Do one of the following depending on your scenario:
// * When using GitHub Codespaces and/or Remote - Containers:
// 1. Start the container
// 2. Drag ~/.aspnet/https/aspnetapp.pfx into the root of the file explorer
// 3. Open a terminal in VS Code and run "mkdir -p /home/vscode/.aspnet/https && mv aspnetapp.pfx /home/vscode/.aspnet/https"
// 3. Start the container.
//
// 4. Drag ~/.aspnet/https/aspnetapp.pfx into the root of the file explorer.
//
// 5. Open a terminal in VS Code and run "mkdir -p /home/vscode/.aspnet/https && mv aspnetapp.pfx /home/vscode/.aspnet/https".
//
// * If only using Remote - Containers with a local container, uncomment this line instead:
// "mounts": [ "source=${env:HOME}${env:USERPROFILE}/.aspnet/https,target=/home/vscode/.aspnet/https,type=bind" ],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "dotnet restore",

View File

@@ -32,7 +32,7 @@ This guide shows you how to set up your Java project in {% data variables.produc
If you dont see this option, {% data variables.product.prodname_github_codespaces %} isn't available for your project. See [Access to {% data variables.product.prodname_github_codespaces %}](/codespaces/developing-in-codespaces/creating-a-codespace#access-to-github-codespaces) for more information.
When you create a codespace, your project is created on a remote VM that is dedicated to you. By default, the container for your codespace has many languages and runtimes including Java, nvm, npm, and Yarn. It also includes a common set of tools like git, wget, rsync, openssh, and nano.
When you create a codespace, your project is created on a remote VM that is dedicated to you. By default, the container for your codespace has many languages and runtimes including Java, nvm, npm, and Yarn. It also includes a set of commonly used tools such as git, wget, rsync, openssh, and nano.
{% data reusables.codespaces.customize-vcpus-and-ram %}

View File

@@ -154,3 +154,37 @@ By default, when you create a new organization, workflows are not allowed to {%
1. Click **Save** to apply the settings.
{% endif %}
{% ifversion actions-cache-org-ui %}
## Managing {% data variables.product.prodname_actions %} cache storage for your organization
Organization administrators can view {% ifversion actions-cache-admin-ui %}and manage {% endif %}{% data variables.product.prodname_actions %} cache storage for all repositories in the organization.
### Viewing {% data variables.product.prodname_actions %} cache storage by repository
For each repository in your organization, you can see how much cache storage a repository is using, the number of active caches, and if a repository is near the total cache size limit. For more information about the cache usage and eviction process, see "[Caching dependencies to speed up workflows](/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy)."
{% data reusables.profile.access_profile %}
{% data reusables.profile.access_org %}
{% data reusables.profile.org_settings %}
1. In the left sidebar, click {% octicon "play" aria-label="The {% data variables.product.prodname_actions %} icon" %} **Actions**, then click **Caches**.
1. Review the list of repositories for information about their {% data variables.product.prodname_actions %} caches. You can click on a repository name to see more detail about the repository's caches.
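If you prefer to script this, the same per-repository cache usage data is available from the REST API. A sketch using {% data variables.product.prodname_cli %}; `ORG` is a placeholder:

```shell
# List Actions cache usage for each repository in an organization.
gh api /orgs/ORG/actions/cache/usage-by-repository
```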
{% ifversion actions-cache-admin-ui %}
### Configuring {% data variables.product.prodname_actions %} cache storage for your organization
{% data reusables.actions.cache-default-size %}
You can configure the size limit for {% data variables.product.prodname_actions %} caches that will apply to each repository in your organization. The cache size limit for an organization cannot exceed the cache size limit set in the enterprise policy. Repository admins will be able to set a smaller limit in their repositories.
{% data reusables.profile.access_profile %}
{% data reusables.profile.access_org %}
{% data reusables.profile.org_settings %}
{% data reusables.organizations.settings-sidebar-actions-general %}
{% data reusables.actions.change-cache-size-limit %}
{% endif %}
{% endif %}

View File

@@ -1,6 +1,6 @@
---
title: About custom domains and GitHub Pages
intro: '{% data variables.product.prodname_pages %} supports using custom domains, or changing the root of your site''s URL from the default, like `octocat.github.io`, to any domain you own.'
redirect_from:
- /articles/about-custom-domains-for-github-pages-sites
- /articles/about-supported-custom-domains
@@ -14,62 +14,58 @@ versions:
topics:
- Pages
shortTitle: Custom domains in GitHub Pages
---
## Supported custom domains
{% data variables.product.prodname_pages %} works with two types of domains: subdomains and apex domains. For a list of unsupported custom domains, see "[Troubleshooting custom domains and {% data variables.product.prodname_pages %}](/articles/troubleshooting-custom-domains-and-github-pages/#custom-domain-names-that-are-unsupported)."
| Supported custom domain type | Example |
|---|---|
| `www` subdomain | `www.example.com` |
| Custom subdomain | `blog.example.com` |
| Apex domain | `example.com` |
You can set up either or both of apex and `www` subdomain configurations for your site. For more information on apex domains, see "[Using an apex domain for your {% data variables.product.prodname_pages %} site](#using-an-apex-domain-for-your-github-pages-site)."
We recommend always using a `www` subdomain, even if you also use an apex domain. When you create a new site with an apex domain, we automatically attempt to secure the `www` subdomain for use when serving your site's content, but you need to make the DNS changes to use the `www` subdomain. If you configure a `www` subdomain, we automatically attempt to secure the associated apex domain. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site)."
After you configure a custom domain for a user or organization site, the custom domain will replace the `<user>.github.io` or `<organization>.github.io` portion of the URL for any project sites owned by the account that do not have a custom domain configured. For example, if the custom domain for your user site is `www.octocat.com`, and you have a project site with no custom domain configured that is published from a repository called `octo-project`, the {% data variables.product.prodname_pages %} site for that repository will be available at `www.octocat.com/octo-project`.
For more information about each type of site and handling custom domains, see "[Types of {% data variables.product.prodname_pages %} sites](/pages/getting-started-with-github-pages/about-github-pages#types-of-github-pages-sites)."
## Using a subdomain for your {% data variables.product.prodname_pages %} site
A subdomain is the part of a URL before the root domain. You can configure your subdomain as `www` or as a distinct section of your site, like `blog.example.com`.
Subdomains are configured with a `CNAME` record through your DNS provider. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site#configuring-a-subdomain)."
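As a rough illustration only (the hostnames below are placeholders, not values taken from this article), a `CNAME` record pointing a `www` subdomain at a {% data variables.product.prodname_pages %} default domain might look like this in a DNS provider's zone file:

```
www.example.com.    3600    IN    CNAME    octocat.github.io.
```

The exact record syntax and TTL field vary by DNS provider, so follow your provider's documentation for the precise format.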
### `www` subdomains
A `www` subdomain is the most commonly used type of subdomain. For example, `www.example.com` includes a `www` subdomain.
`www` subdomains are the most stable type of custom domain because `www` subdomains are not affected by changes to the IP addresses of {% data variables.product.product_name %}'s servers.
### Custom subdomains
A custom subdomain is a type of subdomain that doesn't use the standard `www` variant. Custom subdomains are mostly used when you want two distinct sections of your site. For example, you can create a site called `blog.example.com` and customize that section independently from `www.example.com`.
## Using an apex domain for your {% data variables.product.prodname_pages %} site
An apex domain is a custom domain that does not contain a subdomain, such as `example.com`. Apex domains are also known as base, bare, naked, root apex, or zone apex domains.
An apex domain is configured with an `A`, `ALIAS`, or `ANAME` record through your DNS provider. For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site#configuring-an-apex-domain)."
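As a sketch only — check the current {% data variables.product.prodname_pages %} documentation for the authoritative IP addresses before configuring anything — an apex domain configured with `A` records typically points at the {% data variables.product.prodname_pages %} servers like this:

```
example.com.    3600    IN    A    185.199.108.153
example.com.    3600    IN    A    185.199.109.153
example.com.    3600    IN    A    185.199.110.153
example.com.    3600    IN    A    185.199.111.153
```

If your DNS provider supports `ALIAS` or `ANAME` records, a single record pointing the apex at your `<user>.github.io` domain serves the same purpose.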
{% data reusables.pages.www-and-apex-domain-recommendation %} For more information, see "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/github/working-with-github-pages/managing-a-custom-domain-for-your-github-pages-site/#configuring-a-subdomain)."
## Securing the custom domain for your {% data variables.product.prodname_pages %} site
{% data reusables.pages.secure-your-domain %} For more information, see "[Verifying your custom domain for {% data variables.product.prodname_pages %}](/pages/configuring-a-custom-domain-for-your-github-pages-site/verifying-your-custom-domain-for-github-pages)" and "[Managing a custom domain for your {% data variables.product.prodname_pages %} site](/articles/managing-a-custom-domain-for-your-github-pages-site)."
There are a couple of reasons your site might be automatically disabled.
- If you downgrade from {% data variables.product.prodname_pro %} to {% data variables.product.prodname_free_user %}, any {% data variables.product.prodname_pages %} sites that are currently published from private repositories in your account will be unpublished. For more information, see "[Downgrading your {% data variables.product.prodname_dotcom %} billing plan](/articles/downgrading-your-github-billing-plan)."
- If you transfer a private repository to a personal account that is using {% data variables.product.prodname_free_user %}, the repository will lose access to the {% data variables.product.prodname_pages %} feature, and the currently published {% data variables.product.prodname_pages %} site will be unpublished. For more information, see "[Transferring a repository](/articles/transferring-a-repository)."
## Further reading
- "[Solução de problemas de domínios personalizados e do {% data variables.product.prodname_pages %}](/articles/troubleshooting-custom-domains-and-github-pages)"
- "[Troubleshooting custom domains and {% data variables.product.prodname_pages %}](/articles/troubleshooting-custom-domains-and-github-pages)"

View File

@@ -54,6 +54,9 @@ For each branch protection rule, you can choose to enable or disable the followi
{%- ifversion required-deployments %}
- [Require deployments to succeed before merging](#require-deployments-to-succeed-before-merging)
{%- endif %}
{%- ifversion lock-branch %}
- [Lock branch](#lock-branch)
{%- endif %}
{% ifversion bypass-branch-protections %}- [Do not allow bypassing the above settings](#do-not-allow-bypassing-the-above-settings){% else %}- [Include administrators](#include-administrators){% endif %}
- [Restrict who can push to matching branches](#restrict-who-can-push-to-matching-branches)
- [Allow force pushes](#allow-force-pushes)
@@ -84,6 +87,10 @@ Optionally, you can restrict the ability to dismiss pull request reviews to spec
Optionally, you can choose to require reviews from code owners. If you do, any pull request that affects code with a code owner must be approved by that code owner before the pull request can be merged into the protected branch.
{% ifversion last-pusher-require-approval %}
Optionally, you can require approvals from someone other than the last person to push to a branch before a pull request can be merged. This ensures that more than one person sees pull requests in their final state before they are merged into a protected branch. If you enable this feature, the changes pushed by the most recent user will need an approval from someone else, regardless of the required approvals branch protection setting. Users who have already reviewed a pull request can reapprove after the most recent push to meet this requirement.
{% endif %}
### Require status checks before merging
Required status checks ensure that all required CI tests are passing before collaborators can make changes to a protected branch. Required status checks can be checks or statuses. For more information, see "[About status checks](/github/collaborating-with-issues-and-pull-requests/about-status-checks)."
@@ -151,6 +158,13 @@ Before you can require a linear commit history, your repository must allow squas
You can require that changes are successfully deployed to specific environments before a branch can be merged. For example, you can use this rule to ensure that changes are successfully deployed to a staging environment before the changes merge to your default branch.
{% ifversion lock-branch %}
### Lock branch
Locking a branch ensures that no commits can be made to the branch.
By default, a forked repository does not support syncing from its upstream repository. You can enable **Allow fork syncing** to pull changes from the upstream repository while preventing other contributions to the fork's branch.
{% endif %}
{% ifversion bypass-branch-protections %}### Do not allow bypassing the above settings{% else %}
### Include administrators{% endif %}

View File

@@ -73,6 +73,10 @@ When you create a branch rule, the branch you specify doesn't have to exist yet
{% endif %}
- Optionally, if the repository is part of an organization, select **Restrict who can dismiss pull request reviews**. Then, search for and select the actors who are allowed to dismiss pull request reviews. For more information, see "[Dismissing a pull request review](/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/dismissing-a-pull-request-review)."
![Restrict who can dismiss pull request reviews checkbox]{% ifversion integration-branch-protection-exceptions %}(/assets/images/help/repository/PR-review-required-dismissals-with-apps.png){% else %}(/assets/images/help/repository/PR-review-required-dismissals.png){% endif %}
{% ifversion last-pusher-require-approval %}
- Optionally, to require someone other than the last person to push to a branch to approve a pull request prior to merging, select **Require approval from someone other than the last pusher**. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-pull-request-reviews-before-merging)."
![Require review from someone other than the last pusher](/assets/images/help/repository/last-pusher-review-required.png)
{% endif %}
1. Optionally, enable required status checks. For more information, see "[About status checks](/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks)."
- Select **Require status checks to pass before merging**.
![Required status checks option](/assets/images/help/repository/required-status-checks.png)
@@ -99,6 +103,12 @@ When you create a branch rule, the branch you specify doesn't have to exist yet
1. Optionally, to choose which environments the changes must be successfully deployed to before merging, select **Require deployments to succeed before merging**, then select the environments.
![Require successful deployment option](/assets/images/help/repository/require-successful-deployment.png)
{%- endif %}
{% ifversion lock-branch %}
1. Optionally, select **Lock branch** to make the branch read-only.
![Screenshot of the checkbox to lock a branch](/assets/images/help/repository/lock-branch.png)
- Optionally, to allow fork syncing, select **Allow fork syncing**.
![Screenshot of the checkbox to allow fork syncing](/assets/images/help/repository/lock-branch-forksync.png)
{%- endif %}
1. Optionally, select {% ifversion bypass-branch-protections %}**Do not allow bypassing the above settings**.
![Do not allow bypassing the above settings checkbox](/assets/images/help/repository/do-not-allow-bypassing-the-above-settings.png){% else %}**Apply the rules above to administrators**.
![Apply the rules above to administrators checkbox](/assets/images/help/repository/include-admins-protected-branches.png){% endif %}

View File

@@ -97,7 +97,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- run: 'echo "No build required"'
```
Now the checks will always pass whenever someone sends a pull request that doesn't change the files listed under `paths` in the first workflow.
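For reference, a minimal sketch of a workflow that only runs when particular files change might look like the following; the `paths` values here are placeholders rather than the ones used in this article's first workflow:

```yaml
name: ci
on:
  pull_request:
    paths:
      - 'src/**'
      - 'package.json'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Placeholder build and test commands for illustration only
      - run: npm ci && npm test
```

The always-passing workflow shown above then provides a job with the same name (`build`), so the required check is still reported for pull requests that touch none of these paths.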

View File

@@ -185,7 +185,16 @@ You can also define a custom retention period for a specific artifact created by
{% data reusables.actions.cache-default-size %} However, these default sizes might be different if an enterprise owner has changed them. {% data reusables.actions.cache-eviction-process %}
You can set a total cache storage size for your repository up to the maximum size allowed by the {% ifversion actions-cache-admin-ui %}organization or{% endif %} enterprise policy setting{% ifversion actions-cache-admin-ui %}s{% endif %}.
{% ifversion actions-cache-admin-ui %}
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.sidebar-settings %}
{% data reusables.repositories.settings-sidebar-actions-general %}
{% data reusables.actions.change-cache-size-limit %}
{% else %}
The repository settings for {% data variables.product.prodname_actions %} cache storage can currently only be modified using the REST API:
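As an illustrative sketch only — the endpoint path and field name below are assumptions to verify against the REST API reference for your product version — updating the repository cache size limit might look something like this with the {% data variables.product.prodname_cli %}:

```shell
# Assumed endpoint: "Set GitHub Actions cache usage policy for a repository".
# Confirm it exists in your version's REST API reference before relying on it.
gh api \
  --method PATCH \
  /repos/OWNER/REPO/actions/cache/usage-policy \
  -F repo_cache_size_limit_in_gb=5
```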
@@ -195,3 +204,5 @@ The repository settings for {% data variables.product.prodname_actions %} cache
{% data reusables.actions.cache-no-org-policy %}
{% endif %}
{% endif %}

View File

@@ -2,6 +2,5 @@
{% ifversion ghec %}![Screenshot showing how to enable push protection for {% data variables.product.prodname_secret_scanning %} for an organization](/assets/images/help/organizations/secret-scanning-enable-push-protection-org.png){% elsif ghes > 3.4 or ghae > 3.4 %} ![Screenshot showing how to enable push protection for {% data variables.product.prodname_secret_scanning %} for an organization](/assets/images/help/organizations/secret-scanning-enable-push-protection-org-ghes.png){% endif %}
1. Optionally, click "Automatically enable for repositories added to {% data variables.product.prodname_secret_scanning %}."{% ifversion push-protection-custom-link-orgs %}
1. Optionally, to include a custom link in the message that members will see when they attempt to push a secret, select **Add a resource link in the CLI and web UI when a commit is blocked**, then type a URL, and click **Save link**.
{% ifversion push-protection-custom-link-orgs-beta %}{% indented_data_reference reusables.advanced-security.custom-link-beta spaces=3 %}{% endif %}
![Screenshot showing checkbox and text field for enabling a custom link](/assets/images/help/organizations/secret-scanning-custom-link.png){% endif %}

View File

@@ -1,30 +1,22 @@
The following table shows, for each package manager:
- The YAML value to use in the *dependabot.yml* file
- The supported versions of the package manager
- Whether dependencies in private {% data variables.product.prodname_dotcom %} repositories or registries are supported
- Whether vendored dependencies are supported
Package manager | YAML value | Supported versions | Private repositories | Private registries | Vendoring
---------------|------------------|------------------|:---:|:---:|:---:
Bundler | `bundler` | v1, v2 | | **✓** | **✓** |
Cargo | `cargo` | v1 | **✓** | **✓** | |
Composer | `composer` | v1, v2 | **✓** | **✓** | |
Docker | `docker` | v1 | **✓** | **✓** | |
Hex | `mix` | v1 | | **✓** | |
elm-package | `elm` | v0.19 | **✓** | **✓** | |
git submodule | `gitsubmodule` | N/A (no version) | **✓** | **✓** | |
GitHub Actions | `github-actions` | N/A (no version) | **✓** | **✓** | |
Go modules | `gomod` | v1 | **✓** | **✓** | **✓** |
Gradle | `gradle` | N/A (no version)<sup>[1]</sup> | **✓** | **✓** | |
Maven | `maven` | N/A (no version)<sup>[2]</sup> | **✓** | **✓** | |
npm | `npm` | v6, v7, v8 | **✓** | **✓** | |
NuGet | `nuget` | <= 4.8<sup>[3]</sup> | **✓** | **✓** | |
pip | `pip` | v21.1.2 | | **✓** | |
@@ -33,23 +25,29 @@ pip-compile | `pip` | 6.1.0 | | **✓** | |
poetry | `pip` | v1 | | **✓** | |{% ifversion fpt or ghec or ghes > 3.4 %}
pub | `pub` | v2 <sup>[4]</sup> | | | |{% endif %}
Terraform | `terraform` | >= 0.13, <= 1.2.x | **✓** | **✓** | |
{% ifversion dependabot-yarn-v3-update %}yarn | `npm` | v1, v2, v3 | **✓** | **✓** | **✓**<sup>[5]</sup> |{% else %}yarn | `npm` | v1 | **✓** | **✓** | |
{% endif %}
{% tip %}
**Tip:** For package managers such as `pipenv` and `poetry`, you need to use the `pip` YAML value. For example, if you use `poetry` to manage your Python dependencies and want {% data variables.product.prodname_dependabot %} to monitor your dependency manifest file for new versions, use `package-ecosystem: "pip"` in your *dependabot.yml* file.
{% endtip %}
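As a brief, hypothetical example of the tip above (the directory and schedule values are placeholders), a *dependabot.yml* entry for a Poetry-managed project would still declare the `pip` ecosystem:

```yaml
version: 2
updates:
  - package-ecosystem: "pip"   # used for pip, pipenv, and poetry projects alike
    directory: "/"             # location of the dependency manifest (for example, pyproject.toml)
    schedule:
      interval: "weekly"
```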
[1] {% data variables.product.prodname_dependabot %} doesn't run Gradle but supports updates to the following files: `build.gradle`, `build.gradle.kts` (for Kotlin projects), and files included via the `apply` declaration that have `dependencies` in the filename. Note that `apply` does not support `apply to`, recursion, or advanced syntaxes (for example, Kotlin's `apply` with `mapOf`, filenames defined by property).
[2] {% data variables.product.prodname_dependabot %} doesn't run Maven but supports updates to `pom.xml` files.
[3] {% data variables.product.prodname_dependabot %} doesn't run the NuGet CLI but does support most features up until version 4.8.
{% ifversion fpt or ghec or ghes > 3.4 %}
[4] {% ifversion ghes = 3.5 %}`pub` support is currently in beta. Any known limitations are subject to change. Note that {% data variables.product.prodname_dependabot %}:
- Doesn't support updating git dependencies for `pub`.
- Won't perform an update when the version that it tries to update to is ignored, even if an earlier version is available.
For information about configuring your _dependabot.yml_ file for `pub`, see "[Enabling support for beta-level ecosystems](/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#enable-beta-ecosystems)."
{%- else %}{% data variables.product.prodname_dependabot %} won't perform an update for `pub` when the version that it tries to update to is ignored, even if an earlier version is available.{% endif %}
{% endif %}
{% ifversion dependabot-yarn-v3-update %}
[5] Dependabot supports vendored dependencies for v2 onwards.{% endif %}