Remove all ifversion that has all versions (2) (#49466)
@@ -142,54 +142,10 @@ The `build-push-action` options required for {% data variables.product.prodname_

 For example, for an image named `octo-image` stored on {% data variables.product.prodname_dotcom %} at `http://github.com/octo-org/octo-repo`, the `tags` option should be set to `docker.pkg.github.com/octo-org/octo-repo/octo-image:latest`{% endif %}. You can set a single tag as shown below, or specify multiple tags in a list.{% endif %}

-{% ifversion fpt or ghec or ghes %}
 {% data reusables.package_registry.publish-docker-image %}

 The above workflow is triggered by a push to the "release" branch. It checks out the GitHub repository, and uses the `login-action` to log in to the {% data variables.product.prodname_container_registry %}. It then extracts labels and tags for the Docker image. Finally, it uses the `build-push-action` action to build the image and publish it on the {% data variables.product.prodname_container_registry %}.

-{% else %}
-```yaml copy
-{% data reusables.actions.actions-not-certified-by-github-comment %}
-
-{% data reusables.actions.actions-use-sha-pinning-comment %}
-
-name: Publish Docker image
-
-on:
-  release:
-    types: [published]
-
-jobs:
-  push_to_registry:
-    name: Push Docker image to GitHub Packages
-    runs-on: ubuntu-latest
-    permissions:
-      packages: write
-      contents: read
-    steps:
-      - name: Check out the repo
-        uses: {% data reusables.actions.action-checkout %}
-
-      - name: Log in to GitHub Docker Registry
-        uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
-        with:
-          registry: docker.pkg.github.com
-          username: {% raw %}${{ github.actor }}{% endraw %}
-          password: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}
-
-      - name: Build and push Docker image
-        uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
-        with:
-          context: .
-          push: true
-          tags: |
-            docker.pkg.github.com{% raw %}/${{ github.repository }}/octo-image:${{ github.sha }}{% endraw %}
-            docker.pkg.github.com{% raw %}/${{ github.repository }}/octo-image:${{ github.event.release.tag_name }}{% endraw %}
-```
-
-The above workflow checks out the {% data variables.product.product_name %} repository, uses the `login-action` to log in to the registry, and then uses the `build-push-action` action to: build a Docker image based on your repository's `Dockerfile`; push the image to the Docker registry, and apply the commit SHA and release version as image tags.
-{% endif %}

 ## Publishing images to Docker Hub and {% data variables.product.prodname_registry %}

 {% ifversion ghes %}
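The workflow that remains for these versions lives in the `publish-docker-image` reusable referenced in the hunk above, so its contents are not visible in this diff. As a rough, hypothetical sketch of the pattern the surrounding prose describes (a push to the `release` branch, login to the Container registry, metadata extraction, then build and push) — the action version tags, image name, and step layout below are illustrative assumptions, not the reusable's literal contents:

```yaml
name: Publish Docker image

on:
  push:
    branches: ['release']

jobs:
  push_to_registry:
    name: Push Docker image to the Container registry
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      # Check out the repository so the Dockerfile is available to the build.
      - name: Check out the repo
        uses: actions/checkout@v4

      # Authenticate to ghcr.io with the workflow's GITHUB_TOKEN.
      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Generate OCI labels and tags from the Git ref and commit metadata.
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}

      # Build the image and push it with the generated tags and labels.
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```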
@@ -228,7 +184,7 @@ jobs:
          username: {% raw %}${{ secrets.DOCKER_USERNAME }}{% endraw %}
          password: {% raw %}${{ secrets.DOCKER_PASSWORD }}{% endraw %}

-      - name: Log in to the {% ifversion fpt or ghec or ghes %}Container{% else %}Docker{% endif %} registry
+      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: {% ifversion fpt or ghec %}ghcr.io{% elsif ghes %}{% data reusables.package_registry.container-registry-hostname %}{% else %}docker.pkg.github.com{% endif %}
@@ -241,7 +197,7 @@ jobs:
        with:
          images: |
            my-docker-hub-namespace/my-docker-hub-repository
-            {% ifversion fpt or ghec or ghes %}{% data reusables.package_registry.container-registry-hostname %}/{% raw %}${{ github.repository }}{% endraw %}{% else %}{% raw %}docker.pkg.github.com/${{ github.repository }}/my-image{% endraw %}{% endif %}
+            {% data reusables.package_registry.container-registry-hostname %}/{% raw %}${{ github.repository }}{% endraw %}

      - name: Build and push Docker images
        uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
@@ -253,4 +209,4 @@ jobs:
 ```

 The above workflow checks out the {% data variables.product.product_name %} repository, uses the `login-action` twice to log in to both registries and generates tags and labels with the `metadata-action` action.
-Then the `build-push-action` action builds and pushes the Docker image to Docker Hub and the {% ifversion fpt or ghec or ghes %}{% data variables.product.prodname_container_registry %}{% else %}Docker registry{% endif %}.
+Then the `build-push-action` action builds and pushes the Docker image to Docker Hub and the {% data variables.product.prodname_container_registry %}.
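For context on the hunks above: `docker/metadata-action` can list several registries under `images`, so the tags and labels it emits cover both targets and a single `build-push-action` step pushes to each of them. A minimal, hypothetical excerpt of that `steps:` pattern (the mutable version tags are illustrative, and the Docker Hub repository name is the placeholder taken from the hunk):

```yaml
      # One metadata step can target both registries; every generated tag is
      # prefixed with each entry in `images`, so a single push publishes to both.
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            my-docker-hub-namespace/my-docker-hub-repository
            ghcr.io/${{ github.repository }}

      - name: Build and push Docker images
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```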
@@ -166,16 +166,12 @@ For more information, see "[AUTOTITLE](/code-security/code-scanning/introduction

 To help mitigate the risk of an exposed token, consider restricting the assigned permissions. For more information, see "[AUTOTITLE](/actions/security-guides/automatic-token-authentication#modifying-the-permissions-for-the-github_token)."

-{% ifversion fpt or ghec or ghes %}
 ## Using OpenID Connect to access cloud resources

 {% data reusables.actions.about-oidc-short-overview %}

 {% data reusables.actions.oidc-custom-claims-aws-restriction %}
-{% endif %}

 ## Using third-party actions

 The individual jobs in a workflow can interact with (and compromise) other jobs. For example, a job querying the environment variables used by a later job, writing files to a shared directory that a later job processes, or even more directly by interacting with the Docker socket and inspecting other running containers and executing commands in them.
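Restricting the `GITHUB_TOKEN` permissions mentioned in the hunk above is done declaratively in the workflow file. A minimal sketch, assuming a job that only needs to read contents and publish packages (workflow name and trigger are illustrative):

```yaml
name: Build

on: [push]

# Grant the GITHUB_TOKEN only what the workflow needs;
# any scope not listed here defaults to "none".
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    # A job-level permissions block replaces the workflow-level one for this job,
    # so broader scopes are granted only where they are actually required.
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
```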
@@ -367,14 +363,10 @@ A self-hosted runner can be added to various levels in your {% data variables.pr

 - If each team will manage their own self-hosted runners, then the recommendation is to add the runners at the highest level of team ownership. For example, if each team owns their own organization, then it will be simplest if the runners are added at the organization level too.
 - You could also add runners at the repository level, but this will add management overhead and also increases the numbers of runners you need, since you cannot share runners between repositories.

-{% ifversion fpt or ghec or ghes %}
 ### Authenticating to your cloud provider

 If you are using {% data variables.product.prodname_actions %} to deploy to a cloud provider, or intend to use HashiCorp Vault for secret management, then its recommended that you consider using OpenID Connect to create short-lived, well-scoped access tokens for your workflow runs. For more information, see "[AUTOTITLE](/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect)."
-{% endif %}

 ## Auditing {% data variables.product.prodname_actions %} events

 You can use the security log to monitor activity for your user account and the audit log to monitor activity in your organization{% ifversion ghec or ghes %} or enterprise{% endif %}. The security and audit log records the type of action, when it was run, and which personal account performed the action.
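As a hedged illustration of the OpenID Connect recommendation in the hunk above: a workflow can request an OIDC token via `id-token: write` and trade it for short-lived cloud credentials. AWS is used here purely as an example provider, and the role ARN and region are placeholders:

```yaml
name: Deploy with OIDC

on:
  push:
    branches: ['main']

permissions:
  # Required so the job can request an OIDC token from GitHub's token service.
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Exchange the OIDC token for short-lived AWS credentials; the role ARN
      # below is a placeholder for a role that trusts the GitHub OIDC provider.
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/my-github-actions-role
          aws-region: us-east-1
```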
@@ -818,7 +818,7 @@ The maximum number of minutes to run the step before killing the process.

 The maximum number of minutes to let a job run before {% data variables.product.prodname_dotcom %} automatically cancels it. Default: 360

-If the timeout exceeds the job execution time limit for the runner, the job will be canceled when the execution time limit is met instead. For more information about job execution time limits, see {% ifversion fpt or ghec or ghes %}"[AUTOTITLE](/actions/learn-github-actions/usage-limits-billing-and-administration#usage-limits)" for {% data variables.product.prodname_dotcom %}-hosted runners and {% endif %}"[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#usage-limits)" for self-hosted runner usage limits.
+If the timeout exceeds the job execution time limit for the runner, the job will be canceled when the execution time limit is met instead. For more information about job execution time limits, see "[AUTOTITLE](/actions/learn-github-actions/usage-limits-billing-and-administration#usage-limits)" for {% data variables.product.prodname_dotcom %}-hosted runners and "[AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#usage-limits)" for self-hosted runner usage limits.

 {% note %}
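For reference, `timeout-minutes` can be set at both the job and step level; a minimal sketch (workflow name, trigger, and the script path are placeholders):

```yaml
name: Tests

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    # Cancel the whole job after 30 minutes instead of the 360-minute default.
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@v4
      - name: Run the test suite
        run: ./run-tests.sh   # placeholder command
        # A single step can also carry its own, tighter limit.
        timeout-minutes: 10
```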
@@ -157,8 +157,6 @@ If any of these services are at or near 100% CPU utilization, or the memory is n

 When running `ghe-config-apply`, if you see output like `Failed to run nomad job '/etc/nomad-jobs/<name>.hcl'`, then the change has likely over-allocated CPU or memory resources. If this happens, edit the configuration files again and lower the allocated CPU or memory, then re-run `ghe-config-apply`.
 1. After the configuration is applied, run `ghe-actions-check` to verify that the {% data variables.product.prodname_actions %} services are operational.

-{% ifversion fpt or ghec or ghes %}
 ## Troubleshooting failures when {% data variables.product.prodname_dependabot %} triggers existing workflows

 After you set up {% data variables.product.prodname_dependabot %} updates for {% data variables.location.product_location %}, you may see failures when existing workflows are triggered by {% data variables.product.prodname_dependabot %} events.
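One common source of the failures described in the hunk above is that Dependabot-triggered runs may execute with a read-only token and without access to repository secrets. A hypothetical sketch of gating the affected steps (commands and script names are placeholders):

```yaml
name: CI

on: pull_request

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test   # placeholder test command
      # Steps that need secrets can be skipped for Dependabot-triggered runs,
      # which may run with a read-only token and without repository secrets.
      - name: Publish coverage
        if: github.actor != 'dependabot[bot]'
        run: ./publish-coverage.sh   # placeholder command
```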
@@ -188,8 +186,6 @@ There are three ways to resolve this problem:

 1. Return to {% data variables.product.prodname_ghe_server %}.

-{% endif %}

 <a name="bundled-actions"></a>

 ## Troubleshooting bundled actions in {% data variables.product.prodname_actions %}
@@ -55,7 +55,7 @@ Name | Description
 `public_repo`| Limits access to public repositories. That includes read/write access to code, commit statuses, repository projects, collaborators, and deployment statuses for public repositories and organizations. Also required for starring public repositories.
 `repo:invite` | Grants accept/decline abilities for invitations to collaborate on a repository. This scope is only necessary to grant other users or services access to invites _without_ granting access to the code.{% ifversion fpt or ghes or ghec %}
 `security_events` | Grants: <br/> read and write access to security events in the [{% data variables.product.prodname_code_scanning %} API](/rest/code-scanning) {%- ifversion ghec %}<br/> read and write access to security events in the [{% data variables.product.prodname_secret_scanning %} API](/rest/secret-scanning){%- endif %} <br/> This scope is only necessary to grant other users or services access to security events _without_ granting access to the code.{% endif %}
-**`admin:repo_hook`** | Grants read, write, ping, and delete access to repository hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% endif %} repositories. The `repo` {% ifversion fpt or ghec or ghes %}and `public_repo` scopes grant{% else %}scope grants{% endif %} full access to repositories, including repository hooks. Use the `admin:repo_hook` scope to limit access to only repository hooks.
+**`admin:repo_hook`** | Grants read, write, ping, and delete access to repository hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% endif %} repositories. The `repo` and `public_repo` scopes grant full access to repositories, including repository hooks. Use the `admin:repo_hook` scope to limit access to only repository hooks.
 `write:repo_hook` | Grants read, write, and ping access to hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% endif %} repositories.
 `read:repo_hook`| Grants read and ping access to hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% endif %} repositories.
 **`admin:org`** | Fully manage the organization and its teams, projects, and memberships.
@@ -208,9 +208,7 @@ After using either the BFG tool or `git filter-repo` to remove the sensitive dat

 ## Avoiding accidental commits in the future

-{% ifversion fpt or ghec or ghes %}
 Preventing contributors from making accidental commits can help you prevent sensitive information from being exposed. For more information see "[AUTOTITLE](/code-security/getting-started/best-practices-for-preventing-data-leaks-in-your-organization)."
-{% endif %}

 There are a few simple tricks to avoid committing things you don't want committed:
@@ -65,12 +65,10 @@ Repository administrators can enforce required commit signing on a branch to blo

 {% data reusables.identity-and-permissions.verification-status-check %}

-{% ifversion fpt or ghec or ghes %}
 {% ifversion ghes %}If a site administrator has enabled web commit signing, {% data variables.product.product_name %} will automatically use GPG to sign commits you make using the web interface. Commits signed by {% data variables.product.product_name %} will have a verified status. You can verify the signature locally using the public key available at `https://HOSTNAME/web-flow.gpg`. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/configuring-web-commit-signing)."
 {% else %}{% data variables.product.prodname_dotcom %} will automatically use GPG to sign commits you make using the web interface. Commits signed by {% data variables.product.prodname_dotcom %} will have a verified status. You can verify the signature locally using the public key available at https://github.com/web-flow.gpg.

 You can optionally choose to have {% data variables.product.prodname_dotcom %} GPG sign commits you make in {% data variables.product.prodname_github_codespaces %}. For more information about enabling GPG verification for your codespaces, see "[AUTOTITLE](/codespaces/managing-your-codespaces/managing-gpg-verification-for-github-codespaces)."{% endif %}
-{% endif %}

 ## GPG commit signature verification
@@ -196,7 +196,7 @@ In general, you do not need to worry about where the {% data variables.code-scan
     db-location: {% raw %}'${{ github.runner_temp }}/my_location'{% endraw %}
 ```

-The {% data variables.code-scanning.codeql_workflow %} will expect the path provided in `db-location` to be writable, and either not exist, or be an empty directory. When using this parameter in a job running on a self-hosted runner or using a Docker container, it's the responsibility of the user to ensure that the chosen directory is cleared between runs, or that the databases are removed once they are no longer needed. {% ifversion fpt or ghec or ghes %} This is not necessary for jobs running on {% data variables.product.prodname_dotcom %}-hosted runners, which obtain a fresh instance and a clean filesystem each time they run. For more information, see "[AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners)."{% endif %}
+The {% data variables.code-scanning.codeql_workflow %} will expect the path provided in `db-location` to be writable, and either not exist, or be an empty directory. When using this parameter in a job running on a self-hosted runner or using a Docker container, it's the responsibility of the user to ensure that the chosen directory is cleared between runs, or that the databases are removed once they are no longer needed. This is not necessary for jobs running on {% data variables.product.prodname_dotcom %}-hosted runners, which obtain a fresh instance and a clean filesystem each time they run. For more information, see "[AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners)."

 If this parameter is not used, the {% data variables.code-scanning.codeql_workflow %} will create databases in a temporary location of its own choice. Currently the default value is {% raw %}`${{ github.runner_temp }}/codeql_databases`{% endraw %}.
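For reference, the `db-location` parameter discussed in the hunk above is passed to the CodeQL `init` step. A minimal, hypothetical step excerpt (the language choice and the `runner.temp`-based path are illustrative, not the documented defaults):

```yaml
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: javascript
          # A writable directory that is empty (or absent) at the start of the run.
          db-location: ${{ runner.temp }}/codeql_dbs
```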
@@ -1,7 +1,7 @@
 ---
 title: Keeping your supply chain secure with Dependabot
 shortTitle: Dependabot
-intro: 'Monitor vulnerabilities in dependencies used in your project{% ifversion fpt or ghec or ghes %} and keep your dependencies up-to-date{% endif %} with {% data variables.product.prodname_dependabot %}.'
+intro: 'Monitor vulnerabilities in dependencies used in your project and keep your dependencies up-to-date with {% data variables.product.prodname_dependabot %}.'
 allowTitleToDifferFromFilename: true
 versions:
   fpt: '*'
@@ -15,8 +15,8 @@ featuredLinks:
     - '{% ifversion code-scanning-without-workflow %}/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning{% endif %}'
     - '{% ifversion ghes < 3.9 %}/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/configuring-advanced-setup-for-code-scanning{% endif %}'
   guideCards:
-    - '{% ifversion fpt or ghec or ghes %}/code-security/dependabot/dependabot-security-updates/configuring-dependabot-security-updates{% endif %}'
-    - '{% ifversion fpt or ghec or ghes %}/code-security/dependabot/dependabot-version-updates/configuring-dependabot-version-updates{% endif %}'
+    - '/code-security/dependabot/dependabot-security-updates/configuring-dependabot-security-updates'
+    - '/code-security/dependabot/dependabot-version-updates/configuring-dependabot-version-updates'
     - '{% ifversion code-scanning-without-workflow %}/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning{% endif %}'
     - '{% ifversion ghes < 3.9 %}/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/configuring-advanced-setup-for-code-scanning{% endif %}'
     - /code-security/supply-chain-security/end-to-end-supply-chain/end-to-end-supply-chain-overview
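The version updates mentioned in the intro frontmatter above are configured with a `dependabot.yml` file checked into the repository. A minimal sketch, assuming an npm project whose manifest lives at the repository root:

```yaml
# .github/dependabot.yml
version: 2
updates:
  # Check the npm registry for dependency updates once a week.
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```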