
Merge pull request #18679 from github/repo-sync

repo sync
Octomerger Bot
2022-06-16 17:52:50 -05:00
committed by GitHub
90 changed files with 2077 additions and 190 deletions

View File

@@ -138,10 +138,10 @@ jobs:
- name: Deploy to Azure Web App
id: deploy-to-webapp
uses: azure/webapps-deploy@0b651ed7546ecfc75024011f76944cb9b381ef1e
with:
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
```
## Additional resources

View File

@@ -0,0 +1,530 @@
---
title: Customizing the containers used by jobs
intro: You can customize how your self-hosted runner invokes a container for a job.
versions:
feature: container-hooks
type: reference
miniTocMaxHeadingLevel: 4
shortTitle: Customize containers used by jobs
---
{% note %}
**Note**: This feature is currently in beta and is subject to change.
{% endnote %}
## About container customization
{% data variables.product.prodname_actions %} allows you to run a job within a container, using the `container:` statement in your workflow file. For more information, see "[Running jobs in a container](/actions/using-jobs/running-jobs-in-a-container)." To process container-based jobs, the self-hosted runner creates a container for each job.
{% data variables.product.prodname_actions %} supports commands that let you customize the way your containers are created by the self-hosted runner. For example, you can use these commands to manage the containers through Kubernetes or Podman, and you can also customize the `docker run` or `docker create` commands used to invoke the container. The customization commands are run by a script, which is automatically triggered when a specific environment variable is set on the runner. For more information, see "[Triggering the customization script](#triggering-the-customization-script)" below.
This customization is only available for Linux-based self-hosted runners, and root user access is not required.
## Container customization commands
{% data variables.product.prodname_actions %} includes the following commands for container customization:
- [`prepare_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#prepare_job): Called when a job is started.
- [`cleanup_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#cleanup_job): Called at the end of a job.
- [`run_container_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_container_step): Called once for each container action in the job.
- [`run_script_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_script_step): Runs any step that is not a container action.
Each of these customization commands must be defined in its own JSON file. The file name must match the command name, with the extension `.json`. For example, the `prepare_job` command is defined in `prepare_job.json`. These JSON files are then processed together on the self-hosted runner by the main `index.js` script. This process is described in more detail in "[Generating the customization script](#generating-the-customization-script)."
These commands also include configuration arguments, explained below in more detail.
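Putting this together, a minimal `index.js` entry point might read the payload, dispatch on the command name, and write any response to the `responseFile` path from the input. This is only a sketch: it assumes the runner delivers the JSON payload on the hook's standard input (as the hooks in the example repository do), and the four handler functions are hypothetical stubs.
```javascript
// Hypothetical entry-point sketch: read the JSON payload, dispatch on the
// command name, and write any response to the response file. The handler
// functions are stubs; rough Docker-based sketches appear later in this article.
const fs = require('fs');

// Stub handlers: a real hook would create, run, and clean up containers.
async function prepareJob(args, state) { return { state: {}, context: { isAlpine: false } }; }
async function cleanupJob(args, state) { return null; }
async function runContainerStep(args, state) { return null; }
async function runScriptStep(args, state) { return null; }

async function main() {
  // Collect the whole payload from stdin before parsing it.
  const chunks = [];
  for await (const chunk of process.stdin) chunks.push(chunk);
  const input = JSON.parse(Buffer.concat(chunks).toString('utf8'));

  const handlers = {
    prepare_job: prepareJob,
    cleanup_job: cleanupJob,
    run_container_step: runContainerStep,
    run_script_step: runScriptStep,
  };
  const handler = handlers[input.command];
  if (!handler) throw new Error(`Unknown command: ${input.command}`);

  // Each handler receives the command arguments and any prior state, and
  // returns the response object (if any) to write to the response file.
  const response = await handler(input.args, input.state);
  if (input.responseFile && response) {
    fs.writeFileSync(input.responseFile, JSON.stringify(response));
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1); // a non-zero exit code fails the step
});
```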
### `prepare_job`
The `prepare_job` command is called when a job is started. {% data variables.product.prodname_actions %} passes in any job or service containers the job has; the command is only called if the job includes at least one job or service container.
{% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `prepare_job` command (a minimal sketch follows this list):
- Prune anything from previous jobs, if needed.
- Create a network, if needed.
- Pull the job and service containers.
- Start the job container.
- Start the service containers.
- Write to the response file any information that {% data variables.product.prodname_actions %} will need:
- Required: State whether the container is an Alpine Linux container (using the `isAlpine` boolean).
- Optional: Any context fields you want to set on the job context, otherwise they will be unavailable for users to use. For more information, see "[`job` context](/actions/learn-github-actions/contexts#job-context)."
- Return `0` when the health checks have succeeded and the job/service containers are started.
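As a rough illustration of these tasks, a Docker-based `prepare_job` handler might look like the following. The `docker()` helper and the network naming are assumptions, and volume mounts, port mappings, and service containers are omitted for brevity; the `state` and `context` shapes follow the example output below.
```javascript
// Hypothetical Docker-based prepare_job handler (volume mounts, port
// mappings, and service containers omitted for brevity).
const { execFileSync } = require('child_process');

function docker(...args) {
  return execFileSync('docker', args, { encoding: 'utf8' }).trim();
}

async function prepareJob(args) {
  // Create a per-job network and start the job container on it.
  const network = `job_network_${process.pid}`;
  docker('network', 'create', network);
  docker('pull', args.jobContainer.image);
  const jobContainer = docker(
    'create', '--network', network, args.jobContainer.image,
    'tail', '-f', '/dev/null' // keep the container alive for later steps
  );
  docker('start', jobContainer);

  // "state" is passed back to every later hook call; "context" fields
  // are surfaced on the workflow's job context.
  return {
    state: { network, jobContainer, serviceContainers: {} },
    context: {
      container: { id: jobContainer, network },
      services: {},
      isAlpine: false, // required: whether the job container runs Alpine Linux
    },
  };
}
```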
#### Arguments
- `jobContainer`: **Optional**. An object containing information about the specified job container.
- `image`: **Required**. A string containing the Docker image.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. A map of key-value environment variables to set.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, using the same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key-value hash of _source:target_ ports to map into the container.
- `services`: **Optional**. An array of service containers to spin up.
- `contextName`: **Required**. The name of the service in the Job context.
- `image`: **Required**. A string containing the Docker image.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. A map of key-value environment variables to set.
- `userMountVolumes`: **Optional**. An array of mounts to mount into the container, using the same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for the private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key-value hash of _source:target_ ports to map into the container.
#### Example input
```json{:copy}
{
  "command": "prepare_job",
  "responseFile": "/users/octocat/runner/_work/{guid}.json",
  "state": {},
  "args": {
    "jobContainer": {
      "image": "node:14.16",
      "workingDirectory": "/__w/octocat-test2/octocat-test2",
      "createOptions": "--cpus 1",
      "environmentVariables": {
        "NODE_ENV": "development"
      },
      "userMountVolumes": [
        {
          "sourceVolumePath": "my_docker_volume",
          "targetVolumePath": "/volume_mount",
          "readOnly": false
        }
      ],
      "systemMountVolumes": [
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
          "targetVolumePath": "/__w",
          "readOnly": false
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
          "targetVolumePath": "/__e",
          "readOnly": true
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
          "targetVolumePath": "/__w/_temp",
          "readOnly": false
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
          "targetVolumePath": "/__w/_actions",
          "readOnly": false
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
          "targetVolumePath": "/__w/_tool",
          "readOnly": false
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
          "targetVolumePath": "/github/home",
          "readOnly": false
        },
        {
          "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
          "targetVolumePath": "/github/workflow",
          "readOnly": false
        }
      ],
      "registry": {
        "username": "octocat",
        "password": "examplePassword",
        "serverUrl": "https://index.docker.io/v1"
      },
      "portMappings": { "80": "801" }
    },
    "services": [
      {
        "contextName": "redis",
        "image": "redis",
        "createOptions": "--cpus 1",
        "environmentVariables": {},
        "userMountVolumes": [],
        "portMappings": { "80": "801" },
        "registry": {
          "username": "octocat",
          "password": "examplePassword",
          "serverUrl": "https://index.docker.io/v1"
        }
      }
    ]
  }
}
```
#### Example output
This example output is the contents of the `responseFile` defined in the input above.
```json{:copy}
{
  "state": {
    "network": "example_network_53269bd575972817b43f7733536b200c",
    "jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
    "serviceContainers": {
      "redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
    }
  },
  "context": {
    "container": {
      "id": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
      "network": "example_network_53269bd575972817b43f7733536b200c"
    },
    "services": {
      "redis": {
        "id": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105",
        "ports": {
          "8080": "8080"
        },
        "network": "example_network_53269bd575972817b43f7733536b200c"
      }
    },
    "isAlpine": true
  }
}
```
### `cleanup_job`
The `cleanup_job` command is called at the end of a job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `cleanup_job` command:
- Stop any running service or job containers (or the equivalent pod).
- Stop the network (if one exists).
- Delete any job or service containers (or the equivalent pod).
- Delete the network (if one exists).
- Clean up anything else that was created for the job.
#### Arguments
No arguments are provided for `cleanup_job`.
#### Example input
```json{:copy}
{
  "command": "cleanup_job",
  "responseFile": null,
  "state": {
    "network": "example_network_53269bd575972817b43f7733536b200c",
    "jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
    "serviceContainers": {
      "redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
    }
  },
  "args": {}
}
```
#### Example output
No output is expected for `cleanup_job`.
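As a rough illustration, a Docker-based `cleanup_job` handler might tear down everything recorded in `state` by `prepare_job`. The best-effort error handling is an assumption; the field names follow the example input above.
```javascript
// Hypothetical cleanup_job handler: remove the containers and the network
// recorded in "state" by prepare_job. Failures are ignored so that cleanup
// stays best-effort.
const { execFileSync } = require('child_process');

function docker(...args) {
  try {
    execFileSync('docker', args, { encoding: 'utf8' });
  } catch {
    // Already gone, or never created: nothing more to do.
  }
}

async function cleanupJob(args, state) {
  const containers = [
    state.jobContainer,
    ...Object.values(state.serviceContainers || {}),
  ].filter(Boolean);

  for (const id of containers) {
    docker('rm', '--force', id); // --force stops and deletes in one step
  }
  if (state.network) {
    docker('network', 'rm', state.network);
  }
  return null; // no output is expected for cleanup_job
}
```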
### `run_container_step`
The `run_container_step` command is called once for each container action in your job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `run_container_step` command:
- Pull or build the required container (or fail if you cannot).
- Run the container action and return the exit code of the container.
- Stream any step logs output to stdout and stderr.
- Clean up the container after it executes.
#### Arguments
- `image`: **Optional**. A string containing the Docker image. If no image is provided, a Dockerfile must be provided instead.
- `dockerfile`: **Optional**. A string containing the path to the Dockerfile. If no Dockerfile is provided, an image must be provided instead.
- `entryPointArgs`: **Optional**. A list containing the entry point arguments.
- `entryPoint`: **Optional**. The container entry point to use if the default image entry point should be overridden.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. A map of key-value environment variables to set.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, using the same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key-value hash of the _source:target_ ports to map into the container.
#### Example input for image
If you're using a Docker image, you can specify the image name in the `"image":` parameter.
```json{:copy}
{
  "command": "run_container_step",
  "responseFile": null,
  "state": {
    "network": "example_network_53269bd575972817b43f7733536b200c",
    "jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
    "serviceContainers": {
      "redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
    }
  },
  "args": {
    "image": "node:14.16",
    "dockerfile": null,
    "entryPointArgs": ["-f", "/dev/null"],
    "entryPoint": "tail",
    "workingDirectory": "/__w/octocat-test2/octocat-test2",
    "createOptions": "--cpus 1",
    "environmentVariables": {
      "NODE_ENV": "development"
    },
    "prependPath": ["/foo/bar", "bar/foo"],
    "userMountVolumes": [
      {
        "sourceVolumePath": "my_docker_volume",
        "targetVolumePath": "/volume_mount",
        "readOnly": false
      }
    ],
    "systemMountVolumes": [
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
        "targetVolumePath": "/__w",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
        "targetVolumePath": "/__e",
        "readOnly": true
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
        "targetVolumePath": "/__w/_temp",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
        "targetVolumePath": "/__w/_actions",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
        "targetVolumePath": "/__w/_tool",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
        "targetVolumePath": "/github/home",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
        "targetVolumePath": "/github/workflow",
        "readOnly": false
      }
    ],
    "registry": null,
    "portMappings": { "80": "801" }
  }
}
```
#### Example input for Dockerfile
If your container is defined by a Dockerfile, this example demonstrates how to specify the path to a `Dockerfile` in your input, using the `"dockerfile":` parameter.
```json{:copy}
{
  "command": "run_container_step",
  "responseFile": null,
  "state": {
    "network": "example_network_53269bd575972817b43f7733536b200c",
    "jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
    "services": {
      "redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
    }
  },
  "args": {
    "image": null,
    "dockerfile": "/__w/_actions/foo/dockerfile",
    "entryPointArgs": ["hello world"],
    "entryPoint": "echo",
    "workingDirectory": "/__w/octocat-test2/octocat-test2",
    "createOptions": "--cpus 1",
    "environmentVariables": {
      "NODE_ENV": "development"
    },
    "prependPath": ["/foo/bar", "bar/foo"],
    "userMountVolumes": [
      {
        "sourceVolumePath": "my_docker_volume",
        "targetVolumePath": "/volume_mount",
        "readOnly": false
      }
    ],
    "systemMountVolumes": [
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
        "targetVolumePath": "/__w",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
        "targetVolumePath": "/__e",
        "readOnly": true
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
        "targetVolumePath": "/__w/_temp",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
        "targetVolumePath": "/__w/_actions",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
        "targetVolumePath": "/__w/_tool",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
        "targetVolumePath": "/github/home",
        "readOnly": false
      },
      {
        "sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
        "targetVolumePath": "/github/workflow",
        "readOnly": false
      }
    ],
    "registry": null,
    "portMappings": { "80": "801" }
  }
}
```
#### Example output
No output is expected for `run_container_step`.
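As a rough illustration, a Docker-based `run_container_step` handler might build the image when only a Dockerfile is given, run the container action, and propagate its exit code. The image tag, the use of `--rm` for cleanup, and the omission of volume mounts and `createOptions` are simplifications.
```javascript
// Hypothetical run_container_step handler: build the image if needed, run
// the container action on the job network, stream logs, surface the exit code.
const { execFileSync, spawnSync } = require('child_process');
const path = require('path');

async function runContainerStep(args, state) {
  let image = args.image;
  if (!image) {
    // No image given, so build one from the provided Dockerfile.
    image = `container_step_${process.pid}`;
    execFileSync('docker', [
      'build', '-t', image, '-f', args.dockerfile, path.dirname(args.dockerfile),
    ]);
  }

  const runArgs = ['run', '--rm', // --rm cleans the container up afterwards
    '--network', state.network, '--workdir', args.workingDirectory];
  for (const [name, value] of Object.entries(args.environmentVariables || {})) {
    runArgs.push('--env', `${name}=${value}`);
  }
  if (args.entryPoint) {
    runArgs.push('--entrypoint', args.entryPoint);
  }
  runArgs.push(image, ...(args.entryPointArgs || []));

  // stdio: 'inherit' streams the step's logs straight to stdout/stderr.
  const result = spawnSync('docker', runArgs, { stdio: 'inherit' });
  if (result.status !== 0) {
    process.exit(result.status ?? 1); // propagate the container's exit code
  }
  return null; // no output is expected for run_container_step
}
```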
### `run_script_step`
{% data variables.product.prodname_actions %} assumes that you will do the following tasks:
- Invoke the provided script inside the job container and return the exit code.
- Stream any step log output to stdout and stderr.
#### Arguments
- `entryPointArgs`: **Optional**. A list containing the entry point arguments.
- `entryPoint`: **Optional**. The container entry point to use if the default image entry point should be overridden.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `environmentVariables`: **Optional**. A map of key-value environment variables to set.
#### Example input
```json{:copy}
{
  "command": "run_script_step",
  "responseFile": null,
  "state": {
    "network": "example_network_53269bd575972817b43f7733536b200c",
    "jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
    "serviceContainers": {
      "redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
    }
  },
  "args": {
    "entryPointArgs": ["-e", "/runner/temp/example.sh"],
    "entryPoint": "bash",
    "environmentVariables": {
      "NODE_ENV": "development"
    },
    "prependPath": ["/foo/bar", "bar/foo"],
    "workingDirectory": "/__w/octocat-test2/octocat-test2"
  }
}
```
#### Example output
No output is expected for `run_script_step`.
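As a rough illustration, a Docker-based `run_script_step` handler might `docker exec` into the job container recorded in `state`. The `PATH` handling below is an assumption; a real hook would read the container's existing `PATH` before prepending to it.
```javascript
// Hypothetical run_script_step handler: execute the step inside the running
// job container and propagate its exit code.
const { spawnSync } = require('child_process');

async function runScriptStep(args, state) {
  const execArgs = ['exec', '--workdir', args.workingDirectory];
  for (const [name, value] of Object.entries(args.environmentVariables || {})) {
    execArgs.push('--env', `${name}=${value}`);
  }
  if ((args.prependPath || []).length > 0) {
    // Assumption: prepend the extra paths to a default PATH rather than
    // the image's own PATH.
    const basePath = '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin';
    execArgs.push('--env', `PATH=${args.prependPath.join(':')}:${basePath}`);
  }
  execArgs.push(state.jobContainer, args.entryPoint, ...(args.entryPointArgs || []));

  // Stream the step's logs, and fail the step if the script fails.
  const result = spawnSync('docker', execArgs, { stdio: 'inherit' });
  if (result.status !== 0) {
    process.exit(result.status ?? 1);
  }
  return null; // no output is expected for run_script_step
}
```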
## Generating the customization script
{% data variables.product.prodname_dotcom %} has created an example repository that demonstrates how to generate customization scripts for Docker and Kubernetes.
{% note %}
**Note:** The resulting scripts are available for testing purposes, and you will need to determine whether they are appropriate for your requirements.
{% endnote %}
1. Clone the [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) repository to your self-hosted runner.
1. The `examples/` directory contains some existing customization commands, each with its own JSON file. You can review these examples and use them as a starting point for your own customization commands.
- `prepare_job.json`
- `run_script_step.json`
- `run_container_step.json`
1. Build the npm packages. These commands generate the `index.js` files inside `packages/docker/dist` and `packages/k8s/dist`.
```shell
npm install && npm run bootstrap && npm run build-all
```
When the resulting `index.js` is triggered by {% data variables.product.prodname_actions %}, it will run the customization commands defined in the JSON files. To trigger the `index.js`, you will need to add its path to your `ACTIONS_RUNNER_CONTAINER_HOOK` environment variable, as described in the next section.
## Triggering the customization script
The custom script must be located on the runner, but should not be stored in the self-hosted runner application directory. The scripts are executed in the security context of the service account that's running the runner service.
{% note %}
**Note**: The triggered script is processed synchronously, so it will block job execution while running.
{% endnote %}
The script is automatically executed when the runner has the following environment variable containing an absolute path to the script:
- `ACTIONS_RUNNER_CONTAINER_HOOK`: The script defined in this environment variable is triggered when a job has been assigned to a runner, but before the job starts running.
To set this environment variable, you can either add it to the operating system, or add it to a file named `.env` within the self-hosted runner application directory. For example, the following `.env` entry will have the runner automatically run the script at `/Users/octocat/runner/index.js` before each container-based job runs:
```bash
ACTIONS_RUNNER_CONTAINER_HOOK=/Users/octocat/runner/index.js
```
If you want to ensure that your job always runs inside a container, and subsequently always applies your container customizations, you can set the `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` variable on the self-hosted runner to `true`. This will fail any job that does not specify a job container.
## Troubleshooting
### No timeout setting
There is currently no timeout setting available for the script executed by `ACTIONS_RUNNER_CONTAINER_HOOK`. You should therefore consider adding timeout handling within your script.
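For example, the hook script itself could impose a deadline before doing any work. A minimal sketch, assuming a ten-minute limit suits your jobs:
```javascript
// Hypothetical self-imposed timeout at the top of the hook script: fail the
// step if the hook has not finished within ten minutes.
const TIMEOUT_MS = 10 * 60 * 1000;

const timer = setTimeout(() => {
  console.error(`Container hook timed out after ${TIMEOUT_MS / 1000} seconds`);
  process.exit(1); // a non-zero exit code fails the job step
}, TIMEOUT_MS);

// unref() lets the process exit normally as soon as the hook's work is done,
// while the timer still fires if the work hangs.
timer.unref();
```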
### Reviewing the workflow run log
To confirm whether your scripts are executing, you can review the logs for that job. For more information on checking the logs, see "[Viewing logs to diagnose failures](/actions/monitoring-and-troubleshooting-workflows/using-workflow-run-logs#viewing-logs-to-diagnose-failures)."

View File

@@ -20,6 +20,7 @@ children:
- /adding-self-hosted-runners
- /autoscaling-with-self-hosted-runners
- /running-scripts-before-or-after-a-job
- /customizing-the-containers-used-by-jobs
- /configuring-the-self-hosted-runner-application-as-a-service
- /using-a-proxy-server-with-self-hosted-runners
- /using-labels-with-self-hosted-runners

View File

@@ -86,7 +86,7 @@ Si quieres permitir respuestas de correo electrónico para las notificaciones, d
### Crea un Paquete de soporte
If you cannot determine what is wrong from the displayed error message, you can download a [support bundle](/enterprise/admin/guides/enterprise-support/providing-data-to-github-support) containing the entire SMTP conversation between your mail server and {% data variables.product.prodname_ghe_server %}. Una vez que hayas descargado y extraído el paquete, verifica las entradas en *enterprise-manage-logs/unicorn.log* para toda la bitácora de conversaciones de SMTP y cualquier error relacionado.
Si no puedes determinar lo que está mal desde el mensaje de error mostrado, puedes descargar un [paquete de soporte](/enterprise/admin/guides/enterprise-support/providing-data-to-github-support) que contiene toda la conversación SMTP entre tu servidor de correo y {% data variables.product.prodname_ghe_server %}. Una vez que hayas descargado y extraído el paquete, verifica las entradas en *enterprise-manage-logs/unicorn.log* para toda la bitácora de conversaciones de SMTP y cualquier error relacionado.
El registro unicornio debería mostrar una transacción similar a la siguiente:

View File

@@ -22,7 +22,7 @@ topics:
## Configurar el primer nodo
1. Conéctate al nodo que se designará como el primario de MySQL en la `cluster.conf`. For more information, see "[About the cluster configuration file](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)."
1. Conéctate al nodo que se designará como el primario de MySQL en la `cluster.conf`. Para obtener más información, consulta la sección "[Acerca del archivo de configuración de clúster](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
2. En tu navegador web, visita `https://<ip address>:8443/setup/`.
{% data reusables.enterprise_installation.upload-a-license-file %}
{% data reusables.enterprise_installation.save-settings-in-web-based-mgmt-console %}
@@ -30,7 +30,7 @@ topics:
## Inicializar la agrupación
Para inicializar la agrupación, necesitas un archivo de configuración de agrupación (`cluster.conf`). For more information, see "[About the cluster configuration file](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
Para inicializar la agrupación, necesitas un archivo de configuración de agrupación (`cluster.conf`). Para obtener más información, consulta la sección "[Acerca del archivo de configuración de clúster](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
1. Desde el primer nodo que se configuró, ejecuta `ghe-cluster-config-init`. De esta manera, se inicializará la agrupación si existen nodos en el archivo de configuración de la agrupación que no están configurados.
2. Ejecuta `ghe-cluster-config-apply`. Esto validará el archivo `cluster.conf`, aplicará la configuración a cada archivo del nodo y traerá los servicios configurados en cada nodo.
@@ -39,7 +39,7 @@ Para comprobar el estado de una agrupación en funcionamiento, usa el comando `g
## Acerca del archivo de configuración de la agrupación
El archivo de configuración de la agrupación (`cluster.conf`) define los nodos en la agrupación, y los servicios que ejecutan. For more information, see "[About cluster nodes](/enterprise/admin/guides/clustering/about-cluster-nodes)."
El archivo de configuración de la agrupación (`cluster.conf`) define los nodos en la agrupación, y los servicios que ejecutan. Para obtener más información, consulta la sección "[Acerca de los nodos de clúster](/enterprise/admin/guides/clustering/about-cluster-nodes)".
Este ejemplo `cluster.conf` define una agrupación con cinco nodos.

View File

@@ -95,4 +95,4 @@ Para actualizar a la versión más reciente {% data variables.product.prodname_e
{% endnote %}
15. Cambia el tráfico de red de usuario desde la instancia anterior a la nueva instancia utilizando la asignación de DNS o la dirección IP.
16. Upgrade to the latest patch release of {% data variables.product.prodname_ghe_server %}. Para obtener más información, consulta "[Actualizar {% data variables.product.prodname_ghe_server %}](/enterprise/admin/guides/installation/upgrading-github-enterprise-server/)."
16. Mejora al lanzamiento de parche más reciente de {% data variables.product.prodname_ghe_server %}. Para obtener más información, consulta "[Actualizar {% data variables.product.prodname_ghe_server %}](/enterprise/admin/guides/installation/upgrading-github-enterprise-server/)."

View File

@@ -25,7 +25,7 @@ You must enter unique values from your SAML IdP when configuring SAML SSO for {%
{% ifversion ghec %}
The SP metadata for {% data variables.product.product_name %} is available for either organizations or enterprises with SAML SSO. {% data variables.product.product_name %} uses the `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST` binding.
The SP metadata for {% data variables.product.product_name %} is available for either organizations or enterprises with SAML SSO. {% data variables.product.product_name %} utiliza el enlace `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST`.
### Organizaciones
@@ -33,13 +33,13 @@ You can configure SAML SSO for an individual organization in your enterprise. Yo
The SP metadata for an organization on {% data variables.product.product_location %} is available at `https://github.com/orgs/ORGANIZATION/saml/metadata`, where **ORGANIZATION** is the name of your organization on {% data variables.product.product_location %}.
| Valor | Otros nombres | Descripción | Ejemplo |
|:--------------------------------------------------------- |:------------------------------------ |:---------------------------------------------------------------------------------------- |:--------------------------------------------------- |
| ID de Entidad de SP | SP URL, audience restriction | The top-level URL for your organization on {% data variables.product.product_location %} | `https://github.com/orgs/ORGANIZATION` |
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | Reply, recipient, or destination URL | URL a la que el IdP enviará respuestas de SAML | `https://github.com/orgs/ORGANIZATION/saml/consume` |
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `https://github.com/orgs/ORGANIZATION/saml/sso` |
| Valor | Otros nombres | Descripción | Ejemplo |
|:--------------------------------------------------------- |:---------------------------------------- |:---------------------------------------------------------------------------------------- |:--------------------------------------------------- |
| ID de Entidad de SP | URL de SP, restricción de la audiencia | The top-level URL for your organization on {% data variables.product.product_location %} | `https://github.com/orgs/ORGANIZATION` |
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `https://github.com/orgs/ORGANIZATION/saml/consume` |
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `https://github.com/orgs/ORGANIZATION/saml/sso` |
### Enterprises
### Empresas
The SP metadata for an enterprise on {% data variables.product.product_location %} is available at `https://github.com/enterprises/ENTERPRISE/saml/metadata`, where **ENTERPRISE** is the name of your enterprise on {% data variables.product.product_location %}.
@@ -53,11 +53,11 @@ The SP metadata for an enterprise on {% data variables.product.product_location
The SP metadata for {% data variables.product.product_location %} is available at `http(s)://HOSTNAME/saml/metadata`, where **HOSTNAME** is the hostname for your instance. {% data variables.product.product_name %} utiliza el enlace `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST`.
| Valor | Otros nombres | Descripción | Ejemplo |
|:--------------------------------------------------------- |:---------------------------------------- |:---------------------------------------------------------------- |:--------------------------------- |
| ID de Entidad de SP | URL de SP, restricción de la audiencia | Your top-level URL for {% data variables.product.product_name %} | `http(s)://HOSTNAME` |
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `http(s)://HOSTNAME/saml/consume` |
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `http(s)://HOSTNAME/sso` |
| Valor | Otros nombres | Descripción | Ejemplo |
|:--------------------------------------------------------- |:---------------------------------------- |:----------------------------------------------------------------------- |:--------------------------------- |
| ID de Entidad de SP | URL de SP, restricción de la audiencia | Tu URL de más alto nivel para {% data variables.product.product_name %} | `http(s)://HOSTNAME` |
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `http(s)://HOSTNAME/saml/consume` |
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `http(s)://HOSTNAME/sso` |
{% elsif ghae %}
@@ -80,9 +80,9 @@ The following SAML attributes are available for {% data variables.product.produc
| `ID del nombre` | Sí | Un identificador de usuario persistente. Se puede usar cualquier formato de identificador de nombre persistente. {% ifversion ghec %}If you use an enterprise with {% data variables.product.prodname_emus %}, {% endif %}{% data variables.product.product_name %} will normalize the `NameID` element to use as a username unless one of the alternative assertions is provided. Para obtener más información, consulta la sección "[Consideraciones de nombre de usuario para la autenticación externa](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication)". |
| `SessionNotOnOrAfter` | No | The date that {% data variables.product.product_name %} invalidates the associated session. After invalidation, the person must authenticate once again to access {% ifversion ghec or ghae %}your enterprise's resources{% elsif ghes %}{% data variables.product.product_location %}{% endif %}. For more information, see "[Session duration and timeout](#session-duration-and-timeout)." |
{%- ifversion ghes or ghae %}
| `administrator` | No | When the value is `true`, {% data variables.product.product_name %} will automatically promote the user to be a {% ifversion ghes %}site administrator{% elsif ghae %}enterprise owner{% endif %}. Any other value or a non-existent value will demote the account and remove administrative access. | | `username` | No | The username for {% data variables.product.product_location %}. |
| `administrator` | No | When the value is `true`, {% data variables.product.product_name %} will automatically promote the user to be a {% ifversion ghes %}site administrator{% elsif ghae %}enterprise owner{% endif %}. Setting this attribute to anything but `true` will result in demotion, as long as the value is not blank. Omitting this attribute or leaving the value blank will not change the role of the user. | | `username` | No | The username for {% data variables.product.product_location %}. |
{%- endif %}
| `full_name` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} full name of the user to display on the user's profile page. | | `emails` | No | Las direcciones de correo electrónico del usuario.{% ifversion ghes or ghae %} Puedes especificar más de una dirección.{% endif %}{% ifversion ghec or ghes %} Si sincronizas el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}, {% data variables.product.prodname_github_connect %} utiliza `emails` para identificar a los usuarios únicos entre los productos. Para obtener más información, consulta la sección "[Sincronizar el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}](/billing/managing-your-license-for-github-enterprise/syncing-license-usage-between-github-enterprise-server-and-github-enterprise-cloud)".{% endif %} | | `public_keys` | No | {% ifversion ghec %}Si configuras el SSO de SAML para una empresa y utilizas {% data variables.product.prodname_emus %}, las{% else %}Las{% endif %}llaves SSH públicas para el usuario. You can specify more than one key. | | `gpg_keys` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} GPG keys for the user. Puedes especificar más de una clave. |
| `full_name` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} full name of the user to display on the user's profile page. | | `emails` | No | Las direcciones de correo electrónico del usuario.{% ifversion ghes or ghae %} Puedes especificar más de una dirección.{% endif %}{% ifversion ghec or ghes %} Si sincronizas el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}, {% data variables.product.prodname_github_connect %} utiliza `emails` para identificar a los usuarios únicos entre los productos. Para obtener más información, consulta la sección "[Sincronizar el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}](/billing/managing-your-license-for-github-enterprise/syncing-license-usage-between-github-enterprise-server-and-github-enterprise-cloud)".{% endif %} | | `public_keys` | No | {% ifversion ghec %}Si configuras el SSO de SAML para una empresa y utilizas {% data variables.product.prodname_emus %}, las{% else %}Las{% endif %}llaves SSH públicas para el usuario. Puedes especificar más de una clave. | | `gpg_keys` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} GPG keys for the user. Puedes especificar más de una clave. |
Para especificar más de un valor para un atributo, utiliza elementos múltiples de `<saml2:AttributeValue>`.

View File

@@ -42,7 +42,6 @@ Si compraste {% data variables.product.prodname_enterprise %} mediante un Acuerd
### Facturación para las precompilaciones de los {% data variables.product.prodname_codespaces %}
{% data reusables.codespaces.prebuilds-beta-note %}
{% data reusables.codespaces.billing-for-prebuilds %}

View File

@@ -23,7 +23,7 @@ shortTitle: Filtrar alertas
## Acerca de filtrar el resumen de seguridad
Puedes utilizar filtros en el resumen de seguridad para reducir tu enfoque con base en una serie de factores, como el nivel de riesgo de la alerta, el tipo de esta y la habilitación de características. Los diversos filtros se encuentran disponibles dependiendo de la vista específica y de si estás analizando a nivel de organización, de equipo o de repositorio.
Puedes utilizar filtros en el resumen de seguridad para reducir tu enfoque con base en una serie de factores, como el nivel de riesgo de la alerta, el tipo de esta y la habilitación de características. Different filters are available depending on the specific view and whether your analysis is at the organization, team or repository level.
## Filtrar por repositorio

View File

@@ -50,11 +50,19 @@ The dependency review feature becomes available when you enable the dependency g
{% data reusables.dependency-review.dependency-review-action-beta-note %}
You can use the Dependency Review GitHub Action in your repository to enforce dependency reviews on your pull requests. The action scans for vulnerable versions of dependencies introduced by package version changes in pull requests, and warns you about the associated security vulnerabilities. This gives you better visibility of what's changing in a pull request, and helps prevent vulnerabilities being added to your repository. For more information, see [`dependency-review-action`](https://github.com/actions/dependency-review-action).
The action is available for all {% ifversion fpt or ghec %}public repositories, as well as private {% endif %}repositories that have {% data variables.product.prodname_GH_advanced_security %} enabled.
You can use the {% data variables.product.prodname_dependency_review_action %} in your repository to enforce dependency reviews on your pull requests. The action scans for vulnerable versions of dependencies introduced by package version changes in pull requests, and warns you about the associated security vulnerabilities. This gives you better visibility of what's changing in a pull request, and helps prevent vulnerabilities being added to your repository. For more information, see [`dependency-review-action`](https://github.com/actions/dependency-review-action).
![Dependency review action example](/assets/images/help/graphs/dependency-review-action.png)
The Dependency Review GitHub Action check will fail if it discovers any vulnerable package, but will only block a pull request from being merged if the repository owner has required the check to pass before merging. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging)."
By default, the {% data variables.product.prodname_dependency_review_action %} check will fail if it discovers any vulnerable packages. A failed check blocks a pull request from being merged when the repository owner requires the dependency review check to pass. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging)."
The action uses the Dependency Review REST API to get the diff of dependency changes between the base commit and head commit. You can use the Dependency Review API to get the diff of dependency changes, including vulnerability data, between any two commits on a repository. For more information, see "[Dependency review](/rest/reference/dependency-graph#dependency-review)."
{% ifversion dependency-review-action-configuration %}
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs. For example, you can specify the severity level that will make the action fail, or set an allow or deny list for licenses to scan. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
{% endif %}
{% endif %}

View File

@@ -47,3 +47,56 @@ La revisión de dependencias se encuentra disponible cuando se habilita la gráf
1. Under "Configure security and analysis features", check if the dependency graph is enabled.
1. If dependency graph is enabled, click **Enable** next to "{% data variables.product.prodname_GH_advanced_security %}" to enable {% data variables.product.prodname_advanced_security %}, including dependency review. The **Enable** button is disabled if your enterprise has no available licenses for {% data variables.product.prodname_advanced_security %}.{% ifversion ghes < 3.3 %} ![Screenshot of "Code security and analysis" features"](/assets/images/enterprise/3.2/repository/code-security-and-analysis-enable-ghas-3.2.png){% endif %}{% ifversion ghes > 3.2 %} ![Screenshot of "Code security and analysis" features"](/assets/images/enterprise/3.4/repository/code-security-and-analysis-enable-ghas-3.4.png){% endif %}
{% endif %}
{% ifversion dependency-review-action-configuration %}
## Configuring the {% data variables.product.prodname_dependency_review_action %}
{% data reusables.dependency-review.dependency-review-action-beta-note %}
{% data reusables.dependency-review.dependency-review-action-overview %}
The following configuration options are available.
| Option | Required | Usage |
| ------ | -------- | ----- |
| `fail-on-severity` | Optional | Defines the threshold for level of severity (`low`, `moderate`, `high`, `critical`).</br>The action will fail on any pull requests that introduce vulnerabilities of the specified severity level or higher. |
| `allow-licenses` | Optional | Contains a list of allowed licenses. You can find the possible values for this parameter in the [Licenses](/rest/licenses) page of the API documentation.</br>The action will fail on pull requests that introduce dependencies with licenses that do not match the list. |
| `deny-licenses` | Optional | Contains a list of prohibited licenses. You can find the possible values for this parameter in the [Licenses](/rest/licenses) page of the API documentation.</br>The action will fail on pull requests that introduce dependencies with licenses that match the list. |
{% tip %}
**Tip:** The `allow-licenses` and `deny-licenses` options are mutually exclusive.
{% endtip %}
This {% data variables.product.prodname_dependency_review_action %} example file illustrates how you can use these configuration options.
```yaml{:copy}
name: 'Dependency Review'
on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout Repository'
        uses: {% data reusables.actions.action-checkout %}
      - name: Dependency Review
        uses: actions/dependency-review-action@v2
        with:
          # Possible values: "critical", "high", "moderate", "low"
          fail-on-severity: critical
          # You can only include one of these two options: `allow-licenses` and `deny-licenses`
          # ([String]). Only allow these licenses (optional)
          # Possible values: Any `spdx_id` value(s) from https://docs.github.com/en/rest/licenses
          # allow-licenses: GPL-3.0, BSD-3-Clause, MIT
          # ([String]). Block the pull request on these licenses (optional)
          # Possible values: Any `spdx_id` value(s) from https://docs.github.com/en/rest/licenses
          # deny-licenses: LGPL-2.0, BSD-2-Clause
```
For further details about the configuration options, see [`dependency-review-action`](https://github.com/actions/dependency-review-action#readme).
{% endif %}

View File

@@ -1,7 +1,7 @@
---
title: Acerca de las precompilaciones de los codespaces
shortTitle: Acerca de las precompilaciones
intro: Las precompilaciones de los codespaces te ayudan a acelerar la creación de los codespaces nuevos.
intro: Codespaces prebuilds help to speed up the creation of new codespaces for large or complex repositories.
versions:
fpt: '*'
ghec: '*'
@@ -10,15 +10,13 @@ topics:
product: '{% data reusables.gated-features.codespaces %}'
---
{% data reusables.codespaces.prebuilds-beta-note %}
## Resumen
El precompilar tus codespaces te permite ser más productivo y acceder a tu codespace más rápidamente, sin importar el tamaño y complejidad de tu proyecto. Esto es porque cualquier código fuente, extensiones del editor, dependencias de proyecto, comandos y configuraciones ya se han descargado, instalado y aplicado antes de que crees un codespace para tu proyecto. Piensa en la precompilación como una plantilla "lista para utilizarse" para un codespace.
Prebuilding your codespaces allows you to be more productive and access your codespace faster, particularly if your repository is large or complex and new codespaces currently take more than 2 minutes to start. Esto es porque cualquier código fuente, extensiones del editor, dependencias de proyecto, comandos y configuraciones ya se han descargado, instalado y aplicado antes de que crees un codespace para tu proyecto. Piensa en la precompilación como una plantilla "lista para utilizarse" para un codespace.
Predeterminadamente, cada que subas cambios a tu repositorio, {% data variables.product.prodname_codespaces %} utiliza {% data variables.product.prodname_actions %} para actualizar tus precompilaciones automáticamente.
Cuando las precompilaciones están disponibles para una rama en particular de un repositorio y para tu región, verás la etiqueta "{% octicon "zap" aria-label="The zap icon" %} Prebuild ready" en la lista de opciones de tipo de máquina al crear un codespace. Para obtener más información, consulta la sección "[Crear un codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)".
Cuando las precompilaciones están disponibles para una rama en particular de un repositorio y para tu región, verás la etiqueta "{% octicon "zap" aria-label="The zap icon" %} Prebuild ready" en la lista de opciones de tipo de máquina al crear un codespace. If a prebuild is still being created, you will see the "{% octicon "history" aria-label="The history icon" %} Prebuild in progress" label. Para obtener más información, consulta la sección "[Crear un codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)".
![La caja de diálogo para elegir un tipo de máquina](/assets/images/help/codespaces/choose-custom-machine-type.png)

View File

@@ -13,8 +13,6 @@ product: '{% data reusables.gated-features.codespaces %}'
permissions: People with admin access to a repository can configure prebuilds for the repository.
---
{% data reusables.codespaces.prebuilds-beta-note %}
Puedes ajustar una configuración de precompilación para una rama específica de tu repositorio.
Habitualmente, a cualquier rama que se cree de una rama base con precompilación habilitada habitualmente también se le asignará una precompilación durante la creación del codespace. Esto es cierto si el contenedor dev en la rama es el mismo que en la rama base. Esto es porque la mayoría de las configuraciones de precompilación de las ramas con la misma configuración de contenedor dev son idénticas, así que los desarrolladores también pueden beneficiarse de tener tiempos más rápidos de creación de codespaces en dichas ramas. Para obtener más información, consulta la sección "[Introducción a los contenedores dev](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers)".
@@ -48,7 +46,15 @@ Antes de que configures las precompilaciones para tu proyecto, se debe cumplir c
{% endnote %}
1. Elige las regiones en las que quieres configurar una precompilación. Los desarrolladores deben ubicarse en una región que selecciones para poder crear codespaces desde una precompilación. Como alternativa, selecciona **Todas las regiones**.
1. Elige cómo quieres activar automáticamente las actualizaciones de la plantilla de precompilación.
* **Cada subida** (el ajuste predeterminado) - Con este ajuste, las configuraciones de precompilación se actualizarán en cada subida que se haga a la rama predeterminada. Esto garantizará que los codespaces que se generen de una plantilla de precompilación siempre contengan la configuración de codespace más reciente, incluyendo cualquier dependencia que se haya actualizado o agregado recientemente.
* **En el cambio de configuración** - Con este ajuste, las configuraciones de precompilación se actualizarán cada que lo hagan los archivos de configuración asociados para cada repositorio y rama en cuestión. Esto garantiza que los cambios a los archivos de configuración del contenedor dev para el repositorio se utilicen cuando se genera un codespace desde una plantilla de precompilación. El flujo de trabajo de acciones que actualiza la plantilla de precompilación se ejecutará con menor frecuencia, así que esta opción utilizará menos minutos de las acciones. Sin embargo, esta opción no garantiza que los codespaces siempre incluyan dependencias recientemente actualizadas o agregadas, así que estas podrían tener que agregarse o actualizarse manualmente después de que un codespace se haya creado.
* **Programado** - Con este ajuste, puedes hacer que tus configuraciones de precompilación se actualicen en un itinerario personalizado que tú defines. This can reduce consumption of Actions minutes, however, with this option, codespaces may be created that do not use the latest dev container configuration changes.
![Las opciones de activación de precompilación](/assets/images/help/codespaces/prebuilds-triggers.png)
1. Select **Reduce prebuild available to only specific regions** to limit access to your prebuilt image, then select which regions you want it available in. Developers can only create codespaces from a prebuild if they are located in a region you select. By default, your prebuilt image is available to all regions where codespaces is available and storage costs apply for each region.
![Las opciones de selección de región](/assets/images/help/codespaces/prebuilds-regions.png)
@@ -60,13 +66,17 @@ Antes de que configures las precompilaciones para tu proyecto, se debe cumplir c
{% endnote %}
1. Elige cómo quieres activar automáticamente las actualizaciones de la plantilla de precompilación.
1. Set the number of prebuild template versions to be retained. You can input any number between 1 and 5. The default number of saved versions is 2, which means that only the latest template version and the previous version are saved.
* **Cada subida** (el ajuste predeterminado) - Con este ajuste, las configuraciones de precompilación se actualizarán en cada subida que se haga a la rama predeterminada. Esto garantizará que los codespaces que se generen de una plantilla de precompilación siempre contengan la configuración de codespace más reciente, incluyendo cualquier dependencia que se haya actualizado o agregado recientemente.
* **En el cambio de configuración** - Con este ajuste, las configuraciones de precompilación se actualizarán cada que lo hagan los archivos de configuración asociados para cada repositorio y rama en cuestión. Esto garantiza que los cambios a los archivos de configuración del contenedor dev para el repositorio se utilicen cuando se genera un codespace desde una plantilla de precompilación. El flujo de trabajo de acciones que actualiza la plantilla de precompilación se ejecutará con menor frecuencia, así que esta opción utilizará menos minutos de las acciones. Sin embargo, esta opción no garantiza que los codespaces siempre incluyan dependencias recientemente actualizadas o agregadas, así que estas podrían tener que agregarse o actualizarse manualmente después de que un codespace se haya creado.
* **Programado** - Con este ajuste, puedes hacer que tus configuraciones de precompilación se actualicen en un itinerario personalizado que tú defines. Esto puede reducir el consumo de minutos de acciones y también la cantidad de tiempo durante la cual las precompilaciones no están disponibles porque se están actualizando. Sin embargo, con esta opción, se podrían crear codespaces que no utilicen los cambios de configuración más recientes al contenedor dev.
Depending on your prebuild trigger settings, your prebuild template could change with each push or on each dev container configuration change. Retaining older versions of prebuild templates enables you to create a prebuild from an older commit with a different dev container configuration than the current prebuild template. Since there is a storage cost associated with retaining prebuild template versions, you can choose the number of versions to be retained based on the needs of your team. For more information on billing, see "[About billing for {% data variables.product.prodname_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-codespaces#codespaces-pricing)."
![Las opciones de activación de precompilación](/assets/images/help/codespaces/prebuilds-triggers.png)
If you set the number of prebuild template versions to save to 1, {% data variables.product.prodname_codespaces %} will only save the latest version of the prebuild template and will delete the older version each time the template is updated. This means you will not get a prebuilt codespace if you go back to an older dev container configuration.
![The prebuild template history setting](/assets/images/help/codespaces/prebuilds-template-history-setting.png)
1. Add users or teams to notify when the prebuild workflow run fails for this configuration. You can begin typing a username, team name, or full name, then click the name once it appears to add them to the list. The users or teams you add will receive an email when prebuild failures occur, containing a link to the workflow run logs to help with further investigation.
![The prebuild failure notification setting](/assets/images/help/codespaces/prebuilds-failure-notification-setting.png)
1. Da clic en **Crear**.

View File

@@ -15,5 +15,4 @@ children:
- /managing-prebuilds
- /testing-dev-container-changes
---
{% data reusables.codespaces.prebuilds-beta-note %}

View File

@@ -12,8 +12,6 @@ product: '{% data reusables.gated-features.codespaces %}'
miniTocMaxHeadingLevel: 3
---
{% data reusables.codespaces.prebuilds-beta-note %}
## Verificar, cambiar y borrar tus configuraciones de precompilación
Las precompilaciones que configuras para un repositorio se crean y actualizan utilizando un flujo de trabajo de {% data variables.product.prodname_actions %} que admistra el servicio de {% data variables.product.prodname_codespaces %}.
View File
@@ -14,8 +14,6 @@ product: '{% data reusables.gated-features.codespaces %}'
permissions: People with write permissions to a repository can create or edit the dev container configuration for a branch.
---
{% data reusables.codespaces.prebuilds-beta-note %}
Any change you make to the dev container configuration for a prebuild-enabled branch will result in an update to the codespace configuration and the associated prebuild template. It's therefore important to test such changes in a codespace on a test branch before committing your changes to a branch of your repository that's in active use. This will ensure you're not introducing breaking changes for your team.
For more information, see "[Introduction to dev containers](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers)."
View File
@@ -12,8 +12,6 @@ product: '{% data reusables.gated-features.codespaces %}'
miniTocMaxHeadingLevel: 3
---
{% data reusables.codespaces.prebuilds-beta-note %}
For more information about {% data variables.product.prodname_codespaces %} prebuilds, see "[Prebuilding your codespaces](/codespaces/prebuilding-your-codespaces)."
## Checking whether a codespace was created from a prebuild
View File
@@ -159,7 +159,7 @@ Ya que los permisos a nivel de usuario se otorgan individualmente, puedes agrega
## User-to-server requests
While most of your API interaction should occur using your server-to-server installation access tokens, certain endpoints allow you to perform actions via the API using a user access token. Your app can make the following requests using [GraphQL v4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST v3](/rest) endpoints.
While most of your API interaction should occur using your server-to-server installation access tokens, certain endpoints allow you to perform actions via the API using a user access token. Your app can make the following requests using [GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST](/rest) endpoints.
### Supported endpoints
View File
@@ -53,7 +53,7 @@ Te recomendamos revisar la lista de terminales de la API que necesitas tan pront
### Designing to stay within API rate limits
GitHub Apps use [sliding rules for rate limits](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), which can increase based on the number of repositories and users in the organization. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidated requests if it uses the [GraphQL API v4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
GitHub Apps use [sliding rules for rate limits](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), which can increase based on the number of repositories and users in the organization. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidate requests by using the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
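For example, a conditional request resends a previously received `ETag` value; if the content hasn't changed, the API responds with `304 Not Modified`, and such responses don't count against your rate limit. A minimal sketch with `curl` (the token and `ETag` values are placeholders):
```shell
# First request: note the etag header in the response.
curl -I -H "Authorization: token YOUR-TOKEN" https://api.github.com/repos/github/docs

# Repeat the request conditionally: a 304 response means nothing has changed.
curl -I -H "Authorization: token YOUR-TOKEN" \
  -H 'If-None-Match: "644b5b0155e6404a9cc4bd9d8b1ae730"' \
  https://api.github.com/repos/github/docs
```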
### Registering a new GitHub App
View File
@@ -14,7 +14,7 @@ topics:
- API
---
There are two stable versions of the GitHub API: the [REST API](/rest) and the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). When using the REST API, we encourage you to [request v3 via the `Accept` header](/v3/media/#request-specific-version). For more information on using the GraphQL API, see the [v4 docs]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
There are two stable versions of the GitHub API: the [REST API](/rest) and the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
## Deprecated versions
View File
@@ -1343,7 +1343,7 @@ El conjunto de datos de asesoría de seguridad también impulsa las {% data vari
## security_and_analysis
Activity related to enabling or disabling code security and analysis features for a repository or organization.
Activity related to enabling or disabling code security and analysis features for a repository or organization.
### Availability
@@ -1353,9 +1353,9 @@ Activity related to enabling or disabling code security and analysis features fo
### Webhook payload object
| Key | Type | Description |
| --------- | -------- | ---------------------------------------------------------------------- |
| `changes` | `object` | The changes that were made to the code security and analysis features. |
| Key | Type | Description |
| --------- | -------- | ------------------------------------------------------------------------------------ |
| `changes` | `object` | The changes that were made to the code security and analysis features. |
{% data reusables.webhooks.repo_desc %}
{% data reusables.webhooks.org_desc %}
{% data reusables.webhooks.app_desc %}
View File
@@ -40,6 +40,16 @@ Marcar un repositorio como favorito es un proceso simple de dos pasos.
1. Optionally, to unstar a repository, click **Unstar**. ![Unstarring a repository](/assets/images/help/stars/unstarring-a-repository.png)
{% ifversion fpt or ghec %}
## Viewing who has starred a repository
You can view everyone who has starred a public repository or a private repository you have access to.
To view everyone who has starred a repository, add `/stargazers` to the end of the URL of a repository. For example, to view stargazers for the github/docs repository, visit https://github.com/github/docs/stargazers.
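The same information is also available programmatically. For example, a sketch using `curl` against the REST stargazers endpoint, with github/docs as the sample repository:
```shell
# List the first page of users who have starred the github/docs repository.
curl -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/github/docs/stargazers
```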
## Organizing starred repositories with lists
{% note %}
View File
@@ -81,7 +81,7 @@ gh repo fork <em>repository</em> --clone=true
## Making and pushing changes
Go ahead and make a few changes to the project using your favorite text editor, like [Atom](https://atom.io). You could, for example, change the text in `index.html` to add your GitHub username.
Go ahead and make a few changes to the project using your favorite text editor, like [Visual Studio Code](https://code.visualstudio.com). You could, for example, change the text in `index.html` to add your GitHub username.
When you're ready to submit your changes, stage and commit them. `git add .` tells Git that you want to include all of your changes in the next commit. `git commit` takes a snapshot of those changes.
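For example, a typical sequence might look like this (a sketch; the commit message is only an illustration):
```shell
# Stage all changed files, snapshot them in a commit, then push to your fork.
git add .
git commit -m "Add my GitHub username to index.html"
git push
```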
View File
@@ -12,7 +12,7 @@ topics:
- API
---
You can access most objects in GitHub (users, issues, pull requests, and so on) using either the REST API or the GraphQL API. You can find the **global node ID** of many objects from within the REST API and use these IDs in your GraphQL operations. For more information, see "[Preview GraphQL API v4 Node IDs in REST API v3 resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
You can access most objects in GitHub (users, issues, pull requests, and so on) using either the REST API or the GraphQL API. You can find the **global node ID** of many objects from within the REST API and use these IDs in your GraphQL operations. For more information, see "[Preview GraphQL API Node IDs in REST API resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
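For example, you can fetch an object's `node_id` with the REST API and reuse it in a GraphQL `node` lookup. A sketch with `curl`, using the well-known octocat/Hello-World example repository (the token is a placeholder):
```shell
# The REST response for a repository includes a node_id field.
curl https://api.github.com/repos/octocat/Hello-World

# Use that node_id in a GraphQL node query.
curl -X POST https://api.github.com/graphql \
  -H "Authorization: bearer YOUR-TOKEN" \
  -d '{ "query": "query { node(id: \"MDEwOlJlcG9zaXRvcnkxMjk2MjY5\") { ... on Repository { name } } }" }'
```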
{% note %}
View File
@@ -14,7 +14,7 @@ topics:
## Node limit
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API v4 [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
* Clients must supply a `first` or `last` argument on any [connection](/graphql/guides/introduction-to-graphql#connection).
* Values of `first` and `last` must be within 1-100.
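For example, the following call passes validation because the `repositories` connection carries an explicit `first` argument within 1-100 (a sketch using `curl`; the token is a placeholder):
```shell
# A schema-valid query: the connection supplies an explicit first argument.
curl -X POST https://api.github.com/graphql \
  -H "Authorization: bearer YOUR-TOKEN" \
  -d '{ "query": "query { viewer { repositories(first: 50) { nodes { name } } } }" }'
```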
@@ -130,30 +130,30 @@ Estos dos ejemplos te muestran cómo calcular los nodos totales en una llamada.
## Rate limit
The GraphQL API v4 limit is different from the REST API v3's [rate limits](/rest/overview/resources-in-the-rest-api#rate-limiting).
The GraphQL API limit is different from the REST API's [rate limits](/rest/overview/resources-in-the-rest-api#rate-limiting).
Why are the API rate limits different? With [GraphQL](/graphql), one GraphQL call can replace [multiple REST calls](/graphql/guides/migrating-from-rest-to-graphql). A single complex GraphQL call could be the equivalent of thousands of REST requests. While a single GraphQL call would fall well below the REST API rate limit, the query might be just as expensive for GitHub's servers to compute.
To accurately represent the server cost of a query, the GraphQL API v4 calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the `first` and `last` arguments on a parent connection and its children.
To accurately represent the server cost of a query, the GraphQL API calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the `first` and `last` arguments on a parent connection and its children.
* The formula uses the `first` and `last` arguments on a parent connection and its children to pre-calculate the potential load on GitHub's systems, such as MySQL, ElasticSearch, and Git.
* Each new connection has its own point value. Points are combined with other points from the call into an overall rate limit score.
The GraphQL API v4 rate limit is **5,000 points per hour**.
The GraphQL API rate limit is **5,000 points per hour**.
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API v4 and the REST API v3 use different rate limits.
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API and REST API use different rate limits.
{% note %}
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API v4.
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API.
{% endnote %}
### Returning a call's rate limit status
With the REST API v3, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
With the REST API, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
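For example, a sketch using `curl` (the header values shown are illustrative):
```shell
# Any authenticated request returns your current rate limit status in headers.
curl -I -H "Authorization: token YOUR-TOKEN" https://api.github.com/users/octocat
# x-ratelimit-limit: 5000
# x-ratelimit-remaining: 4999
# x-ratelimit-reset: 1655311871
```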
With the GraphQL API v4, you can check the rate limit status by querying fields on the `rateLimit` object:
With the GraphQL API, you can check the rate limit status by querying fields on the `rateLimit` object:
```graphql
query {
@@ -186,7 +186,7 @@ Al consultar el objeto `rateLimit` se devuelve el puntaje de una llamada, pero e
{% note %}
**Note**: The minimum cost of a call to the GraphQL API v4 is **1**, representing a single request.
**Note**: The minimum cost of a call to the GraphQL API is **1**, representing a single request.
{% endnote %}
View File
@@ -40,16 +40,16 @@ You can filter files in a pull request by file extension type, such as `.html` o
{% data reusables.repositories.sidebar-pr %}
1. In the list of pull requests, click the pull request you'd like to filter.
{% data reusables.repositories.changed-files %}
1. If the file tree is hidden, click **Show file tree** to display the file tree.
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
{% note %}
**Note**: The file tree will not display if your screen width is too narrow or if the pull request only includes one file.
{% endnote %}
1. Click on a file in the file tree to view the corresponding file diff.
![Pull request file tree](/assets/images/help/pull_requests/pr-file-tree.png)
![Screenshot of filter changed files search box and file tree emphasized](/assets/images/help/repository/file-tree.png)
1. To filter by file path, enter part or all of the file path in the **Filter changed files** search box. Alternatively, use the file filter dropdown. For more information, see "[Using the file filter dropdown](#using-the-file-filter-dropdown)."
{% endif %}
View File
@@ -35,9 +35,14 @@ shortTitle: Review dependency changes
Dependency review allows you to "shift left". You can use the provided predictive information to catch vulnerable dependencies before they hit production. For more information, see "[About dependency review](/code-security/supply-chain-security/about-dependency-review)."
{% ifversion fpt or ghec or ghes > 3.5 or ghae-issue-6396 %}
You can use the Dependency Review GitHub Action to help enforce dependency reviews on pull requests in your repository. For more information, see "[Dependency review enforcement](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement)."
You can use the {% data variables.product.prodname_dependency_review_action %} to help enforce dependency reviews on pull requests in your repository. {% data reusables.dependency-review.dependency-review-action-overview %}
{% ifversion dependency-review-action-configuration %}
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs by specifying the type of dependency vulnerability you wish to catch. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
{% endif %}
{% endif %}
## Reviewing dependencies in a pull request
{% data reusables.repositories.sidebar-pr %}
View File
@@ -77,14 +77,8 @@ Before you can sync your fork with an upstream repository, you must [configure a
> 2 files changed, 7 insertions(+), 9 deletions(-)
> delete mode 100644 README
> create mode 100644 README.md
```
If your local branch didn't have any unique commits, Git will instead perform a "fast-forward":
```shell
$ git merge upstream/main
> Updating 34e91da..16c56ad
> Fast-forward
> README.md | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
```
{% tip %}
View File
@@ -41,5 +41,28 @@ Once the commit is on the default branch, any tags that contain the commit will
![Screenshot of commit with commit tag emphasized](/assets/images/help/commits/commit-tag-label.png)
{% ifversion commit-tree-view %}
## Using the file tree
You can use the file tree to navigate between files in a commit.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.navigate-to-commit-page %}
1. Navigate to the commit by clicking the commit message link.
![Screenshot of commit with commit message link emphasized](/assets/images/help/commits/commit-message-link.png)
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
{% note %}
**Note**: The file tree will not display if your screen width is too narrow or if the commit only includes one file.
{% endnote %}
![Screenshot of filter changed files search box and file tree emphasized](/assets/images/help/repository/file-tree.png)
1. To filter by file path, enter part or all of the file path in the **Filter changed files** search box.
{% endif %}
## Further reading
- "[Committing and reviewing changes to your project](/desktop/contributing-to-projects/committing-and-reviewing-changes-to-your-project#about-commits)" on {% data variables.product.prodname_desktop %}
View File
@@ -24,12 +24,6 @@ topics:
</div>
</div>
{% warning %}
**Warning:** As of the second half of October 2021, the official Octokit libraries are no longer being maintained. For more information, see [this discussion in the octokit.js repository](https://github.com/octokit/octokit.js/discussions/620).
{% endwarning %}
# Third-party libraries
### Clojure
View File
@@ -185,7 +185,7 @@ _Buscar_
- [`PUT /repos/:owner/:repo/topics`](/rest/reference/repos#replace-all-repository-topics) (:write)
- [`POST /repos/:owner/:repo/transfer`](/rest/reference/repos#transfer-a-repository) (:write)
{% ifversion fpt or ghec -%}
- [`GET /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:write)
- [`GET /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:read)
{% endif -%}
{% ifversion fpt or ghec -%}
- [`PUT /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:write)
View File
@@ -24,7 +24,7 @@ Predeterminadamente, todas las solicitudes a `{% data variables.product.api_url_
{% ifversion fpt or ghec %}
For more information about GitHub's GraphQL API, see the [v4 documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). For more information about migrating to GraphQL, see "[Migrating from REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)."
For information about GitHub's GraphQL API, see the [documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). For more information about migrating to GraphQL, see "[Migrating from REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)."
{% endif %}
View File
@@ -0,0 +1,8 @@
---
#Issue 6662
#Commit file tree view
versions:
fpt: '*'
ghec: '*'
ghes: '>=3.6'
ghae: 'issue-6662'
View File
@@ -0,0 +1,7 @@
---
#Reference: #7070
#Actions Runner Container Hooks
versions:
fpt: '*'
ghec: '*'
ghae: 'issue-7070'
View File
@@ -0,0 +1,7 @@
---
#Reference: Issue #7061 Configuring the dependency review action - [Public Beta]
versions:
fpt: '*'
ghec: '*'
ghes: '>3.5'
ghae: 'issue-7061'
View File
@@ -100,8 +100,8 @@ upcoming_changes:
owner: cheshire137
-
location: DependencyGraphDependency.packageLabel
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
reason: '`packageLabel` will be removed.'
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
reason: '`packageLabel` will be removed.'
date: '2022-10-01T00:00:00+00:00'
criticality: breaking
owner: github/dependency_graph
View File
@@ -100,8 +100,8 @@ upcoming_changes:
owner: cheshire137
-
location: DependencyGraphDependency.packageLabel
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
reason: '`packageLabel` will be removed.'
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
reason: '`packageLabel` will be removed.'
date: '2022-10-01T00:00:00+00:00'
criticality: breaking
owner: github/dependency_graph
View File
@@ -109,5 +109,8 @@ sections:
- heading: 'Deprecation of XenServer Hypervisor support'
notes:
- 'Starting with {% data variables.product.prodname_ghe_server %} 3.1, we will begin phasing out support for Xen Hypervisor. The complete deprecation is scheduled for {% data variables.product.prodname_ghe_server %} 3.3, following the standard one-year deprecation window.'
- heading: 'Change to the format of authentication tokens affects GitHub Connect'
notes:
- "GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]\n"
backups:
- '{% data variables.product.prodname_ghe_server %} 3.1 requires at least [GitHub Enterprise Backup Utilities 3.1.0](https://github.com/github/backup-utils) for [Backups and Disaster Recovery](/enterprise-server@3.1/admin/configuration/configuring-backups-on-your-appliance).'
View File
@@ -0,0 +1,21 @@
---
date: '2022-06-09'
sections:
security_fixes:
- Packages have been updated to the latest security versions.
bugs:
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
changes:
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repl-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
known_issues:
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues.
- On a freshly set up {% data variables.product.prodname_ghe_server %} instance without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with {% data variables.product.prodname_github_connect %}, issues in private and internal repositories are not included in {% data variables.product.prodname_dotcom_the_website %} search results.
- If {% data variables.product.prodname_actions %} is enabled for {% data variables.product.prodname_ghe_server %}, teardown of a replica node with `ghe-repl-teardown` will succeed, but may return `ERROR:Running migrations`.
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
View File
@@ -306,6 +306,11 @@ sections:
Two legacy GitHub Apps-related webhook events have been removed: `integration_installation` and `integration_installation_repositories`. You should instead be listening to the `installation` and `installation_repositories` events.
- |
The following REST API endpoint has been removed: `POST /installations/{installation_id}/access_tokens`. You should instead be using the namespaced equivalent `POST /app/installations/{installation_id}/access_tokens`.
- heading: Change to the format of authentication tokens affects GitHub Connect
notes:
# https://github.com/github/releases/issues/1235
- |
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
backups:
- '{% data variables.product.prodname_ghe_server %} 3.2 requires at least [GitHub Enterprise Backup Utilities 3.2.0](https://github.com/github/backup-utils) for [Backups and Disaster Recovery](/enterprise-server@3.2/admin/configuration/configuring-backups-on-your-appliance).'
View File
@@ -0,0 +1,23 @@
---
date: '2022-06-09'
sections:
security_fixes:
- Packages have been updated to the latest security versions.
bugs:
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
changes:
- Optimised the inclusion of metrics when generating a cluster support bundle.
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repl-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
known_issues:
- On a freshly set up {% data variables.product.prodname_ghe_server %} instance without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with {% data variables.product.prodname_github_connect %}, issues in private and internal repositories are not included in {% data variables.product.prodname_dotcom_the_website %} search results.
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues.
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
View File
@@ -113,5 +113,8 @@ sections:
- heading: 'Deprecation of custom bit-cache extensions'
notes:
- "Starting with {% data variables.product.prodname_ghe_server %} 3.1, support for {% data variables.product.company_short %}'s proprietary bit-cache extensions began to be phased out. These extensions are now deprecated in {% data variables.product.prodname_ghe_server %} 3.3.\n\nAny repository that was already present and active on {% data variables.product.product_location %} running version 3.1 or 3.2 has already been automatically updated.\n\nRepositories that were not present and active before upgrading to {% data variables.product.prodname_ghe_server %} 3.3 may not perform optimally until a repository maintenance task is run and has successfully completed.\n\nTo manually start a repository maintenance task, browse to `https://<hostname>/stafftools/repositories/<owner>/<repository>/network` for each affected repository and click the **Schedule** button.\n"
- heading: 'Change to the format of authentication tokens affects GitHub Connect'
notes:
- "GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]\n"
backups:
- '{% data variables.product.prodname_ghe_server %} 3.3 requires at least [GitHub Enterprise Backup Utilities 3.3.0](https://github.com/github/backup-utils) for [Backups and Disaster Recovery](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance).'
View File
@@ -0,0 +1,26 @@
---
date: '2022-06-09'
sections:
security_fixes:
- Packages have been updated to the latest security versions.
bugs:
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
changes:
- Optimised the inclusion of metrics when generating a cluster support bundle.
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repl-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
- When using `ghe-migrator` or exporting from {% data variables.product.prodname_dotcom_the_website %}, migrations would fail to export pull request attachments.
known_issues:
- After upgrading to {% data variables.product.prodname_ghe_server %} 3.3, {% data variables.product.prodname_actions %} may fail to start automatically. To resolve this issue, connect to the appliance via SSH and run the `ghe-actions-start` command.
- On a freshly set up {% data variables.product.prodname_ghe_server %} instance without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with {% data variables.product.prodname_github_connect %}, issues in private and internal repositories are not included in {% data variables.product.prodname_dotcom_the_website %} search results.
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues.
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
- '{% data variables.product.prodname_actions %} storage settings cannot be validated and saved in the {% data variables.enterprise.management_console %} when "Force Path Style" is selected, and must instead be configured with the `ghe-actions-precheck` command line utility.'
View File
@@ -199,5 +199,10 @@ sections:
Repositories that were not present and active before upgrading to {% data variables.product.prodname_ghe_server %} 3.3 may not perform optimally until a repository maintenance task is run and has successfully completed.
To manually start a repository maintenance task, browse to `https://<hostname>/stafftools/repositories/<owner>/<repository>/network` for each affected repository and click the **Schedule** button.
-
heading: Change to the format of authentication tokens affects GitHub Connect
notes:
- |
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. For more information, see the [GitHub changelog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
backups:
- '{% data variables.product.prodname_ghe_server %} 3.4 requires at least [GitHub Enterprise Backup Utilities 3.4.0](https://github.com/github/backup-utils) for [Backups and Disaster Recovery](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance).'
View File
@@ -0,0 +1,34 @@
---
date: '2022-06-09'
sections:
security_fixes:
- Packages have been updated to the latest security versions.
bugs:
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
- When {% data variables.product.prodname_actions %} was enabled but TLS was disabled on {% data variables.product.prodname_ghe_server %} 3.4.1 and later, applying a configuration update would fail.
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
- 'The [{% data variables.product.prodname_GH_advanced_security %} billing API](/rest/enterprise-admin/billing#get-github-advanced-security-active-committers-for-an-enterprise) endpoints were not enabled and accessible.'
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
- In environments configured with a repository cache server, the `ghe-repl-status` command incorrectly showed gists as being under-replicated.
- The "Get a commit" and "Compare two commits" endpoints in the [Commit API](/rest/commits/commits) would return a `500` error if a file path in the diff contained an encoded and escaped unicode character.
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
- The activity timeline for secret scanning alerts wasn't displayed.
changes:
- Optimised the inclusion of metrics when generating a cluster support bundle.
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repl-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
known_issues:
- On a freshly set up {% data variables.product.prodname_ghe_server %} instance without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with {% data variables.product.prodname_github_connect %}, issues in private and internal repositories are not included in {% data variables.product.prodname_dotcom_the_website %} search results.
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues.
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
- |
When using encrypted assertions with {% data variables.product.prodname_ghe_server %} 3.4.0 and 3.4.1, a new XML attribute, `WantAssertionsEncrypted`, in the `SPSSODescriptor` contains an invalid attribute for SAML metadata. IdPs that consume this SAML metadata endpoint may encounter errors when validating the SAML metadata XML schema. A fix will be available in the next patch release. [Updated: 2022-04-11]
To work around this problem, you can take one of the two following actions.
- Reconfigure the IdP by uploading a static copy of the SAML metadata without the `WantAssertionsEncrypted` attribute.
- Copy the SAML metadata, remove the `WantAssertionsEncrypted` attribute, host it on a web server, and reconfigure the IdP to point to that URL.
View File
@@ -319,10 +319,10 @@ sections:
MinIO has announced the removal of MinIO Gateways starting June 1st, 2022. While MinIO Gateway for NAS continues to be one of the supported storage providers for GitHub Actions and GitHub Packages, we recommend moving to MinIO LTS support to receive support and bug fixes from MinIO. For more information, see "[Scheduled removal of MinIO Gateway for GCS, Azure, HDFS in the minio/minio repository](https://github.com/minio/minio/issues/14331)."
deprecations:
-
heading: Change to the format of authentication tokens
heading: Change to the format of authentication tokens affects GitHub Connect
notes:
- |
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. For more information, see the [GitHub changelog](https://github.blog/changelog/2021-03-31-authentication-token-format-updates-are-generally-available/).
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
-
heading: CodeQL runner deprecated in favor of CodeQL CLI
notes:
View File
@@ -0,0 +1,32 @@
---
date: '2022-06-09'
sections:
security_fixes:
- Packages have been updated to the latest security versions.
bugs:
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
- When {% data variables.product.prodname_actions %} was enabled but TLS was disabled on {% data variables.product.prodname_ghe_server %} 3.4.1 and later, applying a configuration update would fail.
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
- 'The [{% data variables.product.prodname_GH_advanced_security %} billing API](/rest/enterprise-admin/billing#get-github-advanced-security-active-committers-for-an-enterprise) endpoints were not enabled and accessible.'
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
- In environments configured with a repository cache server, the `ghe-repl-status` command incorrectly showed gists as being under-replicated.
- The "Get a commit" and "Compare two commits" endpoints in the [Commit API](/rest/commits/commits) would return a `500` error if a file path in the diff contained an encoded and escaped unicode character.
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
- 'A {% data variables.product.prodname_github_app %} would not be able to subscribe to the [`secret_scanning_alert_location` webhook event](/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#secret_scanning_alert_location) on an installation.'
- The activity timeline for secret scanning alerts wasn't displayed.
- Deleted repos were not purged after 90 days.
changes:
- Optimised the inclusion of metrics when generating a cluster support bundle.
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repl-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
known_issues:
- On a freshly set up {% data variables.product.prodname_ghe_server %} instance without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with GitHub Connect, issues in private and internal repositories will not be included in GitHub.com search results.
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues.
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
- Actions services need to be restarted after restoring an appliance from a backup taken on a different host.
- 'Deleted repositories will not be purged from disk automatically after the 90-day retention period ends. This issue is resolved in the 3.5.1 release. [Updated: 2022-06-10]'
View File
@@ -4,9 +4,14 @@ Si no configuras un `container`, todos los pasos se ejecutan directamente en el
### Example: Running a job within a container
```yaml
```yaml{:copy}
name: CI
on:
push:
branches: [ main ]
jobs:
my_job:
container-test-job:
runs-on: ubuntu-latest
container:
image: node:14.16
env:
@@ -16,12 +21,16 @@ jobs:
volumes:
- my_docker_volume:/volume_mount
options: --cpus 1
steps:
- name: Check for dockerenv file
run: (ls /.dockerenv && echo Found dockerenv) || (echo No dockerenv)
```
When you only specify a container image, you can omit the `image` keyword.
```yaml
jobs:
my_job:
container-test-job:
runs-on: ubuntu-latest
container: node:14.16
```
View File
@@ -1 +0,0 @@
{% data variables.product.prodname_codespaces %} is free to use during the beta. When {% data variables.product.prodname_codespaces %} becomes generally available, you will be billed for storage and compute usage.
View File
@@ -1,5 +0,0 @@
During the beta, functionality will be limited.
- {% data reusables.codespaces.use-chrome %}
- Only a single codespace size will be available.
- Only Linux containers will be supported.
- A codespace cannot be fully resumed. Processes that were running at the time a codespace was stopped will not be restarted.
View File
@@ -1,7 +1,7 @@
By default, a {% data variables.product.prodname_actions %} workflow is triggered every time you create or update a prebuild template, or push to a prebuild-enabled branch. As with other workflows, while prebuild workflows are running they will either consume some of the Actions minutes included with your account, if you have any, or they will incur charges for Actions minutes. For more information about pricing for Actions minutes, see "[About billing for {% data variables.product.prodname_actions %}](/billing/managing-billing-for-github-actions/about-billing-for-github-actions)."
If you are an organization owner, you can track usage of prebuild workflows by downloading a {% data variables.product.prodname_actions %} usage report for your organization. You can identify workflow runs for prebuilds by filtering the CSV output to only include the workflow called "Create Codespaces Prebuilds." For more information, see "[Viewing your {% data variables.product.prodname_actions %} usage](/billing/managing-billing-for-github-actions/viewing-your-github-actions-usage#viewing-github-actions-usage-for-your-organization)."
Alongside {% data variables.product.prodname_actions %} minutes, you will also be billed for the storage of prebuild templates associated with each prebuild configuration for a given repository and region. Storage of prebuild templates is billed at the same rate as storage of codespaces. For more information, see "[Calculating storage usage](#calculating-storage-usage)."
To reduce consumption of Actions minutes, you can set a prebuild template to be updated only when you make a change to your dev container configuration files, or only on a custom schedule. For more information, see "[Configuring prebuilds](/codespaces/prebuilding-your-codespaces/configuring-prebuilds#configuring-a-prebuild)."
To reduce consumption of Actions minutes, you can set a prebuild template to be updated only when you make a change to your dev container configuration files, or only on a custom schedule. You can also manage your storage usage by adjusting the number of template versions to be retained for your prebuild configurations. For more information, see "[Configuring prebuilds](/codespaces/prebuilding-your-codespaces/configuring-prebuilds#configuring-a-prebuild)."
While {% data variables.product.prodname_codespaces %} prebuilds is in beta there is no charge for storage of templates. When prebuilds become generally available, you will be billed for storing prebuild templates for each prebuild configuration in each region selected for that configuration.
If you are an organization owner, you can track usage of prebuild workflows and storage by downloading a {% data variables.product.prodname_actions %} usage report for your organization. You can identify workflow runs for prebuilds by filtering the CSV output to only include the workflow called "Create Codespaces Prebuilds." For more information, see "[Viewing your {% data variables.product.prodname_actions %} usage](/billing/managing-billing-for-github-actions/viewing-your-github-actions-usage#viewing-github-actions-usage-for-your-organization)."
View File
@@ -1,5 +0,0 @@
{% note %}
**Note:** The ability to prebuild codespaces is currently in beta and subject to change.
{% endnote %}
View File
@@ -1 +0,0 @@
During the beta, private repositories belonging to organizations, and any repositories belonging to an organization that requires SAML single sign-on, will not be supported.
View File
@@ -1,5 +1,5 @@
{% note %}
**Note**: The Dependency Review GitHub Action is currently in public beta and subject to change.
**Note**: The {% data variables.product.prodname_dependency_review_action %} is currently in public beta and subject to change.
{% endnote %}
View File
@@ -0,0 +1,3 @@
The {% data variables.product.prodname_dependency_review_action %} scans your pull requests for dependency changes and raises an error if any new dependencies have known vulnerabilities. The action is supported by an API endpoint that compares the dependencies between two revisions and reports any differences.
For more information about the action and the API endpoint, see "[About dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement)," and "[Dependency review](/rest/dependency-graph/dependency-review)" in the API documentation, respectively.
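For example, the comparison endpoint can be called directly with `curl` (a sketch; the owner, repository, and commit range are placeholders):
```shell
# Compare the dependencies between two revisions and report any differences.
curl -H "Accept: application/vnd.github+json" \
  -H "Authorization: token YOUR-TOKEN" \
  https://api.github.com/repos/OWNER/REPO/dependency-graph/compare/BASE...HEAD
```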
View File
@@ -1 +1 @@
By default, all activity types trigger workflows that run on this event. You can limit your workflow runs to specific activity types using the `types` keyword. For more information, see "[Workflow syntax for {% data variables.product.prodname_actions %}](/articles/workflow-syntax-for-github-actions#onevent_nametypes)."
By default, all activity types trigger workflows that run on this event. You can limit your workflow runs to specific activity types using the `types` keyword. For more information, see "[Workflow syntax for {% data variables.product.prodname_actions %}](/articles/workflow-syntax-for-github-actions#onevent_nametypes)."
View File
@@ -1 +1 @@
1. Optionally, next to "Billing & plans", click **Get usage report** to email a CSV report of storage use for {% data variables.product.prodname_actions %}, {% data variables.product.prodname_registry %}, and {% data variables.product.prodname_codespaces %} to the account's primary email address. ![Descargar reporte en CSV](/assets/images/help/billing/actions-packages-report-download-org.png)
1. Optionally, next to "Usage this month", click **Get usage report** to get an email containing a link for downloading a CSV report of storage use for {% data variables.product.prodname_actions %}, {% data variables.product.prodname_registry %}, and {% data variables.product.prodname_codespaces %}. The email is sent to your account's primary email address. You can choose whether the report should cover the last 7, 30, 90, or 180 days. ![Descargar reporte en CSV](/assets/images/help/billing/actions-packages-report-download.png)
View File
@@ -1 +1 @@
1. Above the list of files, click {% octicon "git-branch" aria-label="The branch icon" %} **Branches**. ![Branches link on overview page](/assets/images/help/branches/branches-overview-link.png)
1. Above the list of files, click {% octicon "git-branch" aria-label="The branch icon" %} **Branches**. ![Branches link on overview page](/assets/images/help/branches/branches-overview-link.png)
View File
@@ -143,6 +143,7 @@ prodname_code_scanning_capc: 'Escaneo de código'
prodname_codeql_runner: 'Ejecutor de CodeQL'
prodname_advisory_database: 'GitHub Advisory Database'
prodname_codeql_workflow: 'Flujo de trabajo de análisis de CodeQL'
prodname_dependency_review_action: 'Dependency Review GitHub Action'
#Visual Studio
prodname_vs: 'Visual Studio'
prodname_vscode_shortname: 'VS Code'
View File
@@ -169,6 +169,7 @@ translations/zh-CN/content/code-security/getting-started/securing-your-repositor
translations/zh-CN/content/code-security/secret-scanning/about-secret-scanning.md,broken liquid tags
translations/zh-CN/content/code-security/secret-scanning/secret-scanning-patterns.md,broken liquid tags
translations/zh-CN/content/code-security/supply-chain-security/end-to-end-supply-chain/securing-accounts.md,broken liquid tags
translations/zh-CN/content/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review.md,broken liquid tags
translations/zh-CN/content/code-security/supply-chain-security/understanding-your-software-supply-chain/about-supply-chain-security.md,broken liquid tags
translations/zh-CN/content/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph.md,Listed in localization-support#489
translations/zh-CN/content/code-security/supply-chain-security/understanding-your-software-supply-chain/troubleshooting-the-dependency-graph.md,broken liquid tags
View File
@@ -138,10 +138,10 @@ jobs:
- name: Deploy to Azure Web App
id: deploy-to-webapp
uses: azure/webapps-deploy@0b651ed7546ecfc75024011f76944cb9b381ef1e
with:
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
with:
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
```
## Additional resources

View File

@@ -0,0 +1,530 @@
---
title: Customizing the containers used by jobs
intro: You can customize how your self-hosted runner invokes a container for a job.
versions:
feature: container-hooks
type: reference
miniTocMaxHeadingLevel: 4
shortTitle: Customize containers used by jobs
---
{% note %}
**Note**: This feature is currently in beta and is subject to change.
{% endnote %}
## About container customization
{% data variables.product.prodname_actions %} allows you to run a job within a container, using the `container:` statement in your workflow file. For more information, see "[Running jobs in a container](/actions/using-jobs/running-jobs-in-a-container)." To process container-based jobs, the self-hosted runner creates a container for each job.
{% data variables.product.prodname_actions %} supports commands that let you customize the way your containers are created by the self-hosted runner. For example, you can use these commands to manage the containers through Kubernetes or Podman, and you can also customize the `docker run` or `docker create` commands used to invoke the container. The customization commands are run by a script, which is automatically triggered when a specific environment variable is set on the runner. For more information, see "[Triggering the customization script](#triggering-the-customization-script)" below.
This customization is only available for Linux-based self-hosted runners, and root user access is not required.
## Container customization commands
{% data variables.product.prodname_actions %} includes the following commands for container customization:
- [`prepare_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#prepare_job): Called when a job is started.
- [`cleanup_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#cleanup_job): Called at the end of a job.
- [`run_container_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_container_step): Called once for each container action in the job.
- [`run_script_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_script_step): Runs any step that is not a container action.
Each of these customization commands must be defined in its own JSON file. The file name must match the command name, with the extension `.json`. For example, the `prepare_job` command is defined in `prepare_job.json`. These JSON files will then be run together on the self-hosted runner, as part of the main `index.js` script. This process is described in more detail in "[Generating the customization script](#generating-the-customization-script)."
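For illustration, a hook entrypoint might look like the following TypeScript sketch. It assumes, as the example hooks in the [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) repository do, that the runner delivers the payload as JSON on the hook's standard input; the payload fields mirror the example inputs shown in the sections below, and the dispatch bodies are placeholders for your own container logic.

```typescript
// Minimal hook entrypoint sketch (illustrative, not the published implementation).
import * as fs from 'fs'

interface HookInput {
  command: string
  responseFile: string | null
  state: Record<string, unknown>
  args: Record<string, unknown>
}

// Read the whole JSON payload from standard input (file descriptor 0).
const input: HookInput = JSON.parse(fs.readFileSync(0, 'utf-8'))

switch (input.command) {
  case 'prepare_job':
    // Pull and start the job and service containers, then write the response file.
    break
  case 'cleanup_job':
    // Stop and delete the containers and network recorded in input.state.
    break
  case 'run_container_step':
    // Build or pull the step image, run it, and exit with its status code.
    break
  case 'run_script_step':
    // Invoke the provided script inside the job container.
    break
  default:
    console.error(`Unknown command: ${input.command}`)
    process.exit(1) // A non-zero exit code fails the step.
}
```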
These commands also include configuration arguments, explained below in more detail.
### `prepare_job`
The `prepare_job` command is called when a job is started. {% data variables.product.prodname_actions %} passes in any job or service containers the job has. This command will be called if you have any service or job containers in the job.
{% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `prepare_job` command:
- Prune anything from previous jobs, if needed.
- Create a network, if needed.
- Pull the job and service containers.
- Start the job container.
- Start the service containers.
- Write to the response file any information that {% data variables.product.prodname_actions %} will need:
- Required: State whether the container is an `alpine` linux container (using the `isAlpine` boolean).
- Optional: Any context fields you want to set on the job context, otherwise they will be unavailable for users to use. For more information, see "[`job` context](/actions/learn-github-actions/contexts#job-context)."
- Return `0` when the health checks have succeeded and the job/service containers are started.
#### Arguments
- `jobContainer`: **Optional**. An object containing information about the specified job container.
- `image`: **Required**. A string containing the Docker image.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
- `services`: **Optional**. An array of service containers to spin up.
- `contextName`: **Required**. The name of the service in the Job context.
- `image`: **Required**. A string containing the Docker image.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `userMountVolumes`: **Optional**. An array of mounts to mount into the container, same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for the private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
#### Example input
```json{:copy}
{
"command": "prepare_job",
"responseFile": "/users/octocat/runner/_work/{guid}.json",
"state": {},
"args": {
"jobContainer": {
"image": "node:14.16",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": {
"username": "octocat",
"password": "examplePassword",
"serverUrl": "https://index.docker.io/v1"
},
"portMappings": { "80": "801" }
},
"services": [
{
"contextName": "redis",
"image": "redis",
"createOptions": "--cpus 1",
"environmentVariables": {},
"userMountVolumes": [],
"portMappings": { "80": "801" },
"registry": {
"username": "octocat",
"password": "examplePassword",
"serverUrl": "https://index.docker.io/v1"
}
}
]
}
}
```
#### Example output
This example output is the contents of the `responseFile` defined in the input above.
```json{:copy}
{
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"context": {
"container": {
"id": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"network": "example_network_53269bd575972817b43f7733536b200c"
},
"services": {
"redis": {
"id": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105",
"ports": {
"8080": "8080"
},
"network": "example_network_53269bd575972817b43f7733536b200c"
}
},
"isAlpine": true
}
}
```
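As a minimal sketch of the final task listed above, a hook might assemble a response like the one shown and write it to the `responseFile` path it received. The helper below is illustrative; the IDs are assumed to come from the containers and network your hook created earlier.

```typescript
// Illustrative helper for writing the prepare_job response file.
import * as fs from 'fs'

function writePrepareJobResponse(
  responseFile: string,
  network: string,
  jobContainerId: string,
  serviceContainers: Record<string, string>,
  isAlpine: boolean
): void {
  const response = {
    state: { network, jobContainer: jobContainerId, serviceContainers },
    context: {
      container: { id: jobContainerId, network },
      isAlpine // Required: whether the job container is an alpine linux container.
    }
  }
  // Service entries (IDs, ports, network) are omitted here for brevity.
  fs.writeFileSync(responseFile, JSON.stringify(response))
}
```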
### `cleanup_job`
The `cleanup_job` command is called at the end of a job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `cleanup_job` command:
- Stop any running service or job containers (or the equivalent pod).
- Stop the network (if one exists).
- Delete any job or service containers (or the equivalent pod).
- Delete the network (if one exists).
- Cleanup anything else that was created for the job.
#### Arguments
No arguments are provided for `cleanup_job`.
#### Example input
```json{:copy}
{
"command": "cleanup_job",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {}
}
```
#### Example output
No output is expected for `cleanup_job`.
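As a sketch, a Docker-based `cleanup_job` hook could remove the resources recorded in the `state` object shown above by shelling out to the Docker CLI. This assumes the hook runs on Node.js with the `docker` command available; it is not the published hook implementation.

```typescript
// Illustrative Docker-based cleanup: remove containers first, then the network.
import { execSync } from 'child_process'

interface JobState {
  network?: string
  jobContainer?: string
  serviceContainers?: Record<string, string>
}

function cleanupJob(state: JobState): void {
  const containers = [
    state.jobContainer,
    ...Object.values(state.serviceContainers ?? {})
  ].filter((id): id is string => Boolean(id))

  for (const id of containers) {
    // --force stops the container if it is still running.
    execSync(`docker rm --force ${id}`, { stdio: 'inherit' })
  }
  if (state.network) {
    execSync(`docker network rm ${state.network}`, { stdio: 'inherit' })
  }
}
```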
### `run_container_step`
The `run_container_step` command is called once for each container action in your job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `run_container_step` command:
- Pull or build the required container (or fail if you cannot).
- Run the container action and return the exit code of the container.
- Stream any step logs output to stdout and stderr.
- Cleanup the container after it executes.
#### Arguments
- `image`: **Optional**. A string containing the Docker image. Otherwise, a Dockerfile must be provided.
- `dockerfile`: **Optional**. A string containing the path to the Dockerfile. Otherwise, an image must be provided.
- `entryPointArgs`: **Optional**. A list containing the entry point args.
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, using the same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of the _source:target_ ports to map into the container.
#### Example input for image
If you're using a Docker image, you can specify the image name in the `"image":` parameter.
```json{:copy}
{
"command": "run_container_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"image": "node:14.16",
"dockerfile": null,
"entryPointArgs": ["-f", "/dev/null"],
"entryPoint": "tail",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": { "80": "801" }
}
}
```
#### Example input for Dockerfile
If your container is defined by a Dockerfile, this example demonstrates how to specify the path to a `Dockerfile` in your input, using the `"dockerfile":` parameter.
```json{:copy}
{
"command": "run_container_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"services": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"image": null,
"dockerfile": "/__w/_actions/foo/dockerfile",
"entryPointArgs": ["hello world"],
"entryPoint": "echo",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": { "80": "801" }
}
}
```
#### Example output
No output is expected for `run_container_step`.
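The following sketch outlines one way a Docker-based hook might perform these tasks: build the image when a Dockerfile is supplied, otherwise use the given image, then run the container and report its exit code. It is a simplified illustration; in particular, the naive `createOptions` splitting and the build context are assumptions, not the behavior of the published hooks.

```typescript
// Illustrative run_container_step sketch for a Docker-based hook.
import { execSync, spawnSync } from 'child_process'

function runContainerStep(args: {
  image: string | null
  dockerfile: string | null
  entryPoint: string | null
  entryPointArgs: string[]
  createOptions: string
}): number {
  let image = args.image
  if (!image) {
    image = 'example-step-image' // Hypothetical tag for the image built below.
    execSync(`docker build -t ${image} -f ${args.dockerfile} .`, { stdio: 'inherit' })
  }
  const dockerArgs = [
    'run', '--rm',
    ...args.createOptions.split(' ').filter(Boolean),
    ...(args.entryPoint ? ['--entrypoint', args.entryPoint] : []),
    image,
    ...args.entryPointArgs
  ]
  // stdio: 'inherit' streams the step's logs to stdout and stderr.
  const result = spawnSync('docker', dockerArgs, { stdio: 'inherit' })
  return result.status ?? 1 // Return the container's exit code to the runner.
}
```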
### `run_script_step`
{% data variables.product.prodname_actions %} assumes that you will do the following tasks:
- Invoke the provided script inside the job container and return the exit code.
- Stream any step log output to stdout and stderr.
#### Arguments
- `entryPointArgs`: **Optional**. A list containing the entry point arguments.
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
#### Example input
```json{:copy}
{
"command": "run_script_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"entryPointArgs": ["-e", "/runner/temp/example.sh"],
"entryPoint": "bash",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"workingDirectory": "/__w/octocat-test2/octocat-test2"
}
}
```
#### Example output
No output is expected for `run_script_step`.
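As an illustration, a Docker-based hook might implement `run_script_step` by executing the entry point inside the already-running job container with `docker exec`. This sketch assumes the job container ID was recorded in `state` by your `prepare_job` hook.

```typescript
// Illustrative run_script_step sketch: exec the script in the job container.
import { spawnSync } from 'child_process'

function runScriptStep(
  jobContainerId: string,
  args: {
    entryPoint: string
    entryPointArgs: string[]
    workingDirectory: string
    environmentVariables: Record<string, string>
  }
): number {
  // Translate the environment map into repeated `-e KEY=value` flags.
  const envFlags = Object.entries(args.environmentVariables).flatMap(
    ([key, value]) => ['-e', `${key}=${value}`]
  )
  const result = spawnSync(
    'docker',
    [
      'exec', '-w', args.workingDirectory, ...envFlags,
      jobContainerId, args.entryPoint, ...args.entryPointArgs
    ],
    { stdio: 'inherit' } // Stream the step's output to stdout and stderr.
  )
  return result.status ?? 1 // The exit code is reported back to the runner.
}
```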
## Generating the customization script
{% data variables.product.prodname_dotcom %} has created an example repository that demonstrates how to generate customization scripts for Docker and Kubernetes.
{% note %}
**Note:** The resulting scripts are available for testing purposes, and you will need to determine whether they are appropriate for your requirements.
{% endnote %}
1. Clone the [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) repository to your self-hosted runner.
1. The `examples/` directory contains some existing customization commands, each with its own JSON file. You can review these examples and use them as a starting point for your own customization commands.
- `prepare_job.json`
- `run_script_step.json`
- `run_container_step.json`
1. Build the npm packages. These commands generate the `index.js` files inside `packages/docker/dist` and `packages/k8s/dist`.
```shell
npm install && npm run bootstrap && npm run build-all
```
When the resulting `index.js` is triggered by {% data variables.product.prodname_actions %}, it will run the customization commands defined in the JSON files. To trigger the `index.js`, you will need to add its path to your `ACTIONS_RUNNER_CONTAINER_HOOK` environment variable, as described in the next section.
## Triggering the customization script
The custom script must be located on the runner, but should not be stored in the self-hosted runner application directory. The scripts are executed in the security context of the service account that is running the runner service.
{% note %}
**Note**: The triggered script is processed synchronously, so it will block job execution while running.
{% endnote %}
The script is automatically executed when the runner has the following environment variable containing an absolute path to the script:
- `ACTIONS_RUNNER_CONTAINER_HOOK`: The script defined in this environment variable is triggered when a job has been assigned to a runner, but before the job starts running.
To set this environment variable, you can either add it to the operating system, or add it to a file named `.env` within the self-hosted runner application directory. For example, the following `.env` entry will have the runner automatically run the script at `/Users/octocat/runner/index.js` before each container-based job runs:
```bash
ACTIONS_RUNNER_CONTAINER_HOOK=/Users/octocat/runner/index.js
```
If you want to ensure that your job always runs inside a container, and subsequently always applies your container customizations, you can set the `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` variable on the self-hosted runner to `true`. This will fail jobs that do not specify a job container.
## Troubleshooting
### No timeout setting
There is currently no timeout setting available for the script executed by `ACTIONS_RUNNER_CONTAINER_HOOK`. As a result, you could consider adding timeout handling to your script.
### Reviewing the workflow run log
To confirm whether your scripts are executing, you can review the logs for that job. For more information on checking the logs, see "[Viewing logs to diagnose failures](/actions/monitoring-and-troubleshooting-workflows/using-workflow-run-logs#viewing-logs-to-diagnose-failures)."

View File

@@ -20,6 +20,7 @@ children:
- /adding-self-hosted-runners
- /autoscaling-with-self-hosted-runners
- /running-scripts-before-or-after-a-job
- /customizing-the-containers-used-by-jobs
- /configuring-the-self-hosted-runner-application-as-a-service
- /using-a-proxy-server-with-self-hosted-runners
- /using-labels-with-self-hosted-runners

View File

@@ -23,7 +23,7 @@ shortTitle: Filtrando alertas
## About filtering the security overview
You can use filters in the security overview to narrow your focus based on a range of factors, such as alert risk level, alert type, and feature enablement. Different filters are available depending on the specific view and whether the analysis is at the organization, team, or repository level.
You can use filters in the security overview to narrow your focus based on a range of factors, such as alert risk level, alert type, and feature enablement. Different filters are available depending on the specific view and whether your analysis is at the organization, team or repository level.
## Filter by repository

View File

@@ -159,7 +159,7 @@ Como as permissões de nível de usuário são concedidas em uma base de usuári
## User-to-server requests
While most of your API interaction should occur using your server-to-server access tokens, certain endpoints allow you to perform actions via the API using a user access token. Your app can make the following requests using [GraphQL v4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST v3](/rest) endpoints.
While most of your API interaction should occur using your server-to-server access tokens, certain endpoints allow you to perform actions via the API using a user access token. Your app can make the following requests using [GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST](/rest) endpoints.
### Supported endpoints

View File

@@ -53,7 +53,7 @@ Recomendamos que você reveja a lista de pontos finais de API de que você preci
### Design to stay within API rate limits
GitHub Apps use [sliding rules for rate limits](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), which can increase based on the number of repositories and users in the organization. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidate requests by using the [GraphQL API V4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
GitHub Apps use [sliding rules for rate limits](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), which can increase based on the number of repositories and users in the organization. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidate requests by using the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
### Register a new GitHub App

View File

@@ -14,7 +14,7 @@ topics:
- API
---
There are two stable versions of the GitHub API: the [REST API](/rest) and the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). When using the REST API, we encourage you to [request v3 via the `Accept` header](/v3/media/#request-specific-version). For information on using the GraphQL API, see the [v4 documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
There are two stable versions of the GitHub API: the [REST API](/rest) and the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
## Deprecated versions

View File

@@ -12,7 +12,7 @@ topics:
- API
---
You can access most GitHub objects (users, issues, pull requests, and so on) using either the REST API or the GraphQL API. You can find the **global node ID** of many objects from within the REST API and use these IDs in your GraphQL operations. For more information, see "[Preview GraphQL API v4 Node IDs in REST API v3 resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
You can access most GitHub objects (users, issues, pull requests, and so on) using either the REST API or the GraphQL API. You can find the **global node ID** of many objects from within the REST API and use these IDs in your GraphQL operations. For more information, see "[Preview GraphQL API Node IDs in REST API resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
{% note %}

View File

@@ -14,7 +14,7 @@ topics:
## Node limit
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API v4 [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
* Clients must supply a `first` or `last` argument on any [connection](/graphql/guides/introduction-to-graphql#connection).
* Values of `first` and `last` must be within 1-100.
@@ -130,30 +130,30 @@ Estes dois exemplos mostram como calcular os nós totais em uma chamada.
## Rate limit
The GraphQL API v4 [rate limit](/rest/overview/resources-in-the-rest-api#rate-limiting) is different from the REST API v3 rate limits.
The GraphQL API limit is different from the REST API's [rate limits](/rest/overview/resources-in-the-rest-api#rate-limiting).
Why are the API rate limits different? With [GraphQL](/graphql), one GraphQL call can replace [multiple REST calls](/graphql/guides/migrating-from-rest-to-graphql). A single complex GraphQL call could be the equivalent of thousands of REST requests. While a single GraphQL call would fall well below the REST API rate limit, the query might be too expensive for GitHub's servers to compute.
To accurately represent the server cost of a query, the GraphQL API v4 calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the first and last arguments on a parent connection and its child connections.
To accurately represent the server cost of a query, the GraphQL API calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the first and last arguments on a parent connection and its child connections.
* The formula uses the `first` and `last` arguments on a parent connection and its children to pre-calculate the potential load on GitHub's systems, such as MySQL, ElasticSearch, and Git.
* Each new connection has its own point value. Points are combined with other points from the call into an overall rate limit score.
The GraphQL API v4 rate limit is **5,000 points per hour**.
The GraphQL API rate limit is **5,000 points per hour**.
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API v4 and the REST API v3 use different rate limits.
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API and REST API use different rate limits.
{% note %}
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API v4.
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API.
{% endnote %}
### Returning the call's rate limit status
With the REST API v3, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
With the REST API, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
With the GraphQL API v4, you can check the rate limit status by querying fields on the `rateLimit` object:
With the GraphQL API, you can check the rate limit status by querying fields on the `rateLimit` object:
```graphql
query {
@@ -186,7 +186,7 @@ Consultar o objeto `rateLimit` retorna a pontuação de uma chamada, mas executa
{% note %}
**Note**: The minimum cost of a call to the GraphQL API v4 is **1**, representing a single request.
**Note**: The minimum cost of a call to the GraphQL API is **1**, representing a single request.
{% endnote %}

View File

@@ -24,7 +24,7 @@ Por padrão, todas as solicitações para `{% data variables.product.api_url_cod
{% ifversion fpt or ghec %}
For information about GitHub's GraphQL API, see the [v4 documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). For information about migrating to GraphQL, see "[Migrating from REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)."
For information about GitHub's GraphQL API, see the [documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). For information about migrating to GraphQL, see "[Migrating from REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)."
{% endif %}

View File

@@ -0,0 +1,7 @@
---
#Reference: #7070
#Actions Runner Container Hooks
versions:
fpt: '*'
ghec: '*'
ghae: 'issue-7070'

View File

@@ -4,9 +4,14 @@ Se você não definir um `container`, todas as etapas serão executadas diretame
### Example: Running a job within a container
```yaml
```yaml{:copy}
name: CI
on:
push:
branches: [ main ]
jobs:
my_job:
container-test-job:
runs-on: ubuntu-latest
container:
image: node:14.16
env:
@@ -16,12 +21,16 @@ jobs:
volumes:
- my_docker_volume:/volume_mount
options: --cpus 1
steps:
- name: Check for dockerenv file
run: (ls /.dockerenv && echo Found dockerenv) || (echo No dockerenv)
```
When you only specify a container image, you can omit the `image` keyword.
```yaml
jobs:
my_job:
container-test-job:
runs-on: ubuntu-latest
container: node:14.16
```

View File

@@ -138,10 +138,10 @@ jobs:
- name: Deploy to Azure Web App
id: deploy-to-webapp
uses: azure/webapps-deploy@0b651ed7546ecfc75024011f76944cb9b381ef1e
with:
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
with:
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
```
## Additional resources

View File

@@ -0,0 +1,530 @@
---
title: Customizing the containers used by jobs
intro: You can customize how your self-hosted runner invokes a container for a job.
versions:
feature: container-hooks
type: reference
miniTocMaxHeadingLevel: 4
shortTitle: Customize containers used by jobs
---
{% note %}
**Note**: This feature is currently in beta and is subject to change.
{% endnote %}
## About container customization
{% data variables.product.prodname_actions %} allows you to run a job within a container, using the `container:` statement in your workflow file. For more information, see "[Running jobs in a container](/actions/using-jobs/running-jobs-in-a-container)." To process container-based jobs, the self-hosted runner creates a container for each job.
{% data variables.product.prodname_actions %} supports commands that let you customize the way your containers are created by the self-hosted runner. For example, you can use these commands to manage the containers through Kubernetes or Podman, and you can also customize the `docker run` or `docker create` commands used to invoke the container. The customization commands are run by a script, which is automatically triggered when a specific environment variable is set on the runner. For more information, see "[Triggering the customization script](#triggering-the-customization-script)" below.
This customization is only available for Linux-based self-hosted runners, and root user access is not required.
## Container customization commands
{% data variables.product.prodname_actions %} includes the following commands for container customization:
- [`prepare_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#prepare_job): Called when a job is started.
- [`cleanup_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#cleanup_job): Called at the end of a job.
- [`run_container_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_container_step): Called once for each container action in the job.
- [`run_script_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_script_step): Runs any step that is not a container action.
Each of these customization commands must be defined in its own JSON file. The file name must match the command name, with the extension `.json`. For example, the `prepare_job` command is defined in `prepare_job.json`. These JSON files will then be run together on the self-hosted runner, as part of the main `index.js` script. This process is described in more detail in "[Generating the customization script](#generating-the-customization-script)."
These commands also include configuration arguments, explained below in more detail.
### `prepare_job`
The `prepare_job` command is called when a job is started. {% data variables.product.prodname_actions %} passes in any job or service containers the job has. This command will be called if you have any service or job containers in the job.
{% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `prepare_job` command:
- Prune anything from previous jobs, if needed.
- Create a network, if needed.
- Pull the job and service containers.
- Start the job container.
- Start the service containers.
- Write to the response file any information that {% data variables.product.prodname_actions %} will need:
- Required: State whether the container is an `alpine` linux container (using the `isAlpine` boolean).
- Optional: Any context fields you want to set on the job context, otherwise they will be unavailable for users to use. For more information, see "[`job` context](/actions/learn-github-actions/contexts#job-context)."
- Return `0` when the health checks have succeeded and the job/service containers are started.
#### Arguments
- `jobContainer`: **Optional**. An object containing information about the specified job container.
- `image`: **Required**. A string containing the Docker image.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
- `services`: **Optional**. An array of service containers to spin up.
- `contextName`: **Required**. The name of the service in the Job context.
- `image`: **Required**. A string containing the Docker image.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `userMountVolumes`: **Optional**. An array of mounts to mount into the container, same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for the private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
#### Example input
```json{:copy}
{
"command": "prepare_job",
"responseFile": "/users/octocat/runner/_work/{guid}.json",
"state": {},
"args": {
"jobContainer": {
"image": "node:14.16",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": {
"username": "octocat",
"password": "examplePassword",
"serverUrl": "https://index.docker.io/v1"
},
"portMappings": { "80": "801" }
},
"services": [
{
"contextName": "redis",
"image": "redis",
"createOptions": "--cpus 1",
"environmentVariables": {},
"userMountVolumes": [],
"portMappings": { "80": "801" },
"registry": {
"username": "octocat",
"password": "examplePassword",
"serverUrl": "https://index.docker.io/v1"
}
}
]
}
}
```
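For reference, the argument shapes documented above can be modeled as TypeScript interfaces. This is an illustrative sketch derived from the field list in this article, not an official type definition.

```typescript
// Illustrative types for the prepare_job arguments described above.
interface MountVolume {
  sourceVolumePath: string
  targetVolumePath: string
  readOnly: boolean
}

interface Registry {
  username?: string
  password?: string
  serverUrl?: string
}

interface JobContainer {
  image: string
  workingDirectory: string
  createOptions?: string
  environmentVariables?: Record<string, string>
  userMountVolumes?: MountVolume[]
  systemMountVolumes: MountVolume[]
  registry?: Registry
  portMappings?: Record<string, string>
}

interface ServiceContainer {
  contextName: string
  image: string
  createOptions?: string
  environmentVariables?: Record<string, string>
  userMountVolumes?: MountVolume[]
  registry?: Registry
  portMappings?: Record<string, string>
}

interface PrepareJobArgs {
  jobContainer?: JobContainer
  services?: ServiceContainer[]
}
```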
#### Example output
This example output is the contents of the `responseFile` defined in the input above.
```json{:copy}
{
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"context": {
"container": {
"id": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"network": "example_network_53269bd575972817b43f7733536b200c"
},
"services": {
"redis": {
"id": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105",
"ports": {
"8080": "8080"
},
"network": "example_network_53269bd575972817b43f7733536b200c"
}
},
"isAlpine": true
}
}
```
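As a rough sketch of the container work involved, a Docker-based `prepare_job` hook might create the network, pull the images, and start the containers by shelling out to the Docker CLI. The commands below, including the keep-alive entry point, are illustrative assumptions rather than the behavior of the published hooks.

```typescript
// Illustrative Docker-based prepare_job work; error handling omitted for brevity.
import { execSync } from 'child_process'

function prepareJob(jobImage: string, serviceImages: string[]) {
  const network = `example_network_${Date.now()}` // Hypothetical naming scheme.
  execSync(`docker network create ${network}`)

  execSync(`docker pull ${jobImage}`)
  // Keep the job container alive so later steps can be executed inside it.
  const jobContainer = execSync(
    `docker create --network ${network} ${jobImage} tail -f /dev/null`
  ).toString().trim()
  execSync(`docker start ${jobContainer}`)

  const serviceContainers = serviceImages.map(image => {
    execSync(`docker pull ${image}`)
    return execSync(`docker run --detach --network ${network} ${image}`)
      .toString().trim()
  })

  // These values would be recorded in the response file's "state" object.
  return { network, jobContainer, serviceContainers }
}
```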
### `cleanup_job`
The `cleanup_job` command is called at the end of a job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `cleanup_job` command:
- Stop any running service or job containers (or the equivalent pod).
- Stop the network (if one exists).
- Delete any job or service containers (or the equivalent pod).
- Delete the network (if one exists).
- Cleanup anything else that was created for the job.
#### Arguments
No arguments are provided for `cleanup_job`.
#### Example input
```json{:copy}
{
"command": "cleanup_job",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {}
}
```
#### Example output
No output is expected for `cleanup_job`.
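For a Kubernetes-based hook, the equivalent cleanup might delete the pod that hosted the job and service containers. The sketch below is illustrative and assumes a hypothetical `jobPod` field that your `prepare_job` hook recorded in `state`.

```typescript
// Illustrative Kubernetes cleanup using the kubectl CLI.
import { execSync } from 'child_process'

function cleanupJobK8s(state: { jobPod?: string }): void {
  if (state.jobPod) {
    // --ignore-not-found makes the cleanup idempotent if the pod is already gone.
    execSync(`kubectl delete pod ${state.jobPod} --ignore-not-found`, {
      stdio: 'inherit'
    })
  }
}
```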
### `run_container_step`
The `run_container_step` command is called once for each container action in your job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `run_container_step` command:
- Pull or build the required container (or fail if you cannot).
- Run the container action and return the exit code of the container.
- Stream any step logs output to stdout and stderr.
- Cleanup the container after it executes.
#### Arguments
- `image`: **Optional**. A string containing the Docker image. Otherwise, a Dockerfile must be provided.
- `dockerfile`: **Optional**. A string containing the path to the Dockerfile. Otherwise, an image must be provided.
- `entryPointArgs`: **Optional**. A list containing the entry point args.
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, using the same fields as above.
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
- `username`: **Optional**. The username of the registry account.
- `password`: **Optional**. The password to the registry account.
- `serverUrl`: **Optional**. The registry URL.
- `portMappings`: **Optional**. A key value hash of the _source:target_ ports to map into the container.
#### Example input for image
If you're using a Docker image, you can specify the image name in the `"image":` parameter.
```json{:copy}
{
"command": "run_container_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"image": "node:14.16",
"dockerfile": null,
"entryPointArgs": ["-f", "/dev/null"],
"entryPoint": "tail",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": { "80": "801" }
}
}
```
#### Example input for Dockerfile
If your container is defined by a Dockerfile, this example demonstrates how to specify the path to a `Dockerfile` in your input, using the `"dockerfile":` parameter.
```json{:copy}
{
"command": "run_container_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"services": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"image": null,
"dockerfile": "/__w/_actions/foo/dockerfile",
"entryPointArgs": ["hello world"],
"entryPoint": "echo",
"workingDirectory": "/__w/octocat-test2/octocat-test2",
"createOptions": "--cpus 1",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"userMountVolumes": [
{
"sourceVolumePath": "my_docker_volume",
"targetVolumePath": "/volume_mount",
"readOnly": false
}
],
"systemMountVolumes": [
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
"targetVolumePath": "/__w",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
"targetVolumePath": "/__e",
"readOnly": true
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
"targetVolumePath": "/__w/_temp",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
"targetVolumePath": "/__w/_actions",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
"targetVolumePath": "/__w/_tool",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
"targetVolumePath": "/github/home",
"readOnly": false
},
{
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
"targetVolumePath": "/github/workflow",
"readOnly": false
}
],
"registry": null,
"portMappings": { "80": "801" }
}
}
```
#### Example output
No output is expected for `run_container_step`.
### `run_script_step`
{% data variables.product.prodname_actions %} assumes that you will do the following tasks:
- Invoke the provided script inside the job container and return the exit code.
- Stream any step log output to stdout and stderr.
#### Arguments
- `entryPointArgs`: **Optional**. A list containing the entry point arguments.
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
- `environmentVariables`: **Optional**. Sets a map of key environment variables.
#### Example input
```json{:copy}
{
"command": "run_script_step",
"responseFile": null,
"state": {
"network": "example_network_53269bd575972817b43f7733536b200c",
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
"serviceContainers": {
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
}
},
"args": {
"entryPointArgs": ["-e", "/runner/temp/example.sh"],
"entryPoint": "bash",
"environmentVariables": {
"NODE_ENV": "development"
},
"prependPath": ["/foo/bar", "bar/foo"],
"workingDirectory": "/__w/octocat-test2/octocat-test2"
}
}
```
#### Example output
No output is expected for `run_script_step`.
## Generating the customization script
{% data variables.product.prodname_dotcom %} has created an example repository that demonstrates how to generate customization scripts for Docker and Kubernetes.
{% note %}
**Note:** The resulting scripts are available for testing purposes, and you will need to determine whether they are appropriate for your requirements.
{% endnote %}
1. Clone the [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) repository to your self-hosted runner.
1. The `examples/` directory contains some existing customization commands, each with its own JSON file. You can review these examples and use them as a starting point for your own customization commands.
- `prepare_job.json`
- `run_script_step.json`
- `run_container_step.json`
1. Build the npm packages. These commands generate the `index.js` files inside `packages/docker/dist` and `packages/k8s/dist`.
```shell
npm install && npm run bootstrap && npm run build-all
```
When the resulting `index.js` is triggered by {% data variables.product.prodname_actions %}, it will run the customization commands defined in the JSON files. To trigger the `index.js`, you will need to add its path to your `ACTIONS_RUNNER_CONTAINER_HOOK` environment variable, as described in the next section.
## Triggering the customization script
The custom script must be located on the runner, but should not be stored in the self-hosted runner application directory. The scripts are executed in the security context of the service account that is running the runner service.
{% note %}
**Note**: The triggered script is processed synchronously, so it will block job execution while running.
{% endnote %}
The script is automatically executed when the runner has the following environment variable containing an absolute path to the script:
- `ACTIONS_RUNNER_CONTAINER_HOOK`: The script defined in this environment variable is triggered when a job has been assigned to a runner, but before the job starts running.
To set this environment variable, you can either add it to the operating system, or add it to a file named `.env` within the self-hosted runner application directory. For example, the following `.env` entry will have the runner automatically run the script at `/Users/octocat/runner/index.js` before each container-based job runs:
```bash
ACTIONS_RUNNER_CONTAINER_HOOK=/Users/octocat/runner/index.js
```
If you want to ensure that your job always runs inside a container, and subsequently always applies your container customizations, you can set the `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` variable on the self-hosted runner to `true`. This will fail jobs that do not specify a job container.
## Troubleshooting
### No timeout setting
There is currently no timeout setting available for the script executed by `ACTIONS_RUNNER_CONTAINER_HOOK`. As a result, you could consider adding timeout handling to your script.
### Reviewing the workflow run log
To confirm whether your scripts are executing, you can review the logs for that job. For more information on checking the logs, see "[Viewing logs to diagnose failures](/actions/monitoring-and-troubleshooting-workflows/using-workflow-run-logs#viewing-logs-to-diagnose-failures)."

View File

@@ -20,6 +20,7 @@ children:
- /adding-self-hosted-runners
- /autoscaling-with-self-hosted-runners
- /running-scripts-before-or-after-a-job
- /customizing-the-containers-used-by-jobs
- /configuring-the-self-hosted-runner-application-as-a-service
- /using-a-proxy-server-with-self-hosted-runners
- /using-labels-with-self-hosted-runners

View File

@@ -23,7 +23,7 @@ shortTitle: 筛选警报
## About filtering the security overview
You can use filters in the security overview to narrow your focus based on a range of factors, such as alert risk level, alert type, and feature enablement. Different filters are available depending on the specific view and whether the analysis is at the organization, team, or repository level.
You can use filters in the security overview to narrow your focus based on a range of factors, such as alert risk level, alert type, and feature enablement. Different filters are available depending on the specific view and whether your analysis is at the organization, team or repository level.
## Filter by repository

View File

@@ -1,8 +1,8 @@
---
title: About dependency review
intro: Dependency review lets you catch vulnerable dependencies before you introduce them to your environment, and provides information on license, dependents, and age of dependencies.
title: About dependency review
intro: 'Dependency review lets you catch vulnerable dependencies before you introduce them to your environment, and provides information on license, dependents, and age of dependencies.'
product: '{% data reusables.gated-features.dependency-review %}'
shortTitle: Dependency review
shortTitle: Dependency review
versions:
fpt: '*'
ghes: '>= 3.2'
@@ -21,47 +21,48 @@ redirect_from:
{% data reusables.dependency-review.beta %}
## About dependency review
## About dependency review
{% data reusables.dependency-review.feature-overview %}
{% data reusables.dependency-review.feature-overview %}
如果拉取请求针对仓库的默认分支并且包含对包清单或锁定文件的更改,您可以显示依赖项审查以查看更改的内容。 依赖项审查包括对锁定文件中间接依赖项的更改详情,并告诉您任何已添加或更新的依赖项是否包含已知漏洞。
If a pull request targets your repository's default branch and contains changes to package manifests or lock files, you can display a dependency review to see what has changed. The dependency review includes details of changes to indirect dependencies in lock files, and it tells you if any of the added or updated dependencies contain known vulnerabilities.
Sometimes you might just want to update the version of one dependency in a manifest and generate a pull request. However, if the updated version of this direct dependency also has updated dependencies, your pull request may have more changes than you expected. The dependency review for each manifest and lock file provides an easy way to see what has changed, and whether any of the new dependency versions contain known vulnerabilities.
By checking the dependency reviews in a pull request, and changing any dependencies that are flagged as vulnerable, you can avoid vulnerabilities being added to your project. For more information about how dependency review works, see "[Reviewing dependency changes in a pull request](/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/reviewing-dependency-changes-in-a-pull-request)."
For more information about configuring dependency review, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review)."
{% data variables.product.prodname_dependabot_alerts %} will find vulnerabilities that are already in your dependencies, but it's much better to avoid introducing potential problems than to fix problems at a later date. For more information about {% data variables.product.prodname_dependabot_alerts %}, see "[About {% data variables.product.prodname_dependabot_alerts %}](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies#dependabot-alerts-for-vulnerable-dependencies)."
Dependency review supports the same languages and package management ecosystems as the dependency graph. For more information, see "[About the dependency graph](/github/visualizing-repository-data-with-graphs/about-the-dependency-graph#supported-package-ecosystems)."
For more information on supply chain features available on {% data variables.product.product_name %}, see "[About supply chain security](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-supply-chain-security)."
{% ifversion ghec or ghes %}
## Enabling dependency review
The dependency review feature becomes available when you enable the dependency graph. For more information, see "{% ifversion ghec %}[Enabling the dependency graph](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph#enabling-the-dependency-graph){% elsif ghes %}[Enabling the dependency graph for your enterprise](/admin/code-security/managing-supply-chain-security-for-your-enterprise/enabling-the-dependency-graph-for-your-enterprise){% endif %}."
{% endif %}
{% ifversion fpt or ghec or ghes > 3.5 or ghae-issue-6396 %}
## Dependency review enforcement
{% data reusables.dependency-review.dependency-review-action-beta-note %}
The action is available for all {% ifversion fpt or ghec %}public repositories, as well as private {% endif %}repositories that have {% data variables.product.prodname_GH_advanced_security %} enabled.
You can use the {% data variables.product.prodname_dependency_review_action %} in your repository to enforce dependency reviews on your pull requests. The action scans for vulnerable versions of dependencies introduced by package version changes in pull requests, and warns you about the associated security vulnerabilities. This gives you better visibility of what's changing in a pull request, and helps prevent vulnerabilities being added to your repository. For more information, see [`dependency-review-action`](https://github.com/actions/dependency-review-action).
![Dependency review action example](/assets/images/help/graphs/dependency-review-action.png)
By default, the {% data variables.product.prodname_dependency_review_action %} check will fail if it discovers any vulnerable packages. A failed check blocks a pull request from being merged when the repository owner requires the dependency review check to pass. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging)."
The action uses the Dependency Review REST API to get the diff of dependency changes between the base commit and head commit. You can use the Dependency Review API to get the diff of dependency changes, including vulnerability data, between any two commits on a repository. For more information, see "[Dependency review](/rest/reference/dependency-graph#dependency-review)."
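As an illustration, a request along the following lines retrieves that diff; this is a sketch, where `OWNER`, `REPO`, and the two commit SHAs are placeholders:
```bash
# Fetch the dependency diff (including vulnerability data) between two commits.
curl -H "Accept: application/vnd.github+json" \
     -H "Authorization: token $GITHUB_TOKEN" \
     "https://api.github.com/repos/OWNER/REPO/dependency-graph/compare/BASE_SHA...HEAD_SHA"
```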
{% ifversion dependency-review-action-configuration %}
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs. For example, you can specify the severity level that will make the action fail, or set an allow or deny list for licenses to scan. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
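As an illustration, a workflow along these lines would fail the check only for vulnerabilities of moderate severity or higher; this is a sketch, and the action version tag and `fail-on-severity` value are assumptions:
```yaml
name: Dependency review
on: pull_request

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v3
      - name: Dependency review
        uses: actions/dependency-review-action@v2
        with:
          # Only fail on moderate-or-higher vulnerabilities (assumed value).
          fail-on-severity: moderate
```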
{% endif %}
{% endif %}

View File

@@ -159,7 +159,7 @@ curl -H "Authorization: token OAUTH-TOKEN" {% data variables.product.api_url_pre
## User-to-server requests
While most of your API interactions should occur using your server-to-server installation access tokens, certain endpoints allow you to perform actions via the API using a user access token. Your app can make the following requests using [GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST](/rest) endpoints.
### Supported endpoints

View File

@@ -52,7 +52,7 @@ We recommend reviewing the list of API endpoints you need as early as possible.
### Design to stay within API rate limits
GitHub Apps use [sliding rules for rate limits](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), which can increase based on the number of repositories and users in the organization. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidate requests by using the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
### Register a new GitHub App

View File

@@ -14,7 +14,7 @@ topics:
- API
---
There are two stable versions of the GitHub API: the [REST API](/rest) and the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
## Deprecated versions

View File

@@ -41,14 +41,12 @@ shortTitle: 保存有星标的仓库
{% ifversion fpt or ghec %}
## Viewing who has starred a repository
You can view everyone who has starred a public repository or a private repository you have access to.
To view everyone who has starred a repository, add `/stargazers` to the end of the URL of a repository. For example, to view stargazers for the github/docs repository, visit https://github.com/github/docs/stargazers.
## Organizing starred repositories with lists

View File

@@ -81,7 +81,7 @@ gh repo fork <em>repository</em> --clone=true
## Creating and pushing changes
Go ahead and make a few changes to the project using your favorite text editor, like [Visual Studio Code](https://code.visualstudio.com). For example, you could change the text in `index.html` to add your GitHub username.
When you're ready to submit your changes, stage and commit them. `git add .` tells Git that you want to include all of your changes in the next commit. `git commit` takes a snapshot of those changes.
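For instance, a minimal sketch of that sequence (the commit message is only an example):
```bash
# Stage every change in the working directory for the next commit.
git add .
# Take a snapshot of the staged changes with a descriptive message.
git commit -m "add my GitHub username to index.html"
```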

View File

@@ -12,7 +12,7 @@ topics:
- API
---
You can access most objects in GitHub (users, issues, pull requests, and so on) using either the REST API or the GraphQL API. You can find the **global node ID** of many objects from within the REST API and use these IDs in your GraphQL operations. For more information, see "[Preview GraphQL API Node IDs in REST API resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
{% note %}

View File

@@ -14,7 +14,7 @@ topics:
## Node limit
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
* Clients must supply a `first` or `last` argument on any [connection](/graphql/guides/introduction-to-graphql#connection).
* Values of `first` and `last` must be within 1-100.
@@ -130,30 +130,30 @@ topics:
## Rate limit
The GraphQL API limit is different from the REST API's [rate limits](/rest/overview/resources-in-the-rest-api#rate-limiting).
Why do the API rate limits differ? With [GraphQL](/graphql), one GraphQL call can replace [multiple REST calls](/graphql/guides/migrating-from-rest-to-graphql). A single complex GraphQL call could be the equivalent of thousands of REST requests. While a single GraphQL call would fall well below the REST API rate limit, the query might be just as expensive for GitHub's servers to compute.
To accurately represent the server cost of a query, the GraphQL API calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the first and last arguments on a parent connection and its children.
* The formula uses the `first` and `last` arguments on a parent connection and its children to pre-calculate the potential load on GitHub's systems, such as MySQL, ElasticSearch, and Git.
* Each connection has its own point value. This point value is combined with other points from the call into an overall rate limit score.
The GraphQL API rate limit is **5,000 points per hour**.
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API and REST API use different rate limits.
{% note %}
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API.
{% endnote %}
### Returning a call's rate limit status
With the REST API, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
With the GraphQL API, you can check the rate limit status by querying fields on the `rateLimit` object:
```graphql
query {
@@ -186,7 +186,7 @@ query {
{% note %}
**Note**: The minimum cost of a call to the GraphQL API is **1**, representing a single request.
{% endnote %}

View File

@@ -40,7 +40,7 @@ shortTitle: 筛选文件
1. In the list of pull requests, click the pull request you'd like to filter.
{% data reusables.repositories.changed-files %}
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
{% note %}

View File

@@ -37,10 +37,10 @@ shortTitle: 查看依赖项更改
{% ifversion fpt or ghec or ghes > 3.5 or ghae-issue-6396 %}
You can use the {% data variables.product.prodname_dependency_review_action %} to help enforce dependency reviews on pull requests in your repository. {% data reusables.dependency-review.dependency-review-action-overview %}
{% ifversion dependency-review-action-configuration %}
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs by specifying the type of dependency vulnerability you wish to catch. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
{% endif %}
{% endif %}

View File

@@ -49,7 +49,7 @@ You can use the file tree to navigate between files in a commit.
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.navigate-to-commit-page %}
1. Navigate to the commit by clicking the commit message link. ![Screenshot of a commit highlighting the commit message link](/assets/images/help/commits/commit-message-link.png)
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
{% note %}

View File

@@ -25,7 +25,7 @@ We encourage you to [explicitly request this version via the `Accept` header](/r
{% ifversion fpt or ghec %}
For information about GitHub's GraphQL API, see the [documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). For information about migrating to GraphQL, see "[Migrating from REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)."
{% endif %}

View File

@@ -0,0 +1,7 @@
---
#Reference: #7070
#Actions Runner Container Hooks
versions:
fpt: '*'
ghec: '*'
ghae: 'issue-7070'

View File

@@ -4,9 +4,14 @@ Use `jobs.<job_id>.container` to create a container to run any steps in a job th
### Example: Running a job within a container
```yaml{:copy}
name: CI
on:
push:
branches: [ main ]
jobs:
container-test-job:
runs-on: ubuntu-latest
container:
image: node:14.16
env:
@@ -16,12 +21,16 @@ jobs:
volumes:
- my_docker_volume:/volume_mount
options: --cpus 1
steps:
- name: Check for dockerenv file
run: (ls /.dockerenv && echo Found dockerenv) || (echo No dockerenv)
```
When you only specify a container image, you can omit the `image` keyword.
```yaml
jobs:
container-test-job:
runs-on: ubuntu-latest
container: node:14.16
```