New translation batch for es (#28583)
* Add crowdin translations
* Run script/i18n/homogenize-frontmatter.js
* Run script/i18n/fix-translation-errors.js
* Run script/i18n/lint-translation-files.js --check rendering
* Run script/i18n/reset-files-with-broken-liquid-tags.js --language=es
* Run script/i18n/reset-known-broken-translation-files.js

Co-authored-by: Grace Park <gracepark@github.com>
This commit is contained in:
@@ -138,10 +138,10 @@ jobs:
|
||||
- name: Deploy to Azure Web App
|
||||
id: deploy-to-webapp
|
||||
uses: azure/webapps-deploy@0b651ed7546ecfc75024011f76944cb9b381ef1e
|
||||
with:
|
||||
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
|
||||
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
|
||||
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
|
||||
with:
|
||||
app-name: {% raw %}${{ env.AZURE_WEBAPP_NAME }}{% endraw %}
|
||||
publish-profile: {% raw %}${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}{% endraw %}
|
||||
images: 'ghcr.io/{% raw %}${{ env.REPO }}{% endraw %}:{% raw %}${{ github.sha }}{% endraw %}'
|
||||
```
|
||||
|
||||
## Recursos adicionales
|
||||
|
||||
@@ -0,0 +1,530 @@
|
||||
---
|
||||
title: Customizing the containers used by jobs
|
||||
intro: You can customize how your self-hosted runner invokes a container for a job.
|
||||
versions:
|
||||
feature: container-hooks
|
||||
type: reference
|
||||
miniTocMaxHeadingLevel: 4
|
||||
shortTitle: Customize containers used by jobs
|
||||
---
|
||||
|
||||
{% note %}
|
||||
|
||||
**Note**: This feature is currently in beta and is subject to change.
|
||||
|
||||
{% endnote %}
|
||||
|
||||
## About container customization
|
||||
|
||||
{% data variables.product.prodname_actions %} allows you to run a job within a container, using the `container:` statement in your workflow file. For more information, see "[Running jobs in a container](/actions/using-jobs/running-jobs-in-a-container)." To process container-based jobs, the self-hosted runner creates a container for each job.
|
||||
|
||||
{% data variables.product.prodname_actions %} supports commands that let you customize the way your containers are created by the self-hosted runner. For example, you can use these commands to manage the containers through Kubernetes or Podman, and you can also customize the `docker run` or `docker create` commands used to invoke the container. The customization commands are run by a script, which is automatically triggered when a specific environment variable is set on the runner. For more information, see "[Triggering the customization script](#triggering-the-customization-script)" below.
|
||||
|
||||
This customization is only available for Linux-based self-hosted runners, and root user access is not required.
|
||||
|
||||
## Container customization commands
|
||||
|
||||
{% data variables.product.prodname_actions %} includes the following commands for container customization:
|
||||
|
||||
- [`prepare_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#prepare_job): Called when a job is started.
|
||||
- [`cleanup_job`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#cleanup_job): Called at the end of a job.
|
||||
- [`run_container_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_container_step): Called once for each container action in the job.
|
||||
- [`run_script_step`](/actions/hosting-your-own-runners/customizing-the-containers-used-by-jobs#run_script_step): Runs any step that is not a container action.
|
||||
|
||||
Each of these customization commands must be defined in its own JSON file. The file name must match the command name, with the extension `.json`. For example, the `prepare_job` command is defined in `prepare_job.json`. These JSON files will then be run together on the self-hosted runner, as part of the main `index.js` script. This process is described in more detail in "[Generating the customization script](#generating-the-customization-script)."
|
||||
|
||||
These commands also include configuration arguments, explained below in more detail.
|
||||
|
||||
### `prepare_job`
|
||||
|
||||
The `prepare_job` command is called when a job is started. {% data variables.product.prodname_actions %} passes in any job or service containers the job has. This command will be called if you have any service or job containers in the job.
|
||||
|
||||
{% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `prepare_job` command:
|
||||
|
||||
- Prune anything from previous jobs, if needed.
|
||||
- Create a network, if needed.
|
||||
- Pull the job and service containers.
|
||||
- Start the job container.
|
||||
- Start the service containers.
|
||||
- Write to the response file any information that {% data variables.product.prodname_actions %} will need:
|
||||
- Required: State whether the container is an Alpine Linux container (using the `isAlpine` boolean).
|
||||
- Optional: Any context fields you want to set on the job context, otherwise they will be unavailable for users to use. For more information, see "[`job` context](/actions/learn-github-actions/contexts#job-context)."
|
||||
- Return `0` when the health checks have succeeded and the job/service containers are started.
|
||||
|
||||
#### Arguments
|
||||
|
||||
- `jobContainer`: **Optional**. An object containing information about the specified job container.
|
||||
- `image`: **Required**. A string containing the Docker image.
|
||||
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
|
||||
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
|
||||
- `environmentVariables`: **Optional**. A map of environment variables to set, as key-value pairs.
|
||||
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
|
||||
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
|
||||
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
|
||||
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
|
||||
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, same fields as above.
|
||||
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
|
||||
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
|
||||
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
|
||||
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
|
||||
- `username`: **Optional**. The username of the registry account.
|
||||
- `password`: **Optional**. The password to the registry account.
|
||||
- `serverUrl`: **Optional**. The registry URL.
|
||||
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
|
||||
- `services`: **Optional**. An array of service containers to spin up.
|
||||
- `contextName`: **Required**. The name of the service in the Job context.
|
||||
- `image`: **Required**. A string containing the Docker image.
|
||||
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
|
||||
- `environmentVariables`: **Optional**. A map of environment variables to set, as key-value pairs.
|
||||
- `userMountVolumes`: **Optional**. An array of mounts to mount into the container, same fields as above.
|
||||
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
|
||||
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
|
||||
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
|
||||
- `registry`: **Optional**. The Docker registry credentials for the private container registry.
|
||||
- `username`: **Optional**. The username of the registry account.
|
||||
- `password`: **Optional**. The password to the registry account.
|
||||
- `serverUrl`: **Optional**. The registry URL.
|
||||
- `portMappings`: **Optional**. A key value hash of _source:target_ ports to map into the container.
|
||||
|
||||
#### Example input
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"command": "prepare_job",
|
||||
"responseFile": "/users/octocat/runner/_work/{guid}.json",
|
||||
"state": {},
|
||||
"args": {
|
||||
"jobContainer": {
|
||||
"image": "node:14.16",
|
||||
"workingDirectory": "/__w/octocat-test2/octocat-test2",
|
||||
"createOptions": "--cpus 1",
|
||||
"environmentVariables": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"userMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "my_docker_volume",
|
||||
"targetVolumePath": "/volume_mount",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"systemMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
|
||||
"targetVolumePath": "/__w",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
|
||||
"targetVolumePath": "/__e",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
|
||||
"targetVolumePath": "/__w/_temp",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
|
||||
"targetVolumePath": "/__w/_actions",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
|
||||
"targetVolumePath": "/__w/_tool",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
|
||||
"targetVolumePath": "/github/home",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
|
||||
"targetVolumePath": "/github/workflow",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"registry": {
|
||||
"username": "octocat",
|
||||
"password": "examplePassword",
|
||||
"serverUrl": "https://index.docker.io/v1"
|
||||
},
|
||||
"portMappings": { "80": "801" }
|
||||
},
|
||||
"services": [
|
||||
{
|
||||
"contextName": "redis",
|
||||
"image": "redis",
|
||||
"createOptions": "--cpus 1",
|
||||
"environmentVariables": {},
|
||||
"userMountVolumes": [],
|
||||
"portMappings": { "80": "801" },
|
||||
"registry": {
|
||||
"username": "octocat",
|
||||
"password": "examplePassword",
|
||||
"serverUrl": "https://index.docker.io/v1"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example output
|
||||
|
||||
This example output is the contents of the `responseFile` defined in the input above.
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"state": {
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c",
|
||||
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"serviceContainers": {
|
||||
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
|
||||
}
|
||||
},
|
||||
"context": {
|
||||
"container": {
|
||||
"id": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c"
|
||||
},
|
||||
"services": {
|
||||
"redis": {
|
||||
"id": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105",
|
||||
"ports": {
|
||||
"8080": "8080"
|
||||
},
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c"
|
||||
}
|
||||
},
|
||||
"isAlpine": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### `cleanup_job`
|
||||
|
||||
The `cleanup_job` command is called at the end of a job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `cleanup_job` command:
|
||||
|
||||
- Stop any running service or job containers (or the equivalent pod).
|
||||
- Stop the network (if one exists).
|
||||
- Delete any job or service containers (or the equivalent pod).
|
||||
- Delete the network (if one exists).
|
||||
- Clean up anything else that was created for the job.
|
||||
|
||||
#### Arguments
|
||||
|
||||
No arguments are provided for `cleanup_job`.
|
||||
|
||||
#### Example input
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"command": "cleanup_job",
|
||||
"responseFile": null,
|
||||
"state": {
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c",
|
||||
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"serviceContainers": {
|
||||
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
|
||||
}
|
||||
},
|
||||
"args": {}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example output
|
||||
|
||||
No output is expected for `cleanup_job`.
|
||||
|
||||
### `run_container_step`
|
||||
|
||||
The `run_container_step` command is called once for each container action in your job. {% data variables.product.prodname_actions %} assumes that you will do the following tasks in the `run_container_step` command:
|
||||
|
||||
- Pull or build the required container (or fail if you cannot).
|
||||
- Run the container action and return the exit code of the container.
|
||||
- Stream any step logs output to stdout and stderr.
|
||||
- Clean up the container after it executes.
|
||||
|
||||
#### Arguments
|
||||
|
||||
- `image`: **Optional**. A string containing the Docker image. If no image is provided, a Dockerfile must be provided instead.
|
||||
- `dockerfile`: **Optional**. A string containing the path to the Dockerfile. If no Dockerfile is provided, an image must be provided instead.
|
||||
- `entryPointArgs`: **Optional**. A list containing the entry point args.
|
||||
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
|
||||
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
|
||||
- `createOptions`: **Optional**. The optional _create_ options specified in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
|
||||
- `environmentVariables`: **Optional**. A map of environment variables to set, as key-value pairs.
|
||||
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
|
||||
- `userMountVolumes`: **Optional**. An array of user mount volumes set in the YAML. For more information, see "[Example: Running a job within a container](/actions/using-jobs/running-jobs-in-a-container#example-running-a-job-within-a-container)."
|
||||
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
|
||||
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
|
||||
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
|
||||
- `systemMountVolumes`: **Required**. An array of mounts to mount into the container, using the same fields as above.
|
||||
- `sourceVolumePath`: **Required**. The source path to the volume that will be mounted into the Docker container.
|
||||
- `targetVolumePath`: **Required**. The target path to the volume that will be mounted into the Docker container.
|
||||
- `readOnly`: **Required**. Determines whether or not the mount should be read-only.
|
||||
- `registry`: **Optional**. The Docker registry credentials for a private container registry.
|
||||
- `username`: **Optional**. The username of the registry account.
|
||||
- `password`: **Optional**. The password to the registry account.
|
||||
- `serverUrl`: **Optional**. The registry URL.
|
||||
- `portMappings`: **Optional**. A key value hash of the _source:target_ ports to map into the container.
|
||||
|
||||
#### Example input for image
|
||||
|
||||
If you're using a Docker image, you can specify the image name in the `"image":` parameter.
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"command": "run_container_step",
|
||||
"responseFile": null,
|
||||
"state": {
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c",
|
||||
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"serviceContainers": {
|
||||
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
|
||||
}
|
||||
},
|
||||
"args": {
|
||||
"image": "node:14.16",
|
||||
"dockerfile": null,
|
||||
"entryPointArgs": ["-f", "/dev/null"],
|
||||
"entryPoint": "tail",
|
||||
"workingDirectory": "/__w/octocat-test2/octocat-test2",
|
||||
"createOptions": "--cpus 1",
|
||||
"environmentVariables": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"prependPath": ["/foo/bar", "bar/foo"],
|
||||
"userMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "my_docker_volume",
|
||||
"targetVolumePath": "/volume_mount",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"systemMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
|
||||
"targetVolumePath": "/__w",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
|
||||
"targetVolumePath": "/__e",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
|
||||
"targetVolumePath": "/__w/_temp",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
|
||||
"targetVolumePath": "/__w/_actions",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
|
||||
"targetVolumePath": "/__w/_tool",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
|
||||
"targetVolumePath": "/github/home",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
|
||||
"targetVolumePath": "/github/workflow",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"registry": null,
|
||||
"portMappings": { "80": "801" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example input for Dockerfile
|
||||
|
||||
If your container is defined by a Dockerfile, this example demonstrates how to specify the path to a `Dockerfile` in your input, using the `"dockerfile":` parameter.
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"command": "run_container_step",
|
||||
"responseFile": null,
|
||||
"state": {
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c",
|
||||
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"services": {
|
||||
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
|
||||
}
|
||||
},
|
||||
"args": {
|
||||
"image": null,
|
||||
"dockerfile": "/__w/_actions/foo/dockerfile",
|
||||
"entryPointArgs": ["hello world"],
|
||||
"entryPoint": "echo",
|
||||
"workingDirectory": "/__w/octocat-test2/octocat-test2",
|
||||
"createOptions": "--cpus 1",
|
||||
"environmentVariables": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"prependPath": ["/foo/bar", "bar/foo"],
|
||||
"userMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "my_docker_volume",
|
||||
"targetVolumePath": "/volume_mount",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"systemMountVolumes": [
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work",
|
||||
"targetVolumePath": "/__w",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/externals",
|
||||
"targetVolumePath": "/__e",
|
||||
"readOnly": true
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp",
|
||||
"targetVolumePath": "/__w/_temp",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_actions",
|
||||
"targetVolumePath": "/__w/_actions",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_tool",
|
||||
"targetVolumePath": "/__w/_tool",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_home",
|
||||
"targetVolumePath": "/github/home",
|
||||
"readOnly": false
|
||||
},
|
||||
{
|
||||
"sourceVolumePath": "/home/octocat/git/runner/_layout/_work/_temp/_github_workflow",
|
||||
"targetVolumePath": "/github/workflow",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"registry": null,
|
||||
"portMappings": { "80": "801" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example output
|
||||
|
||||
No output is expected for `run_container_step`.
|
||||
|
||||
### `run_script_step`
|
||||
|
||||
{% data variables.product.prodname_actions %} assumes that you will do the following tasks:
|
||||
|
||||
- Invoke the provided script inside the job container and return the exit code.
|
||||
- Stream any step log output to stdout and stderr.
|
||||
|
||||
#### Arguments
|
||||
|
||||
- `entryPointArgs`: **Optional**. A list containing the entry point arguments.
|
||||
- `entryPoint`: **Optional**. The container entry point to use if the default image entrypoint should be overwritten.
|
||||
- `prependPath`: **Optional**. An array of additional paths to prepend to the `$PATH` variable.
|
||||
- `workingDirectory`: **Required**. A string containing the absolute path of the working directory.
|
||||
- `environmentVariables`: **Optional**. A map of environment variables to set, as key-value pairs.
|
||||
|
||||
#### Example input
|
||||
|
||||
```json{:copy}
|
||||
{
|
||||
"command": "run_script_step",
|
||||
"responseFile": null,
|
||||
"state": {
|
||||
"network": "example_network_53269bd575972817b43f7733536b200c",
|
||||
"jobContainer": "82e8219701fe096a35941d869cf3d71af1d943b5d8bdd718857fb87ac3042480",
|
||||
"serviceContainers": {
|
||||
"redis": "60972d9aa486605e66b0dad4abb678dc3d9116f536579e418176eedb8abb9105"
|
||||
}
|
||||
},
|
||||
"args": {
|
||||
"entryPointArgs": ["-e", "/runner/temp/example.sh"],
|
||||
"entryPoint": "bash",
|
||||
"environmentVariables": {
|
||||
"NODE_ENV": "development"
|
||||
},
|
||||
"prependPath": ["/foo/bar", "bar/foo"],
|
||||
"workingDirectory": "/__w/octocat-test2/octocat-test2"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example output
|
||||
|
||||
No output is expected for `run_script_step`.
|
||||
|
||||
## Generating the customization script
|
||||
|
||||
{% data variables.product.prodname_dotcom %} has created an example repository that demonstrates how to generate customization scripts for Docker and Kubernetes.
|
||||
|
||||
{% note %}
|
||||
|
||||
**Note:** The resulting scripts are available for testing purposes, and you will need to determine whether they are appropriate for your requirements.
|
||||
|
||||
{% endnote %}
|
||||
|
||||
1. Clone the [actions/runner-container-hooks](https://github.com/actions/runner-container-hooks) repository to your self-hosted runner.
|
||||
|
||||
1. The `examples/` directory contains some existing customization commands, each with its own JSON file. You can review these examples and use them as a starting point for your own customization commands.
|
||||
|
||||
- `prepare_job.json`
|
||||
- `run_script_step.json`
|
||||
- `run_container_step.json`
|
||||
|
||||
1. Build the npm packages. These commands generate the `index.js` files inside `packages/docker/dist` and `packages/k8s/dist`.
|
||||
|
||||
```shell
|
||||
npm install && npm run bootstrap && npm run build-all
|
||||
```
|
||||
|
||||
When the resulting `index.js` is triggered by {% data variables.product.prodname_actions %}, it will run the customization commands defined in the JSON files. To trigger the `index.js`, you will need to add its path to your `ACTIONS_RUNNER_CONTAINER_HOOK` environment variable, as described in the next section.
|
||||
|
||||
## Triggering the customization script
|
||||
|
||||
The custom script must be located on the runner, but should not be stored in the self-hosted runner application directory. The scripts are executed in the security context of the service account that's running the runner service.
|
||||
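For example, after building the hooks as described in "[Generating the customization script](#generating-the-customization-script)," you might copy the generated script to a directory outside the runner application directory. This is only a sketch; the destination path below is a hypothetical example, and you should choose any location that the runner's service account can read.

```shell
# Hypothetical location for the hook, outside the runner application directory
mkdir -p /opt/runner-hooks
cp packages/k8s/dist/index.js /opt/runner-hooks/index.js
```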
|
||||
{% note %}
|
||||
|
||||
**Note**: The triggered script is processed synchronously, so it will block job execution while running.
|
||||
|
||||
{% endnote %}
|
||||
|
||||
The script is automatically executed when the runner has the following environment variable containing an absolute path to the script:
|
||||
|
||||
- `ACTIONS_RUNNER_CONTAINER_HOOK`: The script defined in this environment variable is triggered when a job has been assigned to a runner, but before the job starts running.
|
||||
|
||||
To set this environment variable, you can either add it to the operating system, or add it to a file named `.env` within the self-hosted runner application directory. For example, the following `.env` entry will have the runner automatically run the script at `/Users/octocat/runner/index.js` before each container-based job runs:
|
||||
|
||||
```bash
|
||||
ACTIONS_RUNNER_CONTAINER_HOOK=/Users/octocat/runner/index.js
|
||||
```
|
||||
|
||||
If you want to ensure that your job always runs inside a container, and subsequently always applies your container customizations, you can set the `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` variable on the self-hosted runner to `true`. This will cause any job that does not specify a job container to fail.
|
||||
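As a minimal sketch combining this setting with the container hook described above, a runner's `.env` file could define both variables together. The script path shown is only an example.

```bash
ACTIONS_RUNNER_CONTAINER_HOOK=/Users/octocat/runner/index.js
ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER=true
```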
|
||||
## Troubleshooting
|
||||
|
||||
### No timeout setting
|
||||
|
||||
There is currently no timeout setting available for the script executed by `ACTIONS_RUNNER_CONTAINER_HOOK`. As a result, consider adding timeout handling within your script.
|
||||
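For example, when testing your hook manually on the runner host, you could wrap the invocation with the `timeout` utility to see how your script behaves when it is interrupted. This is only a local testing sketch; it assumes your hook is a Node.js script, that it reads its command payload from standard input, and that you feed it one of the example JSON files from the `examples/` directory.

```shell
# Hypothetical local test: stop the hook if it runs for more than 60 seconds
timeout 60 node /opt/runner-hooks/index.js < examples/prepare_job.json
echo "Hook exit code: $?"
```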
|
||||
### Reviewing the workflow run log
|
||||
|
||||
To confirm whether your scripts are executing, you can review the logs for that job. For more information on checking the logs, see "[Viewing logs to diagnose failures](/actions/monitoring-and-troubleshooting-workflows/using-workflow-run-logs#viewing-logs-to-diagnose-failures)."
|
||||
@@ -20,6 +20,7 @@ children:
|
||||
- /adding-self-hosted-runners
|
||||
- /autoscaling-with-self-hosted-runners
|
||||
- /running-scripts-before-or-after-a-job
|
||||
- /customizing-the-containers-used-by-jobs
|
||||
- /configuring-the-self-hosted-runner-application-as-a-service
|
||||
- /using-a-proxy-server-with-self-hosted-runners
|
||||
- /using-labels-with-self-hosted-runners
|
||||
|
||||
@@ -86,7 +86,7 @@ Si quieres permitir respuestas de correo electrónico para las notificaciones, d
|
||||
|
||||
### Crea un Paquete de soporte
|
||||
|
||||
If you cannot determine what is wrong from the displayed error message, you can download a [support bundle](/enterprise/admin/guides/enterprise-support/providing-data-to-github-support) containing the entire SMTP conversation between your mail server and {% data variables.product.prodname_ghe_server %}. Una vez que hayas descargado y extraído el paquete, verifica las entradas en *enterprise-manage-logs/unicorn.log* para toda la bitácora de conversaciones de SMTP y cualquier error relacionado.
|
||||
Si no puedes determinar lo que está mal desde el mensaje de error mostrado, puedes descargar un [paquete de soporte](/enterprise/admin/guides/enterprise-support/providing-data-to-github-support) que contiene toda la conversación SMTP entre tu servidor de correo y {% data variables.product.prodname_ghe_server %}. Una vez que hayas descargado y extraído el paquete, verifica las entradas en *enterprise-manage-logs/unicorn.log* para toda la bitácora de conversaciones de SMTP y cualquier error relacionado.
|
||||
|
||||
El registro unicornio debería mostrar una transacción similar a la siguiente:
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ topics:
|
||||
|
||||
## Configurar el primer nodo
|
||||
|
||||
1. Conéctate al nodo que se designará como el primario de MySQL en la `cluster.conf`. For more information, see "[About the cluster configuration file](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)."
|
||||
1. Conéctate al nodo que se designará como el primario de MySQL en la `cluster.conf`. Para obtener más información, consulta la sección "[Acerca del archivo de configuración de clúster](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
|
||||
2. En tu navegador web, visita `https://<ip address>:8443/setup/`.
|
||||
{% data reusables.enterprise_installation.upload-a-license-file %}
|
||||
{% data reusables.enterprise_installation.save-settings-in-web-based-mgmt-console %}
|
||||
@@ -30,7 +30,7 @@ topics:
|
||||
|
||||
## Inicializar la agrupación
|
||||
|
||||
Para inicializar la agrupación, necesitas un archivo de configuración de agrupación (`cluster.conf`). For more information, see "[About the cluster configuration file](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
|
||||
Para inicializar la agrupación, necesitas un archivo de configuración de agrupación (`cluster.conf`). Para obtener más información, consulta la sección "[Acerca del archivo de configuración de clúster](/enterprise/admin/guides/clustering/initializing-the-cluster/#about-the-cluster-configuration-file)".
|
||||
|
||||
1. Desde el primer nodo que se configuró, ejecuta `ghe-cluster-config-init`. De esta manera, se inicializará la agrupación si existen nodos en el archivo de configuración de la agrupación que no están configurados.
|
||||
2. Ejecuta `ghe-cluster-config-apply`. Esto validará el archivo `cluster.conf`, aplicará la configuración a cada archivo del nodo y traerá los servicios configurados en cada nodo.
|
||||
@@ -39,7 +39,7 @@ Para comprobar el estado de una agrupación en funcionamiento, usa el comando `g
|
||||
|
||||
## Acerca del archivo de configuración de la agrupación
|
||||
|
||||
El archivo de configuración de la agrupación (`cluster.conf`) define los nodos en la agrupación, y los servicios que ejecutan. For more information, see "[About cluster nodes](/enterprise/admin/guides/clustering/about-cluster-nodes)."
|
||||
El archivo de configuración de la agrupación (`cluster.conf`) define los nodos en la agrupación, y los servicios que ejecutan. Para obtener más información, consulta la sección "[Acerca de los nodos de clúster](/enterprise/admin/guides/clustering/about-cluster-nodes)".
|
||||
|
||||
Este ejemplo `cluster.conf` define una agrupación con cinco nodos.
|
||||
|
||||
|
||||
@@ -95,4 +95,4 @@ Para actualizar a la versión más reciente {% data variables.product.prodname_e
|
||||
{% endnote %}
|
||||
|
||||
15. Cambia el tráfico de red de usuario desde la instancia anterior a la nueva instancia utilizando la asignación de DNS o la dirección IP.
|
||||
16. Upgrade to the latest patch release of {% data variables.product.prodname_ghe_server %}. Para obtener más información, consulta "[Actualizar {% data variables.product.prodname_ghe_server %}](/enterprise/admin/guides/installation/upgrading-github-enterprise-server/)."
|
||||
16. Mejora al lanzamiento de parche más reciente de {% data variables.product.prodname_ghe_server %}. Para obtener más información, consulta "[Actualizar {% data variables.product.prodname_ghe_server %}](/enterprise/admin/guides/installation/upgrading-github-enterprise-server/)."
|
||||
|
||||
@@ -25,7 +25,7 @@ You must enter unique values from your SAML IdP when configuring SAML SSO for {%
|
||||
|
||||
{% ifversion ghec %}
|
||||
|
||||
The SP metadata for {% data variables.product.product_name %} is available for either organizations or enterprises with SAML SSO. {% data variables.product.product_name %} uses the `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST` binding.
|
||||
The SP metadata for {% data variables.product.product_name %} is available for either organizations or enterprises with SAML SSO. {% data variables.product.product_name %} utiliza el enlace `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST`.
|
||||
|
||||
### Organizaciones
|
||||
|
||||
@@ -33,13 +33,13 @@ You can configure SAML SSO for an individual organization in your enterprise. Yo
|
||||
|
||||
The SP metadata for an organization on {% data variables.product.product_location %} is available at `https://github.com/orgs/ORGANIZATION/saml/metadata`, where **ORGANIZATION** is the name of your organization on {% data variables.product.product_location %}.
|
||||
|
||||
| Valor | Otros nombres | Descripción | Ejemplo |
|
||||
|:--------------------------------------------------------- |:------------------------------------ |:---------------------------------------------------------------------------------------- |:--------------------------------------------------- |
|
||||
| ID de Entidad de SP | SP URL, audience restriction | The top-level URL for your organization on {% data variables.product.product_location %} | `https://github.com/orgs/ORGANIZATION` |
|
||||
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | Reply, recipient, or destination URL | URL a la que el IdP enviará respuestas de SAML | `https://github.com/orgs/ORGANIZATION/saml/consume` |
|
||||
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `https://github.com/orgs/ORGANIZATION/saml/sso` |
|
||||
| Valor | Otros nombres | Descripción | Ejemplo |
|
||||
|:--------------------------------------------------------- |:---------------------------------------- |:---------------------------------------------------------------------------------------- |:--------------------------------------------------- |
|
||||
| ID de Entidad de SP | URL de SP, restricción de la audiencia | The top-level URL for your organization on {% data variables.product.product_location %} | `https://github.com/orgs/ORGANIZATION` |
|
||||
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `https://github.com/orgs/ORGANIZATION/saml/consume` |
|
||||
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `https://github.com/orgs/ORGANIZATION/saml/sso` |
|
||||
|
||||
### Enterprises
|
||||
### Empresas
|
||||
|
||||
The SP metadata for an enterprise on {% data variables.product.product_location %} is available at `https://github.com/enterprises/ENTERPRISE/saml/metadata`, where **ENTERPRISE** is the name of your enterprise on {% data variables.product.product_location %}.
|
||||
|
||||
@@ -53,11 +53,11 @@ The SP metadata for an enterprise on {% data variables.product.product_location
|
||||
|
||||
The SP metadata for {% data variables.product.product_location %} is available at `http(s)://HOSTNAME/saml/metadata`, where **HOSTNAME** is the hostname for your instance. {% data variables.product.product_name %} utiliza el enlace `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST`.
|
||||
|
||||
| Valor | Otros nombres | Descripción | Ejemplo |
|
||||
|:--------------------------------------------------------- |:---------------------------------------- |:---------------------------------------------------------------- |:--------------------------------- |
|
||||
| ID de Entidad de SP | URL de SP, restricción de la audiencia | Your top-level URL for {% data variables.product.product_name %} | `http(s)://HOSTNAME` |
|
||||
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `http(s)://HOSTNAME/saml/consume` |
|
||||
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `http(s)://HOSTNAME/sso` |
|
||||
| Valor | Otros nombres | Descripción | Ejemplo |
|
||||
|:--------------------------------------------------------- |:---------------------------------------- |:----------------------------------------------------------------------- |:--------------------------------- |
|
||||
| ID de Entidad de SP | URL de SP, restricción de la audiencia | Tu URL de más alto nivel para {% data variables.product.product_name %} | `http(s)://HOSTNAME` |
|
||||
| URL del Servicio de Consumidor de Aserciones (ACS) del SP | URL de respuesta, receptora o de destino | URL a la que el IdP enviará respuestas de SAML | `http(s)://HOSTNAME/saml/consume` |
|
||||
| URL de inicio de sesión único (SSO) del SP | | URL en donde el IdP comienza con SSO | `http(s)://HOSTNAME/sso` |
|
||||
|
||||
{% elsif ghae %}
|
||||
|
||||
@@ -80,9 +80,9 @@ The following SAML attributes are available for {% data variables.product.produc
|
||||
| `ID del nombre` | Sí | Un identificador de usuario persistente. Se puede usar cualquier formato de identificador de nombre persistente. {% ifversion ghec %}If you use an enterprise with {% data variables.product.prodname_emus %}, {% endif %}{% data variables.product.product_name %} will normalize the `NameID` element to use as a username unless one of the alternative assertions is provided. Para obtener más información, consulta la sección "[Consideraciones de nombre de usuario para la autenticación externa](/admin/identity-and-access-management/managing-iam-for-your-enterprise/username-considerations-for-external-authentication)". |
|
||||
| `SessionNotOnOrAfter` | No | The date that {% data variables.product.product_name %} invalidates the associated session. After invalidation, the person must authenticate once again to access {% ifversion ghec or ghae %}your enterprise's resources{% elsif ghes %}{% data variables.product.product_location %}{% endif %}. For more information, see "[Session duration and timeout](#session-duration-and-timeout)." |
|
||||
{%- ifversion ghes or ghae %}
|
||||
| `administrator` | No | When the value is `true`, {% data variables.product.product_name %} will automatically promote the user to be a {% ifversion ghes %}site administrator{% elsif ghae %}enterprise owner{% endif %}. Any other value or a non-existent value will demote the account and remove administrative access. |
| `username` | No | The username for {% data variables.product.product_location %}. |
|
||||
| `administrator` | No | When the value is `true`, {% data variables.product.product_name %} will automatically promote the user to be a {% ifversion ghes %}site administrator{% elsif ghae %}enterprise owner{% endif %}. Setting this attribute to anything but `true` will result in demotion, as long as the value is not blank. Omitting this attribute or leaving the value blank will not change the role of the user. |
| `username` | No | The username for {% data variables.product.product_location %}. |
|
||||
{%- endif %}
|
||||
| `full_name` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} full name of the user to display on the user's profile page. |
| `emails` | No | Las direcciones de correo electrónico del usuario.{% ifversion ghes or ghae %} Puedes especificar más de una dirección.{% endif %}{% ifversion ghec or ghes %} Si sincronizas el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}, {% data variables.product.prodname_github_connect %} utiliza `emails` para identificar a los usuarios únicos entre los productos. Para obtener más información, consulta la sección "[Sincronizar el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}](/billing/managing-your-license-for-github-enterprise/syncing-license-usage-between-github-enterprise-server-and-github-enterprise-cloud)".{% endif %} |
| `public_keys` | No | {% ifversion ghec %}Si configuras el SSO de SAML para una empresa y utilizas {% data variables.product.prodname_emus %}, las{% else %}Las{% endif %}llaves SSH públicas para el usuario. You can specify more than one key. |
| `gpg_keys` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} GPG keys for the user. Puedes especificar más de una clave. |
|
||||
| `full_name` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} full name of the user to display on the user's profile page. |
| `emails` | No | Las direcciones de correo electrónico del usuario.{% ifversion ghes or ghae %} Puedes especificar más de una dirección.{% endif %}{% ifversion ghec or ghes %} Si sincronizas el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}, {% data variables.product.prodname_github_connect %} utiliza `emails` para identificar a los usuarios únicos entre los productos. Para obtener más información, consulta la sección "[Sincronizar el uso de licencias entre {% data variables.product.prodname_ghe_server %} y {% data variables.product.prodname_ghe_cloud %}](/billing/managing-your-license-for-github-enterprise/syncing-license-usage-between-github-enterprise-server-and-github-enterprise-cloud)".{% endif %} |
| `public_keys` | No | {% ifversion ghec %}Si configuras el SSO de SAML para una empresa y utilizas {% data variables.product.prodname_emus %}, las{% else %}Las{% endif %}llaves SSH públicas para el usuario. Puedes especificar más de una clave. |
| `gpg_keys` | No | {% ifversion ghec %}If you configure SAML SSO for an enterprise and you use {% data variables.product.prodname_emus %}, the{% else %}The{% endif %} GPG keys for the user. Puedes especificar más de una clave. |
|
||||
|
||||
Para especificar más de un valor para un atributo, utiliza elementos múltiples de `<saml2:AttributeValue>`.
|
||||
|
||||
|
||||
@@ -42,7 +42,6 @@ Si compraste {% data variables.product.prodname_enterprise %} mediante un Acuerd
|
||||
|
||||
### Facturación para las precompilaciones de los {% data variables.product.prodname_codespaces %}
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
{% data reusables.codespaces.billing-for-prebuilds %}
|
||||
|
||||
|
||||
@@ -23,7 +23,7 @@ shortTitle: Filtrar alertas
|
||||
|
||||
## Acerca de filtrar el resumen de seguridad
|
||||
|
||||
Puedes utilizar filtros en el resumen de seguridad para reducir tu enfoque con base en una serie de factores, como el nivel de riesgo de la alerta, el tipo de esta y la habilitación de características. Los diversos filtros se encuentran disponibles dependiendo de la vista específica y de si estás analizando a nivel de organización, de equipo o de repositorio.
|
||||
Puedes utilizar filtros en el resumen de seguridad para reducir tu enfoque con base en una serie de factores, como el nivel de riesgo de la alerta, el tipo de esta y la habilitación de características. Different filters are available depending on the specific view and whether your analysis is at the organization, team or repository level.
|
||||
|
||||
## Filtrar por repositorio
|
||||
|
||||
|
||||
@@ -50,11 +50,19 @@ The dependency review feature becomes available when you enable the dependency g
|
||||
|
||||
{% data reusables.dependency-review.dependency-review-action-beta-note %}
|
||||
|
||||
You can use the Dependency Review GitHub Action in your repository to enforce dependency reviews on your pull requests. The action scans for vulnerable versions of dependencies introduced by package version changes in pull requests, and warns you about the associated security vulnerabilities. This gives you better visibility of what's changing in a pull request, and helps prevent vulnerabilities being added to your repository. For more information, see [`dependency-review-action`](https://github.com/actions/dependency-review-action).
|
||||
The action is available for all {% ifversion fpt or ghec %}public repositories, as well as private {% endif %}repositories that have {% data variables.product.prodname_GH_advanced_security %} enabled.
|
||||
|
||||
You can use the {% data variables.product.prodname_dependency_review_action %} in your repository to enforce dependency reviews on your pull requests. The action scans for vulnerable versions of dependencies introduced by package version changes in pull requests, and warns you about the associated security vulnerabilities. This gives you better visibility of what's changing in a pull request, and helps prevent vulnerabilities being added to your repository. For more information, see [`dependency-review-action`](https://github.com/actions/dependency-review-action).
|
||||
|
||||

|
||||
|
||||
The Dependency Review GitHub Action check will fail if it discovers any vulnerable package, but will only block a pull request from being merged if the repository owner has required the check to pass before merging. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging)."
|
||||
By default, the {% data variables.product.prodname_dependency_review_action %} check will fail if it discovers any vulnerable packages. A failed check blocks a pull request from being merged when the repository owner requires the dependency review check to pass. For more information, see "[About protected branches](/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-status-checks-before-merging)."
|
||||
|
||||
The action uses the Dependency Review REST API to get the diff of dependency changes between the base commit and head commit. You can use the Dependency Review API to get the diff of dependency changes, including vulnerability data, between any two commits on a repository. For more information, see "[Dependency review](/rest/reference/dependency-graph#dependency-review)."
|
||||
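As an illustration, assuming the comparison endpoint described in the REST reference linked above, you could retrieve the dependency diff between two commits with the {% data variables.product.prodname_cli %}. `OWNER`, `REPO`, `BASE`, and `HEAD` are placeholders for your repository and commit SHAs.

```shell
# Hypothetical query of the dependency review comparison endpoint
gh api /repos/OWNER/REPO/dependency-graph/compare/BASE...HEAD
```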
|
||||
{% ifversion dependency-review-action-configuration %}
|
||||
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs. For example, you can specify the severity level that will make the action fail, or set an allow or deny list for licenses to scan. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
|
||||
{% endif %}
|
||||
|
||||
{% endif %}
|
||||
|
||||
|
||||
@@ -47,3 +47,56 @@ La revisión de dependencias se encuentra disponible cuando se habilita la gráf
|
||||
1. Under "Configure security and analysis features", check if the dependency graph is enabled.
|
||||
1. If dependency graph is enabled, click **Enable** next to "{% data variables.product.prodname_GH_advanced_security %}" to enable {% data variables.product.prodname_advanced_security %}, including dependency review. El botón de habilitar está inhabilitado si tu empresa no tiene licencias disponibles para la {% data variables.product.prodname_advanced_security %}.{% ifversion ghes < 3.3 %} {% endif %}{% ifversion ghes > 3.2 %} {% endif %}
|
||||
{% endif %}
|
||||
|
||||
{% ifversion dependency-review-action-configuration %}
|
||||
## Configuring the {% data variables.product.prodname_dependency_review_action %}
|
||||
|
||||
{% data reusables.dependency-review.dependency-review-action-beta-note %}
|
||||
{% data reusables.dependency-review.dependency-review-action-overview %}
|
||||
|
||||
The following configuration options are available.
|
||||
|
||||
| Opción | Requerido | Uso |
|
||||
| ------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `fail-on-severity` | Opcional | Defines the threshold for level of severity (`low`, `moderate`, `high`, `critical`).</br>The action will fail on any pull requests that introduce vulnerabilities of the specified severity level or higher. |
|
||||
| `allow-licenses` | Opcional | Contains a list of allowed licenses. You can find the possible values for this parameter in the [Licenses](/rest/licenses) page of the API documentation.</br>The action will fail on pull requests that introduce dependencies with licenses that do not match the list. |
|
||||
| `deny-licenses` | Opcional | Contains a list of prohibited licenses. You can find the possible values for this parameter in the [Licenses](/rest/licenses) page of the API documentation.</br>The action will fail on pull requests that introduce dependencies with licenses that match the list. |
|
||||
|
||||
{% tip %}
|
||||
|
||||
**Tip:** The `allow-licenses` and `deny-licenses` options are mutually exclusive.
|
||||
|
||||
{% endtip %}
|
||||
|
||||
This {% data variables.product.prodname_dependency_review_action %} example file illustrates how you can use these configuration options.
|
||||
|
||||
```yaml{:copy}
|
||||
name: 'Dependency Review'
|
||||
on: [pull_request]
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
dependency-review:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: 'Checkout Repository'
|
||||
uses: {% data reusables.actions.action-checkout %}
|
||||
- name: Dependency Review
|
||||
uses: actions/dependency-review-action@v2
|
||||
with:
|
||||
# Possible values: "critical", "high", "moderate", "low"
|
||||
fail-on-severity: critical
|
||||
# You can only include one of these two options: `allow-licenses` and `deny-licenses`
|
||||
# ([String]). Only allow these licenses (optional)
|
||||
# Possible values: Any `spdx_id` value(s) from https://docs.github.com/en/rest/licenses
|
||||
# allow-licenses: GPL-3.0, BSD-3-Clause, MIT
|
||||
|
||||
# ([String]). Block the pull request on these licenses (optional)
|
||||
# Possible values: Any `spdx_id` value(s) from https://docs.github.com/en/rest/licenses
|
||||
# deny-licenses: LGPL-2.0, BSD-2-Clause
|
||||
```
|
||||
|
||||
For further details about the configuration options, see [`dependency-review-action`](https://github.com/actions/dependency-review-action#readme).
|
||||
{% endif %}
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
title: Acerca de las precompilaciones de los codespaces
|
||||
shortTitle: Acerca de las precompilaciones
|
||||
intro: Las precompilaciones de los codespaces te ayudan a acelerar la creación de los codespaces nuevos.
|
||||
intro: Codespaces prebuilds help to speed up the creation of new codespaces for large or complex repositories.
|
||||
versions:
|
||||
fpt: '*'
|
||||
ghec: '*'
|
||||
@@ -10,15 +10,13 @@ topics:
|
||||
product: '{% data reusables.gated-features.codespaces %}'
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
## Resumen
|
||||
|
||||
El precompilar tus codespaces te permite ser más productivo y acceder a tu codespace más rápidamente, sin importar el tamaño y complejidad de tu proyecto. Esto es porque cualquier código fuente, extensiones del editor, dependencias de proyecto, comandos y configuraciones ya se han descargado, instalado y aplicado antes de que crees un codespace para tu proyecto. Piensa en la precompilación como una plantilla "lista para utilizarse" para un codespace.
|
||||
Prebuilding your codespaces allows you to be more productive and access your codespace faster, particularly if your repository is large or complex and new codespaces currently take more than 2 minutes to start. Esto es porque cualquier código fuente, extensiones del editor, dependencias de proyecto, comandos y configuraciones ya se han descargado, instalado y aplicado antes de que crees un codespace para tu proyecto. Piensa en la precompilación como una plantilla "lista para utilizarse" para un codespace.
|
||||
|
||||
Predeterminadamente, cada que subas cambios a tu repositorio, {% data variables.product.prodname_codespaces %} utiliza {% data variables.product.prodname_actions %} para actualizar tus precompilaciones automáticamente.
|
||||
|
||||
Cuando las precompilaciones están disponibles para una rama en particular de un repositorio y para tu región, verás la etiqueta "{% octicon "zap" aria-label="The zap icon" %} Prebuild ready" en la lista de opciones de tipo de máquina al crear un codespace. Para obtener más información, consulta la sección "[Crear un codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)".
|
||||
Cuando las precompilaciones están disponibles para una rama en particular de un repositorio y para tu región, verás la etiqueta "{% octicon "zap" aria-label="The zap icon" %} Prebuild ready" en la lista de opciones de tipo de máquina al crear un codespace. If a prebuild is still being created, you will see the "{% octicon "history" aria-label="The history icon" %} Prebuild in progress" label. Para obtener más información, consulta la sección "[Crear un codespace](/codespaces/developing-in-codespaces/creating-a-codespace#creating-a-codespace)".
|
||||
|
||||

|
||||
|
||||
|
||||
@@ -13,8 +13,6 @@ product: '{% data reusables.gated-features.codespaces %}'
|
||||
permissions: People with admin access to a repository can configure prebuilds for the repository.
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
Puedes ajustar una configuración de precompilación para una rama específica de tu repositorio.
|
||||
|
||||
Habitualmente, a cualquier rama que se cree de una rama base con precompilación habilitada habitualmente también se le asignará una precompilación durante la creación del codespace. Esto es cierto si el contenedor dev en la rama es el mismo que en la rama base. Esto es porque la mayoría de las configuraciones de precompilación de las ramas con la misma configuración de contenedor dev son idénticas, así que los desarrolladores también pueden beneficiarse de tener tiempos más rápidos de creación de codespaces en dichas ramas. Para obtener más información, consulta la sección "[Introducción a los contenedores dev](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers)".
|
||||
@@ -48,7 +46,15 @@ Antes de que configures las precompilaciones para tu proyecto, se debe cumplir c
|
||||
|
||||
{% endnote %}
|
||||
|
||||
1. Elige las regiones en las que quieres configurar una precompilación. Los desarrolladores deben ubicarse en una región que selecciones para poder crear codespaces desde una precompilación. Como alternativa, selecciona **Todas las regiones**.
|
||||
1. Elige cómo quieres activar automáticamente las actualizaciones de la plantilla de precompilación.
|
||||
|
||||
* **Cada subida** (el ajuste predeterminado) - Con este ajuste, las configuraciones de precompilación se actualizarán en cada subida que se haga a la rama predeterminada. Esto garantizará que los codespaces que se generen de una plantilla de precompilación siempre contengan la configuración de codespace más reciente, incluyendo cualquier dependencia que se haya actualizado o agregado recientemente.
|
||||
* **En el cambio de configuración** - Con este ajuste, las configuraciones de precompilación se actualizarán cada vez que lo hagan los archivos de configuración asociados para cada repositorio y rama en cuestión. Esto garantiza que los cambios a los archivos de configuración del contenedor dev para el repositorio se utilicen cuando se genera un codespace desde una plantilla de precompilación. El flujo de trabajo de acciones que actualiza la plantilla de precompilación se ejecutará con menor frecuencia, así que esta opción utilizará menos minutos de las acciones. Sin embargo, esta opción no garantiza que los codespaces siempre incluyan dependencias recientemente actualizadas o agregadas, así que estas podrían tener que agregarse o actualizarse manualmente después de que un codespace se haya creado.
|
||||
* **Programado** - Con este ajuste, puedes hacer que tus configuraciones de precompilación se actualicen en un itinerario personalizado que tú defines. This can reduce consumption of Actions minutes; however, with this option, codespaces may be created that do not use the latest dev container configuration changes.
|
||||
|
||||

|
||||
|
||||
1. Select **Reduce prebuild available to only specific regions** to limit access to your prebuilt image, then select which regions you want it available in. Developers can only create codespaces from a prebuild if they are located in a region you select. By default, your prebuilt image is available to all regions where codespaces is available and storage costs apply for each region.
|
||||
|
||||

|
||||
|
||||
@@ -60,13 +66,17 @@ Antes de que configures las precompilaciones para tu proyecto, se debe cumplir c
|
||||
|
||||
{% endnote %}
|
||||
|
||||
1. Elige cómo quieres activar automáticamente las actualizaciones de la plantilla de precompilación.
|
||||
1. Set the number of prebuild template versions to be retained. You can input any number between 1 and 5. The default number of saved versions is 2, which means that only the latest template version and the previous version are saved.
|
||||
|
||||
* **Cada subida** (el ajuste predeterminado) - Con este ajuste, las configuraciones de precompilación se actualizarán en cada subida que se haga a la rama predeterminada. Esto garantizará que los codespaces que se generen de una plantilla de precompilación siempre contengan la configuración de codespace más reciente, incluyendo cualquier dependencia que se haya actualizado o agregado recientemente.
|
||||
* **En el cambio de configuración** - Con este ajuste, las configuraciones de precompilación se actualizarán cada vez que cambien los archivos de configuración asociados para cada repositorio y rama en cuestión. Esto garantiza que los cambios a los archivos de configuración del contenedor dev para el repositorio se utilicen cuando se genera un codespace desde una plantilla de precompilación. El flujo de trabajo de acciones que actualiza la plantilla de precompilación se ejecutará con menor frecuencia, así que esta opción utilizará menos minutos de las acciones. Sin embargo, esta opción no garantiza que los codespaces siempre incluyan dependencias recientemente actualizadas o agregadas, así que estas podrían tener que agregarse o actualizarse manualmente después de que un codespace se haya creado.
|
||||
* **Programado** - Con este ajuste, puedes hacer que tus configuraciones de precompilación se actualicen en un itinerario personalizado que tú defines. Esto puede reducir el consumo de minutos de acciones y también la cantidad de tiempo durante la cual las precompilaciones no están disponibles porque se están actualizando. Sin embargo, con esta opción, se podrían crear codespaces que no utilicen los cambios de configuración más recientes al contenedor dev.
|
||||
Depending on your prebuild trigger settings, your prebuild template could change with each push or on each dev container configuration change. Retaining older versions of prebuild templates enables you to create a prebuild from an older commit with a different dev container configuration than the current prebuild template. Since there is a storage cost associated with retaining prebuild template versions, you can choose the number of versions to be retained based on the needs of your team. For more information on billing, see "[About billing for {% data variables.product.prodname_codespaces %}](/billing/managing-billing-for-github-codespaces/about-billing-for-codespaces#codespaces-pricing)."
|
||||
|
||||

|
||||
If you set the number of prebuild template versions to save to 1, {% data variables.product.prodname_codespaces %} will only save the latest version of the prebuild template and will delete the older version each time the template is updated. This means you will not get a prebuilt codespace if you go back to an older dev container configuration.
|
||||
|
||||

|
||||
|
||||
1. Add users or teams to notify when the prebuild workflow run fails for this configuration. You can begin typing a username, team name, or full name, then click the name once it appears to add them to the list. The users or teams you add will receive an email when prebuild failures occur, containing a link to the workflow run logs to help with further investigation.
|
||||
|
||||

|
||||
|
||||
1. Da clic en **Crear**.
|
||||
|
||||
|
||||
@@ -15,5 +15,4 @@ children:
|
||||
- /managing-prebuilds
|
||||
- /testing-dev-container-changes
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
|
||||
@@ -12,8 +12,6 @@ product: '{% data reusables.gated-features.codespaces %}'
|
||||
miniTocMaxHeadingLevel: 3
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
## Verificar, cambiar y borrar tus configuraciones de precompilación
|
||||
|
||||
Las precompilaciones que configuras para un repositorio se crean y actualizan utilizando un flujo de trabajo de {% data variables.product.prodname_actions %} que administra el servicio de {% data variables.product.prodname_codespaces %}.
|
||||
|
||||
@@ -14,8 +14,6 @@ product: '{% data reusables.gated-features.codespaces %}'
|
||||
permissions: People with write permissions to a repository can create or edit the dev container configuration for a branch.
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
Cualquier cambio que hagas en la configuración del contenedor dev para una rama con precompilación habilitada dará como resultado una actualización a la configuración de codespace y a la plantilla precompilada asociada. Por lo tanto, es importante probar estos cambios en un codespace de una rama de prueba antes de confirmar tus cambios en una rama de tu repositorio que se esté utilizando activamente. Esto garantizará que no estás introduciendo cambios importantes para tu equipo.
|
||||
|
||||
Para obtener más información, consulta la sección "[Introducción a los contenedores dev](/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers)".
|
||||
|
||||
@@ -12,8 +12,6 @@ product: '{% data reusables.gated-features.codespaces %}'
|
||||
miniTocMaxHeadingLevel: 3
|
||||
---
|
||||
|
||||
{% data reusables.codespaces.prebuilds-beta-note %}
|
||||
|
||||
Para obtener más información sobre las precompilaciones de los {% data variables.product.prodname_codespaces %}, consulta la sección "[Precompilar tus codespaces](/codespaces/prebuilding-your-codespaces)".
|
||||
|
||||
## Verificar si un codespace se creó desde una precompilación
|
||||
|
||||
@@ -159,7 +159,7 @@ Ya que los permisos a nivel de usuario se otorgan individualmente, puedes agrega
|
||||
|
||||
## Solicitudes de usuario a servidor
|
||||
|
||||
Mientras que la mayoría de tu interacción con la API deberá darse utilizando tus tokens de acceso a la instalación de servidor a servidor, ciertas terminales te permiten llevar a cabo acciones a través de la API utilizando un token de acceso. Tu app puede hacer las siguientes solicitudes utilizando las terminales de [GraphQL v4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) o de [REST v3](/rest).
|
||||
Mientras que la mayoría de tu interacción con la API deberá darse utilizando tus tokens de acceso a la instalación de servidor a servidor, ciertas terminales te permiten llevar a cabo acciones a través de la API utilizando un token de acceso. Your app can make the following requests using [GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql) or [REST](/rest) endpoints.
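As a minimal sketch of a user-to-server request (assuming the user access token is sent in the `Authorization` header), the `viewer` field resolves to the user who authorized the app, which makes it a convenient way to confirm the token works:

```graphql
# Hedged sketch: with a user-to-server token, `viewer` is the authorizing user.
query {
  viewer {
    login
    name
  }
}
```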
|
||||
|
||||
### Terminales compatibles
|
||||
|
||||
|
||||
@@ -53,7 +53,7 @@ Te recomendamos revisar la lista de terminales de la API que necesitas tan pront
|
||||
|
||||
### Diseñar con apego a los límites de tasa de la API
|
||||
|
||||
Las GitHub Apps utilizan [reglas móviles para los límites de tasa](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), las cuales pueden incrementar con base en la cantidad de repositorios y usuarios de la organización. Una GitHub App también puede hacer uso de [solicitudes condicionales](/rest/overview/resources-in-the-rest-api#conditional-requests) o de solicitudes consolidadas si utiliza la [API de GraphQL V4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
|
||||
Las GitHub Apps utilizan [reglas móviles para los límites de tasa](/apps/building-github-apps/understanding-rate-limits-for-github-apps/), las cuales pueden incrementar con base en la cantidad de repositorios y usuarios de la organización. A GitHub App can also make use of [conditional requests](/rest/overview/resources-in-the-rest-api#conditional-requests) or consolidate requests by using the [GraphQL API]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
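For example, a single GraphQL call can consolidate what would otherwise be several REST calls. The sketch below is illustrative only; the owner and repository names are placeholders:

```graphql
# One request instead of separate REST calls for repository metadata,
# the open issue count, and the latest release.
query {
  repository(owner: "octocat", name: "hello-world") {
    description
    stargazerCount
    issues(states: OPEN, first: 1) {
      totalCount
    }
    latestRelease {
      tagName
    }
  }
}
```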
|
||||
|
||||
### Registrar una GitHub App nueva
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ topics:
|
||||
- API
|
||||
---
|
||||
|
||||
Hay dos versiones estables de la API de GitHub: la [API de REST](/rest) y la [API de GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). Cuando utilizas la API de REST, te exhortamos a que [solicites la v3 a través del encabezado de `Accept`](/v3/media/#request-specific-version). Para obtener más información sobre cómo utilizar la API de GraphQL, consulta los [documentos de la v4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
|
||||
Hay dos versiones estables de la API de GitHub: la [API de REST](/rest) y la [API de GraphQL]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql).
|
||||
|
||||
## Versiones obsoletas
|
||||
|
||||
|
||||
@@ -1343,7 +1343,7 @@ El conjunto de datos de asesoría de seguridad también impulsa las {% data vari
|
||||
|
||||
## security_and_analysis
|
||||
|
||||
Activity related to enabling or disabling code security and analysis features for a repository or organization.
|
||||
Actividad relacionada con habilitar o inhabilitar las características de seguridad y análisis del código para un repositorio u organización.
|
||||
|
||||
### Disponibilidad
|
||||
|
||||
@@ -1353,9 +1353,9 @@ Activity related to enabling or disabling code security and analysis features fo
|
||||
|
||||
### Objeto de carga útil del webhook
|
||||
|
||||
| Clave | Tipo | Descripción |
|
||||
| --------- | -------- | ---------------------------------------------------------------------- |
|
||||
| `changes` | `objeto` | The changes that were made to the code security and analysis features. |
|
||||
| Clave | Tipo | Descripción |
|
||||
| --------- | -------- | ------------------------------------------------------------------------------------ |
|
||||
| `changes` | `objeto` | Los cambios que se hicieron a las características de seguridad y análisis del código. |
|
||||
{% data reusables.webhooks.repo_desc %}
|
||||
{% data reusables.webhooks.org_desc %}
|
||||
{% data reusables.webhooks.app_desc %}
|
||||
|
||||
@@ -40,6 +40,16 @@ Marcar un repositorio como favorito es un proceso simple de dos pasos.
|
||||
1. Opcionalmente, para dejar de marcar un repositorio como favorito, haz clic en **Desmarcar como favorito**. 
|
||||
|
||||
{% ifversion fpt or ghec %}
|
||||
|
||||
## Viewing who has starred a repository
|
||||
|
||||
|
||||
You can view everyone who has starred a public repository or a private repository you have access to.
|
||||
|
||||
|
||||
To view everyone who has starred a repository, add `/stargazers` to the end of the URL of a repository. For example, to view stargazers for the github/docs repository, visit https://github.com/github/docs/stargazers.
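If you prefer the API over the web UI, a roughly equivalent GraphQL query (a sketch; the `first` value and the ordering are illustrative choices) lists stargazers for the same repository:

```graphql
# Total star count plus the 20 most recent stargazers of github/docs.
query {
  repository(owner: "github", name: "docs") {
    stargazerCount
    stargazers(first: 20, orderBy: {field: STARRED_AT, direction: DESC}) {
      nodes {
        login
      }
    }
  }
}
```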
|
||||
|
||||
|
||||
## Organizar los repositorios marcados como favoritos con las listas
|
||||
|
||||
{% note %}
|
||||
|
||||
@@ -81,7 +81,7 @@ gh repo fork <em>repository</em> --clone=true
|
||||
|
||||
## Hacer y subir cambios
|
||||
|
||||
Puedes proceder y hacer algunos cambios al proyecto utilizando tu editor de texto favorito, como [Atom](https://atom.io). Podrías, por ejemplo, cambiar el texto en `index.html` para agregar tu nombre de usuario de GitHub.
|
||||
Go ahead and make a few changes to the project using your favorite text editor, like [Visual Studio Code](https://code.visualstudio.com). Podrías, por ejemplo, cambiar el texto en `index.html` para agregar tu nombre de usuario de GitHub.
|
||||
|
||||
Cuando estés listo para enviar tus cambios, pruébalos y confírmalos. `git add .` le dice a Git que quieres incluir todos tus cambios en la siguiente confirmación. `git commit` toma una captura de estos cambios.
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@ topics:
|
||||
- API
|
||||
---
|
||||
|
||||
Puedes acceder a la mayoría de objetos en GitHub (usuarios, informes de problemas, solicitudes de extracción, etc.) utilizando ya sea la API de Rest o la de GraphQL. Puedes encontrar la **ID de nodo global** de muchos objetos desde dentro de la API de REST y utilizar estas ID en tus operaciones de GraphQL. Para obtener más información, consulta la sección "[Vista previa de las ID de nodo de la API de GraphQL v4 en los recursos de la API de REST v3](https://developer.github.com/changes/2017-12-19-graphql-node-id/)".
|
||||
Puedes acceder a la mayoría de objetos en GitHub (usuarios, informes de problemas, solicitudes de extracción, etc.) utilizando ya sea la API de Rest o la de GraphQL. Puedes encontrar la **ID de nodo global** de muchos objetos desde dentro de la API de REST y utilizar estas ID en tus operaciones de GraphQL. For more information, see "[Preview GraphQL API Node IDs in REST API resources](https://developer.github.com/changes/2017-12-19-graphql-node-id/)."
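As a brief sketch, a global node ID taken from a REST response can be passed to the `node` field in a GraphQL query; the ID below is only an example value, and the inline fragment selects fields for the expected type:

```graphql
# Look up an object by its global node ID and select fields via an inline fragment.
query {
  node(id: "MDQ6VXNlcjU4MzIzMQ==") {
    ... on User {
      login
      name
    }
  }
}
```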
|
||||
|
||||
{% note %}
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ topics:
|
||||
|
||||
## Límite de nodo
|
||||
|
||||
Para pasar la validación del [modelo](/graphql/guides/introduction-to-graphql#schema), todas las [llamadas](/graphql/guides/forming-calls-with-graphql) a la API v4 de GraphQL deben cumplir con los siguientes estándares:
|
||||
To pass [schema](/graphql/guides/introduction-to-graphql#schema) validation, all GraphQL API [calls](/graphql/guides/forming-calls-with-graphql) must meet these standards:
|
||||
|
||||
* Los clientes deben suministrar un argumento `first` o `last` en cualquier [conexión](/graphql/guides/introduction-to-graphql#connection).
|
||||
* Los valores de `first` y `last` deben estar dentro de 1-100.
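For example, the following query satisfies both standards above, since every connection supplies a `first` argument between 1 and 100 (a sketch; the field selections are illustrative):

```graphql
# Every connection is bounded, so the call passes schema validation.
query {
  viewer {
    repositories(first: 50) {
      nodes {
        name
        issues(first: 10) {
          totalCount
        }
      }
    }
  }
}
```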
|
||||
@@ -130,30 +130,30 @@ Estos dos ejemplos te muestran cómo calcular los nodos totales en una llamada.
|
||||
|
||||
## Limite de tasa
|
||||
|
||||
El límite de la API v4 de GraphQL es diferente a los [límites de tasa](/rest/overview/resources-in-the-rest-api#rate-limiting) de la API v3 de REST.
|
||||
The GraphQL API limit is different from the REST API's [rate limits](/rest/overview/resources-in-the-rest-api#rate-limiting).
|
||||
|
||||
¿Por qué son diferentes los límites de tasa de la API? Con [GraphQL](/graphql), una llamada de GraphQL puede reemplazar [varias llamadas de REST](/graphql/guides/migrating-from-rest-to-graphql). Una sola llamada compleja de GraphQL puede ser el equivalente a miles de solicitudes de REST. Si bien una sola llamada de GraphQL caería muy debajo del límite de tasa de la API de REST, la consulta podría ser igual de cara en términos de procesamiento para los servidores de GitHub.
|
||||
|
||||
Para representar con precisión el costo de una consulta al servidor, la API v4 de GraphQL calcula la **puntuación de tasa límite** de una llamada con base en una escala de puntos normalizada. La puntuación de una consulta toma en cuenta los argumentos `first` y `last` de una conexión padre y sus hijos.
|
||||
To accurately represent the server cost of a query, the GraphQL API calculates a call's **rate limit score** based on a normalized scale of points. A query's score factors in the `first` and `last` arguments on a parent connection and its children.
|
||||
|
||||
* La fórmula utiliza los argumentos `first` y `last` en una conexión padre y en sus hijos para pre-calcular la carga potencial en los sistemas de GitHub, tal como MySQL, ElasticSearch y Git.
|
||||
* Cada conexión nueva tiene su propio valor de puntos. Los puntos se combinan con otros puntos desde la llamada en una puntuación de tasa límite general.
|
||||
|
||||
El límite de tasa de la API v4 de GraphQL es de **5,000 puntos por hora**.
|
||||
The GraphQL API rate limit is **5,000 points per hour**.
|
||||
|
||||
Nota que 5,000 puntos por hora no es lo mismo que 5,000 llamadas por hora: la API v4 de GraphQL y la API v3 de REST utilizan límites de tasa diferentes.
|
||||
Note that 5,000 points per hour is not the same as 5,000 calls per hour: the GraphQL API and REST API use different rate limits.
|
||||
|
||||
{% note %}
|
||||
|
||||
**Nota**: La fórmula y el límite de tasa actuales están sujetos a cambio mientras observamos cómo los desarrolladores utilizan la API v4 de GraphQL.
|
||||
**Note**: The current formula and rate limit are subject to change as we observe how developers use the GraphQL API.
|
||||
|
||||
{% endnote %}
|
||||
|
||||
### Recuperar el estado de límite de tasa de una llamada
|
||||
|
||||
Con la API v3 de REST, puedes revisar el estado de límite de tasa si [inspeccionas](/rest/overview/resources-in-the-rest-api#rate-limiting) los encabezados HTTP devueltos.
|
||||
With the REST API, you can check the rate limit status by [inspecting](/rest/overview/resources-in-the-rest-api#rate-limiting) the returned HTTP headers.
|
||||
|
||||
Con la API v4 de GraphQL, puedes revisar el estado de límite de tasa si consultas los campos en el objeto `rateLimit`:
|
||||
With the GraphQL API, you can check the rate limit status by querying fields on the `rateLimit` object:
|
||||
|
||||
```graphql
|
||||
query {
|
||||
@@ -186,7 +186,7 @@ Al consultar el objeto `rateLimit` se devuelve el puntaje de una llamada, pero e
|
||||
|
||||
{% note %}
|
||||
|
||||
**Nota**: El costo mínimo de una llamada a la API v4 de GraphQL es **1**, lo cual representa solo una solicitud.
|
||||
**Note**: The minimum cost of a call to the GraphQL API is **1**, representing a single request.
|
||||
|
||||
{% endnote %}
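For reference, a complete status check of this kind might look like the following sketch, which pairs a normal selection with the documented `rateLimit` fields (`limit`, `cost`, `remaining`, `resetAt`):

```graphql
# `cost` reports the score of this call; `remaining` and `resetAt`
# describe the current rate limit window.
query {
  viewer {
    login
  }
  rateLimit {
    limit
    cost
    remaining
    resetAt
  }
}
```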
|
||||
|
||||
|
||||
@@ -40,16 +40,16 @@ You can filter files in a pull request by file extension type, such as `.html` o
|
||||
{% data reusables.repositories.sidebar-pr %}
|
||||
1. In the list of pull requests, click the pull request you'd like to filter.
|
||||
{% data reusables.repositories.changed-files %}
|
||||
1. If the file tree is hidden, click **Show file tree** to display the file tree.
|
||||
|
||||
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
|
||||
|
||||
{% note %}
|
||||
|
||||
**Note**: The file tree will not display if your screen width is too narrow or if the pull request only includes one file.
|
||||
|
||||
{% endnote %}
|
||||
|
||||
1. Click on a file in the file tree to view the corresponding file diff.
|
||||

|
||||
|
||||

|
||||
1. To filter by file path, enter part or all of the file path in the **Filter changed files** search box. Alternatively, use the file filter dropdown. For more information, see "[Using the file filter dropdown](#using-the-file-filter-dropdown)."
|
||||
|
||||
{% endif %}
|
||||
|
||||
@@ -35,9 +35,14 @@ shortTitle: Review dependency changes
|
||||
Dependency review allows you to "shift left". You can use the provided predictive information to catch vulnerable dependencies before they hit production. For more information, see "[About dependency review](/code-security/supply-chain-security/about-dependency-review)."
|
||||
|
||||
{% ifversion fpt or ghec or ghes > 3.5 or ghae-issue-6396 %}
|
||||
You can use the Dependency Review GitHub Action to help enforce dependency reviews on pull requests in your repository. For more information, see "[Dependency review enforcement](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-enforcement)."
|
||||
|
||||
You can use the {% data variables.product.prodname_dependency_review_action %} to help enforce dependency reviews on pull requests in your repository. {% data reusables.dependency-review.dependency-review-action-overview %}
|
||||
|
||||
{% ifversion dependency-review-action-configuration %}
|
||||
You can configure the {% data variables.product.prodname_dependency_review_action %} to better suit your needs by specifying the type of dependency vulnerability you wish to catch. For more information, see "[Configuring dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/configuring-dependency-review#configuring-the-dependency-review-github-action)."
|
||||
{% endif %}
|
||||
|
||||
{% endif %}
|
||||
## Reviewing dependencies in a pull request
|
||||
|
||||
{% data reusables.repositories.sidebar-pr %}
|
||||
|
||||
@@ -77,14 +77,8 @@ Before you can sync your fork with an upstream repository, you must [configure a
|
||||
> 2 files changed, 7 insertions(+), 9 deletions(-)
|
||||
> delete mode 100644 README
|
||||
> create mode 100644 README.md
|
||||
```
If your local branch didn't have any unique commits, Git will instead perform a "fast-forward":
|
||||
```shell
|
||||
$ git merge upstream/main
|
||||
> Updating 34e91da..16c56ad
|
||||
> Fast-forward
|
||||
> README.md | 5 +++--
|
||||
> 1 file changed, 3 insertions(+), 2 deletions(-)
|
||||
```
|
||||
|
||||
|
||||
|
||||
{% tip %}
|
||||
|
||||
|
||||
@@ -41,5 +41,28 @@ Once the commit is on the default branch, any tags that contain the commit will
|
||||
|
||||

|
||||
|
||||
{% ifversion commit-tree-view %}
|
||||
|
||||
## Using the file tree
|
||||
|
||||
You can use the file tree to navigate between files in a commit.
|
||||
|
||||
{% data reusables.repositories.navigate-to-repo %}
|
||||
{% data reusables.repositories.navigate-to-commit-page %}
|
||||
1. Navigate to the commit by clicking the commit message link.
|
||||

|
||||
1. Click on a file in the file tree to view the corresponding file diff. If the file tree is hidden, click {% octicon "sidebar-collapse" aria-label="The sidebar collapse icon" %} to display the file tree.
|
||||
|
||||
{% note %}
|
||||
|
||||
**Note**: The file tree will not display if your screen width is too narrow or if the commit only includes one file.
|
||||
|
||||
{% endnote %}
|
||||
|
||||

|
||||
1. To filter by file path, enter part or all of the file path in the **Filter changed files** search box.
|
||||
|
||||
{% endif %}
|
||||
|
||||
## Further reading
|
||||
- "[Committing and reviewing changes to your project](/desktop/contributing-to-projects/committing-and-reviewing-changes-to-your-project#about-commits)" on {% data variables.product.prodname_desktop %}
|
||||
@@ -24,12 +24,6 @@ topics:
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{% warning %}
|
||||
|
||||
Advertencia: Desde la segunda mitad de octubre del 2021, ya no se están manteniendo las librerías oficiales de Octokit. Para obtener más información, consulta [este debate en el repositorio de octokit.js](https://github.com/octokit/octokit.js/discussions/620).
|
||||
|
||||
{% endwarning %}
|
||||
|
||||
# Librería de terceros
|
||||
|
||||
### Clojure
|
||||
|
||||
@@ -185,7 +185,7 @@ _Buscar_
|
||||
- [`PUT /repos/:owner/:repo/topics`](/rest/reference/repos#replace-all-repository-topics) (:write)
|
||||
- [`POST /repos/:owner/:repo/transfer`](/rest/reference/repos#transfer-a-repository) (:write)
|
||||
{% ifversion fpt or ghec -%}
|
||||
- [`GET /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:write)
|
||||
- [`GET /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:read)
|
||||
{% endif -%}
|
||||
{% ifversion fpt or ghec -%}
|
||||
- [`PUT /repos/:owner/:repo/vulnerability-alerts`](/rest/reference/repos#enable-vulnerability-alerts) (:write)
|
||||
|
||||
@@ -24,7 +24,7 @@ Predeterminadamente, todas las solicitudes a `{% data variables.product.api_url_
|
||||
|
||||
{% ifversion fpt or ghec %}
|
||||
|
||||
Para obtener más información acerca de la API de GraphQL de GitHub, consulta la [documentación de la V4]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). Para obtener más información acerca de cómo migrarse a GraphQL, consulta la sección "[Migrarse desde REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)".
|
||||
For information about GitHub's GraphQL API, see the [documentation]({% ifversion ghec %}/free-pro-team@latest{% endif %}/graphql). Para obtener más información acerca de cómo migrarse a GraphQL, consulta la sección "[Migrarse desde REST]({% ifversion ghec%}/free-pro-team@latest{% endif %}/graphql/guides/migrating-from-rest-to-graphql)".
|
||||
|
||||
{% endif %}
|
||||
|
||||
|
||||
8
translations/es-ES/data/features/commit-tree-view.yml
Normal file
@@ -0,0 +1,8 @@
|
||||
---
|
||||
#Issue 6662
|
||||
#Commit file tree view
|
||||
versions:
|
||||
fpt: '*'
|
||||
ghec: '*'
|
||||
ghes: '>=3.6'
|
||||
ghae: 'issue-6662'
|
||||
7
translations/es-ES/data/features/container-hooks.yml
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
#Reference: #7070
|
||||
#Actions Runner Container Hooks
|
||||
versions:
|
||||
fpt: '*'
|
||||
ghec: '*'
|
||||
ghae: 'issue-7070'
|
||||
@@ -0,0 +1,7 @@
|
||||
---
|
||||
#Reference: Issue #7061 Configuring the dependency review action - [Public Beta]
|
||||
versions:
|
||||
fpt: '*'
|
||||
ghec: '*'
|
||||
ghes: '>3.5'
|
||||
ghae: 'issue-7061'
|
||||
@@ -100,8 +100,8 @@ upcoming_changes:
|
||||
owner: cheshire137
|
||||
-
|
||||
location: DependencyGraphDependency.packageLabel
|
||||
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
|
||||
reason: '`packageLabel` will be removed.'
|
||||
description: '`packageLabel` se eliminará. Utiliza el campo normalizado `packageName` en su lugar.'
|
||||
reason: '`packageLabel` se eliminará.'
|
||||
date: '2022-10-01T00:00:00+00:00'
|
||||
criticality: breaking
|
||||
owner: github/dependency_graph
|
||||
|
||||
@@ -100,8 +100,8 @@ upcoming_changes:
|
||||
owner: cheshire137
|
||||
-
|
||||
location: DependencyGraphDependency.packageLabel
|
||||
description: '`packageLabel` will be removed. Use normalized `packageName` field instead.'
|
||||
reason: '`packageLabel` will be removed.'
|
||||
description: '`packageLabel` se eliminará. Utiliza el campo normalizado `packageName` en su lugar.'
|
||||
reason: '`packageLabel` se eliminará.'
|
||||
date: '2022-10-01T00:00:00+00:00'
|
||||
criticality: breaking
|
||||
owner: github/dependency_graph
|
||||
|
||||
@@ -109,5 +109,8 @@ sections:
|
||||
- heading: 'Obsoletización del soporte para XenServer Hypervisor'
|
||||
notes:
|
||||
- 'Desde {% data variables.product.prodname_ghe_server %} 3.1, comenzaremos a descontinuar el soporte para Xen Hypervisor. La obsoletización completa está programada para {% data variables.product.prodname_ghe_server %} 3.3, siguiendo la ventana de obsoletización estándar de un año.'
|
||||
- heading: 'Change to the format of authentication tokens affects GitHub Connect'
|
||||
notes:
|
||||
- "GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]\n"
|
||||
backups:
|
||||
- '{% data variables.product.prodname_ghe_server %} 3.1 requiere por lo menos de una versión [3.1.0 de las Utilidades de Respaldo de GitHub Enterprise](https://github.com/github/backup-utils) para los [Respaldos y la Recuperación de Desastres](/enterprise-server@3.1/admin/configuration/configuring-backups-on-your-appliance).'
|
||||
|
||||
@@ -0,0 +1,21 @@
|
||||
---
|
||||
date: '2022-06-09'
|
||||
sections:
|
||||
security_fixes:
|
||||
- Los paquetes se actualizaron a las últimas versiones de seguridad.
|
||||
bugs:
|
||||
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
|
||||
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
|
||||
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
|
||||
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
|
||||
changes:
|
||||
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repo-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
|
||||
known_issues:
|
||||
- El registro de npm del {% data variables.product.prodname_registry %} ya no regresa un valor de tiempo en las respuestas de metadatos. Esto se hizo para permitir mejoras de rendimiento sustanciales. Seguimos teniendo todos los datos necesarios para devolver un valor de tiempo como parte de la respuesta de metadatos y volveremos a devolver este valor en el futuro una vez que hayamos resuelto los problemas de rendimiento existentes.
|
||||
- En una instancia recién configurada de {% data variables.product.prodname_ghe_server %} sin ningún usuario, un atacante podría crear el primer usuario administrador.
|
||||
- Las reglas de cortafuegos personalizadas se eliminan durante el proceso de actualización.
|
||||
- Los archivos rastreados del LFS de Git que se [cargaron mediante la interfaz web](https://github.com/blog/2105-upload-files-to-your-repositories) se agregaron incorrecta y directamente al repositorio.
|
||||
- Las propuestas no pudieron cerrarse si contenían un permalink a un blob en el mismo repositorio en donde la ruta de archivo del blob era mayor a 255 caracteres.
|
||||
- Cuando se habilita la opción "Los usuarios pueden buscar en GitHub.com" con {% data variables.product.prodname_github_connect %}, las propuestas en los repositorios internos y privados no se incluyen en los resultados de búsqueda de {% data variables.product.prodname_dotcom_the_website %}.
|
||||
- Si se habilitan las {% data variables.product.prodname_actions %} para {% data variables.product.prodname_ghe_server %}, el desmontar un nodo de réplica con `ghe-repl-teardown` tendrá éxito, pero podría devolver un `ERROR:Running migrations`.
|
||||
- Los límites de recursos que son específicos para procesar ganchos de pre-recepción podrían ocasionar que fallen algunos ganchos de pre-recepción.
|
||||
@@ -306,6 +306,11 @@ sections:
|
||||
Two legacy GitHub Apps-related webhook events have been removed: `integration_installation` and `integration_installation_repositories`. You should instead be listening to the `installation` and `installation_repositories` events.
|
||||
- |
|
||||
The following REST API endpoint has been removed: `POST /installations/{installation_id}/access_tokens`. You should instead be using the namespaced equivalent `POST /app/installations/{installation_id}/access_tokens`.
|
||||
- heading: Change to the format of authentication tokens affects GitHub Connect
|
||||
notes:
|
||||
# https://github.com/github/releases/issues/1235
|
||||
- |
|
||||
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
|
||||
|
||||
backups:
|
||||
- '{% data variables.product.prodname_ghe_server %} 3.2 requires at least [GitHub Enterprise Backup Utilities 3.2.0](https://github.com/github/backup-utils) for [Backups and Disaster Recovery](/enterprise-server@3.2/admin/configuration/configuring-backups-on-your-appliance).'
|
||||
|
||||
@@ -0,0 +1,23 @@
|
||||
---
|
||||
date: '2022-06-09'
|
||||
sections:
|
||||
security_fixes:
|
||||
- Los paquetes se actualizaron a las últimas versiones de seguridad.
|
||||
bugs:
|
||||
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
|
||||
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
|
||||
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
|
||||
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
|
||||
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
|
||||
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
|
||||
changes:
|
||||
- Optimised the inclusion of metrics when generating a cluster support bundle.
|
||||
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repo-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
|
||||
known_issues:
|
||||
- En una instancia recién configurada de {% data variables.product.prodname_ghe_server %} sin ningún usuario, un atacante podría crear el primer usuario administrador.
|
||||
- Las reglas de cortafuegos personalizadas se eliminan durante el proceso de actualización.
|
||||
- Los archivos rastreados del LFS de Git que se [cargaron mediante la interfaz web](https://github.com/blog/2105-upload-files-to-your-repositories) se agregaron incorrecta y directamente al repositorio.
|
||||
- Las propuestas no pudieron cerrarse si contenían un permalink a un blob en el mismo repositorio en donde la ruta de archivo del blob era mayor a 255 caracteres.
|
||||
- Cuando se habilita la opción "Los usuarios pueden buscar en GitHub.com" con {% data variables.product.prodname_github_connect %}, las propuestas en los repositorios internos y privados no se incluyen en los resultados de búsqueda de {% data variables.product.prodname_dotcom_the_website %}.
|
||||
- El registro de npm del {% data variables.product.prodname_registry %} ya no regresa un valor de tiempo en las respuestas de metadatos. Esto se hizo para permitir mejoras de rendimiento sustanciales. Seguimos teniendo todos los datos necesarios para devolver un valor de tiempo como parte de la respuesta de metadatos y volveremos a devolver este valor en el futuro una vez que hayamos resuelto los problemas de rendimiento existentes.
|
||||
- Los límites de recursos que son específicos para procesar ganchos de pre-recepción podrían ocasionar que fallen algunos ganchos de pre-recepción.
|
||||
@@ -113,5 +113,8 @@ sections:
|
||||
- heading: 'Obsoletización de extensiones de bit-caché personalizadas'
|
||||
notes:
|
||||
- "Desde {% data variables.product.prodname_ghe_server %} 3.1, el soporte de las extensiones bit-cache propietarias de {% data variables.product.company_short %} se comenzó a eliminar paulatinamente. Estas extensiones ahora son obsoletas en {% data variables.product.prodname_ghe_server %} 3.3.\n\nCualquier repositorio que ya haya estado presente y activo en {% data variables.product.product_location %} ejecutando la versión 3.1 o 3.2 ya se actualizó atuomáticamente.\n\nLos repositorios que no estuvieron presentes y activos antes de mejorar a {% data variables.product.prodname_ghe_server %} 3.3 podrían no funcionar de forma óptima sino hasta que se ejecute una tarea de mantenimiento de repositorio y esta se complete exitosamente.\n\nPara iniciar una tarea de mantenimiento de repositorio manualmente, dirígete a `https://<hostname>/stafftools/repositories/<owner>/<repository>/network` en cada repositorio afectado y haz clic en el botón **Schedule**.\n"
|
||||
- heading: 'Change to the format of authentication tokens affects GitHub Connect'
|
||||
notes:
|
||||
- "GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]\n"
|
||||
backups:
|
||||
- '{% data variables.product.prodname_ghe_server %} 3.3 requiere por lo menos de las [Utilidades de Respaldo de GitHub Enterprise Backup 3.3.0](https://github.com/github/backup-utils) para hacer [Respaldos y Recuperación de Desastres](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance).'
|
||||
|
||||
@@ -0,0 +1,26 @@
|
||||
---
|
||||
date: '2022-06-09'
|
||||
sections:
|
||||
security_fixes:
|
||||
- Los paquetes se actualizaron a las últimas versiones de seguridad.
|
||||
bugs:
|
||||
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
|
||||
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured
|
||||
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
|
||||
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
|
||||
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
|
||||
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
|
||||
changes:
|
||||
- Optimised the inclusion of metrics when generating a cluster support bundle.
|
||||
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repo-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
|
||||
- When using `ghe-migrator` or exporting from {% data variables.product.prodname_dotcom_the_website %}, migrations would fail to export pull request attachments.
|
||||
known_issues:
|
||||
- Después de haber actualizado a {% data variables.product.prodname_ghe_server %} 3.3, podría que las {% data variables.product.prodname_actions %} no inicien automáticamente. Para resolver este problema, conéctate al aplicativo a través de SSH y ejecuta el comando `ghe-actions-start`.
|
||||
- En una instancia recién configurada de {% data variables.product.prodname_ghe_server %} sin ningún usuario, un atacante podría crear el primer usuario administrador.
|
||||
- Las reglas de cortafuegos personalizadas se eliminan durante el proceso de actualización.
|
||||
- Los archivos rastreados del LFS de Git que se [cargaron mediante la interfaz web](https://github.com/blog/2105-upload-files-to-your-repositories) se agregaron incorrecta y directamente al repositorio.
|
||||
- Las propuestas no pudieron cerrarse si contenían un permalink a un blob en el mismo repositorio en donde la ruta de archivo del blob era mayor a 255 caracteres.
|
||||
- Cuando se habilita la opción "Los usuarios pueden buscar en GitHub.com" con {% data variables.product.prodname_github_connect %}, las propuestas en los repositorios internos y privados no se incluyen en los resultados de búsqueda de {% data variables.product.prodname_dotcom_the_website %}.
|
||||
- El registro de npm del {% data variables.product.prodname_registry %} ya no regresa un valor de tiempo en las respuestas de metadatos. Esto se hizo para permitir mejoras de rendimiento sustanciales. Seguimos teniendo todos los datos necesarios para devolver un valor de tiempo como parte de la respuesta de metadatos y volveremos a devolver este valor en el futuro una vez que hayamos resuelto los problemas de rendimiento existentes.
|
||||
- Los límites de recursos que son específicos para procesar ganchos de pre-recepción podrían ocasionar que fallen algunos ganchos de pre-recepción.
|
||||
- 'Los ajustes de almacenamiento de {% data variables.product.prodname_actions %} no pueden validarse y guardarse en la {% data variables.enterprise.management_console %} cuando se selecciona "Forzar estilo de ruta" y, en su lugar, debe configurarse la utilidad de línea de comando `ghe-actions-precheck`.'
|
||||
@@ -199,5 +199,10 @@ sections:
|
||||
Los repositorios que no estuvieron presentes y activos antes de mejorar a {% data variables.product.prodname_ghe_server %} 3.3 podrían no funcionar de forma óptima sino hasta que se ejecute una tarea de mantenimiento de repositorio y esta se complete exitosamente.
|
||||
|
||||
Para iniciar una tarea de mantenimiento de repositorio manualmente, dirígete a `https://<hostname>/stafftools/repositories/<owner>/<repository>/network` en cada repositorio afectado y haz clic en el botón **Schedule**.
|
||||
-
|
||||
heading: Change to the format of authentication tokens affects GitHub Connect
|
||||
notes:
|
||||
- |
|
||||
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. For more information, see the [GitHub changelog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
|
||||
backups:
|
||||
- '{% data variables.product.prodname_ghe_server %} 3.4 requiere por lo menos de las [Utilidades de Respaldo de GitHub Enterprise 3.4.0](https://github.com/github/backup-utils) para la [Recuperación de Desastres y Respaldos](/admin/configuration/configuring-your-enterprise/configuring-backups-on-your-appliance).'
|
||||
|
||||
@@ -0,0 +1,34 @@
|
||||
---
|
||||
date: '2022-06-09'
|
||||
sections:
|
||||
security_fixes:
|
||||
- Los paquetes se actualizaron a las últimas versiones de seguridad.
|
||||
bugs:
|
||||
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
|
||||
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
|
||||
- When {% data variables.product.prodname_actions %} was enabled but TLS was disabled on {% data variables.product.prodname_ghe_server %} 3.4.1 and later, applying a configuration update would fail.
|
||||
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
|
||||
- 'The [{% data variables.product.prodname_GH_advanced_security %} billing API](/rest/enterprise-admin/billing#get-github-advanced-security-active-committers-for-an-enterprise) endpoints were not enabled and accessible.'
|
||||
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
|
||||
- In environments configured with a repository cache server, the `ghe-repl-status` command incorrectly showed gists as being under-replicated.
|
||||
- The "Get a commit" and "Compare two commits" endpoints in the [Commit API](/rest/commits/commits) would return a `500` error if a file path in the diff contained an encoded and escaped unicode character.
|
||||
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
|
||||
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
|
||||
- The activity timeline for secret scanning alerts wasn't displayed.
|
||||
changes:
|
||||
- Optimised the inclusion of metrics when generating a cluster support bundle.
|
||||
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repo-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
|
||||
known_issues:
|
||||
- En una instancia recién configurada de {% data variables.product.prodname_ghe_server %} sin ningún usuario, un atacante podría crear el primer usuario administrador.
|
||||
- Las reglas de cortafuegos personalizadas se eliminan durante el proceso de actualización.
|
||||
- Los archivos rastreados del LFS de Git que se [cargaron mediante la interfaz web](https://github.com/blog/2105-upload-files-to-your-repositories) se agregaron incorrecta y directamente al repositorio.
|
||||
- Las propuestas no pudieron cerrarse si contenían un permalink a un blob en el mismo repositorio en donde la ruta de archivo del blob era mayor a 255 caracteres.
|
||||
- Cuando se habilita la opción "Los usuarios pueden buscar en GitHub.com" con {% data variables.product.prodname_github_connect %}, las propuestas en los repositorios internos y privados no se incluyen en los resultados de búsqueda de {% data variables.product.prodname_dotcom_the_website %}.
|
||||
- El registro de npm del {% data variables.product.prodname_registry %} ya no regresa un valor de tiempo en las respuestas de metadatos. Esto se hizo para permitir mejoras de rendimiento sustanciales. Seguimos teniendo todos los datos necesarios para devolver un valor de tiempo como parte de la respuesta de metadatos y volveremos a devolver este valor en el futuro una vez que hayamos resuelto los problemas de rendimiento existentes.
|
||||
- Los límites de recursos que son específicos para procesar ganchos de pre-recepción podrían ocasionar que fallen algunos ganchos de pre-recepción.
|
||||
- |
|
||||
Cuando utilizas las aserciones cifradas con {% data variables.product.prodname_ghe_server %} 3.4.0 y 3.4.1, un atributo nuevo de XML `WantAssertionsEncrypted` en el `SPSSODescriptor` contiene un atributo inválido para los metadatos de SAML. Los IdP que consumen esta terminal de metadatos de SAML podrían encontrar errores al validar el modelo XML de los metadatos de SAML. Habrá una corrección disponible en el siguiente lanzamiento de parche. [Actualizado: 2022-04-11]
|
||||
|
||||
Para darle una solución a este problema, puedes tomar una de las dos acciones siguientes.
|
||||
- Reconfigurar el IdP cargando una copia estática de los metadatos de SAML sin el atributo `WantAssertionsEncrypted`.
|
||||
- Copiar los metadatos de SAML, eliminar el atributo `WantAssertionsEncrypted`, hospedarlo en un servidor web y reconfigurar el IdP para que apunte a esa URL.
|
||||
@@ -319,10 +319,10 @@ sections:
|
||||
MinIO has announced the removal of the MinIO Gateways starting June 1st, 2022. While MinIO Gateway for NAS continues to be one of the supported storage providers for Github Actions and Github Packages, we recommend moving to MinIO LTS support to avail support and bug fixes from MinIO. For more information about rate limits, see "[Scheduled removal of MinIO Gateway for GCS, Azure, HDFS in the minio/minio repository](https://github.com/minio/minio/issues/14331)."
|
||||
deprecations:
|
||||
-
|
||||
heading: Change to the format of authentication tokens
|
||||
heading: Change to the format of authentication tokens affects GitHub Connect
|
||||
notes:
|
||||
- |
|
||||
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. For more information, see the [GitHub changelog](https://github.blog/changelog/2021-03-31-authentication-token-format-updates-are-generally-available/).
|
||||
GitHub Connect will no longer work after June 3rd for instances running GitHub Enterprise Server 3.1 or older, due to the format of GitHub authentication tokens changing. To continue using GitHub Connect, upgrade to GitHub Enterprise Server 3.2 or later. For more information, see the [GitHub Blog](https://github.blog/2022-05-20-action-needed-by-github-connect-customers-using-ghes-3-1-and-older-to-adopt-new-authentication-token-format-updates/). [Updated: 2022-06-14]
|
||||
-
|
||||
heading: CodeQL runner deprecated in favor of CodeQL CLI
|
||||
notes:
|
||||
|
||||
@@ -0,0 +1,32 @@
|
||||
---
|
||||
date: '2022-06-09'
|
||||
sections:
|
||||
security_fixes:
|
||||
- Los paquetes se actualizaron a las últimas versiones de seguridad.
|
||||
bugs:
|
||||
- An internal script to validate hostnames in the {% data variables.product.prodname_ghe_server %} configuration file would return an error if the hostname string started with a "." (period character).
|
||||
- In HA configurations where the primary node's hostname was longer than 60 characters, MySQL would fail to be configured.
|
||||
- When {% data variables.product.prodname_actions %} was enabled but TLS was disabled on {% data variables.product.prodname_ghe_server %} 3.4.1 and later, applying a configuration update would fail.
|
||||
- The `--gateway` argument was added to the `ghe-setup-network` command, to allow passing the gateway address when configuring network settings using the command line.
|
||||
- 'The [{% data variables.product.prodname_GH_advanced_security %} billing API](/rest/enterprise-admin/billing#get-github-advanced-security-active-committers-for-an-enterprise) endpoints were not enabled and accessible.'
|
||||
- Image attachments that were deleted would return a `500 Internal Server Error` instead of a `404 Not Found` error.
|
||||
- In environments configured with a repository cache server, the `ghe-repl-status` command incorrectly showed gists as being under-replicated.
|
||||
- The "Get a commit" and "Compare two commits" endpoints in the [Commit API](/rest/commits/commits) would return a `500` error if a file path in the diff contained an encoded and escaped unicode character.
|
||||
- The calculation of "maximum committers across entire instance" reported in the site admin dashboard was incorrect.
|
||||
- An incorrect database entry for repository replicas caused database corruption when performing a restore using {% data variables.product.prodname_enterprise_backup_utilities %}.
|
||||
- 'A {% data variables.product.prodname_github_app %} would not be able to subscribe to the [`secret_scanning_alert_location` webhook event](/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#secret_scanning_alert_location) on an installation.'
|
||||
- The activity timeline for secret scanning alerts wasn't displayed.
|
||||
- Deleted repos were not purged after 90 days.
|
||||
changes:
|
||||
- Optimised the inclusion of metrics when generating a cluster support bundle.
|
||||
- In HA configurations where Elasticsearch reported a valid yellow status, changes introduced in a previous fix would block the `ghe-repl-stop` command and not allow replication to be stopped. Using `ghe-repo-stop --force` will now force Elasticsearch to stop when the service is in a normal or valid yellow status.
|
||||
known_issues:
|
||||
- En una instancia recién configurada de {% data variables.product.prodname_ghe_server %} sin ningún usuario, un atacante podría crear el primer usuario administrador.
|
||||
- Las reglas de cortafuegos personalizadas se eliminan durante el proceso de actualización.
|
||||
- Los archivos rastreados del LFS de Git que se [cargaron mediante la interfaz web](https://github.com/blog/2105-upload-files-to-your-repositories) se agregaron incorrecta y directamente al repositorio.
|
||||
- Las propuestas no pudieron cerrarse si contenían un permalink a un blob en el mismo repositorio en donde la ruta de archivo del blob era mayor a 255 caracteres.
|
||||
- Cuando se habilita "Los usuarios pueden buscar en GitHub.com" con GitHub Connect, las propuestas en los repositorios privados e internos no se incluirán en los resultados de búsqueda de GitHub.com.
|
||||
- El registro de npm del {% data variables.product.prodname_registry %} ya no regresa un valor de tiempo en las respuestas de metadatos. Esto se hizo para permitir mejoras de rendimiento sustanciales. Seguimos teniendo todos los datos necesarios para devolver un valor de tiempo como parte de la respuesta de metadatos y volveremos a devolver este valor en el futuro una vez que hayamos resuelto los problemas de rendimiento existentes.
|
||||
- Los límites de recursos que son específicos para procesar ganchos de pre-recepción podrían ocasionar que fallen algunos ganchos de pre-recepción.
|
||||
- Actions services need to be restarted after restoring an appliance from a backup taken on a different host.
|
||||
- 'Deleted repositories will not be purged from disk automatically after the 90-day retention period ends. This issue is resolved in the 3.5.1 release. [Updated: 2022-06-10]'
|
||||
@@ -4,9 +4,14 @@ Si no configuras un `container`, todos los pasos se ejecutan directamente en el
|
||||
|
||||
### Ejemplo: Ejecutar un job dentro de un contenedor
|
||||
|
||||
```yaml
|
||||
```yaml{:copy}
|
||||
name: CI
|
||||
on:
|
||||
push:
|
||||
branches: [ main ]
|
||||
jobs:
|
||||
my_job:
|
||||
container-test-job:
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
image: node:14.16
|
||||
env:
|
||||
@@ -16,12 +21,16 @@ jobs:
|
||||
volumes:
|
||||
- my_docker_volume:/volume_mount
|
||||
options: --cpus 1
|
||||
steps:
|
||||
- name: Check for dockerenv file
|
||||
run: (ls /.dockerenv && echo Found dockerenv) || (echo No dockerenv)
|
||||
```
|
||||
|
||||
Cuando solo especificas una imagen de contenedor, puedes omitir la palabra clave `image`.
|
||||
|
||||
```yaml
|
||||
jobs:
|
||||
my_job:
|
||||
container-test-job:
|
||||
runs-on: ubuntu-latest
|
||||
container: node:14.16
|
||||
```
|
||||
|
||||
@@ -1 +0,0 @@
|
||||
{% data variables.product.prodname_codespaces %} es de uso gratuito durante el beta. Cuando {% data variables.product.prodname_codespaces %} se hace generalmente disponible, se te cobrará por el almacenamiento y uso del procesamiento.
|
||||
@@ -1,5 +0,0 @@
|
||||
Durante el beta, la funcionalidad será limitada.
|
||||
- {% data reusables.codespaces.use-chrome %}
|
||||
- Únicamente estará disponible un tamaño único de codespace.
|
||||
- Únicamente serán compatibles los contenedores Linux.
|
||||
- Un codespace no se puede reanudar completamente. Los procesos que estuvieran ejecutándose al momento en el que se paró un codespace no se reiniciarán.
|
||||
@@ -1,7 +1,7 @@
By default, a {% data variables.product.prodname_actions %} workflow is triggered every time you create or update a prebuild template, or push to a prebuild-enabled branch. As with other workflows, while prebuild workflows are running they will either consume some of the Actions minutes included with your account, if you have any, or they will incur charges for Actions minutes. For more information about pricing for Actions minutes, see "[About billing for {% data variables.product.prodname_actions %}](/billing/managing-billing-for-github-actions/about-billing-for-github-actions)."

If you are an organization owner, you can track usage of prebuild workflows by downloading a {% data variables.product.prodname_actions %} usage report for your organization. You can identify workflow runs for prebuilds by filtering the CSV output to only include the workflow called "Create Codespaces Prebuilds." Para obtener más información, consulta la sección "[Visualizar tu uso de {% data variables.product.prodname_actions %}](/billing/managing-billing-for-github-actions/viewing-your-github-actions-usage#viewing-github-actions-usage-for-your-organization)".
Alongside {% data variables.product.prodname_actions %} minutes, you will also be billed for the storage of prebuild templates associated with each prebuild configuration for a given repository and region. Storage of prebuild templates is billed at the same rate as storage of codespaces. For more information, see "[Calculating storage usage](#calculating-storage-usage)."

To reduce consumption of Actions minutes, you can set a prebuild template to be updated only when you make a change to your dev container configuration files, or only on a custom schedule. Para obtener más información, consulta la sección "[Configurar las precompilaciones](/codespaces/prebuilding-your-codespaces/configuring-prebuilds#configuring-a-prebuild)".
To reduce consumption of Actions minutes, you can set a prebuild template to be updated only when you make a change to your dev container configuration files, or only on a custom schedule. You can also manage your storage usage by adjusting the number of template versions to be retained for your prebuild configurations. Para obtener más información, consulta la sección "[Configurar las precompilaciones](/codespaces/prebuilding-your-codespaces/configuring-prebuilds#configuring-a-prebuild)".

While {% data variables.product.prodname_codespaces %} prebuilds is in beta there is no charge for storage of templates. When prebuilds become generally available, you will be billed for storing prebuild templates for each prebuild configuration in each region selected for that configuration.
If you are an organization owner, you can track usage of prebuild workflows and storage by downloading a {% data variables.product.prodname_actions %} usage report for your organization. You can identify workflow runs for prebuilds by filtering the CSV output to only include the workflow called "Create Codespaces Prebuilds." Para obtener más información, consulta la sección "[Visualizar tu uso de {% data variables.product.prodname_actions %}](/billing/managing-billing-for-github-actions/viewing-your-github-actions-usage#viewing-github-actions-usage-for-your-organization)".

@@ -1,5 +0,0 @@
{% note %}

**Note:** The ability to prebuild codespaces is currently in beta and subject to change.

{% endnote %}
@@ -1 +0,0 @@
Durante el beta, no serán compatibles los repositorios privados que pertenezcan a las organizaciones ni cualquier repositorio perteneciente a una organización que requiera el inicio de sesión único de SAML.
@@ -1,5 +1,5 @@
{% note %}

**Note**: The Dependency Review GitHub Action is currently in public beta and subject to change.
**Note**: The {% data variables.product.prodname_dependency_review_action %} is currently in public beta and subject to change.

{% endnote %}
@@ -0,0 +1,3 @@
The {% data variables.product.prodname_dependency_review_action %} scans your pull requests for dependency changes and raises an error if any new dependencies have known vulnerabilities. The action is supported by an API endpoint that compares the dependencies between two revisions and reports any differences.

For more information about the action and the API endpoint, see "[About dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review#dependency-review-reinforcement)," and "[Dependency review](/rest/dependency-graph/dependency-review)" in the API documentation, respectively.

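As a rough illustration of how the action described in this hunk could be adopted, here is a minimal workflow sketch. It assumes the public `actions/dependency-review-action` and `actions/checkout` actions with pinned major versions; none of these references are part of this changeset.

```yaml
# Minimal sketch, not part of this changeset: run dependency review on pull requests.
name: Dependency review
on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the action can inspect the manifests changed in the PR.
      - name: Checkout repository
        uses: actions/checkout@v3
      # Fails the run if the pull request introduces dependencies with known vulnerabilities.
      - name: Review dependency changes
        uses: actions/dependency-review-action@v2
```

In practice this file would live under `.github/workflows/` in the repository whose pull requests should be checked.
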
@@ -1 +1 @@
By default, all activity types trigger workflows that run on this event. Puedes limitar tus ejecuciones de flujo de trabajo a tipos de actividad específicos usando la palabra clave `types`. Para obtener más información, consulta "[Sintaxis del flujo de trabajo para {% data variables.product.prodname_actions %}](/articles/workflow-syntax-for-github-actions#onevent_nametypes)".
Predeterminadamente, todos los tipos de actividad activan flujos de trabajo que se ejecutan en este evento. Puedes limitar tus ejecuciones de flujo de trabajo a tipos de actividad específicos usando la palabra clave `types`. Para obtener más información, consulta "[Sintaxis del flujo de trabajo para {% data variables.product.prodname_actions %}](/articles/workflow-syntax-for-github-actions#onevent_nametypes)".

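To make the `types` keyword mentioned in both versions of this line concrete, the snippet below limits runs to two activity types. The `issues` event and the chosen types are illustrative assumptions; this reusable applies to whichever event it is embedded under.

```yaml
# Illustrative sketch: only run the workflow for selected activity types of an event.
# The event (`issues`) and the types listed here are example choices, not from this changeset.
on:
  issues:
    types: [opened, labeled]
```

Omitting `types` keeps the default behaviour of triggering on every activity type of the event.
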
@@ -1 +1 @@
1. Optionally, next to "Billing & plans", click **Get usage report** to email a CSV report of storage use for {% data variables.product.prodname_actions %}, {% data variables.product.prodname_registry %}, and {% data variables.product.prodname_codespaces %} to the account's primary email address. 
1. Optionally, next to "Usage this month", click **Get usage report** to get an email containing a link for downloading a CSV report of storage use for {% data variables.product.prodname_actions %}, {% data variables.product.prodname_registry %}, and {% data variables.product.prodname_codespaces %}. The email is sent to your account's primary email address. You can choose whether the report should cover the last 7, 30, 90, or 180 days. 

@@ -1 +1 @@
1. Above the list of files, click {% octicon "git-branch" aria-label="The branch icon" %} **Branches**. 
1. Sobre la lista de archivos, haz clic en {% octicon "git-branch" aria-label="The branch icon" %} **Ramas**. 

@@ -143,6 +143,7 @@ prodname_code_scanning_capc: 'Escaneo de código'
prodname_codeql_runner: 'Ejecutor de CodeQL'
prodname_advisory_database: 'GitHub Advisory Database'
prodname_codeql_workflow: 'Flujo de trabajo de análisis de CodeQL'
prodname_dependency_review_action: 'Dependency Review GitHub Action'
#Visual Studio
prodname_vs: 'Visual Studio'
prodname_vscode_shortname: 'VS Code'