@@ -60,7 +60,7 @@ Your contributions calendar shows your contribution activity.

### Viewing contributions from specific times

- Click on a day's square to show the contributions made during that 24-hour period.
- Press _Shift_ and click on another day's square to show contributions made during that time span.

{% note %}

@@ -24,7 +24,7 @@ shortTitle: Marketing emails

{% data reusables.user-settings.access_settings %}
{% data reusables.user-settings.emails %}
3. Under _Email preferences_, select **Only receive account related emails, and those I subscribe to**.
4. Click **Save email preferences**.

## Further reading

@@ -137,7 +137,7 @@ If you don't specify a Node.js version, {% data variables.product.prodname_dotco

### Example using npm

This example installs the dependencies defined in the _package.json_ file. For more information, see [`npm install`](https://docs.npmjs.com/cli/install).

```yaml copy
steps:
@@ -150,7 +150,7 @@ steps:
run: npm install
```

Using `npm ci` installs the versions in the _package-lock.json_ or _npm-shrinkwrap.json_ file and prevents updates to the lock file. Using `npm ci` is generally faster than running `npm install`. For more information, see [`npm ci`](https://docs.npmjs.com/cli/ci.html) and "[Introducing `npm ci` for faster, more reliable builds](https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable)."

```yaml copy
steps:
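# A sketch of the remaining steps; action and Node.js versions are assumptions:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
  with:
    node-version: '20'
- run: npm ci
```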

@@ -165,7 +165,7 @@ steps:

### Example using Yarn

This example installs the dependencies defined in the _package.json_ file. For more information, see [`yarn install`](https://yarnpkg.com/en/docs/cli/install).

```yaml copy
steps:
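# A sketch of the remaining steps; action and Node.js versions are assumptions:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
  with:
    node-version: '20'
- run: yarn install
```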

@@ -197,9 +197,9 @@ steps:

To authenticate to your private registry, you'll need to store your npm authentication token as a secret. For example, create a repository secret called `NPM_TOKEN`. For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

In the example below, the secret `NPM_TOKEN` stores the npm authentication token. The `setup-node` action configures the _.npmrc_ file to read the npm authentication token from the `NODE_AUTH_TOKEN` environment variable. When using the `setup-node` action to create an _.npmrc_ file, you must set the `NODE_AUTH_TOKEN` environment variable with the secret that contains your npm authentication token.

Before installing dependencies, use the `setup-node` action to create the _.npmrc_ file. The action has two input parameters. The `node-version` parameter sets the Node.js version, and the `registry-url` parameter sets the default registry. If your package registry uses scopes, you must use the `scope` parameter. For more information, see [`npm-scope`](https://docs.npmjs.com/misc/scope).

```yaml copy
steps:
@@ -217,7 +217,7 @@ steps:
NODE_AUTH_TOKEN: {% raw %}${{ secrets.NPM_TOKEN }}{% endraw %}
```

The example above creates an _.npmrc_ file with the following contents:

```ini
//registry.npmjs.org/:_authToken=${NODE_AUTH_TOKEN}
```

@@ -283,7 +283,7 @@ If you have a custom requirement or need finer controls for caching, you can use

## Building and testing your code

You can use the same commands that you use locally to build and test your code. For example, if you run `npm run build` to run build steps defined in your _package.json_ file and `npm test` to run your test suite, you would add those commands in your workflow file.

```yaml copy
steps:
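# A sketch of the remaining steps; action and Node.js versions are assumptions:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
  with:
    node-version: '20'
- run: npm install
- run: npm run build
- run: npm test
```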

@@ -213,7 +213,7 @@ steps:

### Requirements file

After you update `pip`, a typical next step is to install dependencies from _requirements.txt_. For more information, see [pip](https://pip.pypa.io/en/stable/cli/pip_install/#example-requirements-file).

```yaml copy
steps:
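# A sketch of the remaining steps; action and Python versions are assumptions:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
  with:
    python-version: '3.11'
- run: python -m pip install --upgrade pip
- run: pip install -r requirements.txt
```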

@@ -62,7 +62,7 @@ ENTRYPOINT ["sh", "-c", "echo $GITHUB_SHA"]

To supply `args` defined in the action's metadata file to a Docker container that uses the _exec_ form in the `ENTRYPOINT`, we recommend creating a shell script called `entrypoint.sh` that you call from the `ENTRYPOINT` instruction:

#### Example _Dockerfile_

```dockerfile
# Container image that runs your code
@@ -75,7 +75,7 @@ COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

#### Example _entrypoint.sh_ file

Using the example Dockerfile above, {% data variables.product.product_name %} will send the `args` configured in the action's metadata file as arguments to `entrypoint.sh`. Add the `#!/bin/sh` [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) at the top of the `entrypoint.sh` file to explicitly use the system's [POSIX](https://en.wikipedia.org/wiki/POSIX)-compliant shell.
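
A minimal sketch of such an _entrypoint.sh_ (the `echo` body is illustrative; `$*` expands to the arguments passed to the script):

```shell
#!/bin/sh

# `$*` expands the `args` from the action's metadata file into a
# space-separated string, which the POSIX shell then echoes
sh -c "echo $*"
```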

@@ -100,13 +100,13 @@ export GITHUB_ACTIONS_RUNNER_TLS_NO_VERIFY=1

## Reviewing the self-hosted runner application log files

You can monitor the status of the self-hosted runner application and its activities. Log files are kept in the `_diag` directory where you installed the runner application, and a new log is generated each time the application is started. The filename begins with `Runner_`, followed by a UTC timestamp of when the application was started.

For detailed logs on workflow job executions, see the next section describing the `Worker_` files.
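
For example, from the directory where you installed the runner application, you can list both kinds of log (a sketch; the timestamped filenames will vary):

```shell
ls _diag/Runner_* _diag/Worker_*
```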

## Reviewing a job's log file

The self-hosted runner application creates a detailed log file for each job that it processes. These files are stored in the `_diag` directory where you installed the runner application, and the filename begins with `Worker_`.

{% linux %}

@@ -220,7 +220,7 @@ PS C:\actions-runner> Get-EventLog -LogName Application -Source ActionsRunnerSer

We recommend that you regularly check the automatic update process, as the self-hosted runner will not be able to process jobs if it falls below a certain version threshold. The self-hosted runner application updates itself automatically, but note that this process does not include any updates to the operating system or other software; you will need to manage these updates separately.

You can view the update activities in the `Runner_` log files. For example:

```shell
[Feb 12 12:37:07 INFO SelfUpdater] An update is available.
```

@@ -36,13 +36,13 @@ For more information, see "[AUTOTITLE](/actions/learn-github-actions/understandi

When migrating from CircleCI, consider the following differences:

- CircleCI’s test parallelism automatically groups tests according to user-specified rules or historical timing information. This functionality is not built into {% data variables.product.prodname_actions %}.
- Actions that execute in Docker containers are sensitive to permissions problems since containers have a different mapping of users. You can avoid many of these problems by not using the `USER` instruction in your _Dockerfile_. {% ifversion ghae %}{% data reusables.actions.self-hosted-runners-software %}
{% else %}For more information about the Docker filesystem on {% data variables.product.product_name %}-hosted runners, see "[AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#docker-container-filesystem)."
{% endif %}

## Migrating workflows and jobs

CircleCI defines `workflows` in the _config.yml_ file, which allows you to configure more than one workflow. {% data variables.product.product_name %} requires one workflow file per workflow, and as a consequence, does not require you to declare `workflows`. You'll need to create a new workflow file for each workflow configured in _config.yml_.

Both CircleCI and {% data variables.product.prodname_actions %} configure `jobs` in the configuration file using similar syntax. If you configure any dependencies between jobs using `requires` in your CircleCI workflow, you can use the equivalent {% data variables.product.prodname_actions %} `needs` syntax. For more information, see "[AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds)."
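
For example, a dependency expressed with `requires` in CircleCI maps to `needs` like this (a minimal sketch; the job contents are placeholders):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build"
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "test"
```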

@@ -162,7 +162,7 @@ For more information, see "[AUTOTITLE](/actions/using-workflows/storing-workflow

Both systems enable you to include additional containers for databases, caching, or other dependencies.

In CircleCI, the first image listed in the _config.yml_ file is the primary image used to run commands. {% data variables.product.prodname_actions %} uses explicit sections: use `container` for the primary container, and list additional containers in `services`.

Below is an example in CircleCI and {% data variables.product.prodname_actions %} configuration syntax.
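
For instance, a minimal {% data variables.product.prodname_actions %} job using this pattern might look like the following (image names are assumptions):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container: node:20
    services:
      postgres:
        image: postgres:14
```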

@@ -274,7 +274,7 @@ For more information, see "[AUTOTITLE](/actions/using-containerized-services/abo

## Complete example

Below is a real-world example. The left shows the actual CircleCI _config.yml_ for the [thoughtbot/administrate](https://github.com/thoughtbot/administrate) repository. The right shows the {% data variables.product.prodname_actions %} equivalent.

### Complete example for CircleCI

@@ -40,13 +40,13 @@ You may also find it helpful to have a basic understanding of the following:

## About package configuration

The `name` and `version` fields in the _package.json_ file create a unique identifier that registries use to link your package to a registry. You can add a summary for the package listing page by including a `description` field in the _package.json_ file. For more information, see "[Creating a package.json file](https://docs.npmjs.com/creating-a-package-json-file)" and "[Creating Node.js modules](https://docs.npmjs.com/creating-node-js-modules)" in the npm documentation.

When a local _.npmrc_ file exists and has a `registry` value specified, the `npm publish` command uses the registry configured in the _.npmrc_ file. {% data reusables.actions.setup-node-intro %}

You can specify the Node.js version installed on the runner using the `setup-node` action.

If you add steps in your workflow to configure the `publishConfig` fields in your _package.json_ file, you don't need to specify the registry-url using the `setup-node` action, but you will be limited to publishing the package to one registry. For more information, see "[publishConfig](https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig)" in the npm documentation.
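
Putting these fields together, a minimal _package.json_ might look like this (all values are placeholders):

```json
{
  "name": "@octocat/my-package",
  "version": "1.0.0",
  "description": "A short summary for the package listing page",
  "publishConfig": {
    "registry": "https://npm.pkg.github.com"
  }
}
```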

## Publishing packages to the npm registry

@@ -54,11 +54,11 @@ You can trigger a workflow to publish your package every time you publish a new

To perform authenticated operations against the npm registry in your workflow, you'll need to store your npm authentication token as a secret. For example, create a repository secret called `NPM_TOKEN`. For more information, see "[AUTOTITLE](/actions/security-guides/encrypted-secrets)."

By default, npm uses the `name` field of the _package.json_ file to determine the name of your published package. When publishing to a global namespace, you only need to include the package name. For example, you would publish a package named `my-package` to `https://www.npmjs.com/package/my-package`.

If you're publishing a package that includes a scope prefix, include the scope in the name of your _package.json_ file. For example, if your npm scope prefix is "octocat" and the package name is "hello-world", the `name` in your _package.json_ file should be `@octocat/hello-world`. If your npm package uses a scope prefix and the package is public, you need to use the option `npm publish --access public`. This is an option that npm requires to prevent someone from publishing a private package unintentionally.

This example stores the `NPM_TOKEN` secret in the `NODE_AUTH_TOKEN` environment variable. When the `setup-node` action creates an _.npmrc_ file, it references the token from the `NODE_AUTH_TOKEN` environment variable.

```yaml copy
name: Publish Package to npmjs
@@ -81,7 +81,7 @@ jobs:
NODE_AUTH_TOKEN: {% raw %}${{ secrets.NPM_TOKEN }}{% endraw %}
```

In the example above, the `setup-node` action creates an _.npmrc_ file on the runner with the following contents:

```ini
//registry.npmjs.org/:_authToken=${NODE_AUTH_TOKEN}
```

@@ -97,9 +97,9 @@ You can trigger a workflow to publish your package every time you publish a new

### Configuring the destination repository

Linking your package to {% data variables.product.prodname_registry %} using the `repository` key is optional. If you choose not to provide the `repository` key in your _package.json_ file, then {% data variables.product.prodname_registry %} publishes a package in the {% data variables.product.prodname_dotcom %} repository you specify in the `name` field of the _package.json_ file. For example, a package named `@my-org/test` is published to the `my-org/test` {% data variables.product.prodname_dotcom %} repository. If the `url` specified in the `repository` key is invalid, your package may still be published; however, it won't be linked to the repository source as intended.

If you do provide the `repository` key in your _package.json_ file, then the repository in that key is used as the destination npm registry for {% data variables.product.prodname_registry %}. For example, publishing the below _package.json_ results in a package named `my-package` published to the `octocat/my-other-repo` {% data variables.product.prodname_dotcom %} repository. Once published, only the repository source is updated, and the package doesn't inherit any permissions from the destination repository.

```json
{
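  "name": "my-package",
  "repository": {
    "type": "git",
    "url": "https://github.com/octocat/my-other-repo"
  }
}
```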

@@ -118,7 +118,7 @@ If you want to publish your package to a different repository, you must use a {%

### Example workflow

This example stores the `GITHUB_TOKEN` secret in the `NODE_AUTH_TOKEN` environment variable. When the `setup-node` action creates an _.npmrc_ file, it references the token from the `NODE_AUTH_TOKEN` environment variable.

```yaml copy
name: Publish package to GitHub Packages
@@ -146,7 +146,7 @@ jobs:
NODE_AUTH_TOKEN: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}
```

The `setup-node` action creates an _.npmrc_ file on the runner. When you use the `scope` input to the `setup-node` action, the _.npmrc_ file includes the scope prefix. By default, the `setup-node` action sets the scope in the _.npmrc_ file to the account that contains that workflow file.

```ini
//npm.pkg.github.com/:_authToken=${NODE_AUTH_TOKEN}
```

@@ -283,7 +283,7 @@ steps:

You can test your workflow using the following script, which connects to the PostgreSQL service and adds a new table with some placeholder data. The script then prints the values stored in the PostgreSQL table to the terminal. Your script can use any language you'd like, but this example uses Node.js and the `pg` npm module. For more information, see the [npm pg module](https://www.npmjs.com/package/pg).

You can modify _client.js_ to include any PostgreSQL operations needed by your workflow. In this example, the script connects to the PostgreSQL service, adds a table to the `postgres` database, inserts some placeholder data, and then retrieves the data.
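
A minimal sketch of such a _client.js_ (connection values assume the PostgreSQL service container defaults; the table and data are placeholders):

```javascript
const { Client } = require('pg');

async function main() {
  // Connect to the PostgreSQL service container mapped to localhost:5432
  const client = new Client({
    host: 'localhost',
    port: 5432,
    user: 'postgres',
    password: 'postgres',
    database: 'postgres',
  });
  await client.connect();

  // Add a table, insert placeholder data, then read it back
  await client.query('CREATE TABLE IF NOT EXISTS numbers (value INTEGER)');
  await client.query('INSERT INTO numbers (value) VALUES ($1)', [42]);
  const { rows } = await client.query('SELECT value FROM numbers');
  console.log(rows);

  await client.end();
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```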

{% data reusables.actions.service-container-add-script %}

@@ -270,7 +270,7 @@ steps:

You can test your workflow using the following script, which creates a Redis client and populates the client with some placeholder data. The script then prints the values stored in the Redis client to the terminal. Your script can use any language you'd like, but this example uses Node.js and the `redis` npm module. For more information, see the [npm redis module](https://www.npmjs.com/package/redis).

You can modify _client.js_ to include any Redis operations needed by your workflow. In this example, the script creates the Redis client instance, adds placeholder data, then retrieves the data.
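
A minimal sketch of such a _client.js_ (assumes the `redis` npm module v4 or later and the Redis service container mapped to localhost:6379; the key and value are placeholders):

```javascript
const redis = require('redis');

async function main() {
  // Create the Redis client and connect to the service container
  const client = redis.createClient({ url: 'redis://localhost:6379' });
  await client.connect();

  // Add placeholder data, then retrieve it
  await client.set('species', 'octocat');
  console.log(await client.get('species'));

  await client.quit();
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```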

{% data reusables.actions.service-container-add-script %}

@@ -1085,7 +1085,7 @@ on:

{% note %}

**Note:** The `prereleased` type will not trigger for pre-releases published from draft releases, but the `published` type will trigger. If you want a workflow to run when both stable releases _and_ pre-releases are published, subscribe to `published` instead of `released` and `prereleased`.

{% endnote %}
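
For example, a workflow that should run for both stable releases and pre-releases can subscribe to the `published` type (a minimal sketch):

```yaml
on:
  release:
    types: [published]
```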

@@ -46,8 +46,8 @@ Your key must be an RSA key and must not have a passphrase. For more information

{% data reusables.enterprise_management_console.privacy %}
{% data reusables.enterprise_management_console.select-tls-only %}
4. Under "TLS Protocol support", select the protocols you want to allow.
5. Under "Certificate", click **Choose File**, then choose a TLS certificate or certificate chain (in PEM format) to install. This file will usually have a _.pem_, _.crt_, or _.cer_ extension.
6. Under "Unencrypted key", click **Choose File**, then choose an RSA key (in PEM format) to install. This file will usually have a _.key_ extension.

{% data reusables.enterprise_management_console.save-settings %}

@@ -492,7 +492,7 @@ SSL-Session:

```
Verify return code: 0 (ok)
```

If, on the other hand, the remote server's SSL certificate can _not_ be verified, your `SSL-Session` should have a nonzero return code:

```
SSL-Session:
```

@@ -634,7 +634,7 @@ $ ghe-cluster-status

This utility creates a support bundle tarball containing important logs from each of the nodes in either a Geo-replication or Clustering configuration.

By default, the command creates the tarball in _/tmp_, but you can also have it `cat` the tarball to `STDOUT` for easy streaming over SSH. This is helpful in the case where the web UI is unresponsive or downloading a support bundle from _/setup/support_ doesn't work. You must use this command if you want to generate an _extended_ bundle, containing older logs. You can also use this command to upload the cluster support bundle directly to {% data variables.product.prodname_enterprise %} support.

{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}

@@ -1047,7 +1047,7 @@ ghe-diagnostics

{% data reusables.enterprise_enterprise_support.use_ghe_cluster_support_bundle %}
This utility creates a support bundle tarball containing important logs from your instance.

By default, the command creates the tarball in _/tmp_, but you can also have it `cat` the tarball to `STDOUT` for easy streaming over SSH. This is helpful in the case where the web UI is unresponsive or downloading a support bundle from _/setup/support_ doesn't work. You must use this command if you want to generate an _extended_ bundle, containing older logs. You can also use this command to upload the support bundle directly to {% data variables.product.prodname_enterprise %} support.
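
For example, you can stream a bundle to your workstation over SSH (a sketch; the `ghe-support-bundle` utility name and the `-o` flag are assumptions based on the description above):

```shell
ssh -p 122 admin@HOSTNAME -- 'ghe-support-bundle -o' > support-bundle.tgz
```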

{% data reusables.enterprise.bundle-utility-period-argument-availability-note %}

@@ -1133,9 +1133,9 @@ ghe-migrations -refresh_rate SECONDS

### ghe-update-check

This utility will check to see if a new patch release of {% data variables.product.prodname_enterprise %} is available. If it is, and if space is available on your instance, it will download the package. By default, it's saved to _/var/lib/ghe-updates_. An administrator can then [perform the upgrade](/admin/enterprise-management/updating-the-virtual-machine-and-physical-resources).

A file containing the status of the download is available at _/var/lib/ghe-updates/ghe-update-check.status_.

To check for the latest {% data variables.product.prodname_enterprise %} release, use the `-i` switch.
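
A typical session might look like this (a sketch):

```shell
ghe-update-check          # download the latest patch release, if one is available
cat /var/lib/ghe-updates/ghe-update-check.status   # review the download status
ghe-update-check -i       # check for the latest release
```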

@@ -58,21 +58,21 @@ Then, when told to fetch `https://github.example.com/myorg/myrepo`, Git will ins

```shell
$ ghe-repl-node --datacenter DC-NAME
```
1. Set a `cache-location` for the repository cache, replacing _CACHE-LOCATION_ with an alphanumeric identifier, such as the region where the cache is deployed. Also set a datacenter name for this cache; new caches will attempt to seed from another cache in the same datacenter.

```shell
$ ghe-repl-node --cache CACHE-LOCATION --datacenter REPLICA-DC-NAME
```
{% else %}
1. To configure the repository cache, use the `ghe-repl-node` command and include the necessary parameters.
- Set a `cache-location` for the repository cache, replacing _CACHE-LOCATION_ with an alphanumeric identifier, such as the region where the cache is deployed. The _CACHE-LOCATION_ value must not be any of the subdomains reserved for use with subdomain isolation, such as `assets` or `media`. For a list of reserved names, see "[AUTOTITLE](/admin/configuration/configuring-network-settings/enabling-subdomain-isolation#about-subdomain-isolation)."
- Set a `cache-domain` for the repository cache, replacing _EXTERNAL-CACHE-DOMAIN_ with the hostname Git clients will use to access the repository cache. If you do not specify a `cache-domain`, {% data variables.product.product_name %} will prepend the _CACHE-LOCATION_ value as a subdomain to the hostname configured for your instance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-network-settings/configuring-a-hostname)."
- If you haven't already, set the datacenter name on the primary and any replica appliances, replacing _DC-NAME_ with a datacenter name.

```shell
$ ghe-repl-node --datacenter DC-NAME
```
- New caches will attempt to seed from another cache in the same datacenter. Set a `datacenter` for the repository cache, replacing _REPLICA-DC-NAME_ with the name of the datacenter where you're deploying the node.

```shell
$ ghe-repl-node --cache CACHE-LOCATION --cache-domain EXTERNAL-CACHE-DOMAIN --datacenter REPLICA-DC-NAME
```

@@ -83,7 +83,7 @@ You can configure [Nagios](https://www.nagios.org/) to monitor {% data variables

```shell
nagiosuser@nagios:~$ sudo chown nagios:nagios /var/lib/nagios/.ssh/id_ed25519
```

3. To authorize the public key to run _only_ the `ghe-cluster-status -n` command, use a `command=` prefix in the `/data/user/common/authorized_keys` file. From the administrative shell on any node, modify this file to add the public key generated in step 1. For example: `command="/usr/local/bin/ghe-cluster-status -n" ssh-ed25519 AAAA....`

4. Validate and copy the configuration to each node in the cluster by running `ghe-cluster-config-apply` on the node where you modified the `/data/user/common/authorized_keys` file.

@@ -37,7 +37,7 @@ Use an upgrade package to upgrade a {% data variables.product.prodname_ghe_serve

1. Review [Cluster network configuration](/admin/enterprise-management/configuring-clustering/cluster-network-configuration) for the version you are upgrading to, and update your configuration as needed.
2. Back up your data with [{% data variables.product.prodname_enterprise_backup_utilities %}](https://github.com/github/backup-utils#readme).
3. Schedule a maintenance window for end users of your {% data variables.product.prodname_ghe_server %} cluster, as it will be unavailable for normal use during the upgrade. Maintenance mode blocks user access and prevents data changes while the cluster upgrade is in progress.
4. On the [{% data variables.product.prodname_ghe_server %} Download Page](https://enterprise.github.com/download), copy the URL for the upgrade _.pkg_ file to the clipboard.
5. From the administrative shell of any node, use the `ghe-cluster-each` command combined with `curl` to download the release package to each node in a single step. Use the URL you copied in the previous step as an argument.
```shell
$ ghe-cluster-each -- "cd /home/admin && curl -L -O https://PACKAGE-URL.pkg"
```

@@ -23,7 +23,7 @@ If you haven't already set up an external `collectd` server, you will need to do

1. Log into your `collectd` server.
2. Create or edit the `collectd` configuration file to load the network plugin and populate the server and port directives with the proper values. On most distributions, this is located at `/etc/collectd/collectd.conf`.

An example _collectd.conf_ to run a `collectd` server:

```
LoadPlugin network
...
```

@@ -142,7 +142,7 @@ If the upgrade target you're presented with is a feature release instead of a pa

{% data reusables.enterprise_installation.download-note %}

{% data reusables.enterprise_installation.ssh-into-instance %}
1. {% data reusables.enterprise_installation.enterprise-download-upgrade-pkg %} Copy the URL for the upgrade hotpackage (_.hpkg_ file).
{% data reusables.enterprise_installation.download-package %}
1. Run the `ghe-upgrade` command using the package file name:
```shell
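# A sketch; replace the filename with the package you downloaded
ghe-upgrade PACKAGE-FILENAME.pkg
```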

@@ -183,7 +183,7 @@ While you can use a hotpatch to upgrade to the latest patch release within a fea

{% data reusables.enterprise_installation.download-note %}

{% data reusables.enterprise_installation.ssh-into-instance %}
1. {% data reusables.enterprise_installation.enterprise-download-upgrade-pkg %} Select the appropriate platform and copy the URL for the upgrade package (_.pkg_ file).
{% data reusables.enterprise_installation.download-package %}
1. Enable maintenance mode and wait for all active processes to complete on the {% data variables.product.prodname_ghe_server %} instance. For more information, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/enabling-and-scheduling-maintenance-mode)."

@@ -29,8 +29,8 @@ You can use {% data variables.product.prodname_github_connect %} to allow {% dat

Once {% data variables.product.prodname_github_connect %} is configured, you can use the latest version of an action by deleting its local repository in the `actions` organization on your instance. For example, if your enterprise instance is using `v1` of the `actions/checkout` action, and you need to use `{% data reusables.actions.action-checkout %}` which isn't available on your enterprise instance, perform the following steps to be able to use the latest `checkout` action from {% data variables.product.prodname_dotcom_the_website %}:

1. From an enterprise owner account on {% data variables.product.product_name %}, navigate to the repository you want to delete from the _actions_ organization (in this example `checkout`).
1. By default, site administrators are not owners of the bundled _actions_ organization. To get the access required to delete the `checkout` repository, you must use the site admin tools. Click {% octicon "rocket" aria-label="Site admin" %} in the upper-right corner of any page in that repository.
1. Click {% octicon "shield-lock" aria-hidden="true" %} **Security** to see an overview of the security for the repository.

![Screenshot of the header of the site admin page for the "checkout" repository.](/assets/images/enterprise/site-admin-settings/access-repo-admin-tools.png)

@@ -78,7 +78,7 @@ Use these attributes to finish configuring LDAP for {% data variables.location.p

| `Domain search password` | {% octicon "x" aria-label="Optional" %} | The password for the domain search user. |
| `Administrators group` | {% octicon "x" aria-label="Optional" %} | Users in this group are promoted to site administrators when signing into your appliance. If you don't configure an LDAP Administrators group, the first LDAP user account that signs into your appliance will be automatically promoted to a site administrator. |
| `Domain base` | {% octicon "check" aria-label="Required" %} | The fully qualified `Distinguished Name` (DN) of an LDAP subtree you want to search for users and groups. You can add as many as you like; however, each group must be defined in the same domain base as the users that belong to it. If you specify restricted user groups, only users that belong to those groups will be in scope. We recommend that you specify the top level of your LDAP directory tree as your domain base and use restricted user groups to control access. |
| `Restricted user groups` | {% octicon "x" aria-label="Optional" %} | If specified, only users in these groups will be allowed to log in. You only need to specify the common names (CNs) of the groups, and you can add as many groups as you like. If no groups are specified, _all_ users within the scope of the specified domain base will be able to sign in to your {% data variables.product.prodname_ghe_server %} instance. |
| `User ID` | {% octicon "check" aria-label="Required" %} | The LDAP attribute that identifies the LDAP user who attempts authentication. Once a mapping is established, users may change their {% data variables.product.prodname_ghe_server %} usernames. This field should be `sAMAccountName` for most Active Directory installations, but it may be `uid` for other LDAP solutions, such as OpenLDAP. The default value is `uid`. |
| `Profile name` | {% octicon "x" aria-label="Optional" %} | The name that will appear on the user's {% data variables.product.prodname_ghe_server %} profile page. Unless LDAP Sync is enabled, users may change their profile names. |
| `Emails` | {% octicon "x" aria-label="Optional" %} | The email addresses for a user's {% data variables.product.prodname_ghe_server %} account. |

@@ -165,10 +165,10 @@ When LDAP Sync is enabled, site admins and organization owners can search the LD

This has the potential to disclose sensitive organizational information to contractors or other unprivileged users, including:

- The existence of specific LDAP Groups visible to the _Domain search user_.
- Members of the LDAP group who have {% data variables.product.prodname_ghe_server %} user accounts, which is disclosed when creating a team synced with that LDAP group.

If disclosing such information is not desired, your company or organization should restrict the permissions of the configured _Domain search user_ in the admin console. If such restriction isn't possible, contact {% data variables.contact.contact_ent_support %}.

{% endwarning %}

@@ -646,9 +646,9 @@ Before you'll see `git` category actions, you must enable Git events in the audi

| Action | Description
|--------|-------------
| `migration.create` | A migration file was created for transferring data from a _source_ location (such as a {% data variables.product.prodname_dotcom_the_website %} organization or a {% data variables.product.prodname_ghe_server %} instance) to a _target_ {% data variables.product.prodname_ghe_server %} instance.
| `migration.destroy_file` | A migration file for transferring data from a _source_ location (such as a {% data variables.product.prodname_dotcom_the_website %} organization or a {% data variables.product.prodname_ghe_server %} instance) to a _target_ {% data variables.product.prodname_ghe_server %} instance was deleted.
| `migration.download` | A migration file for transferring data from a _source_ location (such as a {% data variables.product.prodname_dotcom_the_website %} organization or a {% data variables.product.prodname_ghe_server %} instance) to a _target_ {% data variables.product.prodname_ghe_server %} instance was downloaded.
{%- endif %}

## `oauth_access` category actions

@@ -21,7 +21,7 @@ topics:

## Storage architecture

{% data variables.product.product_name %} requires two storage volumes, one mounted to the _root filesystem_ path (`/`) and the other to the _user filesystem_ path (`/data/user`). This architecture simplifies the upgrade, rollback, and recovery procedures by separating the running software environment from persistent application data.

The root filesystem is included in the distributed machine image. It contains the base operating system and the {% data variables.product.product_name %} application environment. The root filesystem should be treated as ephemeral. Any data on the root filesystem will be replaced when upgrading to future {% data variables.product.product_name %} releases.

@@ -74,7 +74,7 @@ You can use a Linux container management tool to build a pre-receive hook enviro

{% endnote %}

For more information about creating a chroot environment, see "[Chroot](https://wiki.debian.org/chroot)" from the _Debian Wiki_, "[BasicChroot](https://help.ubuntu.com/community/BasicChroot)" from the _Ubuntu Community Help Wiki_, or "[Installing Alpine Linux in a chroot](https://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot)" from the _Alpine Linux Wiki_.

## Uploading a pre-receive hook environment on {% data variables.product.prodname_ghe_server %}

@@ -35,7 +35,7 @@ This string represents the following arguments.

| `<new-value>` | New object name to be stored in the ref.<br> When you delete a ref, the value is 40 zeroes. |
| `<ref-name>` | The full name of the ref. |

For more information about `git-receive-pack`, see "[git-receive-pack](https://git-scm.com/docs/git-receive-pack)" in the Git documentation. For more information about refs, see "[Git References](https://git-scm.com/book/en/v2/Git-Internals-Git-References)" in _Pro Git_.
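
As a minimal sketch, a hook can read these space-separated values line by line from `stdin` (the `echo` body is illustrative):

```shell
#!/bin/sh
# Each pushed ref arrives on stdin as "<old-value> <new-value> <ref-name>"
while read -r oldrev newrev refname; do
  echo "ref $refname: $oldrev -> $newrev"
done
exit 0
```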

### Output (`stdout`)

@@ -259,4 +259,4 @@ You can test a pre-receive hook script locally before you create or update it on

Notice that the push was rejected after executing the pre-receive hook and echoing the output from the script.

## Further reading

- "[Customizing Git - An Example Git-Enforced Policy](https://git-scm.com/book/en/v2/Customizing-Git-An-Example-Git-Enforced-Policy)" from the _Pro Git website_

@@ -14,7 +14,7 @@ topics:

- User account
shortTitle: Rebuild contributions
---
Whenever a commit is pushed to {% data variables.product.prodname_enterprise %}, it is linked to a user account if they are both associated with the same email address. However, existing commits are _not_ retroactively linked when a user registers a new email address or creates a new account.

{% data reusables.enterprise_site_admin_settings.access-settings %}
{% data reusables.enterprise_site_admin_settings.search-user %}

@@ -51,11 +51,11 @@ Name | Description

**`(no scope)`** | Grants read-only access to public information (including user profile info, repository info, and gists){% endif %}{% ifversion ghes or ghae %}
**`site_admin`** | Grants site administrators access to [{% data variables.product.prodname_ghe_server %} Administration API endpoints](/rest/enterprise-admin).{% endif %}
**`repo`** | Grants full access to public{% ifversion ghec or ghes or ghae %}, internal,{% endif %} and private repositories including read and write access to code, commit statuses, repository invitations, collaborators, deployment statuses, and repository webhooks. **Note**: In addition to repository-related resources, the `repo` scope also grants access to manage organization-owned resources including projects, invitations, team memberships, and webhooks. This scope also grants the ability to manage projects owned by users.
&emsp;`repo:status`| Grants read/write access to commit statuses in {% ifversion fpt %}public and private{% elsif ghec or ghes %}public, private, and internal{% elsif ghae %}private and internal{% endif %} repositories. This scope is only necessary to grant other users or services access to private repository commit statuses _without_ granting access to the code.
&emsp;`repo_deployment`| Grants access to [deployment statuses](/rest/repos#deployments) for {% ifversion not ghae %}public{% else %}internal{% endif %} and private repositories. This scope is only necessary to grant other users or services access to deployment statuses, _without_ granting access to the code.{% ifversion not ghae %}
&emsp;`public_repo`| Limits access to public repositories. That includes read/write access to code, commit statuses, repository projects, collaborators, and deployment statuses for public repositories and organizations. Also required for starring public repositories.{% endif %}
&emsp;`repo:invite` | Grants accept/decline abilities for invitations to collaborate on a repository. This scope is only necessary to grant other users or services access to invites _without_ granting access to the code.{% ifversion fpt or ghes or ghec %}
&emsp;`security_events` | Grants: <br/> read and write access to security events in the [{% data variables.product.prodname_code_scanning %} API](/rest/code-scanning) {%- ifversion ghec %}<br/> read and write access to security events in the [{% data variables.product.prodname_secret_scanning %} API](/rest/secret-scanning){%- endif %} <br/> This scope is only necessary to grant other users or services access to security events _without_ granting access to the code.{% endif %}
**`admin:repo_hook`** | Grants read, write, ping, and delete access to repository hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% elsif ghae %}private or internal{% endif %} repositories. The `repo` {% ifversion fpt or ghec or ghes %}and `public_repo` scopes grant{% else %}scope grants{% endif %} full access to repositories, including repository hooks. Use the `admin:repo_hook` scope to limit access to only repository hooks.
&emsp;`write:repo_hook` | Grants read, write, and ping access to hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% elsif ghae %}private or internal{% endif %} repositories.
&emsp;`read:repo_hook`| Grants read and ping access to hooks in {% ifversion fpt %}public or private{% elsif ghec or ghes %}public, private, or internal{% elsif ghae %}private or internal{% endif %} repositories.
||||
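As a practical aside, you can check which of these scopes a given token was actually granted by inspecting the `X-OAuth-Scopes` response header from the API. A minimal sketch (the token value is a placeholder):

```shell
# The X-OAuth-Scopes header lists the scopes granted to this token
curl -sI -H "Authorization: token YOUR-TOKEN" https://api.github.com/user \
  | grep -i '^x-oauth-scopes'
```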
@@ -29,10 +29,10 @@ When an {% data variables.product.prodname_oauth_app %} wants to identify you by

## {% data variables.product.prodname_oauth_app %} access

{% data variables.product.prodname_oauth_apps %} can have *read* or *write* access to your {% data variables.product.product_name %} data.
{% data variables.product.prodname_oauth_apps %} can have _read_ or _write_ access to your {% data variables.product.product_name %} data.

- **Read access** only allows an app to *look at* your data.
- **Write access** allows an app to *change* your data.
- **Read access** only allows an app to _look at_ your data.
- **Write access** allows an app to _change_ your data.

{% tip %}

@@ -42,7 +42,7 @@ When an {% data variables.product.prodname_oauth_app %} wants to identify you by

### About OAuth scopes

*Scopes* are named groups of permissions that an {% data variables.product.prodname_oauth_app %} can request to access both public and non-public data.
_Scopes_ are named groups of permissions that an {% data variables.product.prodname_oauth_app %} can request to access both public and non-public data.

When you want to use an {% data variables.product.prodname_oauth_app %} that integrates with {% data variables.product.product_name %}, that app lets you know what type of access to your data will be required. If you grant access to the app, then the app will be able to perform actions on your behalf, such as reading or modifying data. For example, if you want to use an app that requests `user:email` scope, the app will have read-only access to your private email addresses. For more information, see "[AUTOTITLE](/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps)."

@@ -81,9 +81,9 @@ When {% data variables.product.prodname_oauth_apps %} request new access permiss

When you authorize an {% data variables.product.prodname_oauth_app %} for your personal account, you'll also see how the authorization will affect each organization you're a member of.

- **For organizations *with* {% data variables.product.prodname_oauth_app %} access restrictions, you can request that organization admins approve the application for use in that organization.** If the organization does not approve the application, then the application will only be able to access the organization's public resources. If you're an organization admin, you can [approve the application](/organizations/managing-oauth-access-to-your-organizations-data/approving-oauth-apps-for-your-organization) yourself.
- **For organizations _with_ {% data variables.product.prodname_oauth_app %} access restrictions, you can request that organization admins approve the application for use in that organization.** If the organization does not approve the application, then the application will only be able to access the organization's public resources. If you're an organization admin, you can [approve the application](/organizations/managing-oauth-access-to-your-organizations-data/approving-oauth-apps-for-your-organization) yourself.

- **For organizations *without* {% data variables.product.prodname_oauth_app %} access restrictions, the application will automatically be authorized for access to that organization's resources.** For this reason, you should be careful about which {% data variables.product.prodname_oauth_apps %} you approve for access to your personal account resources as well as any organization resources.
- **For organizations _without_ {% data variables.product.prodname_oauth_app %} access restrictions, the application will automatically be authorized for access to that organization's resources.** For this reason, you should be careful about which {% data variables.product.prodname_oauth_apps %} you approve for access to your personal account resources as well as any organization resources.

If you belong to any organizations with SAML single sign-on (SSO) enabled, and you have created a linked identity for that organization by authenticating via SAML in the past, you must have an active SAML session for each organization each time you authorize an {% data variables.product.prodname_oauth_app %}.

@@ -27,14 +27,14 @@ If the developer has chosen to supply further information, the right-hand side o

## Types of application access and data

Applications can have *read* or *write* access to your {% data variables.product.product_name %} data.
Applications can have _read_ or _write_ access to your {% data variables.product.product_name %} data.

- **Read access** only allows an application to *look at* your data.
- **Write access** allows an application to *change* your data.
- **Read access** only allows an application to _look at_ your data.
- **Write access** allows an application to _change_ your data.

### About OAuth scopes

*Scopes* are named groups of permissions that an application can request to access both public and non-public data.
_Scopes_ are named groups of permissions that an application can request to access both public and non-public data.

When you want to use a third-party application that integrates with {% data variables.product.product_name %}, that application lets you know what type of access to your data will be required. If you grant access to the application, then the application will be able to perform actions on your behalf, such as reading or modifying data. For example, if you want to use an app that requests `user:email` scope, the app will have read-only access to your private email addresses. For more information, see "[AUTOTITLE](/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps)."

@@ -35,19 +35,19 @@ Before you generate a new SSH key, you should check your local machine for exist
# Lists the files in your .ssh directory, if they exist
```

3. Check the directory listing to see if you already have a public SSH key. By default, the {% ifversion ghae %}filename of a supported public key for {% data variables.product.product_name %} is *id_rsa.pub*.{% else %}filenames of supported public keys for {% data variables.product.product_name %} are one of the following.
- *id_rsa.pub*
- *id_ecdsa.pub*
- *id_ed25519.pub*{% endif %}
3. Check the directory listing to see if you already have a public SSH key. By default, the {% ifversion ghae %}filename of a supported public key for {% data variables.product.product_name %} is _id_rsa.pub_.{% else %}filenames of supported public keys for {% data variables.product.product_name %} are one of the following.
- _id_rsa.pub_
- _id_ecdsa.pub_
- _id_ed25519.pub_{% endif %}

{% tip %}

**Tip**: If you receive an error that *~/.ssh* doesn't exist, you do not have an existing SSH key pair in the default location. You can create a new SSH key pair in the next step.
**Tip**: If you receive an error that _~/.ssh_ doesn't exist, you do not have an existing SSH key pair in the default location. You can create a new SSH key pair in the next step.

{% endtip %}

4. Either generate a new SSH key or upload an existing key.
- If you don't have a supported public and private key pair, or don't wish to use any that are available, generate a new SSH key.
- If you see an existing public and private key pair listed (for example, *id_rsa.pub* and *id_rsa*) that you would like to use to connect to {% data variables.product.product_name %}, you can add the key to the ssh-agent.
- If you see an existing public and private key pair listed (for example, _id_rsa.pub_ and _id_rsa_) that you would like to use to connect to {% data variables.product.product_name %}, you can add the key to the ssh-agent.

For more information about generating a new SSH key or adding an existing key to the ssh-agent, see "[AUTOTITLE](/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent)."
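As a minimal sketch of that flow (the email address is a placeholder label, and the filename assumes the ed25519 default):

```shell
# Generate a new ed25519 key pair
ssh-keygen -t ed25519 -C "your_email@example.com"

# Start the agent and add the new private key to it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```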
@@ -161,7 +161,7 @@ If your server needs to access multiple repositories, you can create a new accou

**Tip:** Our [terms of service][tos] state:

> *Accounts registered by "bots" or other automated methods are not permitted.*
> _Accounts registered by "bots" or other automated methods are not permitted._

This means that you cannot automate the creation of accounts. But if you want to create a single machine user for automating tasks such as deploy scripts in your project or organization, that is totally cool.

@@ -48,7 +48,7 @@ We're off to a great start. Let's set up SSH to allow agent forwarding to your s

{% warning %}

**Warning:** You may be tempted to use a wildcard like `Host *` to just apply this setting to all SSH connections. That's not really a good idea, as you'd be sharing your local SSH keys with *every* server you SSH into. They won't have direct access to the keys, but they will be able to use them *as you* while the connection is established. **You should only add servers you trust and that you intend to use with agent forwarding.**
**Warning:** You may be tempted to use a wildcard like `Host *` to just apply this setting to all SSH connections. That's not really a good idea, as you'd be sharing your local SSH keys with _every_ server you SSH into. They won't have direct access to the keys, but they will be able to use them _as you_ while the connection is established. **You should only add servers you trust and that you intend to use with agent forwarding.**

{% endwarning %}
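A sketch of a safely scoped entry, with `example.com` standing in for a server you trust:

```shell
# Append a host-scoped entry to ~/.ssh/config instead of using Host *
cat >> ~/.ssh/config <<'EOF'
Host example.com
  ForwardAgent yes
EOF

# After connecting, verify the server can see your forwarded agent keys
ssh example.com ssh-add -l
```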
@@ -97,8 +97,8 @@ The `ssh-agent` process will continue to run until you log out, shut down your c

On Mac OS X Leopard through OS X El Capitan, these default private key files are handled automatically:

- *.ssh/id_rsa*
- *.ssh/identity*
- _.ssh/id_rsa_
- _.ssh/identity_

The first time you use your key, you will be prompted to enter your passphrase. If you choose to save the passphrase with your keychain, you won't have to enter it again.

@@ -76,7 +76,7 @@ Check the value of `Cache-Control`. In this example, there's no `Cache-Control`.
- If you own the server that's hosting the image, modify it so that it returns a `Cache-Control` of `no-cache` for images.
- If you're using an external service for hosting images, contact support for that service.

If `Cache-Control` *is* set to `no-cache`, contact {% data variables.contact.contact_support %} or search the {% data variables.contact.community_support_forum %}.
If `Cache-Control` _is_ set to `no-cache`, contact {% data variables.contact.contact_support %} or search the {% data variables.contact.community_support_forum %}.
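For instance, a quick way to inspect that header (the image URL is a placeholder):

```shell
# Fetch only the response headers and look for Cache-Control
curl -sI https://example.com/image.png | grep -i '^cache-control'
```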
### Removing an image from Camo's cache

@@ -82,7 +82,7 @@ To illustrate how `git filter-repo` works, we'll show you how to remove your fil
```
brew install git-filter-repo
```
For more information, see [*INSTALL.md*](https://github.com/newren/git-filter-repo/blob/main/INSTALL.md) in the `newren/git-filter-repo` repository.
For more information, see [_INSTALL.md_](https://github.com/newren/git-filter-repo/blob/main/INSTALL.md) in the `newren/git-filter-repo` repository.

2. If you don't already have a local copy of your repository with sensitive data in its history, [clone the repository](/repositories/creating-and-managing-repositories/cloning-a-repository) to your local computer.
```shell
@@ -101,7 +101,7 @@ To illustrate how `git filter-repo` works, we'll show you how to remove your fil
4. Run the following command, replacing `PATH-TO-YOUR-FILE-WITH-SENSITIVE-DATA` with the **path to the file you want to remove, not just its filename**. These arguments will:
- Force Git to process, but not check out, the entire history of every branch and tag
- Remove the specified file, as well as any empty commits generated as a result
- Remove some configurations, such as the remote URL, stored in the *.git/config* file. You may want to back up this file in advance for restoration later.
- Remove some configurations, such as the remote URL, stored in the _.git/config_ file. You may want to back up this file in advance for restoration later.
- **Overwrite your existing tags**
```shell
$ git filter-repo --invert-paths --path PATH-TO-YOUR-FILE-WITH-SENSITIVE-DATA
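# A typical follow-up, sketched with placeholder names: filter-repo strips the
# remote URL, so add it back and force-push the rewritten history
$ git remote add origin https://github.com/OWNER/REPOSITORY.git
$ git push origin --force --all
$ git push origin --force --tags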
@@ -163,7 +163,7 @@ After using either the BFG tool or `git filter-repo` to remove the sensitive dat

1. Contact {% data variables.contact.contact_support %}, asking them to remove cached views and references to the sensitive data in pull requests on {% data variables.product.product_name %}. Please provide the name of the repository and/or a link to the commit you need removed.{% ifversion ghes %} For more information about how site administrators can remove unreachable Git objects, see "[AUTOTITLE](/admin/configuration/configuring-your-enterprise/command-line-utilities#ghe-repo-gc)."{% endif %}

2. Tell your collaborators to [rebase](https://git-scm.com/book/en/Git-Branching-Rebasing), *not* merge, any branches they created off of your old (tainted) repository history. One merge commit could reintroduce some or all of the tainted history that you just went to the trouble of purging.
2. Tell your collaborators to [rebase](https://git-scm.com/book/en/Git-Branching-Rebasing), _not_ merge, any branches they created off of your old (tainted) repository history. One merge commit could reintroduce some or all of the tainted history that you just went to the trouble of purging.

3. After some time has passed and you're confident that the BFG tool / `git filter-repo` had no unintended side effects, you can force all objects in your local repository to be dereferenced and garbage collected with the following commands (using Git 1.8.5 or newer):
```shell

@@ -40,7 +40,7 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
```

7. The SSH keys on {% data variables.product.product_name %} *should* match the same keys on your computer.
7. The SSH keys on {% data variables.product.product_name %} _should_ match the same keys on your computer.

{% endmac %}

@@ -68,7 +68,7 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
```

7. The SSH keys on {% data variables.product.product_name %} *should* match the same keys on your computer.
7. The SSH keys on {% data variables.product.product_name %} _should_ match the same keys on your computer.

{% endwindows %}

@@ -94,7 +94,7 @@ You can delete unauthorized (or possibly compromised) SSH keys to ensure that an
> 2048 SHA256:274ffWxgaxq/tSINAykStUL7XWyRNcRTlcST1Ei7gBQ /Users/USERNAME/.ssh/id_rsa (RSA)
```

7. The SSH keys on {% data variables.product.product_name %} *should* match the same keys on your computer.
7. The SSH keys on {% data variables.product.product_name %} _should_ match the same keys on your computer.

{% endlinux %}

@@ -24,7 +24,7 @@ $ ssh -T -ai ~/.ssh/id_rsa git@{% data variables.command_line.codeblock %}
> provide shell access.
```

The *username* in the response is the account on {% ifversion ghae %}{% data variables.product.product_name %}{% else %}{% data variables.location.product_location %}{% endif %} that the key is currently attached to. If the response looks something like "username/repo", the key has been attached to a repository as a [*deploy key*](/authentication/connecting-to-github-with-ssh/managing-deploy-keys#deploy-keys).
The _username_ in the response is the account on {% ifversion ghae %}{% data variables.product.product_name %}{% else %}{% data variables.location.product_location %}{% endif %} that the key is currently attached to. If the response looks something like "username/repo", the key has been attached to a repository as a [_deploy key_](/authentication/connecting-to-github-with-ssh/managing-deploy-keys#deploy-keys).

To force SSH to use only the key provided on the command line, use `-o` to add the `IdentitiesOnly=yes` option:
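A sketch of such an invocation, with `github.com` standing in for your host:

```shell
# Offer only the named key, with verbose output to confirm which key was used
ssh -v -o IdentitiesOnly=yes -i ~/.ssh/id_rsa -T git@github.com
```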
@@ -16,7 +16,7 @@ shortTitle: Permission denied (publickey)
---
## Should the `sudo` command or elevated privileges be used with Git?

You should not be using the `sudo` command or elevated privileges, such as administrator permissions, with Git. If you have a *very good reason* you must use `sudo`, then ensure you are using it with every command (it's probably just better to use `su` to get a shell as root at that point). If you [generate SSH keys](/authentication/connecting-to-github-with-ssh) without `sudo` and then try to use a command like `sudo git push`, you won't be using the same keys that you generated.
You should not be using the `sudo` command or elevated privileges, such as administrator permissions, with Git. If you have a _very good reason_ you must use `sudo`, then ensure you are using it with every command (it's probably just better to use `su` to get a shell as root at that point). If you [generate SSH keys](/authentication/connecting-to-github-with-ssh) without `sudo` and then try to use a command like `sudo git push`, you won't be using the same keys that you generated.

## Check that you are connecting to the correct server

@@ -95,7 +95,7 @@ $ ssh -T git@{% data variables.command_line.codeblock %}

{% endlinux %}

The `ssh-add` command *should* print out a long string of numbers and letters. If it does not print anything, you will need to [generate a new SSH key](/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) and associate it with {% data variables.product.product_name %}.
The `ssh-add` command _should_ print out a long string of numbers and letters. If it does not print anything, you will need to [generate a new SSH key](/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) and associate it with {% data variables.product.product_name %}.
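As a quick check, for example:

```shell
# List the identities currently held by the agent; empty output means no keys are loaded
ssh-add -l
```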
{% tip %}

@@ -27,7 +27,7 @@ You cannot apply coupons to paid plans for {% data variables.product.prodname_ma
## Redeeming a coupon for your personal account

{% data reusables.dotcom_billing.enter_coupon_code_on_redeem_page %}
4. Under "Redeem your coupon", click **Choose** next to your *personal* account's username.
4. Under "Redeem your coupon", click **Choose** next to your _personal_ account's username.
{% data reusables.dotcom_billing.redeem_coupon %}

## Redeeming a coupon for your organization
@@ -35,5 +35,5 @@ You cannot apply coupons to paid plans for {% data variables.product.prodname_ma
{% data reusables.dotcom_billing.org-billing-perms %}

{% data reusables.dotcom_billing.enter_coupon_code_on_redeem_page %}
4. Under "Redeem your coupon", click **Choose** next to the *organization* you want to apply the coupon to. If you'd like to apply your coupon to a new organization that doesn't exist yet, click **Create a new organization**.
4. Under "Redeem your coupon", click **Choose** next to the _organization_ you want to apply the coupon to. If you'd like to apply your coupon to a new organization that doesn't exist yet, click **Create a new organization**.
{% data reusables.dotcom_billing.redeem_coupon %}

@@ -20,7 +20,7 @@ shortTitle: About organizations

To access an organization, each member must sign into their own personal account.

Organization members can have different roles, such as *owner* or *billing manager*:
Organization members can have different roles, such as _owner_ or _billing manager_:

- **Owners** have complete administrative access to an organization and its contents.
- **Billing managers** can manage billing settings, and cannot access organization contents. Billing managers are not shown in the list of organization members.

@@ -22,7 +22,7 @@ shortTitle: Upgrade or downgrade

**Tips**:
- Before you upgrade your client's organization, you can [view or update the payment method on file for the organization](/billing/managing-your-github-billing-settings/adding-or-editing-a-payment-method).
- These instructions are for upgrading and downgrading organizations on the *per-seat subscription*. If your client pays for {% data variables.product.product_name %} using a *legacy per-repository* plan, you can upgrade or [downgrade](/billing/managing-billing-for-your-github-account/downgrading-your-github-subscription) their legacy plan, or [switch their organization to per-seat pricing](/billing/managing-billing-for-your-github-account/upgrading-your-github-subscription).
- These instructions are for upgrading and downgrading organizations on the _per-seat subscription_. If your client pays for {% data variables.product.product_name %} using a _legacy per-repository_ plan, you can upgrade or [downgrade](/billing/managing-billing-for-your-github-account/downgrading-your-github-subscription) their legacy plan, or [switch their organization to per-seat pricing](/billing/managing-billing-for-your-github-account/upgrading-your-github-subscription).

{% endtip %}

@@ -225,7 +225,7 @@ codeql github upload-results \

| Option | Required | Usage |
|--------|:--------:|-----|
| <nobr>`--repository`</nobr> | {% octicon "check" aria-label="Required" %} | Specify the *OWNER/NAME* of the repository to upload data to. The owner must be an organization within an enterprise that has a license for {% data variables.product.prodname_GH_advanced_security %} and {% data variables.product.prodname_GH_advanced_security %} must be enabled for the repository{% ifversion fpt or ghec %}, unless the repository is public{% endif %}. For more information, see "[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository)."
| <nobr>`--repository`</nobr> | {% octicon "check" aria-label="Required" %} | Specify the _OWNER/NAME_ of the repository to upload data to. The owner must be an organization within an enterprise that has a license for {% data variables.product.prodname_GH_advanced_security %} and {% data variables.product.prodname_GH_advanced_security %} must be enabled for the repository{% ifversion fpt or ghec %}, unless the repository is public{% endif %}. For more information, see "[AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository)."
| <nobr>`--ref`</nobr> | {% octicon "check" aria-label="Required" %} | Specify the name of the `ref` you checked out and analyzed so that the results can be matched to the correct code. For a branch use: `refs/heads/BRANCH-NAME`, for the head commit of a pull request use `refs/pull/NUMBER/head`, or for the {% data variables.product.prodname_dotcom %}-generated merge commit of a pull request use `refs/pull/NUMBER/merge`.
| <nobr>`--commit`</nobr> | {% octicon "check" aria-label="Required" %} | Specify the full SHA of the commit you analyzed.
| <nobr>`--sarif`</nobr> | {% octicon "check" aria-label="Required" %} | Specify the SARIF file to load.{% ifversion ghes or ghae %}
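For illustration, an invocation using only the required options from the table might look like this (all values are placeholders):

```shell
# Upload a SARIF file produced by an earlier analysis step
codeql github upload-results \
  --repository=octo-org/example-repo \
  --ref=refs/heads/main \
  --commit=4b6472266afd7b471e86085a6659e8c7f2b119da \
  --sarif=results.sarif
```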
@@ -314,7 +314,7 @@ For more information about pack compatibility, see "[AUTOTITLE](/code-security/c
This example runs the `codeql database analyze` command with the `--download` option to:

1. Download the latest version of the `octo-org/security-queries` pack.
2. Download a version of the `octo-org/optional-security-queries` pack that is *compatible* with version 1.0.1 (in this case, it is version 1.0.2). For more information on semver compatibility, see [npm's semantic version range documentation](https://github.com/npm/node-semver#ranges).
2. Download a version of the `octo-org/optional-security-queries` pack that is _compatible_ with version 1.0.1 (in this case, it is version 1.0.2). For more information on semver compatibility, see [npm's semantic version range documentation](https://github.com/npm/node-semver#ranges).
3. Run all the default queries in `octo-org/security-queries`.
4. Run only the query `queries/csrf.ql` from `octo-org/optional-security-queries`.
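A sketch of such a command, where the database path and output name are placeholders:

```shell
# Analyze with two query packs: the latest security-queries pack, plus a single
# query from a semver-compatible version of optional-security-queries
codeql database analyze /codeql-dbs/example-repo \
  octo-org/security-queries \
  'octo-org/optional-security-queries@~1.0.1:queries/csrf.ql' \
  --download --format=sarif-latest --output=results.sarif
```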
@@ -71,7 +71,7 @@ for all columns.

Select output format. Choices include:

`text` *(default)*: A human-readable plain text table.
`text` _(default)_: A human-readable plain text table.

`csv`: Comma-separated values.

@@ -48,7 +48,7 @@ options of [codeql bqrs decode](/code-security/codeql-cli/codeql-cli-manual/bqrs

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.
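For example (file names are placeholders):

```shell
# Decode a raw result set to JSON instead of the default text table
codeql bqrs decode --format=json --output=results.json results.bqrs
```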
### Supporting pagination in codeql bqrs decode

@@ -138,9 +138,9 @@ languages or different parts of the code.
If you analyze the same version of a code base in several different ways
(e.g., for different languages) and upload the results to GitHub for
presentation in Code Scanning, this value should differ between each of
the analyses, which tells Code Scanning that the analyses *supplement*
rather than *supersede* each other. (The values should be consistent
between runs of the same analysis for *different* versions of the code
the analyses, which tells Code Scanning that the analyses _supplement_
rather than _supersede_ each other. (The values should be consistent
between runs of the same analysis for _different_ versions of the code
base.)

This value will appear (with a trailing slash appended if not already
@@ -153,7 +153,7 @@ present) as the `<run>.automationId` property in SARIF v1, the
The number of threads used for computing paths.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).
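To make the sign convention concrete, a sketch (`my-db` is a placeholder database):

```shell
# Use every core on the machine
codeql database analyze my-db --threads=0 --format=sarif-latest --output=out.sarif

# Use all cores except two
codeql database analyze my-db --threads=-2 --format=sarif-latest --output=out.sarif
```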
#### `--sarif-run-property=<String=String>`

@@ -39,7 +39,7 @@ Run a query suite (or some individual queries) against a CodeQL
database, producing results, styled as alerts or paths, in SARIF or
another interpreted format.

This command combines the effect of the [codeql database run-queries](/code-security/codeql-cli/codeql-cli-manual/database-run-queries) and [codeql database interpret-results](/code-security/codeql-cli/codeql-cli-manual/database-interpret-results) commands. If you want to run queries whose results *don't* meet the requirements for
This command combines the effect of the [codeql database run-queries](/code-security/codeql-cli/codeql-cli-manual/database-run-queries) and [codeql database interpret-results](/code-security/codeql-cli/codeql-cli-manual/database-interpret-results) commands. If you want to run queries whose results _don't_ meet the requirements for
being interpreted as source-code alerts, use
[codeql database run-queries](/code-security/codeql-cli/codeql-cli-manual/database-run-queries) or [codeql query run](/code-security/codeql-cli/codeql-cli-manual/query-run) instead, and then [codeql bqrs decode](/code-security/codeql-cli/codeql-cli-manual/bqrs-decode) to convert the raw results to a readable notation.

@@ -184,9 +184,9 @@ languages or different parts of the code.
If you analyze the same version of a code base in several different ways
(e.g., for different languages) and upload the results to GitHub for
presentation in Code Scanning, this value should differ between each of
the analyses, which tells Code Scanning that the analyses *supplement*
rather than *supersede* each other. (The values should be consistent
between runs of the same analysis for *different* versions of the code
the analyses, which tells Code Scanning that the analyses _supplement_
rather than _supersede_ each other. (The values should be consistent
between runs of the same analysis for _different_ versions of the code
base.)

This value will appear (with a trailing slash appended if not already
@@ -228,7 +228,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`
@@ -287,17 +287,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs
@@ -332,7 +332,7 @@ How to handle warnings from the QL compiler. One of:

`hide`: Suppress warnings.

`show` *(default)*: Print warnings but continue with compilation.
`show` _(default)_: Print warnings but continue with compilation.

`error`: Treat warnings as errors.

@@ -103,7 +103,7 @@ Select how aggressively to trim the cache. Choices include:
`brutal`: Remove the entire cache, trimming down to the state of a
freshly extracted dataset

`normal` *(default)*: Trim everything except explicitly "cached"
`normal` _(default)_: Trim everything except explicitly "cached"
predicates.

`light`: Simply make sure the defined size limits for the disk cache are
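As a sketch, assuming these trim levels correspond to the `--mode` option of `codeql database cleanup` (the database name is a placeholder):

```shell
# Trim the evaluation cache without discarding cached predicates
codeql database cleanup my-db --mode=light
```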
@@ -77,7 +77,7 @@ Select how aggressively to trim the cache. Choices include:
`brutal`: Remove the entire cache, trimming down to the state of a
freshly extracted dataset

`normal` *(default)*: Trim everything except explicitly "cached"
`normal` _(default)_: Trim everything except explicitly "cached"
predicates.

`light`: Simply make sure the defined size limits for the disk cache are

@@ -40,10 +40,10 @@ one of the CodeQL products.
#### `<database>`

\[Mandatory] Path to the CodeQL database to create. This directory will
be created, and *must not* already exist (but its parent must).
be created, and _must not_ already exist (but its parent must).

If the `--db-cluster` option is given, this will not be a database
itself, but a directory that will *contain* databases for several
itself, but a directory that will _contain_ databases for several
languages built from the same source root.

It is important that this directory is not in a location that the build
@@ -107,7 +107,7 @@ Use this many threads for the import operation, and pass it as a hint to
any invoked build commands.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`
@@ -126,7 +126,7 @@ If no build command is specified, the command attempts to figure out
automatically how to build the source tree, based on heuristics from the
selected language pack.

Beware that some combinations of multiple languages *require* an
Beware that some combinations of multiple languages _require_ an
explicit build command to be specified.

#### `--no-cleanup`
@@ -230,7 +230,7 @@ Select how aggressively to trim the cache. Choices include:
`brutal`: Remove the entire cache, trimming down to the state of a
freshly extracted dataset

`normal` *(default)*: Trim everything except explicitly "cached"
`normal` _(default)_: Trim everything except explicitly "cached"
predicates.

`light`: Simply make sure the defined size limits for the disk cache are

@@ -45,7 +45,7 @@ Available since `v2.12.6`.
have been prepared for extraction with [codeql database init](/code-security/codeql-cli/codeql-cli-manual/database-init).

If the `--db-cluster` option is given, this is not a database itself,
but a directory that *contains* databases, and all of those databases
but a directory that _contains_ databases, and all of those databases
will be processed together.

#### `--format=<format>`
@@ -67,7 +67,7 @@ of SARIF between different CodeQL versions.
#### `--[no-]db-cluster`

Indicates that the directory given on the command line is not a database
itself, but a directory that *contains* one or more databases under
itself, but a directory that _contains_ one or more databases under
construction. Those databases will be processed together.
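As an illustration of the clustered layout (paths and languages are placeholders):

```shell
# Create one directory containing a database per language
codeql database create codeql-dbs --db-cluster \
  --language=javascript,python --source-root=.
```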
#### `-o, --output=<output>`
@@ -92,9 +92,9 @@ languages or different parts of the code.
If you analyze the same version of a code base in several different ways
(e.g., for different languages) and upload the results to GitHub for
presentation in Code Scanning, this value should differ between each of
the analyses, which tells Code Scanning that the analyses *supplement*
rather than *supersede* each other. (The values should be consistent
between runs of the same analysis for *different* versions of the code
the analyses, which tells Code Scanning that the analyses _supplement_
rather than _supersede_ each other. (The values should be consistent
between runs of the same analysis for _different_ versions of the code
base.)

This value will appear (with a trailing slash appended if not already

@@ -42,13 +42,13 @@ Finalize a database that was created with [codeql database init](/code-security/
have been prepared for extraction with [codeql database init](/code-security/codeql-cli/codeql-cli-manual/database-init).

If the `--db-cluster` option is given, this is not a database itself,
but a directory that *contains* databases, and all of those databases
but a directory that _contains_ databases, and all of those databases
will be processed together.

#### `--[no-]db-cluster`

Indicates that the directory given on the command line is not a database
itself, but a directory that *contains* one or more databases under
itself, but a directory that _contains_ one or more databases under
construction. Those databases will be processed together.

#### `--additional-dbs=<database>[:<database>...]`
@@ -93,7 +93,7 @@ database's extractor.
Use this many threads for the import operation.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`
@@ -136,7 +136,7 @@ Select how aggressively to trim the cache. Choices include:
`brutal`: Remove the entire cache, trimming down to the state of a
freshly extracted dataset

`normal` *(default)*: Trim everything except explicitly "cached"
`normal` _(default)_: Trim everything except explicitly "cached"
predicates.

`light`: Simply make sure the defined size limits for the disk cache are

@@ -36,7 +36,7 @@ codeql database import [--dbscheme=<file>] [--threads=<num>] [--ram=<MB>] <optio
unfinalized database.

The result of this command is that the target database (the one in the
*first* argument) will be augmented with the data from all the other
_first_ argument) will be augmented with the data from all the other
databases passed. In particular, TRAP files from the other databases
will be imported and sources in them will be copied.
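Sketched with placeholder database paths, that looks like:

```shell
# Fold the contents of two unfinalized databases into target-db
codeql database import target-db extra-db-one extra-db-two
```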
@@ -56,7 +56,7 @@ meaningful.
have been prepared for extraction with [codeql database init](/code-security/codeql-cli/codeql-cli-manual/database-init).

If the `--db-cluster` option is given, this is not a database itself,
but a directory that *contains* databases, and all of those databases
but a directory that _contains_ databases, and all of those databases
will be processed together.

#### `<additionalDbs>...`
@@ -70,7 +70,7 @@ database clusters rather than individual CodeQL databases.
#### `--[no-]db-cluster`

Indicates that the directory given on the command line is not a database
itself, but a directory that *contains* one or more databases under
itself, but a directory that _contains_ one or more databases under
construction. Those databases will be processed together.

### Options for controlling the TRAP import operation
@@ -86,7 +86,7 @@ database's extractor.
Use this many threads for the import operation.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`

@@ -64,8 +64,8 @@ Ask the extractor to use this many threads. This option is passed to the
extractor as a suggestion. If the CODEQL\_THREADS environment variable is
set, the environment variable value takes precedence over this option.

You can pass 0 to use one thread per core on the machine, or -*N* to
leave *N* cores unused (except still use at least one thread).
You can pass 0 to use one thread per core on the machine, or -_N_ to
leave _N_ cores unused (except still use at least one thread).

#### `-M, --ram=<MB>`

@@ -46,10 +46,10 @@ extractors in the middle of an extraction operation anyway.)
#### `<database>`

\[Mandatory] Path to the CodeQL database to create. This directory will
be created, and *must not* already exist (but its parent must).
be created, and _must not_ already exist (but its parent must).

If the `--db-cluster` option is given, this will not be a database
itself, but a directory that will *contain* databases for several
itself, but a directory that will _contain_ databases for several
languages built from the same source root.

It is important that this directory is not in a location that the build

@@ -146,9 +146,9 @@ languages or different parts of the code.
If you analyze the same version of a code base in several different ways
(e.g., for different languages) and upload the results to GitHub for
presentation in Code Scanning, this value should differ between each of
the analyses, which tells Code Scanning that the analyses *supplement*
rather than *supersede* each other. (The values should be consistent
between runs of the same analysis for *different* versions of the code
the analyses, which tells Code Scanning that the analyses _supplement_
rather than _supersede_ each other. (The values should be consistent
between runs of the same analysis for _different_ versions of the code
base.)

This value will appear (with a trailing slash appended if not already
@@ -161,7 +161,7 @@ present) as the `<run>.automationId` property in SARIF v1, the
The number of threads used for computing paths.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]print-diagnostics-summary`

@@ -51,13 +51,13 @@ source root.
have been prepared for extraction with [codeql database init](/code-security/codeql-cli/codeql-cli-manual/database-init).

If the `--db-cluster` option is given, this is not a database itself,
but a directory that *contains* databases, and all of those databases
but a directory that _contains_ databases, and all of those databases
will be processed together.

#### `--[no-]db-cluster`

Indicates that the directory given on the command line is not a database
itself, but a directory that *contains* one or more databases under
itself, but a directory that _contains_ one or more databases under
construction. Those databases will be processed together.

### Common options

@@ -115,7 +115,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`
@@ -174,17 +174,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs
@@ -219,7 +219,7 @@ How to handle warnings from the QL compiler. One of:

`hide`: Suppress warnings.

`show` *(default)*: Print warnings but continue with compilation.
`show` _(default)_: Print warnings but continue with compilation.

`error`: Treat warnings as errors.

@@ -44,7 +44,7 @@ database.
have been prepared for extraction with [codeql database init](/code-security/codeql-cli/codeql-cli-manual/database-init).

If the `--db-cluster` option is given, this is not a database itself,
but a directory that *contains* databases, and all of those databases
but a directory that _contains_ databases, and all of those databases
will be processed together.

#### `<command>...`
@@ -65,8 +65,8 @@ Ask the extractor to use this many threads. This option is passed to the
extractor as a suggestion. If the CODEQL\_THREADS environment variable is
set, the environment variable value takes precedence over this option.

You can pass 0 to use one thread per core on the machine, or -*N* to
leave *N* cores unused (except still use at least one thread).
You can pass 0 to use one thread per core on the machine, or -_N_ to
leave _N_ cores unused (except still use at least one thread).

#### `-M, --ram=<MB>`

@@ -77,7 +77,7 @@ set, the environment variable value takes precedence over this option.
#### `--[no-]db-cluster`

Indicates that the directory given on the command line is not a database
itself, but a directory that *contains* one or more databases under
itself, but a directory that _contains_ one or more databases under
construction. Those databases will be processed together.

#### `--no-tracing`

@@ -78,7 +78,7 @@ value.

#### `--target-dbscheme=<file>`

The *target* dbscheme we want to upgrade to. If this is not given, a
The _target_ dbscheme we want to upgrade to. If this is not given, a
maximal upgrade path will be constructed.

#### `--target-sha=<sha>`
@@ -120,7 +120,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`
@@ -179,17 +179,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs

@@ -53,7 +53,7 @@ useful to set it to 0.

Select output format. Possible choices:

`text` *(default)*: A human-readable textual rendering.
`text` _(default)_: A human-readable textual rendering.

`json`: A streamed JSON array of objects.

@@ -75,7 +75,7 @@ Select how aggressively to trim the cache. Choices include:
`brutal`: Remove the entire cache, trimming down to the state of a
freshly extracted dataset

`normal` *(default)*: Trim everything except explicitly "cached"
`normal` _(default)_: Trim everything except explicitly "cached"
predicates.

`light`: Simply make sure the defined size limits for the disk cache are

@@ -34,7 +34,7 @@ codeql dataset import --dbscheme=<file> [--threads=<num>] <options>... -- <datas

Create a dataset by populating it with TRAP files, or add data from TRAP
files to an existing dataset. Updating a dataset is only possible if it
has the correct dbscheme *and* its ID pool has been preserved from the
has the correct dbscheme _and_ its ID pool has been preserved from the
initial import.
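For example, a minimal sketch (the dbscheme file, dataset path, and TRAP directory are placeholders):

```shell
# Populate a raw QL dataset from a directory of TRAP files
codeql dataset import --dbscheme=semmlecode.javascript.dbscheme -- my-dataset trap-output/
```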
## Primary options
|
||||
@@ -60,7 +60,7 @@ want to import.
|
||||
Use this many threads for the import operation.
|
||||
|
||||
Defaults to 1. You can pass 0 to use one thread per core on the machine,
|
||||
or -*N* to leave *N* cores unused (except still use at least one
|
||||
or -_N_ to leave _N_ cores unused (except still use at least one
|
||||
thread).
|
||||
|
||||
#### `--[no-]check-undefined-labels`
|
||||
|
||||
@@ -56,7 +56,7 @@ typically with a '.dbscheme.stats' extension.
|
||||
The number of concurrent threads to use.
|
||||
|
||||
Defaults to 1. You can pass 0 to use one thread per core on the machine,
|
||||
or -*N* to leave *N* cores unused (except still use at least one
|
||||
or -_N_ to leave _N_ cores unused (except still use at least one
|
||||
thread).
|
||||
|
||||
### Common options
|
||||
|
||||
@@ -74,7 +74,7 @@ value.
|
||||
|
||||
#### `--target-dbscheme=<file>`
|
||||
|
||||
The *target* dbscheme we want to upgrade to. If this is not given, a
|
||||
The _target_ dbscheme we want to upgrade to. If this is not given, a
|
||||
maximal upgrade path will be constructed
|
||||
|
||||
#### `--target-sha=<sha>`
|
||||
@@ -116,7 +116,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
|
||||
Use this many threads to evaluate queries.
|
||||
|
||||
Defaults to 1. You can pass 0 to use one thread per core on the machine,
|
||||
or -*N* to leave *N* cores unused (except still use at least one
|
||||
or -_N_ to leave _N_ cores unused (except still use at least one
|
||||
thread).
|
||||
|
||||
#### `--[no-]save-cache`
|
||||
@@ -175,17 +175,17 @@ evaluation using xterm control sequences. Possible values are:
|
||||
|
||||
`no`: Never produce fancy progress; assume a dumb terminal.
|
||||
|
||||
`auto` *(default)*: Autodetect whether the command is running in an
|
||||
`auto` _(default)_: Autodetect whether the command is running in an
|
||||
appropriate terminal.
|
||||
|
||||
`yes`: Assume the terminal can understand xterm control sequences. The
|
||||
feature still depends on being able to autodetect the *size* of the
|
||||
feature still depends on being able to autodetect the _size_ of the
|
||||
terminal, and will also be disabled if `-q` is given.
|
||||
|
||||
`25x80` (or similar): Like `yes`, and also explicitly give the size of
|
||||
the terminal.
|
||||
|
||||
`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
|
||||
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
|
||||
terminal than stderr. Mostly useful for internal testing.
|
||||
|
||||
### Options for controlling outputting of structured evaluator logs
|
||||
|
||||
@@ -74,9 +74,9 @@ languages or different parts of the code.
|
||||
If you analyze the same version of a code base in several different ways
|
||||
(e.g., for different languages) and upload the results to GitHub for
|
||||
presentation in Code Scanning, this value should differ between each of
|
||||
the analyses, which tells Code Scanning that the analyses *supplement*
|
||||
rather than *supersede* each other. (The values should be consistent
|
||||
between runs of the same analysis for *different* versions of the code
|
||||
the analyses, which tells Code Scanning that the analyses _supplement_
|
||||
rather than _supersede_ each other. (The values should be consistent
|
||||
between runs of the same analysis for _different_ versions of the code
|
||||
base.)
|
||||
|
||||
This value will appear (with a trailing slash appended if not already
|
||||
|
||||
@@ -68,7 +68,7 @@ absolute. It is considered relative to the root of the CodeQL pack.
|
||||
#### `-o, --output=<dir|file.bqrs>`
|
||||
|
||||
Usually this is an existing directory into which the BQRS output from
|
||||
the queries will be written. Filenames *within* this directory will be
|
||||
the queries will be written. Filenames _within_ this directory will be
|
||||
derived from the QL file names.
|
||||
|
||||
Alternatively, if there is exactly one query to run, it may be the name
|
||||
@@ -110,7 +110,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
|
||||
Use this many threads to evaluate queries.
|
||||
|
||||
Defaults to 1. You can pass 0 to use one thread per core on the machine,
|
||||
or -*N* to leave *N* cores unused (except still use at least one
|
||||
or -_N_ to leave _N_ cores unused (except still use at least one
|
||||
thread).
|
||||
|
||||
#### `--[no-]save-cache`
|
||||
@@ -169,17 +169,17 @@ evaluation using xterm control sequences. Possible values are:
|
||||
|
||||
`no`: Never produce fancy progress; assume a dumb terminal.
|
||||
|
||||
`auto` *(default)*: Autodetect whether the command is running in an
|
||||
`auto` _(default)_: Autodetect whether the command is running in an
|
||||
appropriate terminal.
|
||||
|
||||
`yes`: Assume the terminal can understand xterm control sequences. The
|
||||
feature still depends on being able to autodetect the *size* of the
|
||||
feature still depends on being able to autodetect the _size_ of the
|
||||
terminal, and will also be disabled if `-q` is given.
|
||||
|
||||
`25x80` (or similar): Like `yes`, and also explicitly give the size of
|
||||
the terminal.
|
||||
|
||||
`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
|
||||
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
|
||||
terminal than stderr. Mostly useful for internal testing.
|
||||
|
||||
### Options for controlling outputting of structured evaluator logs
|
||||
@@ -208,7 +208,7 @@ How to handle warnings from the QL compiler. One of:
|
||||
|
||||
`hide`: Suppress warnings.
|
||||
|
||||
`show` *(default)*: Print warnings but continue with compilation.
|
||||
`show` _(default)_: Print warnings but continue with compilation.
|
||||
|
||||
`error`: Treat warnings as errors.
|
||||
|
||||
|
||||
@@ -70,7 +70,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`

@@ -129,17 +129,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs

@@ -69,7 +69,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`

@@ -128,17 +128,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

#### `--search-path=<dir>[:<dir>...]`

@@ -119,7 +119,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`

@@ -178,17 +178,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs

@@ -66,7 +66,7 @@ Enabling this flag forces all timestamps to be UTC.

Control the format of the output produced.

`predicates` *(default)*: Produce a summary of the computation performed
`predicates` _(default)_: Produce a summary of the computation performed
for each predicate. This will be a stream of JSON objects separated
either by two newline characters (by default) or one if the
`--minify-output` option is passed.

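Because the `predicates` format is a stream of JSON objects, the summary lends itself to line-oriented post-processing. A sketch, assuming this page documents `codeql generate log-summary` (the command name sits outside this hunk) and using invented file names:

```shell
# Summarize a structured evaluator log, one JSON object per predicate.
codeql generate log-summary --format=predicates evaluator.log summary.jsonl
# jq accepts whitespace-separated JSON values, so the stream parses directly.
jq -c '.' summary.jsonl | head
```
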
@@ -72,7 +72,7 @@ path.
If no output path is provided, only a single .qhelp or .ql file will be
accepted, and the output will be written to stdout.

If an output directory is used, filenames *within* the output directory
If an output directory is used, filenames _within_ the output directory
will be derived from the .qhelp file names.

#### `--warnings=<mode>`

@@ -81,7 +81,7 @@ How to handle warnings from the query help renderer. One of:

`hide`: Suppress warnings.

`show` *(default)*: Print warnings but continue with rendering.
`show` _(default)_: Print warnings but continue with rendering.

`error`: Treat warnings as errors.

@@ -54,17 +54,17 @@ your release).

#### `-r, --repository=<repository-name>`

GitHub repository owner and name (e.g., *github/octocat*) to use as an
GitHub repository owner and name (e.g., _github/octocat_) to use as an
endpoint for uploading. The CLI will attempt to autodetect this from the
checkout path if it is omitted.

#### `-f, --ref=<ref>`

Name of the ref that was analyzed. If this ref is a pull request merge
commit, then use *refs/pulls/1234/merge* or *refs/pulls/1234/head*
commit, then use _refs/pulls/1234/merge_ or _refs/pulls/1234/head_
(depending on whether or not this commit corresponds to the HEAD or
MERGE commit of the PR). Otherwise, this should be a branch:
*refs/heads/branch-name*. If omitted, the CLI will attempt to
_refs/heads/branch-name_. If omitted, the CLI will attempt to
automatically populate this from the current branch of the checkout
path, if this exists.

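Taken together, a typical upload combines these flags; a sketch with placeholder repository, ref, commit, and file names:

```shell
codeql github upload-results \
  --repository=github/octocat \
  --ref=refs/heads/main \
  --commit=0123456789abcdef0123456789abcdef01234567 \
  --sarif=results.sarif
```
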
@@ -90,7 +90,7 @@ version 2.1.0 (this is the default version of SARIF used by CodeQL).

Select output format. Choices include:

`text` *(default)*: Print the URL for tracking the status of the SARIF
`text` _(default)_: Print the URL for tracking the status of the SARIF
upload.

`json`: Print the response body of the SARIF upload API request.

@@ -45,7 +45,7 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `--pack-path=<packPath>`

@@ -60,7 +60,7 @@ The path of the query pack file to create. This file must not yet exist.
Use this many threads to compile queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`

@@ -56,7 +56,7 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-f, --[no-]force`

@@ -55,14 +55,14 @@ Defaults to `./.codeql/pack`.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-j, --threads=<num>`

Use this many threads to compile queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`

@@ -53,7 +53,7 @@ version for a CodeQL pack, then the latest version will be downloaded.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-d, --dir=<dir>`

@@ -52,7 +52,7 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-f, --[no-]force`

@@ -85,7 +85,7 @@ and will not be added to the package lock.

\[Deprecated] Specifies how to resolve dependencies:

`minimal-update` *(default)*: Update or create the codeql-pack.lock.yml
`minimal-update` _(default)_: Update or create the codeql-pack.lock.yml
based on the existing contents of the qlpack.yml file. If any existing
codeql-pack.lock.yml does not satisfy the current dependencies in the
qlpack.yml, the lock file will be updated as necessary.

@@ -53,7 +53,7 @@ then this operation will run on all CodeQL packages in the workspace.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `--groups=[-]<group>[,[-]<group>...]`

@@ -49,7 +49,7 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

### Common options

@@ -61,7 +61,7 @@ Delete the pack bundle after publishing.
Use this many threads to compile queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`

@@ -112,7 +112,7 @@ Available since `v2.11.3`.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `--groups=[-]<group>[,[-]<group>...]`

@@ -48,13 +48,13 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `--mode=<mode>`

Specifies how to resolve dependencies:

`minimal-update` *(default)*: Update or create the codeql-pack.lock.yml
`minimal-update` _(default)_: Update or create the codeql-pack.lock.yml
based on the existing contents of the qlpack.yml file. If any existing
codeql-pack.lock.yml does not satisfy the current dependencies in the
qlpack.yml, the lock file will be updated as necessary.

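To make the lock-file behavior concrete: `qlpack.yml` declares version ranges, while `codeql-pack.lock.yml` pins the exact versions that were resolved. A hypothetical pair (pack names and versions invented):

```yaml
# qlpack.yml -- declares a version range for each dependency.
name: octo-org/my-queries
version: 0.1.0
dependencies:
  codeql/java-all: ^0.4.0

# codeql-pack.lock.yml -- records the exact version that was resolved.
lockVersion: 1.0.0
dependencies:
  codeql/java-all:
    version: 0.4.6
```
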
@@ -49,7 +49,7 @@ The root directory of the package.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-f, --[no-]force`

@@ -93,14 +93,14 @@ compiled.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `-j, --threads=<num>`

Use this many threads to compile queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `-M, --ram=<MB>`

@@ -115,7 +115,7 @@ How to handle warnings from the QL compiler. One of:

`hide`: Suppress warnings.

`show` *(default)*: Print warnings but continue with compilation.
`show` _(default)_: Print warnings but continue with compilation.

`error`: Treat warnings as errors.

@@ -51,7 +51,7 @@ Overwrite each input file with a formatted version of its content.
#### `--[no-]check-only`

Instead of writing output, exit with status 1 if any input files
*differ* from their correct formatting. A message telling which files
_differ_ from their correct formatting. A message telling which files
differed will be printed to standard error unless you also give `-qq`.

#### `-b, --backup=<ext>`

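Because `--check-only` reports drift through the exit status alone, `codeql query format` works well as a CI lint step. A minimal sketch (the file selection is invented):

```shell
# Exit with status 1 if any tracked QL file is not canonically formatted.
codeql query format --check-only $(git ls-files '*.ql' '*.qll')
```
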
@@ -99,7 +99,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

#### `--[no-]save-cache`

@@ -158,17 +158,17 @@ evaluation using xterm control sequences. Possible values are:

`no`: Never produce fancy progress; assume a dumb terminal.

`auto` *(default)*: Autodetect whether the command is running in an
`auto` _(default)_: Autodetect whether the command is running in an
appropriate terminal.

`yes`: Assume the terminal can understand xterm control sequences. The
feature still depends on being able to autodetect the *size* of the
feature still depends on being able to autodetect the _size_ of the
terminal, and will also be disabled if `-q` is given.

`25x80` (or similar): Like `yes`, and also explicitly give the size of
the terminal.

`25x80:/dev/pts/17` (or similar): show fancy progress on a *different*
`25x80:/dev/pts/17` (or similar): show fancy progress on a _different_
terminal than stderr. Mostly useful for internal testing.

### Options for controlling outputting of structured evaluator logs

@@ -203,7 +203,7 @@ How to handle warnings from the QL compiler. One of:

`hide`: Suppress warnings.

`show` *(default)*: Print warnings but continue with compilation.
`show` _(default)_: Print warnings but continue with compilation.

`error`: Treat warnings as errors.

@@ -67,7 +67,7 @@ and code 1 otherwise.

Select output format. Choices include:

`text` *(default)*: Print the path to the found extractor pack to
`text` _(default)_: Print the path to the found extractor pack to
standard output.

`json`: Print the path to the found extractor pack as a JSON string.

@@ -56,7 +56,7 @@ The directory to be searched.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

### Options for limiting the set of collected files

@@ -61,7 +61,7 @@ per-user configuration file).

Select output format. Choices include:

`text` *(default)*: Print the paths to extractor packs to standard
`text` _(default)_: Print the paths to extractor packs to standard
output.

`json`: Print the paths to extractor packs as a JSON string.

@@ -59,7 +59,7 @@ on by default.

Select output format. Choices include:

`lines` *(default)*: Print command line arguments on one line each.
`lines` _(default)_: Print command line arguments on one line each.

`json`: Print a JSON object with all the data.

@@ -185,7 +185,7 @@ The root directory of the pack containing queries to compile.
resolution.

This is used when the pack can be found by name somewhere in the search
path. If you know the *disk location* of your desired root package,
path. If you know the _disk location_ of your desired root package,
pretend it contains a .ql file and use `--query` instead.

### Common options

@@ -92,7 +92,7 @@ when the kind is `library`.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

#### `--no-recursive`

@@ -71,7 +71,7 @@ absolute. It is considered relative to the root of the CodeQL pack.

Select output format. Choices include:

`text` *(default)*: A line-oriented list of pathnames.
`text` _(default)_: A line-oriented list of pathnames.

`json`: A plain list of pathnames as strings.

@@ -39,7 +39,7 @@ configured memory outside the Java heap.

In particular, this should be used to find appropriate `-J-Xmx` and
`--off-heap-ram` options before starting a query server based on a
desired *total* RAM amount.
desired _total_ RAM amount.

## Primary options

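In other words, the command splits a total RAM budget into a Java-heap part and an off-heap part. A sketch, assuming this page documents `codeql resolve ram` and that it accepts the `-M` total-RAM option seen elsewhere in these pages (both are assumptions, since the command name sits outside this hunk):

```shell
# Ask how a 16 GB total budget should be split; the output is a set of
# -J-Xmx.../--off-heap-ram=... options suitable for starting a query server.
codeql resolve ram -M 16384
```
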
@@ -47,7 +47,7 @@ desired *total* RAM amount.

Select output format. Choices include:

`lines` *(default)*: Print command-line arguments on one line each.
`lines` _(default)_: Print command-line arguments on one line each.

`json`: Print them as a JSON array.

@@ -47,8 +47,8 @@ Each argument is one of:

#### `--slice=<N/M>`

\[Advanced] Divide the test cases into *M* roughly equal-sized slices
and process only the *N*th of them. This can be used for manual
\[Advanced] Divide the test cases into _M_ roughly equal-sized slices
and process only the _N_th of them. This can be used for manual
parallelization of the testing process.

#### `--[no-]strict-test-discovery`

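`--slice` enables coarse manual parallelism: each worker runs the same command with a different N out of M. A sketch (the test directory is invented; in practice each slice would usually run in a separate CI job rather than as local background jobs):

```shell
# Split the test cases into four slices and run them concurrently.
for i in 1 2 3 4; do
  codeql test run --slice=$i/4 ql/test &
done
wait
```
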
@@ -71,7 +71,7 @@ files even though a `.qlref` file cannot really be a non-test.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

### Common options

@@ -43,13 +43,13 @@ in extraordinary cases where exact control is needed.

#### `--dbscheme=<file>`

\[Mandatory] The *current* dbscheme of the dataset we want to upgrade.
\[Mandatory] The _current_ dbscheme of the dataset we want to upgrade.

#### `--format=<fmt>`

Select output format. Choices include:

`lines` *(default)*: Print upgrade scripts on one line each.
`lines` _(default)_: Print upgrade scripts on one line each.

`json`: Print a JSON array of upgrade script paths.

@@ -93,7 +93,7 @@ value.

#### `--target-dbscheme=<file>`

The *target* dbscheme we want to upgrade to. If this is not given, a
The _target_ dbscheme we want to upgrade to. If this is not given, a
maximal upgrade path will be constructed.

#### `--target-sha=<sha>`

@@ -38,11 +38,11 @@ same output will be considered to pass. What it does can also be
achieved by ordinary file manipulation, but you may find its syntax more
useful for this special case.

The command-line arguments specify one or more *tests* -- that is,
The command-line arguments specify one or more _tests_ -- that is,
`.ql(ref)` files -- and the command automatically derives the names of
the `.actual` files from them. Any test that doesn't have an `.actual`
file will be silently ignored, which makes it easy to accept just the
results of *failing* tests from a previous run.
results of _failing_ tests from a previous run.

## Primary options

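The usual loop is therefore: run the tests, review the failures, then accept the new outputs. A sketch, assuming this page documents `codeql test accept` (the test path is invented):

```shell
# Failing tests leave .actual files behind.
codeql test run ql/test/query-tests/MyQuery || true
# After reviewing the differences, promote actual output to expected output.
codeql test accept ql/test/query-tests/MyQuery
```
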
@@ -55,8 +55,8 @@ Each argument is one of:

#### `--slice=<N/M>`

\[Advanced] Divide the test cases into *M* roughly equal-sized slices
and process only the *N*th of them. This can be used for manual
\[Advanced] Divide the test cases into _M_ roughly equal-sized slices
and process only the _N_th of them. This can be used for manual
parallelization of the testing process.

#### `--[no-]strict-test-discovery`

@@ -108,7 +108,7 @@ takes up a lot of space in the dataset.

#### `--format=<fmt>`

Select output format, either `text` *(default)* or `json`.
Select output format, either `text` _(default)_ or `json`.

### Common options

@@ -51,7 +51,7 @@ useful to set it to 0.

Select output format. Possible choices:

`text` *(default)*: A human-readable textual rendering.
`text` _(default)_: A human-readable textual rendering.

`json`: A streamed JSON array of test result objects.

@@ -70,7 +70,7 @@ in the future, so consumers should ignore any event with an unrecognized

\[Advanced] Preserve the databases extracted to run the test queries,
even where all tests in a directory pass. (The database will always be
left present when there are tests that *fail*).
left present when there are tests that _fail_).

#### `--[no-]fast-compilation`

@@ -110,8 +110,8 @@ Set total amount of RAM the test runner should be allowed to use.

#### `--slice=<N/M>`

\[Advanced] Divide the test cases into *M* roughly equal-sized slices
and process only the *N*th of them. This can be used for manual
\[Advanced] Divide the test cases into _M_ roughly equal-sized slices
and process only the _N_th of them. This can be used for manual
parallelization of the testing process.

#### `--[no-]strict-test-discovery`

@@ -281,7 +281,7 @@ If no timeout is specified, or is given as 0, no timeout will be set
Use this many threads to evaluate queries.

Defaults to 1. You can pass 0 to use one thread per core on the machine,
or -*N* to leave *N* cores unused (except still use at least one
or -_N_ to leave _N_ cores unused (except still use at least one
thread).

### Options for controlling outputting of structured evaluator logs

@@ -36,7 +36,7 @@ Show the version of the CodeQL toolchain.

#### `--format=<fmt>`

Select output format. Choices include `text` *(default)*, `terse`, and
Select output format. Choices include `text` _(default)_, `terse`, and
`json`.

### Common options

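The JSON form is convenient for scripted toolchain checks; a sketch, assuming the JSON payload exposes a `version` field:

```shell
codeql version --format=json | jq -r '.version'
```
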
@@ -20,7 +20,7 @@ redirect_from:

You use a {% data variables.product.prodname_codeql %} workspace when you want to group multiple {% data variables.product.prodname_codeql %} packs together. A typical use case for a {% data variables.product.prodname_codeql %} workspace is to develop a set of {% data variables.product.prodname_codeql %} library and query packs that are mutually dependent. For more information on {% data variables.product.prodname_codeql %} packs, see "[About {% data variables.product.prodname_codeql %} packs](/code-security/codeql-cli/codeql-cli-reference/about-codeql-packs)."

The main benefit of a {% data variables.product.prodname_codeql %} workspace is that it makes it easier for you to develop and maintain multiple {% data variables.product.prodname_codeql %} packs. When you use a {% data variables.product.prodname_codeql %} workspace, all the {% data variables.product.prodname_codeql %} packs in the workspace are available as *source dependencies* for each other when you run a {% data variables.product.prodname_codeql %} command that resolves queries. This makes it easier to develop, maintain, and publish multiple, related {% data variables.product.prodname_codeql %} packs.
The main benefit of a {% data variables.product.prodname_codeql %} workspace is that it makes it easier for you to develop and maintain multiple {% data variables.product.prodname_codeql %} packs. When you use a {% data variables.product.prodname_codeql %} workspace, all the {% data variables.product.prodname_codeql %} packs in the workspace are available as _source dependencies_ for each other when you run a {% data variables.product.prodname_codeql %} command that resolves queries. This makes it easier to develop, maintain, and publish multiple, related {% data variables.product.prodname_codeql %} packs.

In most cases, you should store the {% data variables.product.prodname_codeql %} workspace and the {% data variables.product.prodname_codeql %} packs contained in it in one git repository. This makes it easier to share your {% data variables.product.prodname_codeql %} development environment.

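A workspace is defined by a `codeql-workspace.yml` file at its root whose `provide` block lists the packs it contains. A minimal sketch (the glob patterns are invented for illustration):

```yaml
# codeql-workspace.yml
provide:
  - "java/ql/src/qlpack.yml"
  - "java/ql/lib/qlpack.yml"
  - "experimental/**/qlpack.yml"
```
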
@@ -210,7 +210,7 @@ and `@precision high` from the `my-custom-queries` directory, use:
    precision: very-high
```

Note that the following query suite definition behaves differently from the definition above. This definition selects queries that are `@kind problem` *or*
Note that the following query suite definition behaves differently from the definition above. This definition selects queries that are `@kind problem` _or_
are `@precision very-high`:

```

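The second suite definition is truncated by the diff. The distinction being drawn is between one `include` block, whose conditions must all match (AND), and two `include` blocks, where a query is kept if either matches (OR). For illustration, the OR form might look like this (directory name taken from the hunk header):

```yaml
- queries: my-custom-queries
- include:
    kind: problem
- include:
    precision: very-high
```
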
@@ -90,7 +90,7 @@ For supported languages, {% data variables.product.prodname_dependabot %} automa

{% note %}

**Note:** During the beta release, this feature is available only for new Python advisories created *after* April 14, 2022, and for a subset of historical Python advisories. {% data variables.product.prodname_dotcom %} is working to backfill data across additional historical Python advisories, which are added on a rolling basis. Vulnerable calls are highlighted only on the {% data variables.product.prodname_dependabot_alerts %} pages.
**Note:** During the beta release, this feature is available only for new Python advisories created _after_ April 14, 2022, and for a subset of historical Python advisories. {% data variables.product.prodname_dotcom %} is working to backfill data across additional historical Python advisories, which are added on a rolling basis. Vulnerable calls are highlighted only on the {% data variables.product.prodname_dependabot_alerts %} pages.

{% endnote %}
