
Merge branch 'main' into 5765-private-profile

Martin Lopes authored 2022-03-21 13:28:50 +10:00 · committed by GitHub
1885 changed files with 43476 additions and 475303 deletions

View File

@@ -22,7 +22,7 @@
"davidanson.vscode-markdownlint",
"bierner.markdown-preview-github-styles",
"streetsidesoftware.code-spell-checker",
"hubwriter.open-reusable"
"alistairchristie.open-reusables"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.

View File

@@ -78,3 +78,5 @@ In your `docs-internal` checkout:
- [ ] In `github/github`, edit the release's config file in `app/api/description/config/releases/`, and change `deprecated: false` to `deprecated: true` (see the sketch after this list).
- [ ] Open a new PR, and get the required code owner approvals. A docs-content team member can approve it for the docs team.
- [ ] When the PR is approved, [deploy the `github/github` PR](https://thehub.github.com/engineering/devops/deployment/deploying-dotcom/). If you haven't deployed a `github/github` PR before, work with someone who has -- the process isn't too involved, depending on how you deploy, but there are a lot of details that can be confusing, as you can see from the documentation.
**Note**: You can do this step independently of the other steps after a GHES version is deprecated, since the deprecated version should no longer get updates in `github/github`. Plan to get this PR merged as soon as possible; if you wait too long, our OpenAPI automation may re-add the static files that you removed in step 5.
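
For illustration, a minimal sketch of the config edit described above, assuming the release file only needs its `deprecated` flag flipped (`<RELEASE>` is a placeholder for the GHES version; any other keys in the file are not shown here):

```shell
# Hedged sketch: flip the deprecation flag in a local checkout of github/github.
# (GNU sed shown; on macOS use `sed -i ''`.)
sed -i 's/deprecated: false/deprecated: true/' \
  app/api/description/config/releases/ghes-<RELEASE>.yaml
```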

View File

@@ -98,15 +98,18 @@ This file should be automatically updated, but you can also run `script/update-e
### Before shipping the release branch
- [ ] Add the GHES release notes to `data/release-notes/` and update the versioning frontmatter in `content/admin/release-notes.md` to `enterprise-server: '<=<RELEASE>'`
- [ ] Add the GHES release notes to `data/release-notes/`.
- [ ] Add any required smoke tests to the opening post in the megabranch PR.
Usually, we should smoke test any new GHES admin guides, any large features landing in this GHES version for the first time, and the REST and GraphQL API references.
- [ ] Alert the Neon Squad (formerly the docs-ecosystem team) 1-2 days before the release to deploy to `github/github`. A PR should already be open in `github/github` to change `published` to `true` in `app/api/description/config/releases/ghes-<NEXT RELEASE NUMBER>.yaml`. They will need to:
- [ ] A few days before shipping, check for broken links. Run `script/check-english-links.js` in a local copy of the megabranch (see the example command after this list).
- [ ] [Freeze the repos](https://github.com/github/docs-content/blob/main/docs-content-docs/docs-content-workflows/freezing.md) at least 1-2 days before the release, and post an announcement in Slack so everybody knows. It's helpful to freeze the repos before doing the OpenAPI merges to avoid changes to the megabranch while preparing and deploying.
- [ ] Alert the Neon Squad (formerly the docs-ecosystem team) 1-2 days before the release to deploy to `github/github`. A PR should already be open in `github/github` to change the OpenAPI schema config `published` to `true` in `app/api/description/config/releases/ghes-<NEXT RELEASE NUMBER>.yaml`. They will need to:
- [ ] Get the required approval from `@github/ecosystem-api-reviewers` then deploy the PR to dotcom. This process generally takes 30-90 minutes.
- [ ] Once the PR merges, make sure that the auto-generated PR titled "Update OpenAPI Descriptions" in docs-internal contains both the dereferenced and decorated JSON files for the new GHES release. If everything looks good, merge the "Update OpenAPI Descriptions" PR into the GHES release megabranch. **Note:** Be careful about resolving the conflicts correctly—you may wish to delete the existing OpenAPI files for the release version from the megabranch, so there are no conflicts to resolve and to ensure that the incoming artifacts are the correct ones.
- [ ] Add a blocking review to the auto-generated "Update OpenAPI Descriptions" PR in the public REST API description. (Remove this blocking review once the GHES release ships.)
- [ ] [Freeze the repos](https://github.com/github/docs-content/blob/main/docs-content-docs/docs-content-workflows/freezing.md) at least 1-2 days before the release, and post an announcement in Slack so everybody knows.
- [ ] Once the PR merges, make sure that the auto-generated PR titled "Update OpenAPI Descriptions" in docs-internal contains both the dereferenced and decorated JSON files for the new GHES release. If everything looks good, merge the "Update OpenAPI Descriptions" PR into the GHES release megabranch. **Note:** Be careful about resolving the conflicts correctly—you may wish to delete the existing OpenAPI files for the release version from the megabranch (that is, delete the GHES release version `lib/rest/static` decorated and dereferenced JSON files), so there are no conflicts to resolve and to ensure that the incoming artifacts are the correct ones.
- [ ] Alert the Ecosystem-API team in #ecosystem-api about the pending release freeze and incoming blocking review of OpenAPI updates in the public REST API description (the `rest-api-descriptions` repo). They'll need to block any future "Update OpenAPI Descriptions" PRs in the public REST API description until after the ship.
- [ ] Add a blocking review to the auto-generated "Update OpenAPI Descriptions" PR in the public REST API description. (You or they will remove this blocking review once the GHES release ships.)
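
As referenced in the link-check step above, a hedged example of running the check from the root of a local megabranch checkout (redirecting the report to a file is an assumption; the script may also produce its own output):

```shell
# Run the English link checker on the megabranch and keep the report for triage.
git checkout <MEGABRANCH>
script/check-english-links.js > broken-links-report.md
```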
### 🚢 🛳️ 🚢 Shipping the release branch
@@ -114,11 +117,18 @@ This file should be automatically updated, but you can also run `script/update-e
- [ ] The `github/docs-internal` repo is frozen, and the `Repo Freeze Check / Prevent merging during deployment freezes (pull_request_target)` test is expected to fail.
Use admin permissions to ship the release branch with this failure. Make sure that the merge's commit title does not include anything like `[DO NOT MERGE]`, and remove all the branch's commit details from the merge's commit message except for the co-author list.
- [ ] Do any required smoke tests listed in the opening post in the megabranch PR.
- [ ] Do any required smoke tests listed in the opening post in the megabranch PR. You can monitor the production deploy and check when it has completed by viewing the [`docs-internal` deployments page](https://github.com/github/docs-internal/deployments).
- [ ] Once smoke tests have passed, you can [unfreeze the repos](https://github.com/github/docs-content/blob/main/docs-content-docs/docs-content-workflows/freezing.md) and post an announcement in Slack.
- [ ] After unfreezing, push the search index LFS objects for the public `github/docs` repo. The LFS objects were already being pushed for the internal repo after the `sync-english-index-for-<PLAN@RELEASE>` was added to the megabranch. To push the LFS objects, run the [search sync workflow](https://github.com/github/docs-internal/actions/workflows/sync-search-indices.yml). Once you're there, click the `Run workflow` button. A modal will pop up where you can set the following inputs:
Branch: The new version megabranch you're working on
version: `enterprise-server@<RELEASE>`
language: `en`
- [ ] After unfreezing, the megabranch creator should push the search index LFS objects for the public `github/docs` repo. The LFS objects were already pushed for the internal repo after the `sync-english-index-for-<PLAN@RELEASE>` was added to the megabranch. To push the LFS objects to the public repo:
1. First navigate to the [sync search indices workflow](https://github.com/github/docs-internal/actions/workflows/sync-search-indices.yml).
2. Then, to run the workflow with parameters, click the `Run workflow` button.
3. A modal will pop up where you will set the following inputs:
- Branch: The new version megabranch you're working on
- Version: `enterprise-server@<RELEASE>`
- Language: `en`
4. Run the job. The workflow job may fail on the first run; retry the failed job if needed. (A CLI alternative is sketched after this checklist.)
- [ ] After unfreezing, alert the Ecosystem-API team in #ecosystem-api that the docs freeze is over and the release has shipped.
- [ ] You (or they) can now remove your blocking review on the auto-generated "Update OpenAPI Descriptions" PR in the public REST API description (the `rest-api-descriptions` repo). (It's likely that newer PRs have been created since the one with your blocking review; in that case, the Ecosystem-API team will close your PR and perform the next step on the most recent PR.)
- [ ] The Ecosystem-API team will merge the latest auto-generated "Update OpenAPI Descriptions" PR (which will contain the OpenAPI schema config that changed `published` to `true` for the release).
- [ ] After unfreezing, if there were significant or highlighted GraphQL changes in the release, consider manually running the [GraphQL update workflow](https://github.com/github/docs-internal/actions/workflows/update-graphql-files.yml) to update our GraphQL schemas. By default this workflow only runs once every 24 hours.
- [ ] After the release, in the `docs-content` repo, add the now live version number to the "Specific GHES version(s)" section in the following files: [`.github/ISSUE_TEMPLATE/release-tier-1-or-2-tracking.yml`](https://github.com/github/docs-content/blob/main/.github/ISSUE_TEMPLATE/release-tier-1-or-2-tracking.yml) and [`.github/ISSUE_TEMPLATE/release-tier-3-or-tier-4.yml`](https://github.com/github/docs-content/blob/main/.github/ISSUE_TEMPLATE/release-tier-3-or-tier-4.yml). When the PR is approved, merge it in.
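
For the search index sync step above, a hedged CLI alternative to the web UI, assuming you have the GitHub CLI installed and the workflow accepts the `version` and `language` inputs shown in that step:

```shell
# Hedged sketch: trigger the sync-search-indices workflow from the CLI.
# <MEGABRANCH> and <RELEASE> are placeholders.
gh workflow run sync-search-indices.yml \
  --repo github/docs-internal \
  --ref <MEGABRANCH> \
  -f version='enterprise-server@<RELEASE>' \
  -f language='en'
```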

View File

@@ -190,7 +190,7 @@ export function generateUpdateProjectNextItemFieldMutation({
// Strip all non-alphanumeric out of the item ID when creating the mutation ID to avoid a GraphQL parsing error
// (statistically, this should still give us a unique mutation ID)
return `
set_${fieldID.substr(1)}_item_${item.replaceAll(
set_${fieldID.slice(1)}_item_${item.replaceAll(
/[^a-z0-9]/g,
''
)}: updateProjectNextItemField(input: {

View File

@@ -26,7 +26,3 @@ updates:
schedule:
interval: weekly
day: thursday
ignore:
- dependency-name: '*'
update-types:
['version-update:semver-patch', 'version-update:semver-minor']

View File

@@ -27,6 +27,9 @@ on:
description: 'The commit SHA to build'
type: string
required: true
push:
branches:
- gh-readonly-queue/main/**
permissions:
contents: read
@@ -50,7 +53,7 @@ jobs:
# to link a PR to a list of environments later.
url: ${{ env.APP_URL }}
env:
PR_NUMBER: ${{ github.event.number || github.event.inputs.PR_NUMBER }}
PR_NUMBER: ${{ github.event.number || github.event.inputs.PR_NUMBER || github.run_id }}
COMMIT_REF: ${{ github.event.pull_request.head.sha || github.event.inputs.COMMIT_REF }}
BRANCH_NAME: ${{ github.head_ref || github.ref_name }}
IS_INTERNAL_BUILD: ${{ github.repository == 'github/docs-internal' }}
@@ -162,7 +165,7 @@ jobs:
rsync -rptovR ./user-code/content/./**/*.md ./content
rsync -rptovR ./user-code/assets/./**/*.png ./assets
rsync -rptovR ./user-code/data/./**/*.{yml,md} ./data
rsync -rptovR ./user-code/components/./**/*.{ts,tsx} ./components
rsync -rptovR ./user-code/components/./**/*.{scss,ts,tsx} ./components
rsync -rptovR --ignore-missing-args ./user-code/lib/./**/*.{js,ts} ./lib
rsync -rptovR --ignore-missing-args ./user-code/middleware/./**/*.{js,ts} ./middleware
rsync -rptovR ./user-code/pages/./**/*.tsx ./pages

View File

@@ -61,7 +61,7 @@ jobs:
# This will fail if the IMAGE_REPO doesn't exist, but we don't care
- name: 'Untag all docker images for this PR'
run: |
az acr repository delete -n ${{ secrets.NONPROD_REGISTRY_NAME }} --repository ${{ env.IMAGE_REPO }} -y || true
az acr repository delete -n ${{ secrets.NONPROD_REGISTRY_SERVER }} --repository ${{ env.IMAGE_REPO }} -y || true
# Remove all GitHub deployments from this environment and remove the environment
- uses: strumwolf/delete-deployment-environment@45c821e46baa405e25410700fe2e9643929706a0

View File

@@ -59,5 +59,8 @@ jobs:
path: .next/cache
key: ${{ runner.os }}-nextjs-${{ hashFiles('package*.json') }}
- name: Run build script
run: npm run build
- name: Run browser-test
run: npm run browser-test

View File

@@ -41,7 +41,7 @@ jobs:
run: script/i18n/homogenize-frontmatter.js
- name: Check in homogenized files
uses: EndBug/add-and-commit@756d9ea820f11931e591eaf57f25e0f5b903d5b2
uses: EndBug/add-and-commit@050a66787244b10a4874a2a5f682130263edc192
with:
# The arguments for the `git add` command
add: 'translations'

View File

@@ -9,6 +9,7 @@ on:
push:
branches:
- main
- gh-readonly-queue/main/**
pull_request:
permissions:

View File

@@ -54,10 +54,10 @@ jobs:
run: script/rest/update-files.js --decorate-only
- name: Check in the decorated files
uses: EndBug/add-and-commit@756d9ea820f11931e591eaf57f25e0f5b903d5b2
uses: EndBug/add-and-commit@050a66787244b10a4874a2a5f682130263edc192
with:
# The arguments for the `git add` command
add: 'lib/rest/static/decorated'
add: '["lib/rest/static/apps", "lib/rest/static/decorated"]'
# The message for the commit
message: 'Add decorated OpenAPI schema files'

View File

@@ -6,6 +6,9 @@ name: 'Orphaned assets check'
on:
pull_request:
push:
branches:
- gh-readonly-queue/main/**
permissions:
contents: read

View File

@@ -15,6 +15,9 @@ on:
- unlocked
branches:
- main
push:
branches:
- gh-readonly-queue/main/**
permissions:
contents: none

View File

@@ -7,6 +7,9 @@ name: Node.js Tests
on:
workflow_dispatch:
pull_request:
push:
branches:
- gh-readonly-queue/main/**
permissions:
contents: read

View File

@@ -12,6 +12,9 @@ on:
- opened
- reopened
- synchronize
push:
branches:
- gh-readonly-queue/main/**
permissions:
# This is needed by dorny/paths-filter
@@ -36,7 +39,7 @@ jobs:
id: filter
with:
# Base branch used to get changed files
base: ${{ github.event.pull_request.base.ref }}
base: ${{ github.event.pull_request.base.ref || github.base_ref || github.ref }}
# Enables setting an output in the format in `${FILTER_NAME}_files
# with the names of the matching files formatted as JSON array

View File

@@ -9,7 +9,7 @@
"http://localhost:4001/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account",
"http://localhost:4001/en/github/authenticating-to-github/creating-a-strong-password",
"http://localhost:4001/en/github",
"http://localhost:4001/en/github/importing-your-projects-to-github/adding-an-existing-project-to-github-using-the-command-line",
"http://localhost:4001/en/github/importing-your-projects-to-github/adding-locally-hosted-code-to-github",
"http://localhost:4001/en/actions",
"http://localhost:4001/en/github/authenticating-to-github/creating-a-personal-access-token",
"http://localhost:4001/en/github/authenticating-to-github/checking-for-existing-ssh-keys",

View File

@@ -89,6 +89,7 @@ COPY --chown=node:node feature-flags.json ./
COPY --chown=node:node data ./data
COPY --chown=node:node next.config.js ./
COPY --chown=node:node server.mjs ./server.mjs
COPY --chown=node:node start-server.mjs ./start-server.mjs
EXPOSE $PORT

[32 binary image files changed (added, removed, or updated); previews and file names not shown]
View File

@@ -93,7 +93,7 @@ export const DefaultLayout = (props: Props) => {
style={{ height: '100vh' }}
>
<Header />
<main id="main-content">
<main id="main-content" style={{ scrollMarginTop: '5rem' }}>
<DeprecationBanner />
<RestBanner />

View File

@@ -3,12 +3,9 @@ import Link from 'next/link'
import { useRouter } from 'next/router'
import { MarkGithubIcon, CommentDiscussionIcon } from '@primer/octicons-react'
import { useVersion } from 'components/hooks/useVersion'
import { Lead } from 'components/ui/Lead'
export function GenericError() {
const { isEnterprise } = useVersion()
return (
<div className="min-h-screen d-flex flex-column">
<Head>
@@ -28,11 +25,7 @@ export function GenericError() {
</p>
<a
id="contact-us"
href={
isEnterprise
? 'https://enterprise.github.com/support'
: 'https://support.github.com/contact'
}
href="https://support.github.com/contact"
className="btn btn-outline mt-2"
>
<CommentDiscussionIcon size="small" className="octicon mr-1" />

View File

@@ -66,6 +66,10 @@ export default function ClientSideHighlightJS() {
intersectionObserver.observe(element)
}
}
return () => {
intersectionObserver.disconnect()
}
}, [asPath])
return null

View File

@@ -20,6 +20,8 @@ const supportedTools = [
'vscode',
'importer_cli',
'graphql',
'powershell',
'bash',
]
const toolTitles = {
webui: 'Web browser',
@@ -30,6 +32,8 @@ const toolTitles = {
vscode: 'Visual Studio Code',
importer_cli: 'GitHub Enterprise Importer CLI',
graphql: 'GraphQL API',
powershell: 'PowerShell',
bash: 'Bash',
} as Record<string, string>
// Imperatively modify article content to show only the selected tool

View File

@@ -1,11 +1,9 @@
import { PeopleIcon, CommentDiscussionIcon } from '@primer/octicons-react'
import { useTranslation } from 'components/hooks/useTranslation'
import { useVersion } from 'components/hooks/useVersion'
import { useMainContext } from 'components/context/MainContext'
export const Support = () => {
const { isEnterprise } = useVersion()
const { t } = useTranslation('support')
const { communityRedirect } = useMainContext()
@@ -25,11 +23,7 @@ export const Support = () => {
<div>
<a
id="contact-us"
href={
isEnterprise
? 'https://enterprise.github.com/support'
: 'https://support.github.com/contact'
}
href="https://support.github.com/contact"
className="Link—secondary text-bold"
>
<CommentDiscussionIcon size="small" className="octicon mr-1" />

View File

@@ -7,6 +7,7 @@ const restRepoDisplayPages = [
'branches',
'collaborators',
'commits',
'deploy_keys',
'deployments',
'pages',
'releases',
@@ -19,6 +20,7 @@ const restRepoCategoryExceptionsTitles = {
branches: 'Branches',
collaborators: 'Collaborators',
commits: 'Commits',
deploy_keys: 'Deploy Keys',
deployments: 'Deployments',
pages: 'GitHub Pages',
releases: 'Releases',

View File

@@ -225,7 +225,7 @@ const article: PlaygroundArticleT = {
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.6, 3.7, 3.8, 3.9]
python-version: ["3.6", "3.7", "3.8", "3.9"]
steps:
- uses: actions/checkout@v2
@@ -265,7 +265,7 @@ const article: PlaygroundArticleT = {
# You can use PyPy versions in python-version.
# For example, pypy2 and pypy3
matrix:
python-version: [2.7, 3.6, 3.7, 3.8, 3.9]
python-version: ["2.7", "3.6", "3.7", "3.8", "3.9"]
steps:
- uses: actions/checkout@v2
@@ -320,12 +320,12 @@ const article: PlaygroundArticleT = {
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8, 3.9, pypy2, pypy3]
python-version: ["3.6", "3.7", "3.8", "3.9", pypy2, pypy3]
exclude:
- os: macos-latest
python-version: 3.6
python-version: "3.6"
- os: windows-latest
python-version: 3.6
python-version: "3.6"
`,
},
'4': {
@@ -468,7 +468,7 @@ const article: PlaygroundArticleT = {
runs-on: ubuntu-latest
strategy:
matrix:
python: [3.7, 3.8, 3.9]
python: ["3.7", "3.8", "3.9"]
steps:
- uses: actions/checkout@v2
@@ -490,15 +490,15 @@ const article: PlaygroundArticleT = {
name: Python package
on: [push]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.6, 3.7, 3.8, 3.9]
python-version: ["3.6", "3.7", "3.8", "3.9"]
steps:
- uses: actions/checkout@v2
- name: Setup Python # Set Python version

View File

@@ -1,8 +1,4 @@
.codeBlock {
pre {
margin-bottom: 0;
border: 1px solid var(--color-border-default);
max-height: 32rem;
overflow: auto;
}
max-height: 32rem;
overflow: auto;
}

View File

@@ -20,7 +20,7 @@ export function CodeBlock({ verb, headingLang, codeBlock, highlight }: Props) {
})
return (
<div className="code-extra">
<div className={headingLang && 'code-extra'}>
{headingLang && (
<header className="d-flex flex-justify-between flex-items-center p-2 text-small rounded-top-1 border">
{headingLang === 'JavaScript' ? (
@@ -41,16 +41,10 @@ export function CodeBlock({ verb, headingLang, codeBlock, highlight }: Props) {
</Tooltip>
</header>
)}
<pre
className={cx(
styles.methodCodeBlock,
'd-flex flex-justify-between flex-items-center rounded-1 border'
)}
data-highlight={highlight}
>
<pre className={cx(styles.codeBlock, 'rounded-1 border')} data-highlight={highlight}>
<code>
{verb && (
<span className="color-bg-accent-emphasis color-fg-on-emphasis rounded-1 text-uppercase">
<span className="color-bg-accent-emphasis color-fg-on-emphasis rounded-1 text-uppercase p-1">
{verb}
</span>
)}{' '}

View File

@@ -1,6 +1,7 @@
import type { xCodeSample } from './types'
import { useTranslation } from 'components/hooks/useTranslation'
import { CodeBlock } from './CodeBlock'
import { Fragment } from 'react'
type Props = {
slug: string
@@ -11,7 +12,7 @@ export function RestCodeSamples({ slug, xCodeSamples }: Props) {
const { t } = useTranslation('products')
return (
<>
<Fragment key={xCodeSamples + slug}>
<h4 id={`${slug}--code-samples`}>
<a href={`#${slug}--code-samples`}>{`${t('rest.reference.code_samples')}`}</a>
</h4>
@@ -29,6 +30,6 @@ export function RestCodeSamples({ slug, xCodeSamples }: Props) {
}
return sampleElements
})}
</>
</Fragment>
)
}

View File

@@ -48,6 +48,7 @@ export function RestOperation({ operation }: Props) {
{previews && (
<RestPreviewNotice slug={operation.slug} previews={operation['x-github'].previews} />
)}
<RestResponse responses={operation.responses} variant="error" />
</div>
)
}

View File

@@ -1,5 +1,6 @@
.parameterTable {
table-layout: fixed !important;
z-index: 0;
thead {
tr {

View File

@@ -1,17 +1,18 @@
import { useState, useEffect } from 'react'
import React, { useState, useEffect } from 'react'
import cx from 'classnames'
import { useRouter } from 'next/router'
import dynamic from 'next/dynamic'
import { DefaultLayout } from 'components/DefaultLayout'
import { ArticleTitle } from 'components/article/ArticleTitle'
import { useMainContext } from 'components/context/MainContext'
import { MarkdownContent } from 'components/ui/MarkdownContent'
import { Lead } from 'components/ui/Lead'
import { ArticleGridLayout } from 'components/article/ArticleGridLayout'
import { MiniTocItem } from 'components/context/ArticleContext'
import { RestCategoryOperationsT } from './types'
import { RestOperation } from './RestOperation'
import { MiniTocs } from 'components/ui/MiniTocs'
import { ChevronDownIcon, ChevronUpIcon, SearchIcon } from '@primer/octicons-react'
import { useTranslation } from 'components/hooks/useTranslation'
import { ActionList } from '@primer/react'
const ClientSideHighlightJS = dynamic(() => import('components/article/ClientSideHighlightJS'), {
ssr: false,
@@ -37,9 +38,11 @@ export const RestReferencePage = ({
restOperations,
miniTocItems,
}: StructuredContentT) => {
const { t } = useTranslation('pages')
const { asPath } = useRouter()
const { page } = useMainContext()
const subcategories = Object.keys(restOperations)
const [collapsed, setCollapsed] = useState({} as Record<number, boolean>)
// We have some one-off redirects for rest api docs
// currently those are limited to the repos page, but
@@ -65,7 +68,8 @@ export const RestReferencePage = ({
if (
hash &&
(pathname.endsWith('/rest/reference/repos') ||
pathname.endsWith('/rest/reference/enterprise-admin'))
pathname.endsWith('/rest/reference/enterprise-admin') ||
pathname.endsWith('/rest/reference/deployments'))
) {
setLoadClientsideRedirectExceptions(true)
}
@@ -106,6 +110,43 @@ export const RestReferencePage = ({
// consecutive one does.
}, [asPath])
// Resetting the collapsed array when we move to another REST page
useEffect(() => {
setCollapsed({})
}, [asPath])
const handleClick = (param: number) => {
setCollapsed((prevState) => {
return { ...prevState, [param]: !prevState[param] }
})
}
const renderTocItem = (item: MiniTocItem, index: number) => {
return (
<ActionList.Item
as="li"
key={item.contents}
className={item.platform}
sx={{ listStyle: 'none', padding: '2px' }}
>
<div className={cx('lh-condensed d-block width-full')}>
<div className="d-inline-flex" dangerouslySetInnerHTML={{ __html: item.contents }} />
{item.items && item.items.length > 0 && (
<button
className="background-transparent border-0 ml-1"
onClick={() => handleClick(index)}
>
{!collapsed[index] ? <ChevronDownIcon /> : <ChevronUpIcon />}
</button>
)}
{collapsed[index] && item.items && item.items.length > 0 ? (
<ul className="ml-3">{item.items.map(renderTocItem)}</ul>
) : null}
</div>
</ActionList.Item>
)
}
return (
<DefaultLayout>
{/* Doesn't matter *where* this is included because it will
@@ -113,47 +154,60 @@ export const RestReferencePage = ({
{loadClientsideRedirectExceptions && <ClientSideRedirectExceptions />}
{lazyLoadHighlightJS && <ClientSideHighlightJS />}
<div className="container-xl px-3 px-md-6 my-4">
<ArticleGridLayout
topper={<ArticleTitle>{page.title}</ArticleTitle>}
intro={
<>
{page.introPlainText && (
<Lead data-testid="lead" data-search="lead">
{page.introPlainText}
</Lead>
)}
</>
}
toc={
<>
{miniTocItems && miniTocItems.length > 1 && (
<MiniTocs pageTitle={page.title} miniTocItems={miniTocItems} />
)}
</>
}
>
<div key={`restCategory-introContent`}>
<div dangerouslySetInnerHTML={{ __html: introContent }} />
<div className="container-xl px-3 px-md-6 my-4 mx-xl-12 mx-lg-12">
<h1>{page.title}</h1>
{page.introPlainText && (
<Lead data-testid="lead" data-search="lead" className="markdown-body">
{page.introPlainText}
</Lead>
)}
<div className="my-3 d-flex">
<div className="pr-3 mt-1">
<Circle className="color-fg-on-emphasis color-bg-emphasis">
<SearchIcon className="" size={15} />
</Circle>
</div>
<div id="article-contents">
<MarkdownContent>
{subcategories.map((subcategory, index) => (
<div key={`restCategory-${index}`}>
<div dangerouslySetInnerHTML={{ __html: descriptions[subcategory] }} />
{restOperations[subcategory].map((operation, index) => (
<RestOperation
key={`restOperation-${index}`}
operation={operation}
index={index}
/>
))}
</div>
))}
</MarkdownContent>
<h3>{t('miniToc')}</h3>
{miniTocItems && (
<ActionList
key={page.title}
items={miniTocItems.map((items, i) => {
return {
key: page.title + i,
text: page.title,
renderItem: () => <ul>{renderTocItem(items, i)}</ul>,
}
})}
/>
)}
</div>
</ArticleGridLayout>
</div>
<div key={`restCategory-introContent`}>
<div dangerouslySetInnerHTML={{ __html: introContent }} />
</div>
<MarkdownContent>
{subcategories.map((subcategory, index) => (
<div key={`restCategory-${index}`}>
<div dangerouslySetInnerHTML={{ __html: descriptions[subcategory] }} />
{restOperations[subcategory].map((operation, index) => (
<RestOperation key={`restOperation-${index}`} operation={operation} index={index} />
))}
</div>
))}
</MarkdownContent>
</div>
</DefaultLayout>
)
}
const Circle = ({ className, children }: { className?: string; children?: React.ReactNode }) => {
return (
<div
className={cx('circle d-flex flex-justify-center flex-items-center', className)}
style={{ width: 24, height: 24 }}
>
{children}
</div>
)
}

View File

@@ -1,14 +1,44 @@
import { CodeResponse } from './types'
import { CodeBlock } from './CodeBlock'
import { useTranslation } from 'components/hooks/useTranslation'
import { RestResponseTable } from './RestResponseTable'
type Props = {
responses: Array<CodeResponse>
variant?: 'non-error' | 'error'
}
export function RestResponse({ responses }: Props) {
export function RestResponse(props: Props) {
const { responses, variant = 'non-error' } = props
const { t } = useTranslation('products')
if (!responses || responses.length === 0) {
return null
}
const filteredResponses = responses.filter((response) => {
const responseCode = parseInt(response.httpStatusCode)
if (variant === 'error') {
return responseCode >= 400
} else {
return responseCode < 400
}
})
if (filteredResponses.length === 0) {
return null
}
if (variant === 'error') {
return (
<RestResponseTable heading={t('rest.reference.error_codes')} responses={filteredResponses} />
)
}
return (
<>
{responses.map((response: CodeResponse, index: number) => {
{filteredResponses.map((response, index) => {
return (
<div key={`${response.httpStatusMessage}-${index}}`}>
<h4 dangerouslySetInnerHTML={{ __html: response.description }} />

View File

@@ -0,0 +1,48 @@
.restResponseTable {
table-layout: fixed !important;
thead {
tr {
border-top: none;
th {
border: 0;
font-weight: normal;
}
th:first-child {
width: 25%;
}
th:nth-child(2) {
width: 75%;
}
}
}
tr:nth-child(2n) {
background: none !important;
}
td {
padding: 0.75rem 0.5rem !important;
border: 0 !important;
vertical-align: top;
width: 100%;
}
tbody {
tr td:first-child {
width: 30%;
font-weight: bold;
}
tr td:nth-child(2) {
width: 70%;
}
table tr td:not(:first-child) {
font-weight: normal;
}
}
}

View File

@@ -0,0 +1,45 @@
import cx from 'classnames'
import { CodeResponse } from './types'
import { useTranslation } from 'components/hooks/useTranslation'
import styles from './RestResponseTable.module.scss'
type Props = {
heading: string
responses: Array<CodeResponse>
}
export function RestResponseTable({ heading, responses }: Props) {
const { t } = useTranslation('products')
return (
<>
<h4>{heading}</h4>
<table className={cx(styles.restResponseTable)}>
<thead>
<tr className="text-left">
<th>{t('rest.reference.http_status_code')}</th>
<th>{t('rest.reference.description')}</th>
</tr>
</thead>
<tbody>
{responses.map((response, index) => {
return (
<tr key={`${response.description}-${index}}`}>
<td>
<code>{response.httpStatusCode}</code>
</td>
<td>
{response.description ? (
<div dangerouslySetInnerHTML={{ __html: response.description }} />
) : (
response.httpStatusMessage
)}
</td>
</tr>
)
})}
</tbody>
</table>
</>
)
}

View File

@@ -8,7 +8,6 @@ import { Link } from 'components/Link'
import { ProductTreeNode, useMainContext } from 'components/context/MainContext'
import { AllProductsLink } from 'components/sidebar/AllProductsLink'
import { EventType, sendEvent } from 'components/lib/events'
import styles from './SidebarProduct.module.scss'
export const SidebarProduct = () => {
@@ -154,7 +153,7 @@ const CollapsibleSection = (props: SectionProps) => {
<details open={defaultOpen} onToggle={onToggle} className="details-reset">
<summary className="outline-none">
<div className="d-flex flex-justify-between">
<div className="pl-4 pr-1 py-2 f6 text-uppercase d-block flex-auto mr-3 color-fg-default no-underline text-bold">
<div className="pl-4 pr-1 py-2 f5 d-block flex-auto mr-3 color-fg-default no-underline text-bold">
{title}
</div>
<span style={{ marginTop: 7 }} className="flex-shrink-0 pr-3">

View File

@@ -11,26 +11,28 @@ export const ScrollButton = ({ className, ariaLabel }: ScrollButtonPropsT) => {
const [show, setShow] = useState(false)
useEffect(() => {
// show scroll button only when view is scrolled down
const onScroll = function () {
const y = document.documentElement.scrollTop // get the height from page top
if (y < 100) {
setShow(false)
} else if (y >= 100) {
setShow(true)
}
}
window.addEventListener('scroll', onScroll)
// We cannot determine document.documentElement.scrollTop height because we set the height: 100vh and set overflow to auto to keep the header sticky
// That means window.scrollTop height is always 0
// Using IntersectionObserver we can determine if the h1 header is in view or not. If not, we show the scroll to top button, if so, we hide it
const observer = new IntersectionObserver(
function (entries) {
if (entries[0].isIntersecting === false) {
setShow(true)
} else {
setShow(false)
}
},
{ threshold: [0] }
)
observer.observe(document.getElementsByTagName('h1')[0])
return () => {
window.removeEventListener('scroll', onScroll)
observer.disconnect()
}
}, [])
const onClick = () => {
window.scrollTo(0, 0)
const topOfPage = document.getElementById('github-logo')
if (topOfPage) topOfPage.focus()
document?.getElementById('github-logo')?.focus()
document?.getElementById('main-content')?.scrollIntoView()
}
return (

View File

@@ -228,7 +228,7 @@ defaultPlatform: linux
### `defaultTool`
- Purpose: Override the initial tool selection for a page, where tool refers to the application the reader is using to work with GitHub (such as GitHub.com's web UI, the GitHub CLI, or GitHub Desktop) or the GitHub APIs (such as cURL or the GitHub CLI). For more information about the tool selector, see [Markup reference for GitHub Docs](../contributing/content-markup-reference.md#tool-tags). If this frontmatter is omitted, then the tool-specific content matching the GitHub web UI is shown by default. If a user has indicated a tool preference (by clicking on a tool tab), then the user's preference will be applied instead of the default value.
- Type: `String`, one of: `webui`, `cli`, `desktop`, `curl`, `codespaces`, `vscode`, `importer_cli`, `graphql`.
- Type: `String`, one of: `webui`, `cli`, `desktop`, `curl`, `codespaces`, `vscode`, `importer_cli`, `graphql`, `powershell`, `bash`.
- Optional.
```yaml

View File

@@ -134,7 +134,7 @@ Email notifications from {% data variables.product.product_location %} contain t
| `To` field | This field connects directly to the thread.{% ifversion not ghae %} If you reply to the email, you'll add a new comment to the conversation.{% endif %} |
| `Cc` address | {% data variables.product.product_name %} will `Cc` you if you're subscribed to a conversation. The second `Cc` email address matches the notification reason. The suffix for these notification reasons is {% data variables.notifications.cc_address %}. The possible notification reasons are: <ul><li>`assign`: You were assigned to an issue or pull request.</li><li>`author`: You created an issue or pull request.</li><li>`ci_activity`: A {% data variables.product.prodname_actions %} workflow run that you triggered was completed.</li><li>`comment`: You commented on an issue or pull request.</li><li>`manual`: There was an update to an issue or pull request you manually subscribed to.</li><li>`mention`: You were mentioned on an issue or pull request.</li><li>`push`: Someone committed to a pull request you're subscribed to.</li><li>`review_requested`: You or a team you're a member of was requested to review a pull request.</li>{% ifversion fpt or ghes or ghae-issue-4864 or ghec %}<li>`security_alert`: {% data variables.product.prodname_dotcom %} detected a vulnerability in a repository you receive alerts for.</li>{% endif %}<li>`state_change`: An issue or pull request you're subscribed to was either closed or opened.</li><li>`subscribed`: There was an update in a repository you're watching.</li><li>`team_mention`: A team you belong to was mentioned on an issue or pull request.</li><li>`your_activity`: You opened, commented on, or closed an issue or pull request.</li></ul> |
| `mailing list` field | This field identifies the name of the repository and its owner. The format of this address is always `<repository name>.<repository owner>.{% data variables.command_line.backticks %}`. |{% ifversion fpt or ghes or ghae-issue-4864 or ghec %}
| `X-GitHub-Severity` field | {% data reusables.repositories.security-alerts-x-github-severity %} The possible severity levels are:<ul><li>`low`</li><li>`moderate`</li><li>`high`</li><li>`critical`</li></ul>For more information, see "[About alerts for vulnerable dependencies](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)." |{% endif %}
| `X-GitHub-Severity` field | {% data reusables.repositories.security-alerts-x-github-severity %} The possible severity levels are:<ul><li>`low`</li><li>`moderate`</li><li>`high`</li><li>`critical`</li></ul>For more information, see "[About {% data variables.product.prodname_dependabot_alerts %}](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)." |{% endif %}
## Choosing your notification settings

View File

@@ -173,7 +173,7 @@ If you use {% data variables.product.prodname_dependabot %} to keep your depende
- `reason:security_alert` to show notifications for {% data variables.product.prodname_dependabot_alerts %} and security update pull requests.
- `author:app/dependabot` to show notifications generated by {% data variables.product.prodname_dependabot %}. This includes {% data variables.product.prodname_dependabot_alerts %}, security update pull requests, and version update pull requests.
For more information about {% data variables.product.prodname_dependabot %}, see "[About managing vulnerable dependencies](/github/managing-security-vulnerabilities/about-managing-vulnerable-dependencies)."
For more information about {% data variables.product.prodname_dependabot %}, see "[About {% data variables.product.prodname_dependabot_alerts %}](/code-security/supply-chain-security/about-alerts-for-vulnerable-dependencies)."
{% endif %}
{% ifversion ghes < 3.3 or ghae-issue-4864 %}
@@ -182,7 +182,7 @@ If you use {% data variables.product.prodname_dependabot %} to tell you about vu
- `is:repository_vulnerability_alert`
- `reason:security_alert`
For more information about {% data variables.product.prodname_dependabot %}, see "[About alerts for vulnerable dependencies](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)."
For more information about {% data variables.product.prodname_dependabot %}, see "[About {% data variables.product.prodname_dependabot_alerts %}](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)."
{% endif %}
{% endif %}

View File

@@ -49,5 +49,5 @@ For an overview of repository-level security, see "[Securing your repository](/c
## Further reading
- "[About the dependency graph](/github/visualizing-repository-data-with-graphs/about-the-dependency-graph)"
- "[Managing vulnerabilities in your project's dependencies](/github/managing-security-vulnerabilities/managing-vulnerabilities-in-your-projects-dependencies)"
- "[About {% data variables.product.prodname_dependabot_alerts %}](/code-security/supply-chain-security/about-alerts-for-vulnerable-dependencies)"
- "[Keeping your dependencies updated automatically](/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically)"

View File

@@ -45,7 +45,7 @@ The repository owner has full control of the repository. In addition to the acti
| Customize the repository's social media preview | "[Customizing your repository's social media preview](/github/administering-a-repository/customizing-your-repositorys-social-media-preview)" |
| Create a template from the repository | "[Creating a template repository](/github/creating-cloning-and-archiving-repositories/creating-a-template-repository)" |{% ifversion fpt or ghes or ghae-issue-4864 or ghec %}
| Control access to {% data variables.product.prodname_dependabot_alerts %} for vulnerable dependencies | "[Managing security and analysis settings for your repository](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-security-and-analysis-settings-for-your-repository#granting-access-to-security-alerts)" |{% endif %}{% ifversion fpt or ghec %}
| Dismiss {% data variables.product.prodname_dependabot_alerts %} in the repository | "[Viewing and updating vulnerable dependencies in your repository](/github/managing-security-vulnerabilities/viewing-and-updating-vulnerable-dependencies-in-your-repository)" |
| Dismiss {% data variables.product.prodname_dependabot_alerts %} in the repository | "[Viewing {% data variables.product.prodname_dependabot_alerts %} for vulnerable dependencies](/github/managing-security-vulnerabilities/viewing-and-updating-vulnerable-dependencies-in-your-repository)" |
| Manage data use for a private repository | "[Managing data use settings for your private repository](/get-started/privacy-on-github/managing-data-use-settings-for-your-private-repository)"|{% endif %}
| Define code owners for the repository | "[About code owners](/github/creating-cloning-and-archiving-repositories/about-code-owners)" |
| Archive the repository | "[Archiving repositories](/repositories/archiving-a-github-repository/archiving-repositories)" |{% ifversion fpt or ghec %}

View File

@@ -141,7 +141,7 @@ In your `hello-world-javascript-action` directory, create a `README.md` file tha
- Environment variables the action uses.
- An example of how to use your action in a workflow.
```markdown
```markdown{:copy}
# Hello world javascript action
This action prints "Hello World" or "Hello" + the name of a person to greet to the log.
@@ -198,7 +198,7 @@ Checking in your `node_modules` directory can cause problems. As an alternative,
`rm -rf node_modules/*`
1. From your terminal, commit the updates to your `action.yml`, `dist/index.js`, and `node_modules` files.
```shell
```shell{:copy}
git add action.yml dist/index.js node_modules/*
git commit -m "Use vercel/ncc"
git tag -a -m "My first action release" v1.1

View File

@@ -132,6 +132,8 @@ jobs:
# ...deployment-specific steps
```
For guidance on writing deployment-specific steps, see "[Finding deployment examples](#finding-deployment-examples)."
## Viewing deployment history
When a {% data variables.product.prodname_actions %} workflow deploys to an environment, the environment is displayed on the main page of the repository. For more information about viewing deployments to environments, see "[Viewing deployment history](/developers/overview/viewing-deployment-history)."
@@ -164,7 +166,7 @@ You can use a status badge to display the status of your deployment workflow. {%
For more information, see "[Adding a workflow status badge](/actions/managing-workflow-runs/adding-a-workflow-status-badge)."
## Next steps
## Finding deployment examples
This article demonstrated features of {% data variables.product.prodname_actions %} that you can add to your deployment workflows.

View File

@@ -4,7 +4,8 @@ shortTitle: About deployments
intro: 'Learn how deployments can run with {% data variables.product.prodname_actions %} workflows.'
versions:
fpt: '*'
ghae: issue-4856
ghes: '*'
ghae: '*'
ghec: '*'
children:
- /about-continuous-deployment

View File

@@ -4,7 +4,8 @@ shortTitle: Deploying Xcode applications
intro: 'You can sign Xcode apps within your continuous integration (CI) workflow by installing an Apple code signing certificate on {% data variables.product.prodname_actions %} runners.'
versions:
fpt: '*'
ghae: issue-4856
ghes: '*'
ghae: '*'
ghec: '*'
children:
- /installing-an-apple-certificate-on-macos-runners-for-xcode-development

View File

@@ -4,7 +4,8 @@ shortTitle: Managing your deployments
intro: You can review the past activity of your deployments.
versions:
fpt: '*'
ghae: issue-4856
ghes: '*'
ghae: '*'
ghec: '*'
children:
- /viewing-deployment-history

View File

@@ -4,7 +4,8 @@ shortTitle: Targeting different environments
intro: You can configure environments with protection rules and secrets. A workflow job that references an environment must follow any protection rules for the environment before running or accessing the environment's secrets.
versions:
fpt: '*'
ghae: issue-4856
ghes: '*'
ghae: '*'
ghec: '*'
children:
- /using-environments-for-deployment

View File

@@ -51,7 +51,8 @@ For more information about installing and using self-hosted runners, see "[Addin
- Can use cloud services or local machines that you already pay for.
- Are customizable to your hardware, operating system, software, and security requirements.
- Don't need to have a clean instance for every job execution.
- Are free to use with {% data variables.product.prodname_actions %}, but you are responsible for the cost of maintaining your runner machines.
- Are free to use with {% data variables.product.prodname_actions %}, but you are responsible for the cost of maintaining your runner machines.{% ifversion ghec or ghes or ghae %}
- Can be organized into groups to restrict access to specific {% if restrict-groups-to-workflows %}workflows, {% endif %}organizations and repositories. For more information, see "[Managing access to self-hosted runners using groups](/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)."{% endif %}
## Requirements for self-hosted runner machines
@@ -132,16 +133,30 @@ Some extra configuration might be required to use actions from {% data variables
## Communication between self-hosted runners and {% data variables.product.product_name %}
The self-hosted runner polls {% data variables.product.product_name %} to retrieve application updates and to check if any jobs are queued for processing. The self-hosted runner uses an HTTPS _long poll_ that opens a connection to {% data variables.product.product_name %} for 50 seconds, and if no response is received, it then times out and creates a new long poll. The application must be running on the machine to accept and run {% data variables.product.prodname_actions %} jobs.
The self-hosted runner connects to {% data variables.product.product_name %} to receive job assignments and to download new versions of the runner application. The self-hosted runner uses an {% ifversion ghes %}HTTP(S){% else %}HTTPS{% endif %} _long poll_ that opens a connection to {% data variables.product.product_name %} for 50 seconds, and if no response is received, it then times out and creates a new long poll. The application must be running on the machine to accept and run {% data variables.product.prodname_actions %} jobs.
{% data reusables.actions.self-hosted-runner-ports-protocols %}
{% data reusables.actions.self-hosted-runner-communications-for-ghae %}
{% ifversion fpt or ghec %}
Since the self-hosted runner opens a connection to {% data variables.product.product_location %}, you do not need to allow {% data variables.product.prodname_dotcom %} to make inbound connections to your self-hosted runner.
{% elsif ghes or ghae %}
Only an outbound connection from the runner to {% data variables.product.product_location %} is required. There is no need for an inbound connection from {% data variables.product.product_location %} to the runner.
{%- endif %}
{% ifversion ghes %}
{% data variables.product.product_name %} must accept inbound connections from your runners over {% ifversion ghes %}HTTP(S){% else %}HTTPS{% endif %} at {% data variables.product.product_location %}'s hostname and API subdomain, and your runners must allow outbound connections over {% ifversion ghes %}HTTP(S){% else %}HTTPS{% endif %} to {% data variables.product.product_location %}'s hostname and API subdomain.
{% elsif ghae %}
You must ensure that the self-hosted runner has appropriate network access to communicate with your {% data variables.product.product_name %} URL and its subdomains. For example, if your subdomain for {% data variables.product.product_name %} is `octoghae`, then you will need to allow the self-hosted runner to access `octoghae.githubenterprise.com`, `api.octoghae.githubenterprise.com`, and `codeload.octoghae.githubenterprise.com`.
If you use an IP address allow list, you must add your self-hosted runner's IP address to the allow list. For more information, see "[Managing allowed IP addresses for your organization](/organizations/keeping-your-organization-secure/managing-allowed-ip-addresses-for-your-organization#using-github-actions-with-an-ip-allow-list)."
{% endif %}
{% ifversion fpt or ghec %}
Since the self-hosted runner opens a connection to {% data variables.product.prodname_dotcom %}, you do not need to allow {% data variables.product.prodname_dotcom %} to make inbound connections to your self-hosted runner.
You must ensure that the machine has the appropriate network access to communicate with the {% data variables.product.prodname_dotcom %} hosts listed below. Some hosts are required for essential runner operations, while other hosts are only required for certain functionality.
{% note %}
@@ -186,29 +201,29 @@ github-registry-files.githubusercontent.com
In addition, your workflow may require access to other network resources. For example, if your workflow installs packages or publishes containers to {% data variables.product.prodname_dotcom %} Packages, then the runner will also require access to those network endpoints.
If you use an IP address allow list for your {% data variables.product.prodname_dotcom %} organization or enterprise account, you must add your self-hosted runner's IP address to the allow list. For more information, see "[Managing allowed IP addresses for your organization](/organizations/keeping-your-organization-secure/managing-allowed-ip-addresses-for-your-organization#using-github-actions-with-an-ip-allow-list)" or "[Enforcing policies for security settings in your enterprise](/admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-security-settings-in-your-enterprise)".
If you use an IP address allow list for your {% data variables.product.prodname_dotcom %} organization or enterprise account, you must add your self-hosted runner's IP address to the allow list. For more information, see "[Managing allowed IP addresses for your organization](/{% ifversion fpt %}enterprise-cloud@latest/{% endif %}organizations/keeping-your-organization-secure/managing-allowed-ip-addresses-for-your-organization#using-github-actions-with-an-ip-allow-list)" or "[Enforcing policies for security settings in your enterprise](/{% ifversion fpt %}enterprise-cloud@latest/{% endif %}admin/policies/enforcing-policies-for-your-enterprise/enforcing-policies-for-security-settings-in-your-enterprise){% ifversion fpt %}" in the {% data variables.product.prodname_ghe_cloud %} documentation.{% else %}."{% endif %}
{% else %}
You must ensure that the machine has the appropriate network access to communicate with {% data variables.product.product_location %}.{% ifversion ghes %} Self-hosted runners connect directly to {% data variables.product.product_location %} and do not require any external internet access in order to function. As a result, you can use network routing to direct communication between the self-hosted runner and {% data variables.product.product_location %}. For example, you can assign a private IP address to your self-hosted runner and configure routing to send traffic to {% data variables.product.product_location %}, with no need for traffic to traverse a public network.{% endif %}
{% ifversion ghes %}Self-hosted runners do not require any external internet access in order to function. As a result, you can use network routing to direct communication between the self-hosted runner and {% data variables.product.product_location %}. For example, you can assign a private IP address to your self-hosted runner and configure routing to send traffic to {% data variables.product.product_location %}, with no need for traffic to traverse a public network.{% endif %}
{% endif %}
{% ifversion ghae %}
If you use an IP address allow list for your {% data variables.product.prodname_dotcom %} organization or enterprise account, you must add your self-hosted runner's IP address to the allow list. For more information, see "[Managing allowed IP addresses for your organization](/organizations/keeping-your-organization-secure/managing-allowed-ip-addresses-for-your-organization#using-github-actions-with-an-ip-allow-list)."
{% endif %}
You can also use self-hosted runners with a proxy server. For more information, see "[Using a proxy server with self-hosted runners](/actions/automating-your-workflow-with-github-actions/using-a-proxy-server-with-self-hosted-runners)."
{% ifversion ghes %}
For more information about troubleshooting common network connectivity issues, see "[Monitoring and troubleshooting self-hosted runners](/actions/hosting-your-own-runners/monitoring-and-troubleshooting-self-hosted-runners#troubleshooting-network-connectivity)."
{% ifversion ghes or ghae %}
## Communication between self-hosted runners and {% data variables.product.prodname_dotcom_the_website %}
Self-hosted runners do not need to connect to {% data variables.product.prodname_dotcom_the_website %} unless you have [enabled automatic access to {% data variables.product.prodname_dotcom_the_website %} actions using {% data variables.product.prodname_github_connect %}](/admin/github-actions/managing-access-to-actions-from-githubcom/enabling-automatic-access-to-githubcom-actions-using-github-connect).
Self-hosted runners do not need to connect to {% data variables.product.prodname_dotcom_the_website %} unless you have enabled automatic access to {% data variables.product.prodname_dotcom_the_website %} actions for {% data variables.product.product_location %}. For more information, see "[About using actions in your enterprise](/admin/github-actions/managing-access-to-actions-from-githubcom/about-using-actions-in-your-enterprise)."
If you have enabled automatic access to {% data variables.product.prodname_dotcom_the_website %} actions using {% data variables.product.prodname_github_connect %}, then the self-hosted runner will connect directly to {% data variables.product.prodname_dotcom_the_website %} to download actions. You must ensure that the machine has the appropriate network access to communicate with the {% data variables.product.prodname_dotcom %} URLs listed below.
{% note %}
**Note:** Some of the domains listed below are configured using `CNAME` records. Some firewalls might require you to add rules recursively for all `CNAME` records. Note that the `CNAME` records might change in the future, and that only the domains listed below will remain constant.
{% endnote %}
If you have enabled automatic access to {% data variables.product.prodname_dotcom_the_website %} actions, then the self-hosted runner will connect directly to {% data variables.product.prodname_dotcom_the_website %} to download actions. You must ensure that the machine has the appropriate network access to communicate with the {% data variables.product.prodname_dotcom %} URLs listed below.
```
github.com
@@ -216,6 +231,13 @@ api.github.com
codeload.github.com
```
{% note %}
**Note:** Some of the domains listed above are configured using `CNAME` records. Some firewalls might require you to add rules recursively for all `CNAME` records. Note that the `CNAME` records might change in the future, and that only the domains listed above will remain constant.
{% endnote %}
{% endif %}
## Self-hosted runner security

View File

@@ -9,11 +9,12 @@ versions:
ghae: '*'
ghec: '*'
type: tutorial
shortTitle: Manage runner groups
shortTitle: Manage access to runners
---
{% data reusables.actions.enterprise-beta %}
{% data reusables.actions.enterprise-github-hosted-runners %}
{% data reusables.actions.restrict-runner-workflow-beta %}
## About self-hosted runner groups
@@ -30,9 +31,9 @@ If you use {% data variables.product.prodname_ghe_cloud %}, you can create addit
{% endif %}
{% ifversion ghec or ghes or ghae %}
Self-hosted runner groups are used to control access to self-hosted runners at the organization and enterprise level. Enterprise admins can configure access policies that control which organizations in an enterprise have access to the runner group. Organization admins can configure access policies that control which repositories in an organization have access to the runner group.
Self-hosted runner groups are used to control access to self-hosted runners at the organization and enterprise level. Enterprise owners can configure access policies that control which organizations {% if restrict-groups-to-workflows %}and workflows {% endif %}in an enterprise have access to the runner group. Organization owners can configure access policies that control which repositories{% if restrict-groups-to-workflows %} and workflows{% endif %} in an organization have access to the runner group.
When an enterprise admin grants an organization access to a runner group, organization admins can see the runner group listed in the organization's self-hosted runner settings. The organizations admins can then assign additional granular repository access policies to the enterprise runner group.
When an enterprise owner grants an organization access to a runner group, organization owners can see the runner group listed in the organization's self-hosted runner settings. The organization owners can then assign additional granular repository{% if restrict-groups-to-workflows %} and workflow{% endif %} access policies to the enterprise runner group.
When new runners are created, they are automatically assigned to the default group. Runners can only be in one group at a time. You can move runners from the default group to another group. For more information, see "[Moving a self-hosted runner to a group](#moving-a-self-hosted-runner-to-a-group)."
@@ -42,13 +43,14 @@ All organizations have a single default self-hosted runner group. Organizations
Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can move a runner from the default group to any group you create.
When creating a group, you must choose a policy that defines which repositories have access to the runner group.
When creating a group, you must choose a policy that defines which repositories{% if restrict-groups-to-workflows %} and workflows{% endif %} have access to the runner group.
{% ifversion ghec or ghes > 3.3 or ghae-issue-5091 %}
{% data reusables.organizations.navigate-to-org %}
{% data reusables.organizations.org_settings %}
{% data reusables.actions.settings-sidebar-actions-runner-groups %}
1. In the "Runner groups" section, click **New runner group**.
1. Enter a name for your runner group.
{% data reusables.actions.runner-group-assign-policy-repo %}
{% warning %}
@@ -58,6 +60,7 @@ When creating a group, you must choose a policy that defines which repositories
For more information, see "[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)."
{% endwarning %}
{% data reusables.actions.runner-group-assign-policy-workflow %}{%- if restrict-groups-to-workflows %} Organization-owned runner groups cannot access workflows from a different organization in the enterprise; instead, you must create an enterprise-owned runner group.{% endif %}
{% data reusables.actions.self-hosted-runner-create-group %}
{% elsif ghae or ghes < 3.4 %}
{% data reusables.organizations.navigate-to-org %}
@@ -88,7 +91,7 @@ When creating a group, you must choose a policy that defines which repositories
## Creating a self-hosted runner group for an enterprise
Enterprises can add their self-hosted runners to groups for access management. Enterprises can create groups of self-hosted runners that are accessible to specific organizations in the enterprise account. Organization admins can then assign additional granular repository access policies to the enterprise runner groups. For information about how to create a self-hosted runner group with the REST API, see the enterprise endpoints in the [{% data variables.product.prodname_actions %} REST API](/rest/reference/actions#self-hosted-runner-groups).
Enterprises can add their self-hosted runners to groups for access management. Enterprises can create groups of self-hosted runners that are accessible to specific organizations in the enterprise account{% if restrict-groups-to-workflows %} or to specific workflows{% endif %}. Organization owners can then assign additional granular repository{% if restrict-groups-to-workflows %} or workflow{% endif %} access policies to the enterprise runner groups. For information about how to create a self-hosted runner group with the REST API, see the enterprise endpoints in the [{% data variables.product.prodname_actions %} REST API](/rest/reference/actions#self-hosted-runner-groups).
Self-hosted runners are automatically assigned to the default group when created, and can only be members of one group at a time. You can assign the runner to a specific group during the registration process, or you can later move the runner from the default group to a custom group.
@@ -115,17 +118,21 @@ When creating a group, you must choose a policy that defines which organizations
![Add runner group options](/assets/images/help/settings/actions-enterprise-account-add-runner-group-options-ae.png)
{%- endif %}
{% data reusables.actions.runner-group-assign-policy-workflow %}
1. Click **Save group** to create the group and apply the policy.
{% endif %}
## Changing the access policy of a self-hosted runner group
You can update the access policy of a runner group, or rename a runner group.
For runner groups in an enterprise, you can change what organizations in the enterprise can access a runner group{% if restrict-groups-to-workflows %} or restrict what workflows a runner group can run{% endif %}. For runner groups in an organization, you can change what repositories in the organization can access a runner group{% if restrict-groups-to-workflows %} or restrict what workflows a runner group can run{% endif %}.
### Changing what organizations or repositories can access a runner group
{% ifversion fpt or ghec or ghes > 3.3 or ghae-issue-5091 %}
{% data reusables.actions.self-hosted-runner-groups-navigate-to-repo-org-enterprise %}
{% data reusables.actions.settings-sidebar-actions-runner-groups-selection %}
1. Modify the access options, or change the runner group name.
1. For runner groups in an enterprise, under **Organization access**, modify what organizations can access the runner group. For runner groups in an organization, under **Repository access**, modify what repositories can access the runner group.
{%- ifversion fpt or ghec or ghes %}
{% warning %}
@@ -142,6 +149,35 @@ You can update the access policy of a runner group, or rename a runner group.
{% data reusables.actions.self-hosted-runner-configure-runner-group-access %}
{% endif %}
{% if restrict-groups-to-workflows %}
### Changing what workflows can access a runner group
You can configure a self-hosted runner group to run either selected workflows or all workflows. For example, you might use this setting to protect secrets that are stored on self-hosted runners or to standardize deployment workflows by restricting a runner group to run only a specific reusable workflow. This setting cannot be overridden if you are configuring an organization's runner group that was shared by an enterprise.
{% data reusables.actions.self-hosted-runner-groups-navigate-to-repo-org-enterprise %}
{% data reusables.actions.settings-sidebar-actions-runner-groups-selection %}
1. Under **Workflow access**, select the dropdown menu and click **Selected workflows**.
1. Click {% octicon "gear" aria-label="the gear icon" %}.
1. Enter a comma-separated list of the workflows that can access the runner group. Use the full path, including the repository name and owner. Pin the workflow to a branch, tag, or full SHA. For example: `octo-org/octo-repo/.github/workflows/build.yml@v2, octo-org/octo-repo/.github/workflows/deploy.yml@d6dc6c96df4f32fa27b039f2084f576ed2c5c2a5, monalisa/octo-test/.github/workflows/test.yml@main`.
Only jobs directly defined within the selected workflows will have access to the runner group.
Organization-owned runner groups cannot access workflows from a different organization in the enterprise; instead, you must create an enterprise-owned runner group.
1. Click **Save**.
{% endif %}
## Changing the name of a runner group
{% ifversion fpt or ghec or ghes > 3.3 or ghae-issue-5091 %}
{% data reusables.actions.self-hosted-runner-groups-navigate-to-repo-org-enterprise %}
{% data reusables.actions.settings-sidebar-actions-runner-groups-selection %}
1. Change the runner group name.
{% elsif ghae or ghes < 3.4 %}
{% data reusables.actions.self-hosted-runner-configure-runner-group %}
1. Change the runner group name.
{% endif %}
{% ifversion ghec or ghes or ghae %}
## Automatically adding a self-hosted runner to a group

View File

@@ -32,7 +32,9 @@ shortTitle: Monitor & troubleshoot
* **Active**: The runner is currently executing a job.
* **Offline**: The runner is not connected to {% data variables.product.product_name %}. This could be because the machine is offline, the self-hosted runner application is not running on the machine, or the self-hosted runner application cannot communicate with {% data variables.product.product_name %}.
## Checking self-hosted runner network connectivity
## Troubleshooting network connectivity
### Checking self-hosted runner network connectivity
You can use the self-hosted runner application's `run` script with the `--check` parameter to check that a self-hosted runner can access all required network services on {% data variables.product.product_location %}.
@@ -65,6 +67,27 @@ The script tests each service, and outputs either a `PASS` or `FAIL` for each on
If you have any failing checks, you should also verify that your self-hosted runner machine meets all the communication requirements. For more information, see "[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners#communication-requirements)."
### Disabling TLS certificate verification
{% ifversion ghes %}
By default, the self-hosted runner application verifies the TLS certificate for {% data variables.product.product_name %}. If your {% data variables.product.product_name %} has a self-signed or internally-issued certificate, you may wish to disable TLS certificate verification for testing purposes.
{% else %}
By default, the self-hosted runner application verifies the TLS certificate for {% data variables.product.product_name %}. If you encounter network problems, you may wish to disable TLS certificate verification for testing purposes.
{% endif %}
To disable TLS certification verification in the self-hosted runner application, set the `GITHUB_ACTIONS_RUNNER_TLS_NO_VERIFY` environment variable to `1` before configuring and running the self-hosted runner application.
```shell
export GITHUB_ACTIONS_RUNNER_TLS_NO_VERIFY=1
./config.sh --url <em>https://github.com/octo-org/octo-repo</em> --token
./run.sh
```
{% warning %}
**Warning**: Disabling TLS verification is not recommended since TLS provides privacy and data integrity between the self-hosted runner application and {% data variables.product.product_name %}. We recommend that you install the {% data variables.product.product_name %} certificate in the operating system certificate store for your self-hosted runner. For guidance on how to install the {% data variables.product.product_name %} certificate, check with your operating system vendor.
{% endwarning %}
## Reviewing the self-hosted runner application log files
You can monitor the status of the self-hosted runner application and its activities. Log files are kept in the `_diag` directory where you installed the runner application, and a new log is generated each time the application is started. The filename begins with *Runner_*, and is followed by a UTC timestamp of when the application was started.

View File

@@ -76,7 +76,7 @@ The following table indicates where each context and special function can be use
| <code>concurrency</code> | <code>github, inputs</code> | |
| <code>env</code> | <code>github, secrets, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.concurrency</code> | <code>github, needs, strategy, matrix, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.container</code> | <code>github, needs, strategy, matrix, secrets, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.container</code> | <code>github, needs, strategy, matrix, env, secrets, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.container.credentials</code> | <code>github, needs, strategy, matrix, env, secrets, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.container.env.&lt;env_id&gt;</code> | <code>github, needs, strategy, matrix, job, runner, env, secrets, inputs</code> | |
| <code>jobs.&lt;job_id&gt;.continue-on-error</code> | <code>github, needs, strategy, matrix, inputs</code> | |
@@ -183,6 +183,7 @@ The `github` context contains information about the workflow run and the event t
| `github.action_path` | `string` | The path where an action is located. This property is only supported in composite actions. You can use this path to access files located in the same repository as the action. |
| `github.action_ref` | `string` | For a step executing an action, this is the ref of the action being executed. For example, `v2`. |
| `github.action_repository` | `string` | For a step executing an action, this is the owner and repository name of the action. For example, `actions/checkout`. |
| `github.action_status` | `string` | For a composite action, the current result of the composite action. |
| `github.actor` | `string` | The username of the user that initiated the workflow run. |
| `github.api_url` | `string` | The URL of the {% data variables.product.prodname_dotcom %} REST API. |
| `github.base_ref` | `string` | The `base_ref` or target branch of the pull request in a workflow run. This property is only available when the event that triggers a workflow run is either `pull_request` or `pull_request_target`. |
@@ -766,10 +767,14 @@ on:
deploy_target:
required: true
type: string
perform_deploy:
required: true
type: boolean
jobs:
deploy:
runs-on: ubuntu-latest
if: ${{ inputs.perform_deploy == 'true' }}
steps:
- name: Deploy build to target
run: deploy --build ${{ inputs.build_id }} --target ${{ inputs.deploy_target }}
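For context, a workflow that defines these inputs is invoked from another workflow with a `uses:` job and matching `with:` values. A minimal caller sketch follows; the repository path, ref, and input values are hypothetical, and `build_id` is assumed to be declared as a numeric input in the part of the example not shown here.

```yaml
on: workflow_dispatch

jobs:
  call-deploy:
    # Hypothetical path and ref to the reusable workflow defined above
    uses: octo-org/octo-repo/.github/workflows/deploy.yml@main
    with:
      build_id: 123456
      deploy_target: staging
      perform_deploy: true
```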

View File

@@ -156,7 +156,7 @@ We strongly recommend that actions use environment variables to access the files
| `GITHUB_RUN_ID` | {% data reusables.actions.run_id_description %} For example, `1658821493`. |
| `GITHUB_RUN_NUMBER` | {% data reusables.actions.run_number_description %} For example, `3`. |
| `GITHUB_SERVER_URL`| The URL of the {% data variables.product.product_name %} server. For example: `https://{% data variables.product.product_url %}`.
| `GITHUB_SHA` | The commit SHA that triggered the workflow. For example, `ffac537e6cbbf934b08745a378932722df287a53`. |
| `GITHUB_SHA` | The commit SHA that triggered the workflow. The value of this commit SHA depends on the event that triggered the workflow. For more information, see [Events that trigger workflows](/actions/using-workflows/events-that-trigger-workflows). For example, `ffac537e6cbbf934b08745a378932722df287a53`. |
| `GITHUB_WORKFLOW` | The name of the workflow. For example, `My test workflow`. If the workflow file doesn't specify a `name`, the value of this variable is the full path of the workflow file in the repository. |
| `GITHUB_WORKSPACE` | The default working directory on the runner for steps, and the default location of your repository when using the [`checkout`](https://github.com/actions/checkout) action. For example, `/home/runner/work/my-repo-name/my-repo-name`. |
{%- if actions-runner-arch-envvars %}

View File

@@ -1,6 +1,6 @@
---
title: Re-running workflows and jobs
intro: You can re-run a workflow run up to 30 days after its initial run.
intro: You can re-run a workflow run{% if re-run-jobs %}, all failed jobs in a workflow run, or specific jobs in a workflow run{% endif %} up to 30 days after its initial run.
permissions: People with write permissions to a repository can re-run workflows in the repository.
miniTocMaxHeadingLevel: 3
redirect_from:
@@ -15,9 +15,11 @@ versions:
{% data reusables.actions.enterprise-beta %}
{% data reusables.actions.enterprise-github-hosted-runners %}
## Re-running all the jobs in a workflow
## About re-running workflows and jobs
Re-running a workflow uses the same `GITHUB_SHA` (commit SHA) and `GITHUB_REF` (Git ref) of the original event that triggered the workflow run. You can re-run a workflow for up to 30 days after the initial run.
Re-running a workflow{% if re-run-jobs %} or jobs in a workflow{% endif %} uses the same `GITHUB_SHA` (commit SHA) and `GITHUB_REF` (Git ref) of the original event that triggered the workflow run. You can re-run a workflow{% if re-run-jobs %} or jobs in a workflow{% endif %} for up to 30 days after the initial run.
## Re-running all the jobs in a workflow
{% webui %}
@@ -26,7 +28,9 @@ Re-running a workflow uses the same `GITHUB_SHA` (commit SHA) and `GITHUB_REF` (
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
{% ifversion fpt or ghes > 3.2 or ghae-issue-4721 or ghec %}
1. In the upper-right corner of the workflow, use the **Re-run jobs** drop-down menu, and select **Re-run all jobs**
1. In the upper-right corner of the workflow, use the **Re-run jobs** drop-down menu, and select **Re-run all jobs**.
If no jobs failed, you will not see the **Re-run jobs** drop-down menu. Instead, click **Re-run all jobs**.
![Rerun checks drop-down menu](/assets/images/help/repository/rerun-checks-drop-down.png)
{% endif %}
{% ifversion ghes < 3.3 or ghae %}
@@ -54,8 +58,64 @@ gh run watch
{% endcli %}
{% if re-run-jobs %}
## Re-running failed jobs in a workflow
If any jobs in a workflow run failed, you can re-run just the jobs that failed. When you re-run failed jobs in a workflow, a new workflow run will start for all failed jobs and their dependents. Any outputs for any successful jobs in the previous workflow run will be used for the re-run. Any artifacts that were created in the initial run will be available in the re-run. Any environment protection rules that passed in the previous run will automatically pass in the re-run.
{% webui %}
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
1. In the upper-right corner of the workflow, use the **Re-run jobs** drop-down menu, and select **Re-run failed jobs**.
![Re-run failed jobs drop-down menu](/assets/images/help/repository/rerun-failed-jobs-drop-down.png)
{% endwebui %}
{% cli %}
To re-run failed jobs in a workflow run, use the `run rerun` subcommand with the `--failed` flag. Replace `run-id` with the ID of the run for which you want to re-run failed jobs. If you don't specify a `run-id`, {% data variables.product.prodname_cli %} returns an interactive menu for you to choose a recent failed run.
```shell
gh run rerun <em>run-id</em> --failed
```
{% endcli %}
## Re-running a specific job in a workflow
When you re-run a specific job in a workflow, a new workflow run will start for the job and any dependents. Any outputs for any other jobs in the previous workflow run will be used for the re-run. Any artifacts that were created in the initial run will be available in the re-run. Any environment protection rules that passed in the previous run will automatically pass in the re-run.
{% webui %}
{% data reusables.repositories.navigate-to-repo %}
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
1. Next to the job that you want to re-run, click {% octicon "sync" aria-label="The re-run icon" %}.
![Re-run selected job](/assets/images/help/repository/re-run-selected-job.png)
Alternatively, click on a job to view the log. In the log, click {% octicon "sync" aria-label="The re-run icon" %}.
![Re-run selected job](/assets/images/help/repository/re-run-single-job-from-log.png)
{% endwebui %}
{% cli %}
To re-run a specific job in a workflow run, use the `run rerun` subcommand with the `--job` flag. Replace `job-id` with the ID of the job that you want to re-run.
```shell
gh run rerun --job <em>job-id</em>
```
{% endcli %}
{% endif %}
{% ifversion fpt or ghes > 3.2 or ghae-issue-4721 or ghec %}
### Reviewing previous workflow runs
## Reviewing previous workflow runs
You can view the results from your previous attempts at running a workflow. You can also view previous workflow runs using the API. For more information, see ["Get a workflow run"](/rest/reference/actions#get-a-workflow-run).
@@ -63,8 +123,13 @@ You can view the results from your previous attempts at running a workflow. You
{% data reusables.repositories.actions-tab %}
{% data reusables.repositories.navigate-to-workflow %}
{% data reusables.repositories.view-run %}
{%- if re-run-jobs %}
1. Any previous run attempts are shown in the **Latest** drop-down menu.
![Previous run attempts](/assets/images/help/repository/previous-run-attempts.png)
{%- else %}
1. Any previous run attempts are shown in the left pane.
![Rerun workflow](/assets/images/help/settings/actions-review-workflow-rerun.png)
{%- endif %}
1. Click an entry to view its results.
{% endif %}

View File

@@ -63,6 +63,16 @@ You can download the log files from your workflow run. You can also download a w
![Download logs drop-down menu](/assets/images/help/repository/download-logs-drop-down-updated-2.png)
{% if re-run-jobs %}
{% note %}
**Note**: When you download the log archive for a workflow that was partially re-run, the archive only includes the jobs that were re-run. To get a complete set of logs for jobs that were run from a workflow, you must download the log archives for the previous run attempts that ran the other jobs.
{% endnote %}
{% endif %}
## Deleting logs
You can delete the log files from your workflow run. {% data reusables.repositories.permissions-statement-write %}

View File

@@ -131,7 +131,7 @@ The `build-push-action` options required for {% data variables.product.prodname_
{% ifversion fpt or ghec %}
{% data reusables.package_registry.publish-docker-image %}
The above workflow if triggered by a push to the "release" branch. It checks out the GitHub repository, and uses the `login-action` to log in to the {% data variables.product.prodname_container_registry %}. It then extracts labels and tags for the Docker image. Finally, it uses the `build-push-action` action to build the image and publish it on the {% data variables.product.prodname_container_registry %}.
The above workflow is triggered by a push to the "release" branch. It checks out the GitHub repository, and uses the `login-action` to log in to the {% data variables.product.prodname_container_registry %}. It then extracts labels and tags for the Docker image. Finally, it uses the `build-push-action` action to build the image and publish it on the {% data variables.product.prodname_container_registry %}.
{% else %}
```yaml{:copy}

View File

@@ -23,7 +23,7 @@ At the start of each workflow run, {% data variables.product.prodname_dotcom %}
When you enable {% data variables.product.prodname_actions %}, {% data variables.product.prodname_dotcom %} installs a {% data variables.product.prodname_github_app %} on your repository. The `GITHUB_TOKEN` secret is a {% data variables.product.prodname_github_app %} installation access token. You can use the installation access token to authenticate on behalf of the {% data variables.product.prodname_github_app %} installed on your repository. The token's permissions are limited to the repository that contains your workflow. For more information, see "[Permissions for the `GITHUB_TOKEN`](#permissions-for-the-github_token)."
Before each job begins, {% data variables.product.prodname_dotcom %} fetches an installation access token for the job. The token expires when the job is finished.
Before each job begins, {% data variables.product.prodname_dotcom %} fetches an installation access token for the job. {% data reusables.actions.github-token-expiration %}
The token is also available in the `github.token` context. For more information, see "[Contexts](/actions/learn-github-actions/contexts#github-context)."
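As a minimal sketch (not part of the original article) of how a job can use this token, the following step reads the token from the `secrets` context and calls the REST API for the repository that contains the workflow:

{% raw %}
```yaml
jobs:
  read-repo-metadata:
    runs-on: ubuntu-latest
    steps:
      - name: Call the REST API with the job's installation access token
        run: |
          curl --fail \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            -H "Accept: application/vnd.github.v3+json" \
            "${{ github.api_url }}/repos/${{ github.repository }}"
```
{% endraw %}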

View File

@@ -227,6 +227,10 @@ steps:
```
{% endraw %}
Secrets cannot be directly referenced in `if:` conditionals. Instead, consider setting secrets as job-level environment variables, then referencing the environment variables to conditionally run steps in the job. For more information, see "[Context availability](/actions/learn-github-actions/contexts#context-availability)" and [`jobs.<job_id>.steps[*].if`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsif).
If a secret has not been set, the return value of an expression referencing the secret (such as {% raw %}`${{ secrets.SuperSecret }}`{% endraw %} in the example) will be an empty string.
Avoid passing secrets between processes from the command line, whenever possible. Command-line processes may be visible to other users (using the `ps` command) or captured by [security audit events](https://docs.microsoft.com/windows-server/identity/ad-ds/manage/component-updates/command-line-process-auditing). To help protect secrets, consider using environment variables, `STDIN`, or other mechanisms supported by the target process.
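For instance, here is a rough sketch of the environment-variable approach; the secret name and tool are illustrative, not taken from this article:

{% raw %}
```yaml
steps:
  - name: Use a secret without placing it on the command line
    env:
      SUPER_SECRET: ${{ secrets.SuperSecret }}
    run: |
      # The hypothetical my-tool program reads the SUPER_SECRET environment variable
      # itself, so the value never appears in the process's argument list.
      ./my-tool upload
```
{% endraw %}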
If you must pass secrets within a command line, then enclose them within the proper quoting rules. Secrets often contain special characters that may unintentionally affect your shell. To escape these special characters, use quoting with your environment variables. For example:

View File

@@ -265,7 +265,7 @@ This list describes the recommended approaches for accessing repository data wit
{% ifversion fpt or ghec %}As a result, self-hosted runners should almost [never be used for public repositories](/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories) on {% data variables.product.product_name %}, because any user can open pull requests against the repository and compromise the environment. Similarly, be{% elsif ghes or ghae %}Be{% endif %} cautious when using self-hosted runners on private or internal repositories, as anyone who can fork the repository and open a pull request (generally those with read access to the repository) is able to compromise the self-hosted runner environment, including gaining access to secrets and the `GITHUB_TOKEN`, which{% ifversion fpt or ghes > 3.1 or ghae or ghec %}, depending on its settings, can grant {% else %} grants {% endif %}write access to the repository. Although workflows can control access to environment secrets by using environments and required reviews, these workflows are not run in an isolated environment and are still susceptible to the same risks when run on a self-hosted runner.
When a self-hosted runner is defined at the organization or enterprise level, {% data variables.product.product_name %} can schedule workflows from multiple repositories onto the same runner. Consequently, a security compromise of these environments can result in a wide impact. To help reduce the scope of a compromise, you can create boundaries by organizing your self-hosted runners into separate groups. For more information, see "[Managing access to self-hosted runners using groups](/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)."
When a self-hosted runner is defined at the organization or enterprise level, {% data variables.product.product_name %} can schedule workflows from multiple repositories onto the same runner. Consequently, a security compromise of these environments can result in a wide impact. To help reduce the scope of a compromise, you can create boundaries by organizing your self-hosted runners into separate groups. You can restrict what {% if restrict-groups-to-workflows %}workflows, {% endif %}organizations and repositories can access runner groups. For more information, see "[Managing access to self-hosted runners using groups](/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)."
You should also consider the environment of the self-hosted runner machines:
- What sensitive information resides on the machine configured as a self-hosted runner? For example, private SSH keys, API access tokens, among others.

View File

@@ -234,3 +234,11 @@ For example, if a pull request contains a `feature` branch (the current scope) a
## Usage limits and eviction policy
{% data variables.product.prodname_dotcom %} will remove any cache entries that have not been accessed in over 7 days. There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited to 10 GB. If you exceed this limit, {% data variables.product.prodname_dotcom %} will save your cache but will begin evicting caches until the total size is less than 10 GB.
{% if actions-cache-management %}
## Managing caches
You can use the {% data variables.product.product_name %} REST API to manage your caches. At present, you can use the API to see your cache usage, with more functionality expected in future updates. For more information, see the "[Actions](/rest/reference/actions#cache)" REST API documentation.
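For example, here is a rough sketch of querying cache usage from a workflow step; this assumes the `GET /repos/{owner}/{repo}/actions/cache/usage` endpoint and the `gh` CLI are available on the runner:

{% raw %}
```yaml
steps:
  - name: Show Actions cache usage for this repository
    env:
      GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    run: gh api repos/${{ github.repository }}/actions/cache/usage
```
{% endraw %}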
{% endif %}

View File

@@ -307,3 +307,5 @@ For information about using the REST API to query the audit log for an organizat
## Next steps
To continue learning about {% data variables.product.prodname_actions %}, see "[Events that trigger workflows](/actions/learn-github-actions/events-that-trigger-workflows)."
{% if restrict-groups-to-workflows %}You can standardize deployments by creating a self-hosted runner group that can only execute a specific reusable workflow. For more information, see "[Managing access to self-hosted runners using groups](/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups)."{% endif %}

View File

@@ -2,6 +2,7 @@
title: Workflow commands for GitHub Actions
shortTitle: Workflow commands
intro: You can use workflow commands when running shell commands in a workflow or in an action's code.
defaultTool: bash
redirect_from:
- /articles/development-tools-for-github-actions
- /github/automating-your-workflow-with-github-actions/development-tools-for-github-actions
@@ -26,10 +27,24 @@ Actions can communicate with the runner machine to set environment variables, ou
Most workflow commands use the `echo` command in a specific format, while others are invoked by writing to a file. For more information, see ["Environment files".](#environment-files)
``` bash
### Example
{% bash %}
```bash{:copy}
echo "::workflow-command parameter1={data},parameter2={data}::{command value}"
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::workflow-command parameter1={data},parameter2={data}::{command value}"
```
{% endpowershell %}
{% note %}
**Note:** Workflow command and parameter names are not case-sensitive.
@@ -46,14 +61,18 @@ echo "::workflow-command parameter1={data},parameter2={data}::{command value}"
The [actions/toolkit](https://github.com/actions/toolkit) includes a number of functions that can be executed as workflow commands. Use the `::` syntax to run the workflow commands within your YAML file; these commands are then sent to the runner over `stdout`. For example, instead of using code to set an output, as below:
```javascript
```javascript{:copy}
core.setOutput('SELECTED_COLOR', 'green');
```
### Example: Setting a value
You can use the `set-output` command in your workflow to set the same value:
{% bash %}
{% raw %}
``` yaml
```yaml{:copy}
- name: Set selected color
run: echo '::set-output name=SELECTED_COLOR::green'
id: random-color-generator
@@ -62,6 +81,22 @@ You can use the `set-output` command in your workflow to set the same value:
```
{% endraw %}
{% endbash %}
{% powershell %}
{% raw %}
```yaml{:copy}
- name: Set selected color
run: Write-Output "::set-output name=SELECTED_COLOR::green"
id: random-color-generator
- name: Get color
run: Write-Output "The selected color is ${{ steps.random-color-generator.outputs.SELECTED_COLOR }}"
```
{% endraw %}
{% endpowershell %}
The following table shows which toolkit functions are available within a workflow:
| Toolkit function | Equivalent workflow command |
@@ -85,186 +120,336 @@ The following table shows which toolkit functions are available within a workflo
## Setting an output parameter
```
Sets an action's output parameter.
```{:copy}
::set-output name={name}::{value}
```
Sets an action's output parameter.
Optionally, you can also declare output parameters in an action's metadata file. For more information, see "[Metadata syntax for {% data variables.product.prodname_actions %}](/articles/metadata-syntax-for-github-actions#outputs-for-docker-container-and-javascript-actions)."
### Example
### Example: Setting an output parameter
``` bash
{% bash %}
```bash{:copy}
echo "::set-output name=action_fruit::strawberry"
```
## Setting a debug message
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::set-output name=action_fruit::strawberry"
```
::debug::{message}
```
{% endpowershell %}
## Setting a debug message
Prints a debug message to the log. You must create a secret named `ACTIONS_STEP_DEBUG` with the value `true` to see the debug messages set by this command in the log. For more information, see "[Enabling debug logging](/actions/managing-workflow-runs/enabling-debug-logging)."
### Example
```{:copy}
::debug::{message}
```
``` bash
### Example: Setting a debug message
{% bash %}
```bash{:copy}
echo "::debug::Set the Octocat variable"
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::debug::Set the Octocat variable"
```
{% endpowershell %}
{% ifversion fpt or ghes > 3.2 or ghae-issue-4929 or ghec %}
## Setting a notice message
```
Creates a notice message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
```{:copy}
::notice file={name},line={line},endLine={endLine},title={title}::{message}
```
Creates a notice message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
{% data reusables.actions.message-parameters %}
### Example
### Example: Setting a notice message
``` bash
{% bash %}
```bash{:copy}
echo "::notice file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::notice file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endpowershell %}
{% endif %}
## Setting a warning message
```
Creates a warning message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
```{:copy}
::warning file={name},line={line},endLine={endLine},title={title}::{message}
```
Creates a warning message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
{% data reusables.actions.message-parameters %}
### Example
### Example: Setting a warning message
``` bash
{% bash %}
```bash{:copy}
echo "::warning file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::warning file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endpowershell %}
## Setting an error message
```
Creates an error message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
```{:copy}
::error file={name},line={line},endLine={endLine},title={title}::{message}
```
Creates an error message and prints the message to the log. {% data reusables.actions.message-annotation-explanation %}
{% data reusables.actions.message-parameters %}
### Example
### Example: Setting an error message
``` bash
{% bash %}
```bash{:copy}
echo "::error file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::error file=app.js,line=1,col=5,endColumn=7::Missing semicolon"
```
{% endpowershell %}
## Grouping log lines
```
Creates an expandable group in the log. To create a group, use the `group` command and specify a `title`. Anything you print to the log between the `group` and `endgroup` commands is nested inside an expandable entry in the log.
```{:copy}
::group::{title}
::endgroup::
```
Creates an expandable group in the log. To create a group, use the `group` command and specify a `title`. Anything you print to the log between the `group` and `endgroup` commands is nested inside an expandable entry in the log.
### Example: Grouping log lines
### Example
{% bash %}
```bash
echo "::group::My title"
echo "Inside group"
echo "::endgroup::"
```yaml{:copy}
jobs:
bash-example:
runs-on: ubuntu-latest
steps:
- name: Group of log lines
run: |
echo "::group::My title"
echo "Inside group"
echo "::endgroup::"
```
{% endbash %}
{% powershell %}
```yaml{:copy}
jobs:
powershell-example:
runs-on: windows-latest
steps:
- name: Group of log lines
run: |
Write-Output "::group::My title"
Write-Output "Inside group"
Write-Output "::endgroup::"
```
{% endpowershell %}
![Foldable group in workflow run log](/assets/images/actions-log-group.png)
## Masking a value in log
```
```{:copy}
::add-mask::{value}
```
Masking a value prevents a string or variable from being printed in the log. Each masked word separated by whitespace is replaced with the `*` character. You can use an environment variable or string for the mask's `value`.
### Example masking a string
### Example: Masking a string
When you print `"Mona The Octocat"` in the log, you'll see `"***"`.
```bash
{% bash %}
```bash{:copy}
echo "::add-mask::Mona The Octocat"
```
### Example masking an environment variable
{% endbash %}
{% powershell %}
```pwsh{:copy}
Write-Output "::add-mask::Mona The Octocat"
```
{% endpowershell %}
### Example: Masking an environment variable
When you print the variable `MY_NAME` or the value `"Mona The Octocat"` in the log, you'll see `"***"` instead of `"Mona The Octocat"`.
```bash
MY_NAME="Mona The Octocat"
echo "::add-mask::$MY_NAME"
{% bash %}
```yaml{:copy}
jobs:
bash-example:
runs-on: ubuntu-latest
env:
MY_NAME: "Mona The Octocat"
steps:
- name: bash-version
run: echo "::add-mask::$MY_NAME"
```
{% endbash %}
{% powershell %}
```yaml{:copy}
jobs:
powershell-example:
runs-on: windows-latest
env:
MY_NAME: "Mona The Octocat"
steps:
- name: powershell-version
run: Write-Output "::add-mask::$env:MY_NAME"
```
{% endpowershell %}
## Stopping and starting workflow commands
`::stop-commands::{endtoken}`
Stops processing any workflow commands. This special command allows you to log anything without accidentally running a workflow command. For example, you could stop logging to output an entire script that has comments.
```{:copy}
::stop-commands::{endtoken}
```
To stop the processing of workflow commands, pass a unique token to `stop-commands`. To resume processing workflow commands, pass the same token that you used to stop workflow commands.
{% warning %}
**Warning:** Make sure the token you're using is randomly generated and unique for each run. As demonstrated in the example below, you can generate a unique hash of your `github.token` for each run.
**Warning:** Make sure the token you're using is randomly generated and unique for each run.
{% endwarning %}
```
```{:copy}
::{endtoken}::
```
### Example stopping and starting workflow commands
### Example: Stopping and starting workflow commands
{% bash %}
{% raw %}
```yaml
```yaml{:copy}
jobs:
workflow-command-job:
runs-on: ubuntu-latest
steps:
- name: disable workflow commands
- name: Disable workflow commands
run: |
echo '::warning:: this is a warning'
echo "::stop-commands::`echo -n ${{ github.token }} | sha256sum | head -c 64`"
echo '::warning:: this will NOT be a warning'
echo "::`echo -n ${{ github.token }} | sha256sum | head -c 64`::"
echo '::warning:: this is a warning again'
echo '::warning:: This is a warning message, to demonstrate that commands are being processed.'
stopMarker=$(uuidgen)
echo "::stop-commands::$stopMarker"
echo '::warning:: This will NOT be rendered as a warning, because stop-commands has been invoked.'
echo "::$stopMarker::"
echo '::warning:: This is a warning again, because stop-commands has been turned off.'
```
{% endraw %}
{% endbash %}
{% powershell %}
{% raw %}
```yaml{:copy}
jobs:
workflow-command-job:
runs-on: windows-latest
steps:
- name: Disable workflow commands
run: |
Write-Output '::warning:: This is a warning message, to demonstrate that commands are being processed.'
$stopMarker = New-Guid
Write-Output "::stop-commands::$stopMarker"
Write-Output '::warning:: This will NOT be rendered as a warning, because stop-commands has been invoked.'
Write-Output "::$stopMarker::"
Write-Output '::warning:: This is a warning again, because stop-commands has been turned off.'
```
{% endraw %}
{% endpowershell %}
## Echoing command outputs
```
Enables or disables echoing of workflow commands. For example, if you use the `set-output` command in a workflow, it sets an output parameter but the workflow run's log does not show the command itself. If you enable command echoing, then the log shows the command, such as `::set-output name={name}::{value}`.
```{:copy}
::echo::on
::echo::off
```
Enables or disables echoing of workflow commands. For example, if you use the `set-output` command in a workflow, it sets an output parameter but the workflow run's log does not show the command itself. If you enable command echoing, then the log shows the command, such as `::set-output name={name}::{value}`.
Command echoing is disabled by default. However, a workflow command is echoed if there are any errors processing the command.
The `add-mask`, `debug`, `warning`, and `error` commands do not support echoing because their outputs are already echoed to the log.
You can also enable command echoing globally by turning on step debug logging using the `ACTIONS_STEP_DEBUG` secret. For more information, see "[Enabling debug logging](/actions/managing-workflow-runs/enabling-debug-logging)". In contrast, the `echo` workflow command lets you enable command echoing at a more granular level, rather than enabling it for every workflow in a repository.
### Example toggling command echoing
### Example: Toggling command echoing
```yaml
{% bash %}
```yaml{:copy}
jobs:
workflow-command-job:
runs-on: ubuntu-latest
@@ -278,9 +463,29 @@ jobs:
echo '::set-output name=action_echo::disabled'
```
The step above prints the following lines to the log:
{% endbash %}
{% powershell %}
```yaml{:copy}
jobs:
workflow-command-job:
runs-on: windows-latest
steps:
- name: toggle workflow command echoing
run: |
write-output "::set-output name=action_echo::disabled"
write-output "::echo::on"
write-output "::set-output name=action_echo::enabled"
write-output "::echo::off"
write-output "::set-output name=action_echo::disabled"
```
{% endpowershell %}
The example above prints the following lines to the log:
```{:copy}
::set-output name=action_echo::enabled
::echo::off
```
@@ -297,13 +502,13 @@ The `save-state` command can only be run within an action, and is not available
This example uses JavaScript to run the `save-state` command. The resulting environment variable is named `STATE_processID` with the value of `12345`:
``` javascript
```javascript{:copy}
console.log('::save-state name=processID::12345')
```
The `STATE_processID` variable is then exclusively available to the cleanup script running under the `main` action. This example runs in `main` and uses JavaScript to display the value assigned to the `STATE_processID` environment variable:
``` javascript
```javascript{:copy}
console.log("The running PID from the main action is: " + process.env.STATE_processID);
```
@@ -311,37 +516,70 @@ console.log("The running PID from the main action is: " + process.env.STATE_pro
During the execution of a workflow, the runner generates temporary files that can be used to perform certain actions. The paths to these files are exposed via environment variables. You will need to use UTF-8 encoding when writing to these files to ensure proper processing of the commands. Multiple commands can be written to the same file, separated by newlines.
{% warning %}
{% powershell %}
**Warning:** On Windows, legacy PowerShell (`shell: powershell`) does not use UTF-8 by default.
{% note %}
When using `shell: powershell`, you must specify UTF-8 encoding. For example:
**Note:** PowerShell versions 5.1 and below (`shell: powershell`) do not use UTF-8 by default, so you must specify the UTF-8 encoding. For example:
```yaml
```yaml{:copy}
jobs:
legacy-powershell-example:
uses: windows-2019
runs-on: windows-latest
steps:
- shell: powershell
run: echo "mypath" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
run: |
"mypath" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
```
Alternatively, you can use PowerShell Core (`shell: pwsh`), which defaults to UTF-8.
PowerShell Core versions 6 and higher (`shell: pwsh`) use UTF-8 by default. For example:
{% endwarning %}
```yaml{:copy}
jobs:
powershell-core-example:
runs-on: windows-latest
steps:
- shell: pwsh
run: |
"mypath" >> $env:GITHUB_PATH
```
{% endnote %}
{% endpowershell %}
## Setting an environment variable
``` bash
{% bash %}
```bash{:copy}
echo "{environment_variable_name}={value}" >> $GITHUB_ENV
```
{% endbash %}
{% powershell %}
- Using PowerShell version 6 and higher:
```pwsh{:copy}
"{environment_variable_name}={value}" >> $env:GITHUB_ENV
```
- Using PowerShell version 5.1 and below:
```powershell{:copy}
"{environment_variable_name}={value}" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
```
{% endpowershell %}
You can make an environment variable available to any subsequent steps in a workflow job by defining or updating the environment variable and writing this to the `GITHUB_ENV` environment file. The step that creates or updates the environment variable does not have access to the new value, but all subsequent steps in a job will have access. The names of environment variables are case-sensitive, and you can include punctuation. For more information, see "[Environment variables](/actions/learn-github-actions/environment-variables)."
### Example
{% bash %}
{% raw %}
```
```yaml{:copy}
steps:
- name: Set the value
id: step_one
@@ -354,11 +592,31 @@ steps:
```
{% endraw %}
{% endbash %}
{% powershell %}
{% raw %}
```yaml{:copy}
steps:
- name: Set the value
id: step_one
run: |
"action_state=yellow" >> $env:GITHUB_ENV
- name: Use the value
id: step_two
run: |
Write-Output "${{ env.action_state }}" # This will output 'yellow'
```
{% endraw %}
{% endpowershell %}
### Multiline strings
For multiline strings, you may use a delimiter with the following syntax.
```
```{:copy}
{name}<<{delimiter}
{value}
{delimiter}
@@ -366,29 +624,75 @@ For multiline strings, you may use a delimiter with the following syntax.
#### Example
In this example, we use `EOF` as a delimiter and set the `JSON_RESPONSE` environment variable to the value of the curl response.
```yaml
This example uses `EOF` as a delimiter, and sets the `JSON_RESPONSE` environment variable to the value of the `curl` response.
{% bash %}
```yaml{:copy}
steps:
- name: Set the value
- name: Set the value in bash
id: step_one
run: |
echo 'JSON_RESPONSE<<EOF' >> $GITHUB_ENV
curl https://httpbin.org/json >> $GITHUB_ENV
curl https://example.lab >> $GITHUB_ENV
echo 'EOF' >> $GITHUB_ENV
```
## Adding a system path
{% endbash %}
``` bash
echo "{path}" >> $GITHUB_PATH
{% powershell %}
```yaml{:copy}
steps:
- name: Set the value in pwsh
id: step_one
run: |
"JSON_RESPONSE<<EOF" >> $env:GITHUB_ENV
(Invoke-WebRequest -Uri "https://example.lab").Content >> $env:GITHUB_ENV
"EOF" >> $env:GITHUB_ENV
shell: pwsh
```
{% endpowershell %}
## Adding a system path
Prepends a directory to the system `PATH` variable and automatically makes it available to all subsequent actions in the current job; the currently running action cannot access the updated path variable. To see the currently defined paths for your job, you can use `echo "$PATH"` in a step or an action.
{% bash %}
```bash{:copy}
echo "{path}" >> $GITHUB_PATH
```
{% endbash %}
{% powershell %}
```pwsh{:copy}
"{path}" >> $env:GITHUB_PATH
```
{% endpowershell %}
### Example
This example demonstrates how to add the user `$HOME/.local/bin` directory to `PATH`:
``` bash
{% bash %}
```bash{:copy}
echo "$HOME/.local/bin" >> $GITHUB_PATH
```
{% endbash %}
This example demonstrates how to add the user `$env:HOMEPATH/.local/bin` directory to `PATH`:
{% powershell %}
```pwsh{:copy}
"$env:HOMEPATH/.local/bin" >> $env:GITHUB_PATH
```
{% endpowershell %}

View File

@@ -342,6 +342,31 @@ steps:
uses: actions/heroku@1.0.0
```
#### Example: Using secrets
Secrets cannot be directly referenced in `if:` conditionals. Instead, consider setting secrets as job-level environment variables, then referencing the environment variables to conditionally run steps in the job.
If a secret has not been set, the return value of an expression referencing the secret (such as {% raw %}`${{ secrets.SuperSecret }}`{% endraw %} in the example) will be an empty string.
{% raw %}
```yaml
name: Run a step if a secret has been set
on: push
jobs:
my-jobname:
runs-on: ubuntu-latest
env:
super_secret: ${{ secrets.SuperSecret }}
steps:
- if: ${{ env.super_secret != '' }}
run: echo 'This step will only run if the secret has a value set.'
- if: ${{ env.super_secret == '' }}
run: echo 'This step will only run if the secret does not have a value set.'
```
{% endraw %}
For more information, see "[Context availability](/actions/learn-github-actions/contexts#context-availability)" and "[Encrypted secrets](/actions/security-guides/encrypted-secrets)."
### `jobs.<job_id>.steps[*].name`
A name for your step to display on {% data variables.product.prodname_dotcom %}.
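A minimal illustration; the step name and command are arbitrary:

```yaml
steps:
  - name: Install dependencies
    run: npm ci
```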
@@ -714,6 +739,12 @@ The maximum number of minutes to let a job run before {% data variables.product.
If the timeout exceeds the job execution time limit for the runner, the job will be canceled when the execution time limit is met instead. For more information about job execution time limits, see {% ifversion fpt or ghec or ghes %}"[Usage limits and billing](/actions/reference/usage-limits-billing-and-administration#usage-limits)" for {% data variables.product.prodname_dotcom %}-hosted runners and {% endif %}"[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners/#usage-limits){% ifversion fpt or ghec or ghes %}" for self-hosted runner usage limits.{% elsif ghae %}."{% endif %}
{% note %}
**Note:** {% data reusables.actions.github-token-expiration %} For self-hosted runners, the token may be the limiting factor if the job timeout is greater than 24 hours. For more information on the `GITHUB_TOKEN`, see "[About the `GITHUB_TOKEN` secret](/actions/security-guides/automatic-token-authentication#about-the-github_token-secret)."
{% endnote %}
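As an illustration (the 90-minute value is arbitrary), the timeout is set on the job:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 90
    steps:
      - run: ./build-and-test.sh
```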
## `jobs.<job_id>.strategy`
{% data reusables.actions.jobs.section-using-a-build-matrix-for-your-jobs-strategy %}

View File

@@ -290,7 +290,7 @@ GitHub helps you avoid using third-party software that contains known vulnerabil
| Dependency Management Tool | Description |
|----|----|
| Dependabot Alerts | You can track your repository's dependencies and receive Dependabot alerts when your enterprise detects vulnerable dependencies. For more information, see "[About alerts for vulnerable dependencies](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-alerts-for-vulnerable-dependencies)." |
| Dependabot Alerts | You can track your repository's dependencies and receive Dependabot alerts when your enterprise detects vulnerable dependencies. For more information, see "[About {% data variables.product.prodname_dependabot_alerts %}](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-alerts-for-vulnerable-dependencies)." |
| Dependency Graph | The dependency graph is a summary of the manifest and lock files stored in a repository. It shows you the ecosystems and packages your codebase depends on (its dependencies) and the repositories and packages that depend on your project (its dependents). For more information, see "[About the dependency graph](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph)." |{% ifversion ghes > 3.1 or ghec %}
| Dependency Review | If a pull request contains changes to dependencies, you can view a summary of what has changed and whether there are known vulnerabilities in any of the dependencies. For more information, see "[About dependency review](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-dependency-review)" or "[Reviewing Dependency Changes in a Pull Request](/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/reviewing-dependency-changes-in-a-pull-request)." | {% endif %} {% ifversion ghec or ghes > 3.2 %}
| Dependabot Security Updates | Dependabot can fix vulnerable dependencies for you by raising pull requests with security updates. For more information, see "[About Dependabot security updates](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-dependabot-security-updates)." |

View File

@@ -49,7 +49,7 @@ You can also choose to manually sync vulnerability data at any time. For more in
When {% data variables.product.product_location %} receives information about a vulnerability, it identifies repositories in {% data variables.product.product_location %} that use the affected version of the dependency and generates {% data variables.product.prodname_dependabot_alerts %}. You can choose whether or not to notify users automatically about new {% data variables.product.prodname_dependabot_alerts %}.
For repositories with {% data variables.product.prodname_dependabot_alerts %} enabled, scanning is triggered on any push to the default branch that contains a manifest file or lock file. Additionally, when a new vulnerability record is added to {% data variables.product.product_location %}, {% data variables.product.product_name %} scans all existing repositories on {% data variables.product.product_location %} and generates alerts for any repository that is vulnerable. For more information, see "[About alerts for vulnerable dependencies](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)."
For repositories with {% data variables.product.prodname_dependabot_alerts %} enabled, scanning is triggered on any push to the default branch that contains a manifest file or lock file. Additionally, when a new vulnerability record is added to {% data variables.product.product_location %}, {% data variables.product.product_name %} scans all existing repositories on {% data variables.product.product_location %} and generates alerts for any repository that is vulnerable. For more information, see "[About {% data variables.product.prodname_dependabot_alerts %}](/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies)."
{% ifversion ghes > 3.2 %}
### About {% data variables.product.prodname_dependabot_updates %}
@@ -67,7 +67,7 @@ After you enable {% data variables.product.prodname_dependabot_alerts %}, you ca
With {% data variables.product.prodname_dependabot_updates %}, {% data variables.product.company_short %} automatically creates pull requests to update dependencies in two ways.
- **{% data variables.product.prodname_dependabot_version_updates %}**: Users add a {% data variables.product.prodname_dependabot %} configuration file to the repository to enable {% data variables.product.prodname_dependabot %} to create pull requests when a new version of a tracked dependency is released. For more information, see "[About {% data variables.product.prodname_dependabot_version_updates %}](/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/about-dependabot-version-updates)."
- **{% data variables.product.prodname_dependabot_security_updates %}**: Users toggle a repository setting to enable {% data variables.product.prodname_dependabot %} to create pull requests when {% data variables.product.prodname_dotcom %} detects a vulnerability in one of the dependencies of the dependency graph for the repository. For more information, see "[About alerts for vulnerable dependencies](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-alerts-for-vulnerable-dependencies)" and "[About {% data variables.product.prodname_dependabot_security_updates %}](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-dependabot-security-updates)."
- **{% data variables.product.prodname_dependabot_security_updates %}**: Users toggle a repository setting to enable {% data variables.product.prodname_dependabot %} to create pull requests when {% data variables.product.prodname_dotcom %} detects a vulnerability in one of the dependencies of the dependency graph for the repository. For more information, see "[About {% data variables.product.prodname_dependabot_alerts %}](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-alerts-for-vulnerable-dependencies)" and "[About {% data variables.product.prodname_dependabot_security_updates %}](/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/about-dependabot-security-updates)."
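The first item above refers to a configuration file checked into the repository. As a rough illustration of its shape, the sketch below generates a minimal `dependabot.yml` for a single npm manifest; the ecosystem, directory, and schedule values are placeholders to adapt to your repository, and PyYAML is assumed to be available.

```python
# Sketch only: writes a minimal .github/dependabot.yml for one npm manifest.
# The ecosystem, directory, and interval values are illustrative placeholders.
import os
import yaml  # assumes PyYAML is installed

config = {
    "version": 2,
    "updates": [
        {
            "package-ecosystem": "npm",          # ecosystem of the manifest to track
            "directory": "/",                    # location of package.json in the repo
            "schedule": {"interval": "weekly"},  # how often Dependabot checks for updates
        }
    ],
}

os.makedirs(".github", exist_ok=True)
with open(".github/dependabot.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

Committing a file with this shape to the repository is what enables version updates; security updates are enabled separately through the repository setting described in the second item.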
{% endif %}
## Enabling {% data variables.product.prodname_dependabot_alerts %}
@@ -101,7 +101,7 @@ After you enable {% data variables.product.prodname_dependabot_alerts %} for you
{% ifversion ghes %}
Before you enable {% data variables.product.prodname_dependabot_updates %}, you must configure {% data variables.product.product_location %} to use {% data variables.product.prodname_actions %} with self-hosted runners. For more information, see "[Getting started with {% data variables.product.prodname_actions %} for GitHub Enterprise Server](/admin/github-actions/enabling-github-actions-for-github-enterprise-server/getting-started-with-github-actions-for-github-enterprise-server)."
{% data variables.product.prodname_dependabot_updates %} are not supported on {% data variables.product.product_name %} if your enterprise uses clustering or a high-availability configuration.
{% data variables.product.prodname_dependabot_updates %} are not supported on {% data variables.product.product_name %} if your enterprise uses clustering.
{% endif %}
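Once {% data variables.product.prodname_dependabot_alerts %} are available on the instance, individual repositories can be inspected or opted in through the REST API. The sketch below is illustrative only: it assumes the `vulnerability-alerts` and `automated-security-fixes` endpoints behave on your release as they do on {% data variables.product.prodname_dotcom_the_website %}, and the hostname, repository, and token values are placeholders.

```python
# Illustrative sketch: check whether Dependabot alerts are enabled for a
# repository on a GitHub Enterprise Server instance, then opt the repository
# into alerts and security updates. Endpoint paths assume parity with the
# GitHub.com REST API; HOSTNAME, OWNER, REPO, and TOKEN are placeholders.
import requests

HOSTNAME = "github.example.com"
OWNER, REPO = "octo-org", "octo-repo"
TOKEN = "<personal-access-token>"

api = f"https://{HOSTNAME}/api/v3/repos/{OWNER}/{REPO}"
headers = {"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"}

# A 204 response means alerts are enabled for the repository; 404 means they are not.
enabled = requests.get(f"{api}/vulnerability-alerts", headers=headers).status_code == 204
print("Dependabot alerts enabled:", enabled)

# Opt the repository into alerts, then into Dependabot security updates.
requests.put(f"{api}/vulnerability-alerts", headers=headers)
requests.put(f"{api}/automated-security-fixes", headers=headers)
```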
{% data reusables.enterprise_site_admin_settings.sign-in %}

View File

@@ -25,7 +25,7 @@ Some administrative ports are required to configure {% data variables.product.pr
| Port | Service | Description |
|---|---|---|
| 8443 | HTTPS | Secure web-based {% data variables.enterprise.management_console %}. Required for basic installation and configuration. |
| 8080 | HTTP | Plain-text web-based {% data variables.enterprise.management_console %}. Not required unless SSL is disabled manually. |
| 8080 | HTTP | Plain-text web-based {% data variables.enterprise.management_console %}. Not required unless TLS is disabled manually. |
| 122 | SSH | Shell access for {% data variables.product.product_location %}. Required to be open to incoming connections between all nodes in a high availability configuration. The default SSH port (22) is dedicated to Git and SSH application network traffic. |
| 1194/UDP | VPN | Secure replication network tunnel in high availability configuration. Required to be open for communication between all nodes in the configuration.|
| 123/UDP| NTP | Required for time protocol operation. |
@@ -38,7 +38,7 @@ Application ports provide web application and Git access for end users.
| Port | Service | Description |
|---|---|---|
| 443 | HTTPS | Access to the web application and Git over HTTPS. |
| 80 | HTTP | Access to the web application. All requests are redirected to the HTTPS port when SSL is enabled. |
| 80 | HTTP | Access to the web application. All requests are redirected to the HTTPS port if TLS is configured. |
| 22 | SSH | Access to Git over SSH. Supports clone, fetch, and push operations to public and private repositories. |
| 9418 | Git | Git protocol port supports clone and fetch operations to public repositories with unencrypted network communication. {% data reusables.enterprise_installation.when-9418-necessary %} |
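One informal way to check a firewall configuration against the tables above is a TCP reachability probe from a machine on the relevant network. The sketch below is illustrative only: the hostname is a placeholder, and UDP services such as NTP and the high-availability VPN tunnel need a different kind of check, because a TCP connect says nothing about them.

```python
# Illustrative TCP reachability probe for a GitHub Enterprise Server host.
# HOST is a placeholder; run the script from the network segment whose access
# you want to verify. UDP ports (123, 1194) are not covered by this check.
import socket

HOST = "github.example.com"
TCP_PORTS = {
    8443: "Management Console (HTTPS)",
    8080: "Management Console (plain HTTP)",
    122: "Administrative SSH",
    443: "Web application and Git over HTTPS",
    80: "Web application (HTTP)",
    22: "Git over SSH",
    9418: "Git protocol",
}

for port, service in TCP_PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{port:>5}  open    {service}")
    except OSError:
        print(f"{port:>5}  closed  {service}")
```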
@@ -51,3 +51,18 @@ Email ports must be accessible directly or via relay for inbound email support f
| Port | Service | Description |
|---|---|---|
| 25 | SMTP | Support for SMTP with encryption (STARTTLS). |
## {% data variables.product.prodname_actions %} ports
{% data variables.product.prodname_actions %} ports must be accessible for self-hosted runners to connect to {% data variables.product.product_location %}. For more information, see "[About self-hosted runners](/actions/hosting-your-own-runners/about-self-hosted-runners#communication-between-self-hosted-runners-and-github-enterprise-server)."
| Port | Service | Description |
|---|---|---|
| 443 | HTTPS | Self-hosted runners connect to {% data variables.product.product_location %} to receive job assignments and to download new versions of the runner application. Required if TLS is configured. |
| 80 | HTTP | Self-hosted runners connect to {% data variables.product.product_location %} to receive job assignments and to download new versions of the runner application. Required if TLS is not configured. |
If you enable automatic access to {% data variables.product.prodname_dotcom_the_website %} actions, {% data variables.product.prodname_actions %} will always search for an action on {% data variables.product.product_location %} first, via these ports, before checking {% data variables.product.prodname_dotcom_the_website %}. For more information, see "[Enabling automatic access to {% data variables.product.prodname_dotcom_the_website %} actions using {% data variables.product.prodname_github_connect %}](/admin/github-actions/managing-access-to-actions-from-githubcom/enabling-automatic-access-to-githubcom-actions-using-github-connect#about-resolution-for-actions-using-github-connect)."
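To confirm that a self-hosted runner host can actually reach the instance on the port listed above, a quick HTTPS request is usually enough. This is an informal check rather than part of the runner software; the hostname is a placeholder, and if TLS is not configured you would use `http://` on port 80 instead. Any HTTP response, even an authentication error, shows the port is reachable.

```python
# Informal connectivity check from a self-hosted runner host to a
# GitHub Enterprise Server instance over HTTPS (port 443).
# HOSTNAME is a placeholder; use http:// on port 80 if TLS is not configured.
import requests

HOSTNAME = "github.example.com"

try:
    response = requests.get(f"https://{HOSTNAME}/api/v3/meta", timeout=10)
    # Any response (including 401 or 404) proves the port is reachable.
    print(f"Reachable over 443 (HTTP {response.status_code})")
except requests.exceptions.RequestException as exc:
    print(f"Not reachable over 443: {exc}")
```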
## Further reading
- "[Configuring TLS](/admin/configuration/configuring-network-settings/configuring-tls)"

View File

@@ -22,6 +22,8 @@ shortTitle: About configuration
{% endif %}
{% ifversion ghae %}
To get started with {% data variables.product.product_name %}, you first need to deploy {% data variables.product.product_name %}. For more information, see "[Deploying {% data variables.product.product_name %}](/admin/configuration/configuring-your-enterprise/deploying-github-ae)."
The first time you access your enterprise, you will complete an initial configuration to get {% data variables.product.product_name %} ready to use. The initial configuration includes connecting your enterprise with an identity provider (IdP), authenticating with SAML SSO, configuring policies for repositories and organizations in your enterprise, and configuring SMTP for outbound email. For more information, see "[Initializing {% data variables.product.prodname_ghe_managed %}](/admin/configuration/initializing-github-ae)."
Later, you can use the site admin dashboard and enterprise settings to further configure your enterprise, manage users, organizations and repositories, and set policies that reduce risk and increase quality.

View File

@@ -0,0 +1,68 @@
---
title: Deploying GitHub AE
intro: 'You can deploy {% data variables.product.product_name %} to an available Azure region.'
versions:
ghae: '*'
topics:
- Accounts
- Enterprise
type: how_to
shortTitle: Deploy GitHub AE
redirect_from:
- /get-started/signing-up-for-github/setting-up-a-trial-of-github-ae
---
## About deployment of {% data variables.product.product_name %}
{% data reusables.github-ae.github-ae-enables-you %} For more information, see "[About {% data variables.product.prodname_ghe_managed %}](/admin/overview/about-github-ae)."
After you purchase or start a trial of {% data variables.product.product_name %}, you can deploy {% data variables.product.product_name %} to an available Azure region. This guide refers to the Azure resource that contains the deployment of {% data variables.product.product_name %} as the {% data variables.product.product_name %} account. You'll use the Azure portal at [https://portal.azure.com](https://portal.azure.com) to deploy the {% data variables.product.product_name %} account.
## Prerequisites
- Before you can deploy {% data variables.product.product_name %}, you must request access from your {% data variables.product.company_short %} account team. {% data variables.product.company_short %} will enable deployment of {% data variables.product.product_name %} for your Azure subscription. If you haven't already purchased {% data variables.product.product_name %}, you can contact {% data variables.contact.contact_enterprise_sales %} to check your eligibility for a trial.
- You must have permission to perform the `/register/action` operation for the resource provider in Azure. The permission is included in the `Contributor` and `Owner` roles. For more information, see [Azure resource providers and types](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider) in the Microsoft documentation.
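If you want to confirm the resource provider permission up front, one option is to attempt the registration with the Azure SDK for Python. The sketch below is a rough illustration only: the `GitHub.Enterprise` namespace is an assumption (confirm the exact provider name in the Azure portal), and it presumes the `azure-identity` and `azure-mgmt-resource` packages and a signed-in credential.

```python
# Rough illustration: register the resource provider used by GitHub AE.
# "GitHub.Enterprise" is an assumed namespace; confirm the real provider
# name in the Azure portal before relying on this sketch.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The call succeeds only if your role grants the /register/action operation
# (included in the Contributor and Owner roles).
provider = client.providers.register("GitHub.Enterprise")
print(provider.namespace, provider.registration_state)
```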
## Deploying {% data variables.product.product_name %} with the {% data variables.actions.azure_portal %}
The {% data variables.actions.azure_portal %} allows you to deploy the {% data variables.product.product_name %} account in your Azure resource group.
1. Click one of the following two links to begin deployment of {% data variables.product.product_name %}. The link you should click depends on the Azure cloud where you plan to deploy {% data variables.product.product_name %}. For more information about Azure Government, see [What is Azure Government?](https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-welcome) in the Microsoft documentation.
- [Deploy {% data variables.product.product_name %} to Azure Commercial](https://aka.ms/create-github-ae-instance)
- [Deploy {% data variables.product.product_name %} to Azure Government](https://aka.ms/create-github-ae-instance-gov)
1. To begin the process of adding a new {% data variables.product.product_name %} account, click **Create GitHub AE account**.
1. Complete the "Project details" and "Instance details" fields.
![{% data variables.actions.azure_portal %} deployment form](/assets/images/azure/github-ae-azure-portal-form.png)
- **Account name:** The hostname for your enterprise
- **Administrator username:** A username for the initial enterprise owner that will be created in {% data variables.product.product_name %}
- **Administrator email:** The email address that will receive the login information
1. To review a summary of the proposed changes, click **Review + create**.
1. After the validation process has completed, click **Create**.
The email address you entered above will receive instructions on how to access your enterprise. After you have access, you can get started by following the initial setup steps. For more information, see "[Initializing {% data variables.product.product_name %}](/admin/configuration/initializing-github-ae)."
{% note %}
**Note:** Software updates for your {% data variables.product.product_name %} deployment are performed by {% data variables.product.prodname_dotcom %}. For more information, see "[About upgrades to new releases](/admin/overview/about-upgrades-to-new-releases)."
{% endnote %}
## Navigating to your enterprise
You can use the {% data variables.actions.azure_portal %} to navigate to your {% data variables.product.product_name %} deployment. The resulting list includes all the {% data variables.product.product_name %} deployments in your Azure region.
1. On the {% data variables.actions.azure_portal %}, in the left panel, click **All resources**.
1. From the available filters, click **All types**, then deselect **Select all** and select **GitHub AE**:
![{% data variables.actions.azure_portal %} resource type filter](/assets/images/azure/github-ae-azure-portal-type-filter.png)
## Next steps
- Once your deployment has been provisioned, the next step is to initialize {% data variables.product.product_name %}. For more information, see "[Initializing {% data variables.product.product_name %}](/github-ae@latest/admin/configuration/configuring-your-enterprise/initializing-github-ae)."
- If you're trying {% data variables.product.product_name %}, you can upgrade to a full license at any time during the trial period by contacting {% data variables.contact.contact_enterprise_sales %}. If you haven't upgraded by the last day of your trial, then the deployment is automatically deleted. If you need more time to evaluate {% data variables.product.product_name %}, contact {% data variables.contact.contact_enterprise_sales %} to request an extension.
## Further reading
- "[Enabling {% data variables.product.prodname_advanced_security %} features on {% data variables.product.product_name %}](/github/getting-started-with-github/about-github-advanced-security#enabling-advanced-security-features-on-github-ae)"
- "[{% data variables.product.product_name %} release notes](/github-ae@latest/admin/overview/github-ae-release-notes)"

View File

@@ -16,6 +16,7 @@ topics:
- Enterprise
children:
- /about-enterprise-configuration
- /deploying-github-ae
- /initializing-github-ae
- /accessing-the-management-console
- /accessing-the-administrative-shell-ssh

View File

@@ -33,7 +33,7 @@ topics:
{% data variables.product.prodname_actions %} helps your team work faster at scale. When large repositories start using {% data variables.product.prodname_actions %}, teams merge significantly more pull requests per day, and the pull requests are merged significantly faster. For more information, see "[Writing and shipping code faster](https://octoverse.github.com/writing-code-faster/#scale-through-automation)" in the State of the Octoverse.
You can create your own unique automations, or you can use and adapt workflows from our ecosystem of over 10,000 actions built by industry leaders and the open source community. For more information, see "[Finding and customizing actions](/actions/learn-github-actions/finding-and-customizing-actions)."
You can create your own unique automations, or you can use and adapt workflows from our ecosystem of over 10,000 actions built by industry leaders and the open source community. {% ifversion ghec %}For more information, see "[Finding and customizing actions](/actions/learn-github-actions/finding-and-customizing-actions)."{% else %}You can restrict your developers to using actions that exist on {% data variables.product.product_location %}, or you can allow your developers to access actions on {% data variables.product.prodname_dotcom_the_website %}. For more information, see "[About using actions in your enterprise](/admin/github-actions/managing-access-to-actions-from-githubcom/about-using-actions-in-your-enterprise)."{% endif %}
{% data variables.product.prodname_actions %} is developer friendly, because it's integrated directly into the familiar {% data variables.product.product_name %} experience.

Some files were not shown because too many files have changed in this diff.