Update content/copilot/overview-of-github-copilot/about-github-copilot.md

Co-authored-by: Matt Pollard <mattpollard@users.noreply.github.com>
Jules authored 2022-06-13 08:27:17 +02:00; committed by GitHub
parent f0034f5608
commit 4b72f23f27

@@ -24,7 +24,7 @@ When using natural language comments to elicit suggestions, comments written in
 You are responsible for ensuring the security and quality of your code. We recommend you take the same precautions when using code generated by {% data variables.product.prodname_copilot %} that you would when using any code you didn't write yourself. These precautions include rigorous testing, copyright analysis, and tracking for security vulnerabilities. {% data variables.product.company_short %} provides a number of features to help you monitor and improve code quality, such as {% data variables.product.prodname_actions %}, {% data variables.product.prodname_dependabot %}, {% data variables.product.prodname_codeql %} and {% data variables.product.prodname_code_scanning %}. All these features are free to use in public repositories. For more information, see "[Understanding {% data variables.product.prodname_actions %}](/actions/learn-github-actions/understanding-github-actions)" and "[{% data variables.product.company_short %} security features](/code-security/getting-started/github-security-features)."
-{% data variables.product.prodname_copilot %} uses filters to block offensive words in the prompts and avoid producing suggestions in sensitive contexts. We are committed to constantly improving the filter system to more intelligently detect and remove offensive suggestions generated by {% data variables.product.prodname_copilot %}, including biased, discriminatory, or abusive outputs. If you see an offensive suggestion generated by {% data variables.product.prodname_copilot %}, please report it directly to copilot-safety@github.com so that we can improve our safeguards.
+{% data variables.product.prodname_copilot %} uses filters to block offensive words in the prompts and avoid producing suggestions in sensitive contexts. We are committed to constantly improving the filter system to more intelligently detect and remove offensive suggestions generated by {% data variables.product.prodname_copilot %}, including biased, discriminatory, or abusive outputs. If you see an offensive suggestion generated by {% data variables.product.prodname_copilot %}, please report the suggestion directly to copilot-safety@github.com so that we can improve our safeguards.