date: '2021-11-09'
sections:
security_fixes:
- A path traversal vulnerability was identified in {% data variables.product.prodname_pages %} builds on {% data variables.product.prodname_ghe_server %} that could allow an attacker to read system files. To exploit this vulnerability, an attacker needed permission to create and build a {% data variables.product.prodname_pages %} site on the {% data variables.product.prodname_ghe_server %} instance. This vulnerability affected all versions of {% data variables.product.prodname_ghe_server %} prior to 3.3, and was fixed in versions 3.0.19, 3.1.11, and 3.2.3. This vulnerability was reported through the {% data variables.product.company_short %} Bug Bounty program and has been assigned CVE-2021-22870.
- Packages have been updated to the latest security versions.
bugs:
- Some Git operations failed after upgrading a {% data variables.product.prodname_ghe_server %} 3.x cluster because of the HAProxy configuration.
- Unicorn worker counts might have been set incorrectly in clustering mode.
- Resqued worker counts might have been set incorrectly in clustering mode.
- If Ubuntu's Uncomplicated Firewall (UFW) status was inactive, a client could not clearly see it in the logs.
- Upgrading from {% data variables.product.prodname_ghe_server %} 2.x to 3.x failed when there were UTF-8 characters in an LDAP configuration.
- Some Pages-related and Git-related background jobs might not run in cluster mode with certain cluster configurations.
- The documentation link for Server Statistics was broken.
- When a new tag was created, the [push](/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#push) webhook payload did not display a correct `head_commit` object. Now, when a new tag is created, the push webhook payload always includes a `head_commit` object that contains the data of the commit that the new tag points to. As a result, the `head_commit` object always contains the commit data of the payload's `after` commit. (A commented payload sketch follows this list in this file.)
- The enterprise audit log page would not display audit events for {% data variables.product.prodname_secret_scanning %}.
- There was an insufficient job timeout for replica repairs.
- A repository's releases page would return a 500 error when viewing releases.
- 'Users were not warned about potentially dangerous bidirectional Unicode characters when viewing files. For more information, see "[Warning about bidirectional Unicode text](https://github.co/hiddenchars)" in {% data variables.product.prodname_blog %}.'
- Hookshot Go sent distribution type metrics that Collectd could not handle, which caused a ballooning of parsing errors.
- Public repositories displayed unexpected results from {% data variables.product.prodname_secret_scanning %} with a type of `Unknown Token`.
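# Illustrative sketch, not part of the release data: a minimal push webhook
# payload for a newly created tag, shown in JSON form. The repository details,
# tag name, and commit SHA below are hypothetical. It illustrates that
# `head_commit` now carries the data of the commit the tag points to,
# matching the payload's `after` commit.
#
#   {
#     "ref": "refs/tags/v1.0.0",
#     "before": "0000000000000000000000000000000000000000",
#     "after": "59b20b8d5c6ff8d09518454d4dd8b7b30f095ab5",
#     "created": true,
#     "head_commit": {
#       "id": "59b20b8d5c6ff8d09518454d4dd8b7b30f095ab5",
#       "message": "Release v1.0.0",
#       "author": { "name": "Octocat", "email": "octocat@example.com" }
#     }
#   }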
changes:
- Kafka configuration improvements have been added. When deleting repositories, package files are now immediately deleted from the storage account to free up space. `DestroyDeletedPackageVersionsJob` now deletes package files from the storage account for stale packages, along with their metadata records.
known_issues:
- On a freshly set up {% data variables.product.prodname_ghe_server %} without any users, an attacker could create the first admin user.
- Custom firewall rules are removed during the upgrade process.
- Git LFS tracked files [uploaded through the web interface](https://github.com/blog/2105-upload-files-to-your-repositories) are incorrectly added directly to the repository.
- Issues cannot be closed if they contain a permalink to a blob in the same repository, where the blob's file path is longer than 255 characters.
- When "Users can search GitHub.com" is enabled with GitHub Connect, issues in private and internal repositories are not included in GitHub.com search results.
- The {% data variables.product.prodname_registry %} npm registry no longer returns a time value in metadata responses. This was done to allow for substantial performance improvements. We continue to have all the data necessary to return a time value as part of the metadata response and will resume returning this value in the future once we have solved the existing performance issues. (A commented sketch of this metadata field appears at the end of this file.)
- Resource limits that are specific to processing pre-receive hooks may cause some pre-receive hooks to fail.
- '{% data reusables.release-notes.ghas-3.4-secret-scanning-known-issue %}'
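# Illustrative sketch, not part of the release data: the `time` object that the
# public npm registry includes in package metadata responses, shown in JSON form.
# The package name, version, and dates below are hypothetical; the GitHub Packages
# npm registry on GitHub Enterprise Server currently omits this object, per the
# known issue above.
#
#   {
#     "name": "example-package",
#     "dist-tags": { "latest": "1.2.0" },
#     "time": {
#       "created": "2020-01-01T00:00:00.000Z",
#       "modified": "2021-06-15T00:00:00.000Z",
#       "1.2.0": "2021-06-15T00:00:00.000Z"
#     }
#   }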