---
title: Configuring the backup service
shortTitle: Configure the backup service
intro: 'Enable and configure the built-in backup service in the {% data variables.enterprise.management_console %}, and optionally migrate legacy settings.'
type: how_to
---
## Prerequisites

Before configuring the backup service, ensure you have:
- A {% data variables.product.prodname_ghe_server %} instance running version 3.17 or later.
- A dedicated storage volume provisioned and managed for use as the backup target.
## Storage requirements

To ensure reliable and performant backups, your storage must meet the following requirements:

- **Capacity**: Allocate at least five times the amount of storage used by your primary {% data variables.product.github %} appliance data disk. This accounts for historical snapshots and future growth. For a quick way to estimate this, see the sizing sketch after this list.
- **Filesystem support**: The backup service uses hard links for efficient storage, and your {% data variables.product.github %} instance uses symbolic links. The backup target must support both symbolic and hard links, and it must use a case-sensitive filesystem to prevent conflicts.

  You can test whether your filesystem supports hardlinking symbolic links by running:

  ```shell
  cd /data/backup
  sudo touch file
  sudo ln -s file symlink
  sudo ln symlink hardlink
  ls -la
  ```

  If the `ln symlink hardlink` command completes successfully, the filesystem is supported.
- **Performance**: Use high-performance storage with low latency and high IOPS to avoid slow backups and restores.
- **NFS**: Avoid using an NFS mount for the backup directory (typically `/data/backup`), as this can lead to timeouts and degraded performance.
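As a rough sizing check before provisioning the backup volume, you can look at how much space the appliance's data disk currently uses and multiply by five. This is a minimal sketch using standard tools; mount points and sizes vary by environment.

```shell
# Show filesystem usage on the appliance; find the primary data disk
# in the output (its mount point varies by setup).
df -h

# Rule of thumb from the requirement above: provision at least five
# times the "Used" value. For example, 200 GB used suggests a backup
# volume of roughly 1 TB or more.
```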
## Configuring the backup service
You can configure {% data variables.product.prodname_enterprise_backup_service %} through the {% data variables.enterprise.management_console %}.
### Setting up the backup target
Before configuring the service, you must prepare the storage volume where backups will be stored.
#### Using a new block device
If you're using a dedicated block device as your backup target, you need to initialize it via SSH before proceeding in the {% data variables.enterprise.management_console %}. This process will format the device and erase all existing data.
- Connect to your instance via SSH as the `admin` user. See AUTOTITLE.
- Attach your backup block device to the instance.
- Identify the device name using `lsblk` to list available block devices. Make sure you select the correct device to avoid data loss.

  ```shell
  lsblk
  ```

- Run the initialization command, replacing `YOUR_DEVICE_NAME` with the actual device name identified in the previous step.

  > [!WARNING]
  > This command will permanently erase all data on the specified device. Double-check the device name and back up any important data before proceeding.

  {% ifversion ghes > 3.17 %}

  ```shell
  ghe-storage-init-backup /dev/YOUR_DEVICE_NAME
  ```

  {% else %}

  ```shell
  /usr/local/share/enterprise/ghe-storage-init-backup /dev/YOUR_DEVICE_NAME
  ```

  {% endif %}

  This command:

  - Formats the device (erases all data).
  - Prepares it for use by the backup service.
  - Sets it to mount automatically at `/data/backup` on boot.{% ifversion ghes > 3.19 %}
  - If in a clustered environment, configures the node in `cluster.conf` with the `backup-server` role.{% endif %}

  {% ifversion ghes = 3.17 %}From {% data variables.product.prodname_ghe_server %} 3.17.4 onward, the script is installed in PATH, so you can run it directly with `ghe-storage-init-backup /dev/YOUR_DEVICE_NAME`.{% endif %}
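Once initialization completes, you can confirm the volume is mounted where the backup service expects it. This is a quick check with standard tools, not part of the initialization itself; output varies by environment.

```shell
# Confirm the backup volume is mounted at the expected path
findmnt /data/backup

# Show the capacity available on the new backup volume
df -h /data/backup
```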
#### Reusing a previously initialized disk
If the device was already initialized using `ghe-storage-init-backup`, you can reuse it without reformatting:
- Connect to your instance via SSH as the `admin` user.
- Attach the disk to the instance.
- Create the mount point, if it doesn't exist.

  ```shell
  sudo mkdir -p /data/backup
  ```

- Enable and start the mount service.

  ```shell
  sudo systemctl enable ghe-backup-disk.service
  sudo systemctl start ghe-backup-disk.service
  ```

  This mounts the device at `/data/backup` and ensures it's mounted automatically in the future.
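To verify that the mount service came up cleanly, you can inspect it with standard systemd tooling. This is an optional check; the exact output depends on your environment.

```shell
# Check that the mount unit is active
sudo systemctl status ghe-backup-disk.service

# Any backup data already on the reused disk should be visible here
ls -la /data/backup
```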
### Configuring backup settings
After the backup target is mounted, the Backup Service page will become available in the {% data variables.enterprise.management_console %}. {% ifversion ghes > 3.19 %} If your instance is part of a clustered environment, the system will automatically detect the node that was initialized with `ghe-storage-init-backup` and treat it as the backup server. {% endif %}
> [!NOTE]
> The settings page won't appear until the backup storage is mounted at `/data/backup` by completing the initialization or mount steps above.
If you're migrating from {% data variables.product.prodname_enterprise_backup_utilities %}, you can transfer your configuration in one of two ways:
- **Manual configuration**: Recreate your settings directly in the {% data variables.enterprise.management_console %}.
- **Command-line migration**: SSH into your instance, copy your `backup.config` file from backup-utils, and run:

  ```shell
  ghe-migrate-backup-config /path/to/your/backup.config
  ```

  Use the `--dry-run` flag to preview changes without applying them.
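As an illustration, a command-line migration might look like the following. The source host, file paths, and flag placement are assumptions for the example; adjust them to match where your backup-utils `backup.config` actually lives.

```shell
# Copy the legacy configuration from the host that ran backup-utils
# (hostname and paths are examples only)
scp backup-host:/home/backup-utils/backup.config /home/admin/backup.config

# Preview the migration without applying any changes
ghe-migrate-backup-config --dry-run /home/admin/backup.config

# Apply the migration once the preview looks correct
ghe-migrate-backup-config /home/admin/backup.config
```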
### Scheduling automated backups
Once the service is configured, you can define a backup schedule.
- In the {% data variables.enterprise.management_console %}, open the "Backups" tab from the top menu.
- In the "Backup Schedule" section, choose a predefined schedule (for example, Daily) or enter a custom cron expression (see the example after these steps).
- Click **Save** to apply the changes.
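If you use a custom cron expression, the standard five-field syntax applies. The example below runs a backup every day at 03:00; the time is illustrative only, so choose a window that leaves enough room for a run to finish before the next one starts.

```text
# ┌───────────── minute (0–59)
# │ ┌─────────── hour (0–23)
# │ │ ┌───────── day of the month (1–31)
# │ │ │ ┌─────── month (1–12)
# │ │ │ │ ┌───── day of the week (0–6, Sunday = 0)
# │ │ │ │ │
  0 3 * * *
```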
The first run will be a full backup. Future runs will be incremental. If a new backup attempt starts while a previous one is still running, it may be skipped or fail. In that case, adjust the schedule to avoid overlap.
{% ifversion ghes > 3.19 %}
## Configuring backups from a replica node
For high availability, you can designate a replica node as your backup server. To minimize latency, {% data variables.product.github %} recommends picking a replica node in the same region or datacenter as your primary node.
> [!IMPORTANT]
> Backups from cache replica nodes or active geo replica nodes are not supported.
To configure your backup server, run the following commands, replacing `HOSTNAME` with the hostname of the node:

```shell
ghe-config cluster.HOSTNAME.backup-server true
ghe-config-apply
```
You can now run `ghe-backup` directly on your replica node.
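For instance, with a hypothetical replica named `ghe-replica-eu` in the same datacenter as the primary, the configuration and a manual backup run would look like this (the hostname is an example only):

```shell
# Designate the replica as the backup server
ghe-config cluster.ghe-replica-eu.backup-server true
ghe-config-apply

# Then, over SSH on that replica node, run a backup manually
ghe-backup
```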
> [!WARNING]
> Due to the latency between primary and replica nodes, you may lose data when backing up from a replica node.
{% endif %}