This post is intended to serve as my personal notebook; if you know of an improvement, feel free to comment.
A client runs a single PostgreSQL server instance to store production data (hosted by a big IaaS provider), with VPN access to the hosted network. The question is: how can the client back up the database and restore it in case of disaster? Many serious DB admins may hate this approach, but the client has decided to go this way instead of using a proper DB cluster (or running the DB as PaaS, as we advised).
My idea is to let the client back up (rsync) the database WAL files.
Setting up the backup
Allow local replication connections by adding the following line to pg_hba.conf:

local   replication   postgres   peer
Then enable WAL archiving in postgresql.conf:

wal_level = archive
max_wal_senders = 1
archive_mode = on
archive_command = 'test ! -f /var/lib/pgsql/9.6/backups/%f && gzip < %p > /var/lib/pgsql/9.6/backups/%f'
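The archive_command above only succeeds if the target file does not already exist, which protects previously archived segments from being overwritten. A minimal local simulation of that logic, using throwaway file names instead of a real WAL segment (paths under /tmp are illustrative):

```shell
# Simulate archive_command: %p = path to the WAL segment, %f = its file name.
mkdir -p /tmp/wal_demo/backups
echo "fake wal data" > /tmp/wal_demo/000000010000000000000001

p=/tmp/wal_demo/000000010000000000000001
f=000000010000000000000001

# First run: the archive file does not exist yet, so the copy succeeds.
test ! -f /tmp/wal_demo/backups/$f && gzip < $p > /tmp/wal_demo/backups/$f
echo "first run: $?"    # prints "first run: 0"

# Second run: the file exists, so the command fails with a non-zero status
# and PostgreSQL would retry later instead of silently overwriting the archive.
test ! -f /tmp/wal_demo/backups/$f && gzip < $p > /tmp/wal_demo/backups/$f
echo "second run: $?"   # prints "second run: 1"
```

A non-zero exit status from archive_command tells PostgreSQL the segment was not archived, so it keeps the WAL file and retries.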
Create a base backup:
pg_basebackup -D /some/backup/folder -Ft -x -z
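With -Ft -z, pg_basebackup writes the cluster as a compressed tarball (base.tar.gz) rather than a plain directory tree, so on restore it has to be unpacked into the empty data directory. A stand-in sketch with a dummy tarball (all paths are illustrative):

```shell
# Build a dummy "base backup" tarball standing in for base.tar.gz.
mkdir -p /tmp/pg_demo/src /tmp/pg_demo/data
echo "dummy cluster file" > /tmp/pg_demo/src/PG_VERSION
tar -C /tmp/pg_demo/src -czf /tmp/pg_demo/base.tar.gz .

# On restore: unpack the tarball into the (empty) data directory.
tar -C /tmp/pg_demo/data -xzf /tmp/pg_demo/base.tar.gz
ls /tmp/pg_demo/data
```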
Here we assume the client can fetch (rsync) the base backup and the archived WAL files from the backup location /var/lib/pgsql/9.6/backups/
With SELinux enabled (e.g. on RHEL), the postgres service requires a special security context on any directory it writes to. So if we want to back up to a different folder (/mnt/shared/dbbackups), we have to set the following context on it:
chcon system_u:object_r:postgresql_db_t:s0 /mnt/shared/dbbackups
Restore from backup
Restore procedure (assuming the base backup and the archived WAL files are in /media/backup/wals):
Reinstall PostgreSQL (and leave the service stopped)
Restore the base backup into the data directory (/var/lib/pgsql/9.6/data)
Create a recovery file: /var/lib/pgsql/9.6/data/recovery.conf
restore_command = 'gunzip < /media/backup/wals/%f > %p'
Start the PostgreSQL server; it will replay the archived WAL files via the restore_command and then exit recovery
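The restore_command is the mirror image of the archive_command: PostgreSQL asks for a segment by name (%f) and expects it decompressed at the path %p. A local round trip with dummy files (not a real WAL segment; /tmp paths are illustrative) shows the two commands are inverses:

```shell
mkdir -p /tmp/restore_demo/wals /tmp/restore_demo/pg_xlog
echo "original segment contents" > /tmp/restore_demo/segment

# Archive side, as in postgresql.conf: gzip < %p > backups/%f
gzip < /tmp/restore_demo/segment > /tmp/restore_demo/wals/000000010000000000000003

# Restore side, as in recovery.conf: gunzip < wals/%f > %p
gunzip < /tmp/restore_demo/wals/000000010000000000000003 \
       > /tmp/restore_demo/pg_xlog/000000010000000000000003

# The restored segment matches the original byte for byte.
cmp /tmp/restore_demo/segment /tmp/restore_demo/pg_xlog/000000010000000000000003 \
  && echo "round trip OK"
```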