Table of contents
- How To Configure Warm Standby with pitrtools
- What Is pitrtools and Why Would You Want To Use It?
- Naming Conventions
- The Test Setup
- The Process
- Installing and Configuring pitrtools
- SSH Key-based Login for pitrtools
- Master Configuration File
- Slave aka Standby Server
- Failing Over
- Other things you could do with pitrtools and some tips
- Getting Help
How To Configure Warm Standby with pitrtools
This how-to demonstrates how to configure a warm standby using PostgreSQL 8.4 and pitrtools.
What Is pitrtools and Why Would You Want To Use It?
Essentially, pitrtools is a wrapper around standard tools such as rsync and PostgreSQL's built-in functionality that makes creating and managing standby configurations, and the subsequent failover to a standby, a snap.
With the help of pitrtools you can do more, namely:
- secure shipping of WAL files to the configured standby server over an SSH-protected link;
- streaming replication;
- enable/disable archiving without the need to restart PostgreSQL;
- alerts of various severity levels, generated by events on both ends of the configuration, so you stay informed;
- automatic base backups, including tablespaces, archive restore and purging of old archives (with PostgreSQL > 8.3);
- failover to the latest restore point, or point-in-time recovery (restore using timestamps);
- and more.
In short, pitrtools aims to make things simpler, more secure and easier to manage.
Naming Conventions
The master server is also referred to as the archiver, and the slave as the standby. These names are used interchangeably throughout this how-to.
The Test Setup
It's a good idea to first set up pitrtools and play with it before you go ahead and change the configuration of your production servers. To show how pitrtools is configured and how it works, I'll describe the entire process using a two-host test setup as an example.
The two hosts are named bitarena and bitarena-clone, both virtualized instances of Debian Squeeze. I assume you're experienced enough to install Debian yourself and know how to find your way around the system.
bitarena is designated as the master server aka archiver, and bitarena-clone is the slave aka standby server. pitrtools is installed on both hosts, but each host uses different tools from the package.
The Process
These are the major steps, in their actual order of execution, that you have to follow to get a pitrtools-enabled setup running (a condensed sketch with concrete paths follows the list):
On the master server
- Turn on archiving
- Initialize the archiver: cmd_archiver -C $CONFIG -I
On the slave server
- Initialize the standby: cmd_standby -C $CONFIG -I
- Take a base backup: cmd_standby -C $CONFIG -B
- Start the standby: cmd_standby -C $CONFIG -S
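For orientation, here is the same sequence condensed into shell form, using the configuration file locations this how-to sets up later. Treat it as a sketch rather than something to paste blindly, and run everything as the postgres system user:

# On the master (archiver), after archiving has been turned on in postgresql.conf (see below):
CONFIG=/var/lib/postgresql/pitrtools/cmd_archiver.ini
/var/lib/postgresql/pitrtools/bin/cmd_archiver -C $CONFIG -I   # initialize the archiver environment

# On the slave (standby):
CONFIG=/var/lib/postgresql/pitrtools/cmd_standby.ini
/var/lib/postgresql/pitrtools/bin/cmd_standby -C $CONFIG -I    # initialize the standby environment
/var/lib/postgresql/pitrtools/bin/cmd_standby -C $CONFIG -B    # take a base backup from the master
/var/lib/postgresql/pitrtools/bin/cmd_standby -C $CONFIG -S    # start PostgreSQL in standby mode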
Installing and Configuring pitrtools
The pitrtools project page can be found at https://public.commandprompt.com/projects/pitrtools. Of particular interest is the wiki page at https://public.commandprompt.com/projects/pitrtools/wiki, where you can find information on how to obtain pitrtools as well as other useful notes and links.
You should use the Git version, because the tarball currently offers an outdated version of pitrtools. I was told this is going to be fixed soon, but until then you should rely on the Git repository.
Note that you essentially need the contents of this repository on both the master and the slave server. For the sake of brevity, I'm only going to show the layout I use on the master server. You then need to repeat the same steps on the slave server, because all that changes is the host you clone the pitrtools repository to.
root@bitarena ~/GIT# git clone git://github.com/commandprompt/pitrtools.git
Cloning into 'pitrtools'...
remote: Counting objects: 284, done.
remote: Compressing objects: 100% (183/183), done.
remote: Total 284 (delta 182), reused 203 (delta 101)
Receiving objects: 100% (284/284), 68.51 KiB, done.
Resolving deltas: 100% (182/182), done.
root@bitarena ~/GIT# ls
pitrtools
root@bitarena ~/GIT# cd pitrtools/
root@bitarena ~/GIT/pitrtools# ls
cmd_archiver  cmd_archiver.README  cmd_standby.ini.sample  cmd_standby.sql
cmd_archiver.ini.sample  cmd_standby  cmd_standby.README  cmd_worker.py
root@bitarena ~/GIT/pitrtools# mkdir -p /var/lib/postgresql/pitrtools/bin
root@bitarena ~/GIT/pitrtools# cp cmd_archiver cmd_standby cmd_worker.py /var/lib/postgresql/pitrtools/bin/
root@bitarena ~/GIT/pitrtools# cp *.ini.sample /var/lib/postgresql/pitrtools/
root@bitarena ~/GIT/pitrtools# cd /var/lib/postgresql/pitrtools/
root@bitarena /var/lib/postgresql/pitrtools# mv cmd_archiver.ini.sample cmd_archiver.ini
root@bitarena /var/lib/postgresql/pitrtools# mv cmd_standby.ini.sample cmd_standby.ini
root@bitarena /var/lib/postgresql/pitrtools# cd ~/GIT/pitrtools/
root@bitarena ~/GIT/pitrtools# chown -R postgres.postgres /var/lib/postgresql/pitrtools
root@bitarena:~#
SSH Key-based Login for pitrtools
pitrtools relies heavily on rsync and SSH to do its work, e.g. making a base backup and shipping WAL files from the master to the slave server -- all of it happens over an SSH-protected communication channel. This is an area where pitrtools makes life easier, because otherwise you'd have to find another way to copy WAL files to the slave (most likely some sort of network mount).
Therefore, one of the prerequisites is to configure SSH key-based logins between the two hosts for the postgres system user that require neither a password nor a passphrase.
Master
root@bitarena:~# su - postgres
postgres@bitarena:~$ ssh-keygen -t rsa
...
postgres@bitarena:~$ ls -la /var/lib/postgresql/.ssh/
total 16
drwx------ 2 postgres postgres 4096 Sep 28 09:17 .
drwxr-xr-x 5 postgres postgres 4096 Sep 28 09:17 ..
-rw------- 1 postgres postgres 1675 Sep 28 09:17 id_rsa
-rw-r--r-- 1 postgres postgres 399 Sep 28 09:17 id_rsa.pub
postgres@bitarena:~$
Slave
root@bitarena-clone:~# su - postgres
postgres@bitarena-clone:~$ ssh-keygen -t rsa
...
Answer all the questions and ssh-keygen will create both a private and a public (*.pub file) RSA key. For now just create the key pair, one on each host (master and slave). When done, proceed to exchange public keys between the hosts like this:
Master
postgres@bitarena:~$ ssh-copy-id bitarena-clone
Now try logging into the machine, with "ssh 'bitarena-clone'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
postgres@bitarena:~$ ssh bitarena-clone
Linux bitarena-clone 2.6.32-5-686 #1 SMP Sun May 6 04:01:19 UTC 2012 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Oct 18 06:07:22 2012 from bitarena.localdomain
postgres@bitarena-clone:~$ less .ssh/authorized_keys
postgres@bitarena-clone:~$
This copies the postgres system user's public SSH key from the master into the authorized_keys file on the standby, which allows postgres on the master to log in to the slave host without a password and in a secure fashion.
Remember that this has been done on the master server only. In a similar manner, you should take care of the slave host.
It's worth noting that pitrtools runs some actions remotely. For example, when a base backup action is run on the slave host, the cmd_standby script establishes an SSH session to the master host and runs various psql commands to deal with checkpoints, copy files, etc. This also requires the PostgreSQL user's password to access the database, and it often needs to be entered four or more times for a base backup action to complete.
Once you have played around with pitrtools enough to get the hang of things, you can avoid having to enter the password manually each time by either using a .pgpass file (a standard PostgreSQL feature) or making sure there's a trust relationship configured for localhost in the pg_hba.conf file.
For the sake of clarity, consider these examples:
.pgpass
127.0.0.1:*:postgres:postgres:postgrespass
pg_hba.conf
host postgres postgres 127.0.0.1/32 trust
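If you go the .pgpass route, keep in mind that libpq ignores the file unless it is readable only by its owner. A minimal sketch, run as postgres on the master (where the remotely invoked psql commands actually connect); the password is the placeholder used throughout this how-to:

postgres@bitarena:~$ echo '127.0.0.1:*:postgres:postgres:postgrespass' >> ~/.pgpass
postgres@bitarena:~$ chmod 600 ~/.pgpass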
In the console output examples below I entered the password manually, but this is of course really inconvenient in a production environment.
Master Configuration File
The sample configuration files are pretty good as they are, with the defaults they ship with. Chances are you're not going to change a lot in there. Make sure you read all the *.README files, though, because they contain helpful extra information about pitrtools and its configuration parameters that will help you decide how best to configure your servers.
We start with the master host by editing the pitrtools configuration file for the master/archiver.
postgres@bitarena:~$ vim pitrtools/cmd_archiver.ini

[DEFAULT]
; online or offline
state: online
; The base database directory
pgdata: /var/lib/postgresql/8.4/main
; where to remotely copy archives
r_archivedir: /var/lib/postgresql/archive
; where to locally copy archives
l_archivedir: /var/lib/postgresql/archive
; where is rsync
rsync_bin: /usr/bin/rsync
; extra rsync flags
rsync_flags: -z
; option 2 or 3, if running RHEL5 or similar it is likely 2
; if you are running something that ships remotely modern software
; it will be 3
rsync_version = 3
; IP of slave
slaves: bitarena-clone
; the user that will be using scp
user: postgres
; if scp can't connect in 10 seconds error
ssh_timeout: 10
; command to process in ok
notify_ok: echo OK
; command to process in warning
notify_warning: echo WARNING
; command to process in critical
notify_critical: echo CRITICAL
; if you want to debug on/off only
debug: on
; if you want ssh debug (warning noisy)
ssh_debug: off
Note that you can use domain names instead of IP addresses for the slaves: parameter; it works fine either way. You might also want to turn debugging on while you're learning pitrtools -- it helps to see what the software does if you're trying to understand how it works.
Turn On and Configure Archiving
If you are not using streaming replication / hot standby, archiving needs to be turned on in PostgreSQL. This is purely a PostgreSQL feature, but thanks to PostgreSQL's flexible design we can plug pitrtools into it.
postgres@bitarena:~$ vim /etc/postgresql/8.4/main/postgresql.conf
...
archive_mode=on
archive_command = '/var/lib/postgresql/pitrtools/bin/cmd_archiver -C /var/lib/postgresql/pitrtools/cmd_archiver.ini -F %p'
...
Here we explicitly turn archiving mode on and tell PostgreSQL to use cmd_archiver, part of pitrtools, as the archiving command.
It must be provided with a path to a configuration file (the -C switch) and a path to the WAL (log) file to archive (the -F switch). %p is substituted by PostgreSQL with the actual location of the WAL file in the file system. To learn more about how to use cmd_archiver, run
postgres@bitarena:~$ pitrtools/bin/cmd_archiver --help
Please keep in mind that pitrtools isn't supposed to be run by the root user. Technically, it can be run by any system user other than root; unless you have a customized configuration, your default PostgreSQL user will be postgres, so pitrtools is expected to be run as that system user.
Restart PostgreSQL for changes to take effect.
postgres@bitarena:~$ /etc/init.d/postgresql restart
Restarting PostgreSQL 8.4 database server: main.
postgres@bitarena:~#
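Later, once the archiver environment has been initialized (see the next sections), you can verify that archiving actually fires by forcing a WAL segment switch on the master and watching the local archive directory (and its per-slave subdirectory while the slave is not reachable yet). pg_switch_xlog() is a stock PostgreSQL 8.4 function and only produces a new segment if there has been write activity since the last switch:

postgres@bitarena:~$ psql -c "SELECT pg_switch_xlog();"
postgres@bitarena:~$ ls -la /var/lib/postgresql/archive/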
Install Helper Scripts
Apply cmd_standby.sql to the database of the pitrtools user (usually postgres). This is required on the master server only.
root@bitarena:~# psql -U postgres < ~/GIT/pitrtools/cmd_standby.sql
CREATE FUNCTION
COMMENT
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
COMMENT
CREATE FUNCTION
COMMENT
root@bitarena:~#
Note: If you are running Postgres 9.2, use cmd_standby.92.sql
Initialize Master Environment
At this point we're pretty much done configuring the master server. All that is left to do is initialize the master environment by running
postgres@bitarena:~$ pitrtools/bin/cmd_archiver -C pitrtools/cmd_archiver.ini -I
We are initializing queues, one moment.
NOTICE: init_env_func()
NOTICE: generate_slave_list_func()
NOTICE: Your slaves are: ['bitarena-clone']
postgres@bitarena:~$ ls -lah
total 40K
drwxr-xr-x 7 postgres postgres 4.0K Sep 28 10:25 .
drwxr-xr-x 34 root root 4.0K Sep 27 02:32 ..
drwxr-xr-x 3 postgres postgres 4.0K Sep 24 15:10 8.4
drwx------ 2 postgres postgres 4.0K Sep 27 02:23 .aptitude
drwxr-xr-x 3 postgres postgres 4.0K Sep 28 10:25 archive
-rw------- 1 postgres postgres 1001 Sep 28 10:03 .bash_history
drwxr-xr-x 2 postgres postgres 4.0K Sep 28 10:25 pitrtools
-rw------- 1 postgres postgres 1.4K Sep 28 05:08 .psql_history
drwx------ 2 postgres postgres 4.0K Sep 28 09:29 .ssh
-rw------- 1 postgres postgres 3.6K Sep 28 10:25 .viminfo
postgres@bitarena:~$ ls -lah archive/bitarena-clone/
total 8.0K
drwxr-xr-x 2 postgres postgres 4.0K Sep 28 10:25 .
drwxr-xr-x 3 postgres postgres 4.0K Sep 28 10:25 ..
postgres@bitarena:~#
As you can see, the -I switch tells cmd_archiver to perform a couple of internal actions, as well as to prepare the file system layout by creating the necessary directories.
The archive/ directory has been created automatically by cmd_archiver and contains a sub-directory named after the IP address or DNS name of each slave, bitarena-clone in this example. This sub-directory is used when the master fails to transfer WAL files to the slave: the files are stored there temporarily, and once the slave is back online they should be transferred to it.
Effectively, after you've initialized the master it starts trying to ship WAL files to the slave. However, the slave host isn't configured yet, so log delivery will fail and the WAL files will accumulate in the l_archivedir/slave_FQDNorIP/ directory on the master host. Once we configure the slave, these will be shipped as soon as a new WAL segment is created on the master.
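For example, on the master you can watch that per-slave queue directory fill up while the slave is still unconfigured (the directory name matches the slaves: entry from cmd_archiver.ini):

postgres@bitarena:~$ ls -la /var/lib/postgresql/archive/bitarena-clone/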
Slave aka Standby Server
Now we can prepare the slave host. SSH key-based login for the postgres system user should be working in both directions, from master to slave as well as from slave to master.
Assuming you've arranged for this as suggested before, we can now go on and edit the slave configuration file, perform the first important step and initialize the slave environment.
One of the configuration file parameters, namely pgdata:, asks you to specify the PostgreSQL data directory. You could look it up in postgresql.conf, or, if your PostgreSQL is already running, you can find it like this:
root@bitarena:~# ps axuwf |grep postgre
root 4233 0.0 0.0 3304 756 pts/1 S+ 10:56 0:00 \_ grep postgre
postgres 3352 0.0 0.4 46452 5464 ? S 09:59 0:03 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main -c config_file=/etc/postgresql/8.4/main/postgresql.conf
...
Note down the path somewhere or copy it to the clipboard; we'll need it in a minute.
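Alternatively, if the server is running and you can connect to it, asking PostgreSQL directly is less error-prone than eyeballing ps output; SHOW data_directory is standard in 8.4:

postgres@bitarena:~$ psql -At -c "SHOW data_directory;"
/var/lib/postgresql/8.4/main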
Slave Configuration File
Now edit the slave configuration file. Most defaults are good and safe options, but your setup may differ from what I have in this how-to, so be careful and make sure you understand what you're doing.
postgres@bitarena-clone:~$ vim pitrtools/cmd_standby.ini

[DEFAULT]
; what major version are we using?
pgversion: 8.4
; Used for 8.2 (8.1?), should be set to something > than checkpoint_segments on master
numarchives: 10
; Whether or not to use streaming replication. If this is set to "on"
; pitrtools will configure the standby server to replicate from master
; using streaming replication.
; This can only be used with PostgreSQL 9.0 and up.
use_streaming_replication: off
; File to touch to end replication when using streaming replication.
trigger_file: /var/lib/postgresql/pitrtools/cmd_end_recovery
; User to connect to master DB while using streaming replication,
; ignored if not using streaming replication.
repl_db_user: replication
; Password for the user repl_db_user.
repl_db_password: secret
; sslmode to use when connecting for streaming replication.
; Accepted values: the same as libpq: disable, allow, prefer, require, verify-ca and verify-full
; Default: sslmode: prefer
sslmode: prefer

; Commands needed for execution
; absolute path to ssh
ssh: /usr/bin/ssh
; absolute path to rsync
rsync: /usr/bin/rsync
; extra rsync flags
rsync_flags: -z

; Confs
; This is the postgresql.conf to be used for the failover
postgresql_conf_failover: /var/lib/postgresql/pitrtools/failover/postgresql.conf
; This is the pg_hba.conf to be used for the failover
pg_hba_conf_failover: /var/lib/postgresql/pitrtools/failover/pg_hba.conf

; the path to the postgres bin
pg_standby: /usr/lib/postgresql/8.4/bin/pg_standby
pg_ctl: /usr/lib/postgresql/8.4/bin/pg_ctl
; path to psql on the master
r_psql: /usr/lib/postgresql/8.4/bin/psql

; Generalized information
; the port postgresql runs on (master)
port: 5432
; ip or name of master server
master_public_ip: bitarena
; the ip address we should use when processing remote shell
master_local_ip: 127.0.0.1
; the user performed initdb
user: postgres
; on or off
debug: on
; on or off
ssh_debug: off
; the timeout for ssh before we throw an alarm
ssh_timeout: 30
; should be the same as r_archivedir for archiver
archivedir: /var/lib/postgresql/archive
; where you executed initdb -D to
pgdata: /var/lib/postgresql/8.4/main

; Confs
; This is the postgresql.conf to be used when not in standby
postgresql_conf: /etc/postgresql/8.4/main/postgresql.conf
; This is the pg_hba.conf to be used when not in standby
pg_hba_conf: /etc/postgresql/8.4/main/pg_hba.conf
; By default postgresql.conf and pg_hba.conf will be copied from the
; locations specified above to pgdata directory on failover.
;
; Uncomment the following to make postgres actually use the above conf
; files w/o copying them to pgdata.
;no_copy_conf: true

; The recovery.conf file to create when starting up
; Defaults to %(pgdata)/recovery.conf
recovery_conf: /var/lib/postgresql/8.4/main/recovery.conf

; Useful when postgresql.conf doesn't specify log destination
; Will be passed with -l to pg_ctl when starting the server.
;
; If you're worried about having complete logs, either make sure
; postgresql.conf points to a log file, or use the logfile: parameter.
;
; Otherwise postgresql will print on standard stdout and nothing
; will be recorded in the logs
;
;logfile: /var/log/postgresql/postgresql.log

; Alarms
notify_critical: echo CRITICAL
notify_warning: echo WARNING
notify_ok: echo OK

; On failover action
; Whatever is placed here will be executed on -FS, must return 0
action_failover: /var/lib/postgresql/pitrtools/failover.sh
The action_failover: script has to exist and have permissions of at least the equivalent of chmod u+x. It could be just a placeholder script with a simple action such as:
#!/bin/bash
touch /var/lib/postgresql/pitrtools/failover_happened
but it's meant as a way to let you run actions on failover that are specific to your setup. It's good to know, though, that pitrtools lets you take actions automatically when failover happens. Use this feature to make your setup more sophisticated.
In addition, when doing failover there are two more options to take into consideration, namely postgresql_conf_failover: and pg_hba_conf_failover:. Both allow you to start the server on failover using an alternative configuration. This is meant to give users a way to prepare their failover configuration in advance.
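A minimal sketch of preparing those alternative files in advance, assuming you simply want the post-failover server to start from a copy of the current configuration (you would then edit the copies to reflect the new role, e.g. listen addresses and client access rules):

postgres@bitarena-clone:~$ mkdir -p /var/lib/postgresql/pitrtools/failover
postgres@bitarena-clone:~$ cp /etc/postgresql/8.4/main/postgresql.conf /var/lib/postgresql/pitrtools/failover/
postgres@bitarena-clone:~$ cp /etc/postgresql/8.4/main/pg_hba.conf /var/lib/postgresql/pitrtools/failover/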
Initialize Slave Environment
First stop PostgreSQL, then initialize the slave environment.
postgres@bitarena-clone:~$ /etc/init.d/postgresql stop
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -I
NOTICE: check_pgpid_func()
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'SELECT * FROM cmd_get_data_dirs()'
Password for user postgres: postgrespass
DEBUG: /var/lib/postgresql/8.4/main
postgres@bitarena-clone:~$
Please note that if archivedir: /var/lib/postgresql/archive hasn't been created, you should do so manually as the postgres system user (or set postgres user and group as the owner of the directory). pitrtools should do this automatically for you, but earlier versions were known not to. This is important: the next step in the slave configuration, the base backup, will fail if archivedir: doesn't exist.
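Creating it by hand is a one-liner; either do it as postgres, or as root followed by a chown, for example:

root@bitarena-clone:~# mkdir -p /var/lib/postgresql/archive
root@bitarena-clone:~# chown postgres:postgres /var/lib/postgresql/archive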
Making a Base Backup
Before you proceed, check that archivedir: exists on the slave and that WAL files are being shipped to it from the master host. WAL files are generated and shipped only when new data is written to the database on the master. To simulate some data flow and check whether archiving and shipping is happening, try this SQL statement on the master host:
postgres@bitarena:~$ psql
psql (8.4.13)
Type "help" for help.
postgres=# create table testpitrtools1 as select * from pg_class, pg_description;
postgres=# \q
postgres@bitarena:~$
You can create a couple of tables like that to generate enough WAL segments. Check the archivedir: directory on the slave to see whether any WAL files have been copied there. If they have, everything works as expected and you can try to make a base backup on the slave host:
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -B
NOTICE: check_pgpid_func()
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'checkpoint'
Password for user postgres: postgrespass
DEBUG: CHECKPOINT
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'SELECT cmd_pg_start_backup()'
Password for user postgres: postgrespass
DEBUG: cmd_pg_start_backup: 1
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'SELECT * FROM cmd_get_data_dirs()'
Password for user postgres: postgrespass
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'SELECT * FROM cmd_get_pgdata() LIMIT 1'
Password for user postgres: postgrespass
receiving incremental file list
./
backup_label
backup_label.old
postmaster.opts
base/1/
base/1/pg_internal.init
base/11564/
base/11564/pg_internal.init
base/16499/
base/16499/pg_internal.init
base/33069/
base/33069/33084
base/33069/33268
base/33069/33974
base/33069/33974_fsm
base/33069/33974_vm
base/33069/33980
base/33069/33993
base/33069/33993_fsm
base/33069/33993_vm
base/33069/33999
base/33069/33999_fsm
base/33069/pg_internal.init
global/
global/pg_auth
global/pg_control
global/pg_database
pg_clog/0000
pg_multixact/offsets/0000
pg_stat_tmp/
pg_stat_tmp/pgstat.stat
pg_subtrans/0001

Number of files: 1537
Number of files transferred: 25
Total file size: 189868759 bytes
Total transferred file size: 22162037 bytes
Literal data: 920320 bytes
Matched data: 21241717 bytes
File list size: 20500
File list generation time: 0.004 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 65011
Total bytes received: 189901

sent 65011 bytes received 189901 bytes 101964.80 bytes/sec
total size is 189868759 speedup is 744.84
DEBUG: executing query /usr/bin/ssh -o ConnectTimeout=30 -o StrictHostKeyChecking=no postgres@bitarena "/usr/lib/postgresql/8.4/bin/psql -A -t -Upostgres -p5432 -dpostgres -h127.0.0.1 by 'SELECT cmd_pg_stop_backup()'
Password for user postgres: postgrespass
DEBUG: cmd_pg_stop_backup:
postgres@bitarena-clone:~$
As you can see, pitrtools puts the master into backup mode, synchronizes the data directories (including tablespaces, if any) from the master to the slave, and then takes the master out of backup mode. If the base backup action fails before it properly finishes (say, you lose the connection to the slave while rsync is copying files over), you need to intervene and manually run -Astop_basebackup:
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -Astop_basebackup
...
After that, run the base backup action again and make sure it finishes properly (use the console output of the successful base backup above as a reference).
If you want a cold standby you're done. If you need a warm standby, then run:
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -S
NOTICE: check_pgpid_func()
server starting
postgres@bitarena-clone:~$ 2012-10-16 02:47:31 PDT LOG: database system was interrupted; last known up at 2012-10-16 02:44:37 PDT
2012-10-16 02:47:31 PDT LOG: starting archive recovery
2012-10-16 02:47:31 PDT LOG: restore_command = '/usr/lib/postgresql/8.4/bin/pg_standby -s5 -w0 -c -d /var/lib/postgresql/archive %f %p %r '

Trigger file : <not set>
Waiting for WAL file : 00000001.history
WAL file path : /var/lib/postgresql/archive/00000001.history
Restoring to : pg_xlog/RECOVERYHISTORY
Sleep interval : 5 seconds
Max wait interval : 0 forever
Command for restore : cp "/var/lib/postgresql/archive/00000001.history" "pg_xlog/RECOVERYHISTORY"
Keep archive history : 000000000000000000000000 and later
running restore :cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
not restored
history file not found

Trigger file : <not set>
Waiting for WAL file : 000000010000000100000025.00000020.backup
WAL file path : /var/lib/postgresql/archive/000000010000000100000025.00000020.backup
Restoring to : pg_xlog/RECOVERYHISTORY
Sleep interval : 5 seconds
Max wait interval : 0 forever
Command for restore : cp "/var/lib/postgresql/archive/000000010000000100000025.00000020.backup" "pg_xlog/RECOVERYHISTORY"
Keep archive history : 000000000000000000000000 and later
running restore : OK
2012-10-16 02:48:01 PDT LOG: restored log file "000000010000000100000025.00000020.backup" from archive

Trigger file : <not set>
Waiting for WAL file : 000000010000000100000025
WAL file path : /var/lib/postgresql/archive/000000010000000100000025
Restoring to : pg_xlog/RECOVERYXLOG
Sleep interval : 5 seconds
Max wait interval : 0 forever
Command for restore : cp "/var/lib/postgresql/archive/000000010000000100000025" "pg_xlog/RECOVERYXLOG"
Keep archive history : 000000000000000000000000 and later
running restore : OK
2012-10-16 02:48:02 PDT LOG: restored log file "000000010000000100000025" from archive
2012-10-16 02:48:02 PDT LOG: automatic recovery in progress
2012-10-16 02:48:02 PDT LOG: redo starts at 1/25000020, consistency will be reached at 1/2504FFC4
2012-10-16 02:48:03 PDT LOG: consistent recovery state reached

Trigger file : <not set>
Waiting for WAL file : 000000010000000100000026
WAL file path : /var/lib/postgresql/archive/000000010000000100000026
Restoring to : pg_xlog/RECOVERYXLOG
Sleep interval : 5 seconds
Max wait interval : 0 forever
Command for restore : cp "/var/lib/postgresql/archive/000000010000000100000026" "pg_xlog/RECOVERYXLOG"
Keep archive history : 000000010000000100000025 and later
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
WAL file not present yet.
At this point, PostgreSQL armed with pitrtools on the master server will be continuously shipping WAL files to archivedir: on the slave. Once shipped, the WAL files are replayed immediately, because a slave in standby mode continuously scans archivedir: for new WAL files and replays them as soon as they become available (this can be seen in the example console output above).
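A warm standby doesn't accept client connections, so one rough way to keep an eye on it is pg_controldata, which reports the cluster state (it should say the cluster is in archive recovery while WAL is being replayed) and the latest checkpoint/redo locations:

postgres@bitarena-clone:~$ /usr/lib/postgresql/8.4/bin/pg_controldata /var/lib/postgresql/8.4/main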
Failing Over
Now you have a warm standby mirroring the changes occurring on the master server. When your master server becomes unavailable for any reason, you can turn this warm standby into a production instance simply by running the failover action on the standby machine, as shown below. For this, PostgreSQL on the master must not be running, otherwise pitrtools will print a warning and refuse to fail over.
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -F999
NOTICE: check_pgpid_func()
2012-10-16 02:56:20 PDT LOG: received fast shutdown request
2012-10-16 02:56:20 PDT LOG: aborting any active transactions
waiting for server to shut down....2012-10-16 02:56:20 PDT LOG: shutting down
2012-10-16 02:56:20 PDT LOG: database system is shut down
done
server stopped
server starting
NOTICE: Statistics are not replicated in warm standy mode.
HINT: Execute ANALYZE on your databases
postgres@bitarena-clone:~$ 2012-10-16 02:56:22 PDT LOG: database system was interrupted while in recovery at log time 2012-10-16 02:54:35 PDT
2012-10-16 02:56:22 PDT HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
2012-10-16 02:56:22 PDT LOG: starting archive recovery
2012-10-16 02:56:22 PDT LOG: restore_command = 'cp /var/lib/postgresql/archive/%f "%p"'
cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
2012-10-16 02:56:23 PDT LOG: restored log file "000000010000000100000026" from archive
2012-10-16 02:56:23 PDT LOG: automatic recovery in progress
2012-10-16 02:56:23 PDT LOG: redo starts at 1/2635B428, consistency will be reached at 1/27000000
cp: cannot stat `/var/lib/postgresql/archive/000000010000000100000027': No such file or directory
2012-10-16 02:56:23 PDT LOG: could not open file "pg_xlog/000000010000000100000027" (log file 1, segment 39): No such file or directory
2012-10-16 02:56:23 PDT LOG: redo done at 1/265AD744
2012-10-16 02:56:23 PDT LOG: last completed transaction was at log time 2012-10-16 02:55:07.387709-07
2012-10-16 02:56:24 PDT LOG: restored log file "000000010000000100000026" from archive
cp: cannot stat `/var/lib/postgresql/archive/00000002.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000003.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000004.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000005.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000006.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000007.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000008.history': No such file or directory
cp: cannot stat `/var/lib/postgresql/archive/00000009.history': No such file or directory
2012-10-16 02:56:24 PDT LOG: selected new timeline ID: 9
cp: cannot stat `/var/lib/postgresql/archive/00000001.history': No such file or directory
2012-10-16 02:56:25 PDT LOG: archive recovery complete
2012-10-16 02:56:26 PDT LOG: database system is ready to accept connections
2012-10-16 02:56:26 PDT LOG: autovacuum launcher started
postgres@bitarena-clone:~$
This creates a recovery.conf file under the pgdata: directory and restarts PostgreSQL in production mode of operation.
After this, you'd basically be running a copy of the production master server. Keep in mind that you would also need to change IP addresses and/or load-balancing configuration, routing, firewall rules or anything else that might stand in the way of establishing a successful connection to this host. This is where the action_failover: script can come in handy.
Plan in advance and figure all of this out before you need to fail over, to avoid downtime.
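As an illustration only (nothing here is provided by pitrtools), a failover.sh along these lines could bring up a floating service IP and leave a trace; the address, interface and mail command are assumptions you'd replace with whatever fits your environment, and the postgres user would need the corresponding sudo rights:

#!/bin/bash
# hypothetical /var/lib/postgresql/pitrtools/failover.sh
# bring up the floating IP that applications connect to (placeholder address/interface)
sudo ip addr add 192.168.1.100/24 dev eth0
# leave a trace and tell the on-call DBA (assumes a working local MTA)
date > /var/lib/postgresql/pitrtools/failover_happened
echo "PostgreSQL failover to $(hostname) completed" | mail -s "PostgreSQL failover" dba@example.com
# pitrtools expects the failover action to return 0 on success
exit 0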
Other things you could do with pitrtools and some tips
This how-to is meant to help you get started with pitrtools, but pitrtools can do more than just help you configure a standby.
Point-In-Time Recovery
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -F999 -R '2008-05-28 11:00:38.059389'
...
This is essentially a restore-to-a-specific-point-in-time action. In general, you can only restore to a point in time from a cold standby, because PITR will stop recovering at the point in time you've specified, while warm and hot standby servers have already replayed all the WAL files they could. In this regard, it's a good idea to keep a cold standby around for disaster recovery at the logical level.
Once this has been done, you can't choose another timestamp to restore to.
Entering Standby Mode After Failover On Slave
Suppose you failed over to your standby slave, which is now running as a replacement for the master for your applications. You've fixed the problems with the actual master and want this slave host to enter standby mode again. Here's how you'd do it:
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -Astop
...
This stops the entire PostgreSQL service. You could also use the PostgreSQL init script to achieve the same. If you need more fine-grained control, use pg_ctlcluster 8.4 main stop (see man pg_ctlcluster for more details). Take a new base backup as before and enter standby mode:
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -B
...
postgres@bitarena-clone:~$ pitrtools/bin/cmd_standby -C pitrtools/cmd_standby.ini -S
...
Again, if you want a cold standby, just don't run the -S action after -B.
Alerts
Alerting is designed to run your custom scripts. You can easily integrate pitrtools alerting with your existing NMS, be it Nagios, Zabbix or anything else, send e-mails, or take whatever action you decide.
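As a hedged sketch, notify_critical: could point at a small script like the one below (the script name, log location and e-mail address are made up for the example), with matching entries in cmd_archiver.ini and cmd_standby.ini:

#!/bin/bash
# hypothetical /var/lib/postgresql/pitrtools/notify_critical.sh
MSG="pitrtools CRITICAL on $(hostname) at $(date)"
echo "$MSG" >> /var/lib/postgresql/pitrtools/alerts.log
echo "$MSG" | mail -s "pitrtools CRITICAL" dba@example.com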
Logging
It bears repeating -- just in case you overlooked this in the standby sample configuration file notes -- that if your postgresql.conf doesn't specify a log file to write to and you don't use the logfile: parameter in cmd_standby.ini, the output will be directed to stdout (your console) and nothing will ever be written to a log file on disk.
Restarting PostgreSQL will fix the problem, but you can avoid it in the first place by either specifying a log file to write to in postgresql.conf or by using the logfile: parameter in cmd_standby.ini.
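For example, on this 8.4 setup you could let PostgreSQL collect its own logs by setting, in postgresql.conf:

logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

or point pg_ctl at a file via cmd_standby.ini (the path shown is just the usual Debian location and may differ on your system):

logfile: /var/log/postgresql/postgresql-8.4-main.log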
Troubleshooting
Set the debug: parameter in the configuration files to on and scrutinize the output. The PostgreSQL log file is also a good place to look.
Getting Help
A very low-traffic mailing list for pitrtools can be found at http://lists.commandprompt.com/mailman/listinfo/pitrtools/.
There is also consulting available from Command Prompt.