
Pgpool Configuration & Failback

In this post, I would like to share the pgpool configuration and its failback mechanism.

I hope it will be helpful to you in setting up pgpool and its failback.

Pgpool Installation & Configuration

1. Download pgpool from the link below (the latest version is 3.2.1).

2. Untar pgpool-II-3.2.1.tar.gz and go to the pgpool-II-3.2.1 directory.

3. Install pgpool by executing the commands below:
./configure --prefix=/opt/PostgreSQL92/ --with-pgsql-includedir=/opt/PostgreSQL92/include/
make install
4. You can now see the pgpool binaries in the /opt/PostgreSQL92/bin location:
/opt/PostgreSQL92/bin $ ls
clusterdb   createdb         createlang       droplang          dropuser
ecpg        oid2name         pcp_attach_node  pcp_detach_node   pcp_node_count
pcp_pool_status  pcp_proc_count  pcp_proc_info  pcp_promote_node  pcp_stop_pgpool
pcp_systemdb_info  pg_controldata  pgpool       pg_test_fsync     pltcl_loadmod
reindexdb

/opt/PostgreSQL92/bin $ ./pgpool --version
pgpool-II version 3.2.1 (namameboshi)

Pgpool Parameters

listen_addresses = '*'

# Specifies the address(es) on which pgpool listens for TCP/IP connections; '*' accepts connections on all interfaces. Note that restricting which clients may connect is done through pool_hba.conf, not this parameter.
port = 9999

# Defines the port on which pgpool listens; clients connect here instead of the PostgreSQL port.
socket_dir = '/tmp'

# Directory where pgpool's Unix-domain socket is created.
pcp_port = 9898

# PCP is pgpool's admin interface. Through the PCP port we can run administrative commands such as pcp_attach_node and pcp_detach_node.
pcp_socket_dir = '/tmp' 
# Directory where the PCP Unix-domain socket is created.
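The PCP interface has its own credential file, pcp.conf, whose lines have the form "username:md5-of-password". pgpool ships a pg_md5 utility for producing the hash; the sketch below uses plain md5sum, which yields the same value (the user name and password here are examples from this post, adjust to your setup):

```shell
# Build one pcp.conf entry: "username:md5hash" (hash of the password only)
PCP_USER=postgres
PCP_PASS=adminedb      # example password used elsewhere in this post
HASH=$(printf '%s' "$PCP_PASS" | md5sum | cut -d' ' -f1)
echo "${PCP_USER}:${HASH}"   # append this line to pcp.conf
```

The PCP credentials are independent of PostgreSQL users; any name/password pair listed in pcp.conf can run the pcp_* commands.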

backend_hostname0 = '' 
# Node 0 Host IP

backend_port0 = 5432
# Node 0 Port

backend_weight0 = 1

# Node 0 Weight in load balance mode.

backend_data_directory0 = '/opt/PostgreSQL92/data' 
# Node 0 Data Directory

backend_flag0 = 'ALLOW_TO_FAILOVER' 
# Whether node 0 is allowed to fail over.

backend_hostname1 = ''
# Node 1 Host IP
backend_port1 = 5432
# Node 1 Port

backend_weight1 = 1

# Node 1 Weight in load balance mode.

backend_data_directory1 = '/opt/PostgreSQL92/data' 
# Node 1 Data Directory

backend_flag1 = 'ALLOW_TO_FAILOVER' 
# Whether node 1 is allowed to fail over.

enable_pool_hba = on

# Enables pool_hba.conf client authentication, analogous to pg_hba.conf. The entries in pool_hba.conf should mirror those in pg_hba.conf; if the two files differ, some users may face connectivity issues, so make sure both are kept in sync. If pool_hba is enabled with md5 authentication, a "pool_passwd" file must be created in the same directory as pgpool.conf. Each line of pool_passwd has the form "username:md5password"; the usernames and password hashes can be taken from the pg_shadow catalog.
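For reference, the hash PostgreSQL stores (and pool_passwd expects) is md5 of the password concatenated with the user name, prefixed with the literal string "md5". In practice you would copy the value straight from pg_shadow; the sketch below just shows the format, with a hypothetical user and password:

```shell
# Build one pool_passwd line by hand: "username:md5<hash>"
# where <hash> = md5(password || username), as stored in pg_shadow
USER=appuser           # hypothetical user, not from the post
PASS=secretpw          # hypothetical password
MD5=$(printf '%s%s' "$PASS" "$USER" | md5sum | cut -d' ' -f1)
echo "${USER}:md5${MD5}"     # one line of pool_passwd
```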

authentication_timeout = 60

# Timeout (in seconds) for pgpool client authentication.

num_init_children = 32

# The number of pgpool child processes preforked at startup; this also caps the number of concurrent client connections.

max_pool = 3

# The maximum number of cached backend connections each pgpool child process keeps (one pool per distinct user/database pair).
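Taken together, these two parameters bound how many backend connections pgpool may open: num_init_children * max_pool. PostgreSQL's max_connections should be at least this large (plus superuser reserved slots). A quick check with the values above:

```shell
# Upper bound on backend connections pgpool can open (values from this post)
NUM_INIT_CHILDREN=32
MAX_POOL=3
echo $((NUM_INIT_CHILDREN * MAX_POOL))   # prints 96
```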

pid_file_name = '/opt/PostgreSQL92/data/pgpool.pid'

# Full path of the file in which pgpool records its process ID.

replication_mode = off

# This parameter must be off if pgpool runs on top of Slony or streaming replication. Enable it only if you want pgpool's own replication instead of Slony/streaming.

load_balance_mode = off

# Turn this parameter on for load balancing, i.e. to distribute SELECT queries between the primary and slave servers and spread the read workload across both.

white_function_list = ''

# Read-only functions/procedures can be listed here; queries calling them are safe to load-balance.
black_function_list = 'nextval,setval'

# Functions/procedures that write data must be listed here; queries calling them (e.g. nextval, setval) are always sent to the primary.

recovery_user = 'postgres'

# User name used for online recovery.

recovery_password = 'adminedb'

# Password of the recovery user. The above two parameters apply only to PCP commands such as pcp_recovery_command, pcp_attach_node and pcp_detach_node.

recovery_1st_stage_command = 'basebackup'

# Script executed in the first stage of online recovery; this is what performs the failback.

You can find more details on these parameters in the pgpool-II documentation.

Pgpool start and stop commands

1. Start pgpool using the command below:
./pgpool -f /opt/PostgreSQL92/etc/pgpool.conf -F /opt/PostgreSQL92/etc/pcp.conf -a /opt/PostgreSQL92/etc/pool_hba.conf -d -D -n >/opt/PostgreSQL92/data/pgpool.log 2>&1 &
2. Check the pgpool status:
postgres~/bin> ps -ef | grep pgpool
postgres  4288  3754  0 07:55 pts/1    00:00:00 ./pgpool -f /opt/PostgreSQL92/etc/pgpool.conf -F /opt/PostgreSQL92/etc/pcp.conf -a /opt/PostgreSQL92/etc/pool_hba.conf -d -D -n
postgres  4289  4288  0 07:55 pts/1    00:00:00 pgpool: wait for connection request
3. Connect to a database using pgpool port 9999:
postgres@localhost:~/bin> psql -p 9999
psql.bin (9.2.1)
Type "help" for help.

postgres=# show port;
 port
------
 5432
(1 row)
4. You can stop pgpool using the command below:
./pgpool -f /opt/PostgreSQL92/etc/pgpool.conf -F /opt/PostgreSQL92/etc/pcp.conf -a /opt/PostgreSQL92/etc/pool_hba.conf -m fast stop
5. You can reload pgpool using the command below:
./pgpool -f /opt/PostgreSQL92/etc/pgpool.conf -F /opt/PostgreSQL92/etc/pcp.conf -a /opt/PostgreSQL92/etc/pool_hba.conf reload
Pgpool Failback

Failback is a pgpool feature that re-initiates the failed master as a new slave of the new master. To perform this operation, follow the steps below.
1) Go to the pgpool installer location on the new master server and build the pgpool-recovery extension:
cd /tmp/pgpool-II-3.2.1/sql/pgpool-recovery
export PATH=/opt/PostgreSQL92/bin:$PATH
make install
2) Execute the pgpool-recovery.sql file in the "template1" database of the new master server.
3) Prepare the "basebackup" and "pgpool_remote_start" scripts (like the attached scripts) and place them in the new master's data directory.
4) Include the following parameters in pgpool.conf on the host where the pgpool instance is running. Once the modifications are done, reload pgpool with the command below:
/opt/PostgreSQL92/bin/pgpool -f /opt/PostgreSQL92/etc/pgpool.conf -F /opt/PostgreSQL92/etc/pcp.conf -a /opt/PostgreSQL92/etc/pool_hba.conf reload
Pgpool Parameters
recovery_user = 'postgres'
recovery_password = 'adminedb'
recovery_1st_stage_command = 'basebackup'
Note: the recovery_user credentials must match the credentials in pcp.conf.
5) As a final step, execute the pcp_recovery_node command from either the new master or the new slave, as shown below.
/opt/PostgreSQL92/bin/pcp_recovery_node -d 1  9898 postgres adminedb 0
DEBUG: send: tos="R", len=46
DEBUG: recv: tos="r", len=21, data=AuthenticationOK
DEBUG: send: tos="D", len=6
DEBUG: recv: tos="c", len=20, data=CommandComplete
DEBUG: send: tos="X", len=4
If we get the above kind of message from the recovery command, then our setup is complete.

pcp_recovery_node options:
-d, --debug : enable debug messages (optional)
timeout     : connection timeout in seconds; the command exits on timeout
hostname    : pgpool-II hostname
port#       : PCP port number
username    : username for PCP authentication
password    : password for PCP authentication
nodeID      : ID of the node to recover
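Mapping those positional arguments onto the example, a fully spelled-out invocation looks like the sketch below. Here pgpool_host is a placeholder (the real host name does not appear in this post), and the credentials are the ones configured in pcp.conf:

```shell
# Placeholder invocation of pcp_recovery_node; pgpool_host is an assumption
TIMEOUT=1
PGPOOL_HOST=pgpool_host   # hostname of the pgpool-II server
PCP_PORT=9898
PCP_USER=postgres
PCP_PASS=adminedb
NODE_ID=0                 # ID of the node to re-initiate as the new slave
CMD="/opt/PostgreSQL92/bin/pcp_recovery_node -d $TIMEOUT $PGPOOL_HOST $PCP_PORT $PCP_USER $PCP_PASS $NODE_ID"
echo "$CMD"               # inspect the command before running it for real
```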
Below are the contents of the attached scripts (host names and a few file names are left blank).

Script 1 - prepare the recovered node's data directory and recovery.conf:

export PATH=/opt/PostgreSQL92/bin:$PATH
export LD_LIBRARY_PATH=/opt/PostgreSQL92/lib:$LD_LIBRARY_PATH
export PGPASSWORD=adminedb
mv /opt/PostgreSQL92/data1 /opt/PostgreSQL92/data_old
mv /opt/PostgreSQL92/backup /opt/PostgreSQL92/data1
echo "restore_command = 'cp /var/arch_test/%f %p'" > /opt/PostgreSQL92/data1/recovery.conf
echo "standby_mode = 'on'" >> /opt/PostgreSQL92/data1/recovery.conf
echo "primary_conninfo = 'host= port=5433 user=replication password=replication application_name=Async_Rep'" >> /opt/PostgreSQL92/data1/recovery.conf
rm -f /opt/PostgreSQL92/data1/_Promote_Me_Primary_Is_Down_5433
echo "trigger_file = '/opt/PostgreSQL92/data1/_Promote_Me_Primary_Is_Down_5433'" >> /opt/PostgreSQL92/data1/recovery.conf

Script 2 - basebackup (takes a base backup on the master and ships it to the standby):

export PATH=/opt/PostgreSQL92/bin:$PATH
export LD_LIBRARY_PATH=/opt/PostgreSQL92/lib:$LD_LIBRARY_PATH
export PGPASSWORD=adminedb
echo "archive_command = 'cp %p /var/arch_test/%f && scp %p postgres@'" >> /opt/PostgreSQL92/data1/postgresql.conf
pg_ctl reload
psql -c "select pg_start_backup('pgpool_recovery')" -p 5433
ssh postgres@ mkdir /opt/PostgreSQL92/backup
rsync -C -a --exclude '' --exclude 'postmaster.opts' --exclude 'recovery.done' /opt/PostgreSQL92/data1/ postgres@
psql -c "select pg_stop_backup()" -p 5433
rm -f /opt/PostgreSQL92/data/recovery.done
ssh postgres@ sh /opt/PostgreSQL92/
ssh postgres@ pg_ctl -D /opt/PostgreSQL92/data1 start

Dinesh Kumar

