
::Pg_dump With Load Balance::

Hi,

Below are the steps to run pg_dump with load balancing in PostgreSQL: the dump is started only when the remote server's load average is normal, paused when the load climbs too high, and resumed once it drops back.

REQUIRED TABLE ON REMOTE [the server pg_dump runs against]:-
==============================
CREATE TABLE LOAD_AVG(AVG REAL);
INSERT INTO LOAD_AVG VALUES(0.0);
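
As a quick sanity check (using the same connection settings as the rest of this post), you can read the value back with the same query the C program below runs:

psql -p 5432 -U postgres -d postgres -c "SELECT round(avg) FROM public.load_avg;"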

REQUIRED SHELL SCRIPT ON REMOTE (SCHEDULED TO RUN EVERY MINUTE):-
=======================================================
#!/bin/bash
PGBIN="/opt/PostgreSQL/9.0/bin"
PORT=5432
USER="postgres"
PASSWORD="postgres"
DATABASE="postgres"
# First field of /proc/loadavg is the one-minute load average
LOAD=$(cat /proc/loadavg | awk '{print $1}')
export PGPASSWORD=$PASSWORD
$PGBIN/psql -p $PORT -U $USER -d $DATABASE -c "UPDATE LOAD_AVG SET AVG='$LOAD'"
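
To refresh the value every minute, schedule the script from cron on the remote machine. A sample crontab entry, assuming the script is saved as /usr/local/bin/update_load_avg.sh and marked executable (the path is just an example; adjust it to wherever you keep the script):

* * * * * /usr/local/bin/update_load_avg.sh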


REQUIRED C PROGRAM ON LOCAL:-
================================
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include "/opt/PostgreSQL/9.0/include/libpq-fe.h"
int main(int argc,char *argv[])
{
FILE *fp,*log;
char pid[10],cmd[30],con_string[110],pg_dump_cmd[1000];
int status,remote_avg,dump_initiated=0,dump_stopped=0;
PGconn *conn;
PGresult *res;
log=fopen("/tmp/remote_dump.log","w");
if(log==NULL)
{
fprintf(stderr,"\nNot able to open the log file /tmp/remote_dump.log");
return 1;
}
if(argc<6)
{
printf("\nProvide the following Arguments ");
printf("\nhostaddress port dbname user password");
return 1;
}
sprintf(con_string,"hostaddr=%s port=%s dbname=%s user=%s password=%s",argv[1],argv[2],argv[3],argv[4],argv[5]);
conn=PQconnectdb(con_string);
if(PQstatus(conn)==CONNECTION_BAD)
{
fprintf(log,"\nNot able to connect to the database %s",PQerrorMessage(conn));
return 1;
}
while(1)
{
/* Fetch the remote machine's one-minute load average from public.load_avg */
res=PQexec(conn,"BEGIN");
PQclear(res);
res=PQexec(conn,"DECLARE find_cpu_load CURSOR FOR select round(avg) from public.load_avg");
if(PQresultStatus(res)!=PGRES_COMMAND_OK)
{
fprintf(stderr,"Error %s",PQerrorMessage(conn));
PQclear(res);
}
PQclear(res);
res=PQexec(conn,"FETCH ALL in find_cpu_load");
remote_avg=atoi(PQgetvalue(res,0,0));
PQclear(res);
res=PQexec(conn,"END");
PQclear(res);
if(!dump_initiated)
{
if(remote_avg<10)
{
/* Start pg_dump only if one is not already running */
pid[0]='\0';
fp=popen("/sbin/pidof -s pg_dump","r");
if(fgets(pid,10,fp)!=NULL)
pid[strlen(pid)-1]='\0';
pclose(fp);
if(strlen(pid)<2)
{
sprintf(pg_dump_cmd,"/opt/PostgreSQL/9.0/bin/pg_dump -f /opt/PostgreSQL/9.0/data/dump.sql -v -h %s -p %s -U %s %s >/tmp/remote_dump1.log 2>&1 &",argv[1],argv[2],argv[4],argv[3]);
system(pg_dump_cmd);
pid[0]='\0';
fp=popen("/sbin/pidof -s pg_dump","r");
if(fgets(pid,10,fp)!=NULL)
pid[strlen(pid)-1]='\0';
pclose(fp);
dump_initiated=1;
}
while (waitpid(-1, &status, WNOHANG) > 0);
}
else
{
fprintf(log,"\nHigh Load Average .. So, can't start pg_dump ... Load is %d",remote_avg);
PQfinish(conn);
fclose(log);
return 1;
}
}
else
{
pid[0]='\0';
fp=popen("/sbin/pidof -s pg_dump","r");
if(fgets(pid,10,fp)!=NULL)
pid[strlen(pid)-1]='\0';
pclose(fp);
if(strlen(pid)<2)
{
fprintf(log,"\nDump has completed .. hence closing the connections ... ");
PQfinish(conn);
fclose(log);
return 0;
}
if(remote_avg<10&&!dump_stopped)
{
fprintf(log,"\nSleeping 10 seconds; load is normal ==> %d",remote_avg);
sleep(10);
}
else if(remote_avg>=10&&!dump_stopped)
{
fprintf(log,"\nSleeping 10 Seconds **** Load is high **** .. so stopping the dump; Load is %d, pid is %d",remote_avg,atoi(pid));
sleep(10);
dump_stopped=1;
sprintf(cmd,"kill -s STOP %d",atoi(pid));
system(cmd);
}
else if(remote_avg>=10&&dump_stopped)
{
fprintf(log,"\nSleeping 10 More Seconds Load is high ... Load is %d",remote_avg);
sleep(10);
}
else
{
fprintf(log,"\nLoad is normal again, so resuming the dump process ... load is %d",remote_avg);
dump_stopped=0;
sprintf(cmd,"kill -s CONT %d",atoi(pid));
system(cmd);
}
}
while (waitpid(-1, &status, WNOHANG) > 0 );
}
fclose(log);
}

HOW TO COMPILE ??
===============
1. export LD_LIBRARY_PATH=/opt/PostgreSQL/9.0/lib/
2. gcc -o mg MG.c -L/opt/PostgreSQL/9.0/lib -lpq -lssl -lcrypto

HOW TO RUN ??
============
./mg 172.24.35.67 5432 postgres postgres postgres
[host] [port] [dbname] [user] [password]
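
For reference, the pause/resume mechanism the program relies on is ordinary POSIX job control, and you can try it by hand against a running pg_dump (assuming pidof finds exactly one pg_dump process):

# Suspend the dump; it keeps its connection open but stops consuming CPU and I/O
kill -s STOP $(pidof -s pg_dump)
# Resume the dump once the load drops back to normal
kill -s CONT $(pidof -s pg_dump)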
