
Posts

PostgreSQL High Performance Cookbook

Sharing the knowledge I have gained over the last six years.

I am so glad to be part of PostgreSQL High Performance Cookbook, where I have shared the knowledge I have gained working with the PostgreSQL database.


Having worked with PostgreSQL for the last six years, I have gained a great deal of knowledge about database management systems. As a DBA for several years, I explored many tools that work great with the PostgreSQL database. During this six-year journey, I got the chance to meet many wonderful people who guided me very well. I would like to say thanks to everyone who taught me the PostgreSQL database, in soft or hard ways :-). I would also like to thank every PostgreSQL developer, author, and blogger from whom I have learned many more things.

Finally, thanks to the OpenSCG team, who always treated me as a brother rather than an employee. :-)
Thanks to my wife, Manoja, for her wonderful support, and to my friend Baji Shaik for his help in writing the content.

Recent posts

pgBucket v1.0 is ready

pgBucket v1.0 (a concurrent job scheduler for PostgreSQL) is released. This version is more stable and fixes the issues observed in the previous beta releases.
Highlights of this tool:
- Schedule OS/DB level jobs
- Cron-style syntax (schedules down to the second)
- On-the-fly job modifications
- Instant daemon status by retrieving the live job queue and job hash
- Enough CLI options to deal with all the configured/scheduled jobs

Here is the URL for the pgBucket build/usage instructions: https://bitbucket.org/dineshopenscg/pgbucket
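To illustrate what a seconds-resolution, cron-style schedule means, here is a minimal Python sketch of matching a six-field expression (seconds minute hour day month weekday). This is only an illustration of the idea; it is not pgBucket's actual parser or syntax, and the function names are mine.

```python
# Minimal sketch: match a six-field, seconds-resolution cron expression.
# Illustrative only -- not pgBucket's actual implementation or syntax.
from datetime import datetime

def field_matches(spec, value):
    """Match one cron field: '*' or a comma-separated list of integers."""
    if spec == "*":
        return True
    return value in {int(v) for v in spec.split(",")}

def cron_matches(expr, when):
    """Return True if datetime `when` matches the six-field expression."""
    sec, minute, hour, day, month, weekday = expr.split()
    parts = [
        (sec, when.second), (minute, when.minute), (hour, when.hour),
        (day, when.day), (month, when.month), (weekday, when.weekday()),
    ]
    return all(field_matches(spec, value) for spec, value in parts)

# Run at second 30 of every minute (step syntax like "*/5" omitted for brevity):
print(cron_matches("30 * * * * *", datetime(2017, 1, 1, 12, 0, 30)))  # True
```

A real scheduler would evaluate such an expression against the clock every second and enqueue any job whose schedule matches.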
I hope this tool will help PostgreSQL users get things done on schedule. Note: this tool requires C++11 (gcc version >= 4.9.3) to compile.
--Dinesh

pgBucket beta version is ready

Hi Everyone,

I would like to inform you all that the pgBucket (a simple concurrent job scheduler for PostgreSQL) beta version is ready, with an enhanced architecture and new features.

It would be great if you could share your input and suggestions, which will help me make this tool stable.

Thank you all in advance.

--Dinesh

pgBucket - A new concurrent job scheduler

Hi All,

I'm so excited to announce my first contribution tool for PostgreSQL. I have been working with PostgreSQL since 2011, and I'm really impressed with such a nice database.

In the last two years I started a few projects, such as pgHawk (a beautiful report generator for Openwatch) and pgOwlt (a CUI monitoring tool; it is still under development, and in case you are interested to see what it is, I am attaching an image here for you),


and pgBucket (which I'm going to talk about), and along the way I learned a lot about PostgreSQL/Linux internals.

Using pgBucket, we can schedule jobs easily and also maintain them using its CLI options. We can update/insert/delete jobs online. Here is its architecture, which gives you a basic idea of how it works.
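As a rough illustration of the concurrent-scheduler idea (a toy sketch, not pgBucket's actual C++ implementation), a dispatch loop can be modeled as a job queue consumed by a pool of worker threads:

```python
# Toy concurrent job queue: worker threads consume jobs from a shared queue,
# similar in spirit to a concurrent scheduler's dispatch loop.
# Illustrative only -- not pgBucket's actual architecture.
import queue
import threading

def worker(jobs, results):
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            jobs.task_done()
            return
        name, func = job
        results.append((name, func()))
        jobs.task_done()

jobs = queue.Queue()
results = []
workers = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(2)]
for w in workers:
    w.start()

for i in range(4):               # enqueue four trivial "jobs"
    jobs.put((f"job{i}", lambda i=i: i * i))
for _ in workers:                # one sentinel per worker
    jobs.put(None)
jobs.join()                      # wait until every job is processed
print(sorted(results))  # [('job0', 0), ('job1', 1), ('job2', 4), ('job3', 9)]
```

In the real tool the "jobs" would be OS commands or database statements, and the queue would be fed by the schedule matcher rather than a loop.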


Yeah, I know there are other good job schedulers available for PostgreSQL. I haven't tested them and am not comparing them with this tool, as I implemented it in my own way.
Features:
- OS/DB jobs
- Cron-style syntax
- Online job modifications
- Required CLI options

Parallel Operations With pl/pgSQL

Hi. I am pretty sure there is a better heading for this post; for now, I am going with this one. If you could suggest a proper heading, I will update it :-)

OK, let me explain the situation, and then I will show you what I am trying to do here and how I did it.

The situation is this: we have a table on which we need to run an UPDATE on "R" number of records. The update query uses some joins to get the desired result and then updates the table. Processing these "R" records takes "H" hours, and it also puts load on the production server. So we planned to run this UPDATE as a batch process.

Per batch, we take "N" records, and processing one batch UPDATE takes "S" seconds. With this batch process, the production server is pretty stable and doing great. So we then planned to run these batch updates in parallel: "K" sessions, each running UPDATEs on a different set of records. Of course, we could also increase the batch size here, but we want to use much…
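The batching idea above can be sketched in Python (a hypothetical illustration, not the pl/pgSQL code itself; the function names are mine, and in practice each range would become an UPDATE ... WHERE id BETWEEN start AND end run from one of the "K" sessions):

```python
# Sketch: split "R" records into batches of "N" and distribute the batches
# round-robin across "K" parallel worker sessions. Each tuple is a
# (start_id, end_id) range that one batch UPDATE would process.

def batch_ranges(total_records, batch_size):
    """Yield (start, end) id ranges covering ids 1..total_records."""
    start = 1
    while start <= total_records:
        end = min(start + batch_size - 1, total_records)
        yield (start, end)
        start = end + 1

def assign_to_workers(ranges, workers):
    """Round-robin the batch ranges across the given number of workers."""
    queues = [[] for _ in range(workers)]
    for i, r in enumerate(ranges):
        queues[i % workers].append(r)
    return queues

# Example: R = 10 records, N = 3 per batch, K = 2 sessions
queues = assign_to_workers(batch_ranges(10, 3), 2)
print(queues)  # [[(1, 3), (7, 9)], [(4, 6), (10, 10)]]
```

With non-overlapping id ranges, the "K" sessions never update the same rows, so they do not block each other on row locks.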

Heterogeneous Database Sync

Hi

As part of an Oracle to PostgreSQL migration, I came across the need to implement a trigger on Oracle that syncs its data to PostgreSQL. I have tried it with a simple table, as shown below, which is hopefully helpful to others.

Follow this link to configure a heterogeneous dblink to Postgres.

I believe the approach below works effectively with Oracle tables that have a primary key.
If a table has no primary key, then the UPDATE and DELETE statements will fire multiple times in Postgres, which leads to performance issues.

ORACLE
CREATE TABLE test(t INT PRIMARY KEY);

CREATE OR REPLACE TRIGGER testref
AFTER INSERT OR UPDATE OR DELETE ON test
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  c NUMBER;
  n NUMBER;
BEGIN
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@pglink;
  IF INSERTING THEN
    DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'INSERT INTO test VALUES(' || :NEW.t || ');');
    n := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
  ELSIF DELETING THEN
    DBMS_HS_PASSTHROUGH.PARSE@pgl…