

pgBucket 2.0 Beta Is Ready

I am glad to announce the pgBucket 2.0 beta version, which evolved from version 1.0. Below are this version's feature highlights; I hope everybody likes them:

Event jobs (cascading jobs)
Dedicated configuration file
Extended table print
Auto job disable
Custom job failure
Dedicated connection pooler
Improved daemon stability and coding standards

Please find the feature review at the URL below. dineshopenscg/pgbucket/ overview --Dinesh

PostgreSQL High Performance Cookbook

Sharing the knowledge I have gained over the last 6 years. I am so glad to be part of PostgreSQL High Performance Cookbook, where I have discussed all the knowledge I have gained from the PostgreSQL database. Working with PostgreSQL for the last 6 years, I have learned a great deal about database management systems. Being a DBA for several years, I explored many tools that work great with the PostgreSQL database. During this 6-year journey, I got the chance to meet many wonderful people who guided me very well. I would like to say thanks to everyone who taught me the PostgreSQL database, in soft or hard ways :-). I would also like to thank every PostgreSQL developer, author, and blogger from whom I have learned so much. Finally, thanks to the OpenSCG team, who always treated me more as a brother than an employee. :-) Thanks to my wife Manoja for her wonderful support, and to my friend Baji Shaik for his help in writing the content.

pgBucket v1.0 is ready

pgBucket v1.0 (a concurrent job scheduler for PostgreSQL) is released. This version is more stable and fixes the issues that were observed in the previous beta releases. Highlights of this tool:

Schedule OS/DB level jobs
Cron-style syntax (schedules down to the second)
On-the-fly job modifications
Instant daemon status by retrieving the live job queue and job hash
Enough CLI options to deal with all the configured/scheduled jobs

Here is the URL for the pgBucket build/usage instructions. I hope this tool will help PostgreSQL users get things done on schedule. Note: This tool requires C++11 (gcc version >= 4.9.3) to compile. --Dinesh
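The "cron style syntax, schedule up to seconds" idea is generic enough to sketch. Below is a minimal, hypothetical Python matcher for a six-field cron expression (seconds first). This is not pgBucket's actual parser, just an illustration of the concept; the day-of-week mapping in particular is an assumption:

```python
from datetime import datetime

def parse_field(field, lo, hi):
    """Expand one cron field ('*', 'a-b', '*/n', 'a,b,c') into a set of ints."""
    values = set()
    for part in field.split(","):
        if part == "*":
            values.update(range(lo, hi + 1))
        elif part.startswith("*/"):
            values.update(range(lo, hi + 1, int(part[2:])))
        elif "-" in part:
            a, b = map(int, part.split("-"))
            values.update(range(a, b + 1))
        else:
            values.add(int(part))
    return values

def matches(expr, ts):
    """expr: 'sec min hour dom mon dow' (six fields, seconds field first)."""
    fields = expr.split()
    bounds = [(0, 59), (0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]
    # Assumption: dow uses Python's weekday() numbering (0 = Monday).
    actual = [ts.second, ts.minute, ts.hour, ts.day, ts.month, ts.weekday()]
    return all(actual[i] in parse_field(fields[i], *bounds[i])
               for i in range(6))

# '*/15 * * * * *' fires every 15 seconds.
print(matches("*/15 * * * * *", datetime(2020, 1, 1, 0, 0, 30)))
```

A real scheduler would additionally wake up once per second (or compute the next firing time) and dispatch matching jobs to worker threads, which is where the "concurrent" part of pgBucket comes in.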

pgBucket beta2 is ready

Hi everyone, I would like to inform you all that the pgBucket beta2 version [a simple concurrent job scheduler for PostgreSQL] is ready, with more stability. Thank you all in advance for your inputs, comments, and suggestions. --Dinesh

pgBucket beta version is ready

Hi everyone, I would like to inform you all that the pgBucket [a simple concurrent job scheduler for PostgreSQL] beta version is ready, with an enhanced architecture and new features. It would be great if you could share your inputs and suggestions, which will help me make this tool stable. Thank you all in advance. --Dinesh

pgBucket - A new concurrent job scheduler

Hi all, I'm so excited to announce my first contributed tool for PostgreSQL. I have been working with PostgreSQL since 2011, and I'm really impressed with such a nice database. In the last 2 years I started a few projects, like pgHawk [a beautiful report generator for Openwatch], pgOwlt [CUI monitoring; it is still under development, and in case you are interested to see what it is, I am attaching an image here for you], and pgBucket [which I'm going to talk about], and I learned a lot about PostgreSQL and Linux internals. Using pgBucket we can schedule jobs easily, and we can also maintain them using its CLI options. We can update/insert/delete jobs online. Here is its architecture, which gives you a basic idea of how it works. Yes, I know there are other good job schedulers available for PostgreSQL. I haven't tested them, and I am not comparing them with this one, as I implemented it in my own way. Features are: OS/DB jobs, cron-style syntax, online job modifications

Parallel Operations With pl/pgSQL

Hi, I am pretty sure there is a better heading for this post; for now, I am going with this one. If you could suggest a proper heading, I will update it :-)

OK, let me explain the situation, and then what I am trying to do here and how I did it. The situation is: we have a table on which we need to run an UPDATE on "R" number of records. The update query uses some joins to get the desired result and then updates the table. To process these "R" records, it takes "H" hours, and it puts load on the production server.

So we planned to run this UPDATE as a batch process. Per batch, we take "N" records, and processing one batch UPDATE takes "S" seconds. With this batch process, the production server is pretty stable and doing great.

So we planned to run these batch updates in parallel, that is, "K" sessions, each running UPDATEs on a different set of records. Of course, we could also increase the batch size here. But
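To make the bookkeeping concrete, here is a back-of-the-envelope sketch in Python. The post keeps R, N, S, and K symbolic, so all figures below are hypothetical, and the estimate ignores per-batch overhead, lock contention between sessions, and autovacuum pressure:

```python
import math

# Hypothetical figures; the post uses R, N, S, K symbolically.
R = 1_000_000   # total records to update
N = 10_000      # records per batch
S = 20          # seconds to process one batch UPDATE
K = 4           # parallel sessions

batches = math.ceil(R / N)                      # number of batch UPDATEs
serial_seconds = batches * S                    # one session, batches back to back
parallel_seconds = math.ceil(batches / K) * S   # batches split evenly over K sessions

print(batches, serial_seconds, parallel_seconds)
```

With these made-up numbers, 100 batches run serially take about 2000 seconds, while 4 sessions bring the wall-clock time down to roughly 500 seconds, which is the whole motivation for running the batches in parallel. In practice the K sessions must partition the rows disjointly (for example by ranges of the primary key) so they never fight over the same row locks.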