A foreign data wrapper (FDW) is a library that lets PostgreSQL understand information from a heterogeneous database. For example, PostgreSQL cannot read MySQL's data structures directly, since the two engines use different storage mechanisms. To access data from another database engine, we need to install the corresponding FDW (Foreign Data Wrapper) into the PostgreSQL library location.

Please find the below link, which lists all the available Foreign Data Wrappers.

Here we have chosen a MySQL table as a source for PostgreSQL. Below are the steps.

1) Install mysql and mysql-devel using yum.

yum install mysql*
2) Install PostgreSQL 9.1 through the EnterpriseDB graphical installer.

3) Get the MySQL FDW from the below link.
4) Set the PATH as shown below.
export PATH=<PostgreSQL 9.1 Bin>:<Mysql Bin>:$PATH;
[root@localhost mysql_fdw-master]# echo $PATH 
5) Run make and make install.
[root@localhost mysql_fdw-master]# make USE_PGXS=1
[root@localhost mysql_fdw-master]# make USE_PGXS=1 install
6) Create the extension and the foreign server as below.
postgres=# CREATE EXTENSION mysql_fdw;

postgres=# CREATE SERVER mysql_svr FOREIGN DATA WRAPPER mysql_fdw OPTIONS (address '', port '3306'); -- MySQL's default port is 3306
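Once the server is defined, you can sanity-check it from the system catalogs. A quick sketch using standard PostgreSQL catalog views (the names mysql_fdw and mysql_svr match the statements above):

```sql
-- Confirm the extension is installed.
SELECT extname FROM pg_extension WHERE extname = 'mysql_fdw';

-- Confirm the foreign server and its connection options.
SELECT srvname, srvoptions FROM pg_foreign_server WHERE srvname = 'mysql_svr';
```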
7) Create a user mapping from the PUBLIC role to the MySQL root user.

postgres=# CREATE USER MAPPING FOR public
SERVER mysql_svr
OPTIONS (username 'root', password 'root');
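To confirm the mapping took effect, the standard pg_user_mappings system view shows which local roles map to which remote credentials; a minimal sketch, assuming the server name from step 6:

```sql
-- Shows the local role, server, and (if permitted) the remote options.
SELECT usename, srvname, umoptions
FROM pg_user_mappings
WHERE srvname = 'mysql_svr';
```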
8) Create a foreign table as below.
postgres=# CREATE FOREIGN TABLE TEST(T INT) SERVER mysql_svr OPTIONS(TABLE 'DINESH.XYZ'); -- DINESH is the database & XYZ is the table.
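If you later need to enumerate all the foreign tables you have created, the standard information_schema view works for any FDW; a minimal sketch:

```sql
-- List every foreign table and the server each one belongs to.
SELECT foreign_table_schema, foreign_table_name, foreign_server_name
FROM information_schema.foreign_tables;
```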
9) From MySQL:
mysql> \u DINESH
Database changed
mysql> SELECT * FROM XYZ;
+------+
| T    |
+------+
|    1 |
|    2 |
|    3 |
+------+
3 rows in set (0.00 sec)
10) From PostgreSQL:
postgres=# select * from test;
 t
---
 1
 2
 3
(3 rows)

postgres=# explain analyze select * from test;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Foreign Scan on test  (cost=10.00..13.00 rows=3 width=4) (actual time=0.211..0.212 rows=3 loops=1)
   Local server startup cost: 10
   MySQL query: SELECT * FROM DINESH.XYZ
 Total runtime: 0.675 ms
(4 rows)

Dinesh Kumar


