
RMTK

Hi,

Here are the steps to set up and run the RunMTK (Migration Toolkit) utility.

How to Migrate Oracle Sample Data using RunMTK
++++++++++++++++++++++++++++++++++++++++++++++
Step 1:
-------
Edit toolkit.properties with the source (Oracle) and target (Advanced Server) connection details:
-bash-3.2$ pwd
/opt/PostgresPlus/9.1AS/etc
-bash-3.2$ vi toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@localhost:1521:DELTA
SRC_DB_USER=ar_test
SRC_DB_PASSWORD=ar_test
TARGET_DB_URL=jdbc:edb://localhost:5465/edb
TARGET_DB_USER=enterprisedb
TARGET_DB_PASSWORD=adminedb
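
Before moving on to the listener, it can help to confirm that the target Advanced Server accepts connections with the credentials configured above. A minimal check, assuming psql (or edb-psql, depending on the Advanced Server version) is on the PATH and the instance listens on port 5465 as configured:

-bash-3.2$ psql -d edb -U enterprisedb -p 5465 -c "SELECT version();"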

Step 2:
-------
Start the Oracle Listener

listener.ora
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DELTA =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
      )
    )
  )

SID_LIST_DELTA =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DELTA)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
    )
  )
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


-bash-3.2$ lsnrctl start

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 04-APR-2012 21:31:18

Copyright (c) 1991, 2009, Oracle. All rights reserved.

TNS-01106: Listener using listener name LISTENER has already been started
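
The TNS-01106 message above just means the listener was already running. If listener.ora was edited while the listener was up, the new configuration can be re-read without a restart; this is standard lsnrctl usage (optionally followed by the listener name, e.g. DELTA):

-bash-3.2$ lsnrctl reload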


Step 3:
-------
Check the Listener Status

-bash-3.2$ netstat -nuptl|grep 1521
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 :::1521 :::* LISTEN 4980/tnslsnr
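
As an alternative to netstat, lsnrctl itself can report the listener state and the services it is handling; the DELTA SID should appear if the static SID_LIST entry above was picked up:

-bash-3.2$ lsnrctl status
-bash-3.2$ lsnrctl services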


Step 4:
-------
Run the Migration Toolkit in data-only mode ("ar_test" is the username/schema name in Oracle):

-bash-3.2$ ./runMTK.sh -dataOnly ar_test
Source database connectivity info...
conn =jdbc:oracle:thin:@localhost:1521:DELTA
user =ar_test
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5465/edb
user =enterprisedb
password=******
Connecting with source Oracle database server...
Connecting with target EnterpriseDB database server...
Importing redwood schema ar_test...
Loading Table Data in 8 MB batches...
Loading Large Objects into table: TEST ...
[TEST] Migrated 1 rows.
[TEST] Table Data Load Summary: Total Time(s): 0.152 Total Rows: 1
Data Load Summary: Total Time (sec): 0.239 Total Rows: 1 Total Size(MB): 0.0

Schema ar_test imported successfully.

Migration process completed successfully.

Migration logs have been saved to /opt/PostgresPlus/9.0AS/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 1 out of 1

Total objects: 1
Successful count: 1
Failure count: 0
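
The same runMTK.sh invocation can be narrowed to a subset of objects when a full schema migration is not needed. A sketch, assuming this MTK release supports the -tables switch (check ./runMTK.sh -help or the Migration Toolkit documentation for the exact options in your version):

-bash-3.2$ ./runMTK.sh -dataOnly -tables TEST ar_test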

Step 5:
-------
Check the target database table entries.
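
A quick way to verify the load is to count the migrated rows from the target side. A minimal check, assuming psql is available and that MTK created the objects as lowercase ar_test.test (quote the identifiers if they were created in upper case):

-bash-3.2$ psql -d edb -U enterprisedb -p 5465 -c "SELECT count(*) FROM ar_test.test;"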

--Dinesh
