Thursday, July 9, 2015

Parallel Operations With pl/pgSQL


Hi,

I am pretty sure there is a better heading for this post; for now, I am going with this one. If you can suggest a proper heading, I will update it :-)

OK, let me explain the situation first, and then what I am trying to do here and how I did it.

The situation is,

  We have a table on which we need to run an UPDATE touching "R" records. The UPDATE query uses some joins to get the desired result and updates the table. Processing all "R" records takes "H" hours, and it also puts load on the production server. So we planned to run this UPDATE as a batch process: each batch takes "N" records and needs "S" seconds to process.

  With the above batch process, the production server is pretty stable and doing great. So we planned to run these batch updates in parallel: "K" sessions, each running UPDATEs on a different set of records. Of course, we could also increase the batch size here, but we want to use more CPUs to complete all these UPDATEs as soon as possible.
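Just for illustration, each batch UPDATE has roughly the below shape (the table and column names here are made up; the real query joins a couple of tables to compute the new values):

UPDATE orders o
   SET status = c.new_status
  FROM order_changes c
 WHERE o.order_id = c.order_id
   AND o.order_id BETWEEN 1 AND 1000;  -- one batch of "N" records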

The problem is,

  So, as I said, we need to run multiple UPDATEs on different sets of records in parallel. But how can one session communicate to the other sessions which batch of records it is working on?
I mean,
If one session is running updates on records 1 to 1000, how does a second session know that those records are already being processed?
If the second session knows this, it can start on 1001 to 2000 in parallel. This is the problem I am trying to solve here.

I am not sure whether this is the optimal solution, but it works for my requirement. :-)  Let me know if you see any problems with it.
Object Definitions
                      Table "public.test"
 Column |  Type   |                     Modifiers
--------+---------+----------------------------------------------------
 t      | text    |
 i      | boolean |
 seq    | bigint  | not null default nextval('test_seq_seq'::regclass)

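I have not shown the original DDL, but from the above "\d" output the objects can be recreated with something like this ("testing" is a separate sequence, which the parallel sessions will use to coordinate their batches):

CREATE SEQUENCE test_seq_seq;
CREATE TABLE test (
    t   text,
    i   boolean,
    seq bigint NOT NULL DEFAULT nextval('test_seq_seq')
);

-- Coordination sequence, shared by all the parallel sessions.
CREATE SEQUENCE testing;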
postgres=# INSERT INTO test VALUES(generate_series(1, 9000), false, generate_series(1, 9000));
INSERT 0 9000

postgres=# \ds testing
           List of relations
 Schema |  Name   |   Type   |  Owner
--------+---------+----------+----------
 public | testing | sequence | postgres
(1 row)


CREATE OR REPLACE FUNCTION public.update_test_parallel(batch integer)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
DECLARE
VAR BIGINT;
BEGIN

-- This sleep is only for the demo, so that the sessions visibly overlap.
--
PERFORM pg_sleep(10);

PERFORM pg_advisory_lock(-1234);

        -- Claim the next batch: VAR is the first "seq" value of this batch,
        -- and setval() pushes the sequence forward to the batch's last value,
        -- so the next session's nextval() starts right after it.
        --
        SELECT nextval('testing') INTO VAR;
        PERFORM setval('testing', VAR + BATCH - 1);

PERFORM pg_advisory_unlock(-1234);

        -- Update the column "i" of the test table with the value "true",
        -- for this session's batch of records only.
        --
        UPDATE test SET I=true WHERE SEQ BETWEEN VAR AND (VAR + BATCH - 1);

RAISE NOTICE 'VAR IS %, VAR+BATCH IS %', VAR, (VAR+BATCH);
RAISE NOTICE 'CURRENT SEQ VALUE IS %', currval('testing');

EXCEPTION WHEN OTHERS THEN

        -- If there is an exception, reset the sequence to its start position
        -- again, so that another session will retry the same batch of
        -- sequence numbers. (The unlock below only raises a warning if this
        -- session no longer holds the advisory lock.)
        --
        PERFORM setval('testing', VAR - 1);
        PERFORM pg_advisory_unlock(-1234);
        RAISE EXCEPTION '%', SQLERRM;
END;
$function$;
Session 1
 
postgres=# SELECT public.update_test_parallel(3000);
NOTICE:  VAR IS 1, VAR+BATCH IS 3001
NOTICE:  CURRENT SEQ VALUE IS 3000
update_test_parallel
----------------------

(1 row)
Session 2
 
postgres=# SELECT public.update_test_parallel(3000);
NOTICE:  VAR IS 3001, VAR+BATCH IS 6001
NOTICE:  CURRENT SEQ VALUE IS 6000
update_test_parallel
----------------------

(1 row)
Session 3
 
postgres=# SELECT public.update_test_parallel(3000);
NOTICE:  VAR IS 6001, VAR+BATCH IS 9001
NOTICE:  CURRENT SEQ VALUE IS 9000
update_test_parallel
----------------------

(1 row)
Desired result
 
postgres=# SELECT COUNT(*) FROM test WHERE i is true;
count
-------
  9000
(1 row)

In the above implementation, I used a sequence, guarded by advisory locks, to coordinate the sessions' parallel execution. I hope this helps others as well.
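By the way, to re-run the demo from scratch, reset both the flag column and the coordination sequence; the three-argument form of setval() with "false" makes the next nextval() return 1 again:

UPDATE test SET i = false;
SELECT setval('testing', 1, false);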

Thanks as always for reading, and I welcome your inputs.
 --Dinesh Kumar

Tuesday, December 30, 2014

Heterogeneous Database Sync

Hi

As part of an Oracle to PostgreSQL migration, I had to implement a trigger on Oracle which syncs its data to PostgreSQL. I have tried it with
a simple table as below, which is hopefully helpful to others.

See this link for configuring the heterogeneous dblink to Postgres.

I believe the below approach works effectively for Oracle tables that have a primary key.
If a table has no primary key, the pushed UPDATE and DELETE statements are going to fire multiple times in Postgres, which leads to performance issues.
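For example, suppose the Oracle table held three duplicate rows with t = 5 and we deleted them in one statement. The row-level trigger fires three times, so three identical statements get pushed to Postgres; the first one already removes every matching row, and the other two only rescan the table for nothing:

-- Pushed to PostgreSQL, once per Oracle row touched:
DELETE FROM test WHERE t=5;   -- removes all three duplicates at once
DELETE FROM test WHERE t=5;   -- no-op, but still executed
DELETE FROM test WHERE t=5;   -- no-op, but still executed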

ORACLE
CREATE TABLE test(t INT PRIMARY KEY);

CREATE OR REPLACE TRIGGER testref AFTER INSERT OR UPDATE OR DELETE ON test
FOR EACH ROW
DECLARE
 PRAGMA AUTONOMOUS_TRANSACTION;
 C number;
 N number;
BEGIN
 c:=DBMS_HS_PASSTHROUGH.OPEN_CURSOR@pglink;
 IF INSERTING THEN
  DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'INSERT INTO test VALUES('||:NEW.t||');');
  n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
 ELSIF DELETING THEN
  DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'DELETE FROM test WHERE t='||:OLD.t);
  n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
 ELSIF UPDATING THEN
  DBMS_HS_PASSTHROUGH.PARSE@pglink(c, 'UPDATE test SET t='||:NEW.T||' WHERE t='||:OLD.T);
  n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@pglink(c);
 END IF;
 DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@pglink(c);
COMMIT;
END;
/
SQL> INSERT INTO test VALUES(1);
SQL> INSERT INTO test VALUES(2);
SQL> INSERT INTO test VALUES(3);


SQL> DELETE FROM test WHERE t=2;

SQL> UPDATE test SET t=10 WHERE t=1;
SQL> COMMIT;
Commit complete.

PostgreSQL

CREATE TABLE test(t INT PRIMARY KEY);
postgres=# SELECT * FROM test;
t
----
3
10
(2 rows)

NOTE
Oracle's DG4ODBC doesn't support two-phase commit with PostgreSQL. In the above case, if we issue a ROLLBACK in Oracle, the data in PostgreSQL will remain, which leads to inconsistency. To fix this, I believe we have to copy the data asynchronously, by scheduling a job in Oracle as below.

CREATE SEQUENCE test_seq ORDER;
CREATE TABLE test_backup(SEQ INT, EVT VARCHAR(2), t INT, PRIMARY KEY(SEQ, EVT, t));
-- Trigger, which records the test table's information into test_backup.
CREATE OR REPLACE TRIGGER testref AFTER INSERT OR UPDATE OR DELETE ON test
FOR EACH ROW
DECLARE
BEGIN
IF INSERTING THEN
  INSERT INTO test_backup VALUES(test_seq.nextval, 'I', :NEW.t);
ELSIF DELETING THEN
  INSERT INTO test_backup VALUES(test_seq.nextval, 'D', :OLD.t);
ELSIF UPDATING THEN
    INSERT INTO test_backup VALUES(test_seq.nextval, 'UD', :OLD.t);
  INSERT INTO test_backup VALUES(test_seq.nextval, 'UI', :NEW.t);
END IF;
END;
/

-- Procedure, which syncs the test_backup table to the postgres test table.
create or replace PROCEDURE SYNCORATOPG
IS
C NUMBER;
N NUMBER;
PRAGMA AUTONOMOUS_TRANSACTION;
TYPE SYNCID IS TABLE OF INTEGER;
SYNCNUM SYNCID;
CNT INT:=0;
BEGIN
  
    BEGIN
        SYNCNUM:=SYNCID();
        FOR REC IN (SELECT * FROM test_backup ORDER BY SEQ ASC)
        LOOP
        SYNCNUM.EXTEND;
        CNT:=CNT+1;
        SYNCNUM(CNT):=REC.SEQ;
        c:=DBMS_HS_PASSTHROUGH.OPEN_CURSOR@TOPG;
        IF REC.EVT = 'I' OR REC.EVT='UI' THEN
           DBMS_HS_PASSTHROUGH.PARSE@TOPG(c, 'INSERT INTO test VALUES('||REC.t||');');
        ELSIF REC.EVT = 'D' OR REC.EVT='UD' THEN
           DBMS_HS_PASSTHROUGH.PARSE@TOPG(c, 'DELETE FROM test WHERE t='||REC.T);
        END IF;
      
         n:=DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@TOPG(c);
         DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@TOPG(c);
        END LOOP;
        
        COMMIT;
    END;
    
    BEGIN
      FOR i IN 1..SYNCNUM.COUNT
      LOOP
        DELETE FROM test_backup WHERE SEQ=SYNCNUM(i);
        SYNCNUM.DELETE(i);
      END LOOP;
       COMMIT;
    END;
  EXCEPTION
    WHEN OTHERS THEN
      RAISE_APPLICATION_ERROR(-20000, 'Fatal error.'||SQLERRM);
END;

--Scheduling a JOB in Oracle, which calls the above SYNC procedure every minute.
BEGIN
DBMS_JOB.isubmit (
job => 113,
what => 'BEGIN SYNCORATOPG; END;',
next_date => SYSDATE,
interval => 'SYSDATE + 1/(24*60) /* 1 Minute */');

COMMIT;
END;
/
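Once the job is submitted, we can verify the schedule from Oracle's USER_JOBS view, for example:

SELECT job, what, next_date FROM user_jobs;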
-- Inserting Sample Data In Oracle
INSERT INTO test VALUES(-1);
INSERT INTO test VALUES(-2);
INSERT INTO test VALUES(-3);

UPDATE test SET t=-20 WHERE t=-2;
DELETE FROM test WHERE t=-1;
INSERT INTO test VALUES(-2);
COMMIT;

-- PostgreSQL
postgres=# SELECT * FROM test;
 t
----
  -3
 -20
  -2
(3 rows)

As usual, welcome your inputs.

 --Dinesh Kumar

Tuesday, June 3, 2014

32-bit PostgreSQL Compilation On 64-bit CentOS 6.x

I am sure most of you are aware of this, but for me it was the first time I accomplished it.

As one of my assigned tasks was to build a 32-bit instance of PostgreSQL on a 64-bit machine, I followed the below approach. I hope it will be helpful to others who hit the same problems.

As an initial step in this task, I tried to build a sample C program using "gcc -m32". Once I resolved the issues with that, I moved on to compiling PostgreSQL 9.0.
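The test.c used here is nothing special; it was just a hello-world program along these lines:

/* test.c -- a minimal program to verify the 32-bit toolchain. */
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}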
[root@localhost Desktop]# gcc -m32 -o test test.c
In file included from /usr/include/features.h:385,
                 from /usr/include/stdio.h:28,
                 from test.c:1:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
To resolve the above issue, I installed the 32-bit glibc-devel package through yum.
yum -y install glibc-devel.i686 glibc-devel
Then I tried to run the same command again.
[root@localhost Desktop]# gcc -m32 -o test test.c
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.6/libgcc_s.so when searching for -lgcc_s
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.6/libgcc_s.so when searching for -lgcc_s
/usr/bin/ld: cannot find -lgcc_s
collect2: ld returned 1 exit status
Now I got a different error message, and tried to install the 32-bit libgcc.
[root@localhost Desktop]# yum install libgcc-*.i686
--> Running transaction check
---> Package libgcc.i686 0:4.4.7-4.el6 will be installed
--> Finished Dependency Resolution
Error: Protected multilib versions: libgcc-4.4.7-4.el6.i686 != libgcc-4.4.6-3.el6.x86_64
As it complains, we have an older version of the x86_64 package compared to the new i686 one. Hence, I tried to update the existing x86_64 package.
[root@localhost Desktop]# yum update libgcc-4.4.6-3.el6.x86_64
---> Package libgcc.x86_64 0:4.4.6-3.el6 will be updated
---> Package libgcc.x86_64 0:4.4.7-4.el6 will be an update
--> Finished Dependency Resolution
Once it had updated the given library, I tried again to install libgcc-*.i686.
[root@localhost Desktop]# yum install libgcc-*.i686
Resolving Dependencies
--> Running transaction check
---> Package libgcc.i686 0:4.4.7-4.el6 will be installed
--> Finished Dependency Resolution
Then I ran the same "gcc -m32" command to check for any further issues.
[root@localhost Desktop]# gcc -m32 -o test test.c
[root@localhost Desktop]# ./test
Hello World
It looks like the sample C program works fine as a 32-bit application, so I moved on to PostgreSQL 9.0. Per my observation, I had to update and install the below components for PostgreSQL.
yum update readline-6.0-3.el6.x86_64
yum install *readline*i686
yum update zlib-1.2.3-27.el6.x86_64
yum install *zlib*i686
Once I had all the required 32-bit libraries, I compiled PostgreSQL as below.
[root@localhost build]# CFLAGS=-m32 LDFLAGS=-m32 CXXFLAGS=-m32 ./configure --prefix=/opt/PostgreSQL/build
[root@localhost build]# make -j 4; make install;
....
make[1]: Leaving directory `/root/Downloads/postgresql-9.0.17/config'
PostgreSQL installation complete.
It seems PostgreSQL built successfully on the 64-bit machine as a 32-bit application.

Checking for confirmation from the OS and PG:
postgres=# SELECT version();
                                                      version                                                      
-------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.0.17 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 32-bit
(1 row)

[root@localhost build]# file /sbin/init
/sbin/init: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
And at the end, I have PostgreSQL running as a 32-bit application on a 64-bit machine.

Thank you for reading, and please comment if you have any questions.


--Dinesh Kumar

Thursday, April 24, 2014

N-Node Multi-Master Replication With Bucardo...!

Our team recently got a problem to solve: N-node multi-master replication in PostgreSQL.

We all know that there are other database engines, like Postgres-XC, which work this way. But we don't have any tool available for PostgreSQL itself, except Bucardo.

Bucardo is a nice solution for 2 nodes. Is there a way we can go beyond this limitation, from 2 to N?

As an initial step on this, I have done it with 3 nodes, which I believe we can extend up to N. {I might be wrong here.}

Please follow the below steps to set up the multi-master replication.

1. Install all the pre-requisites for Bucardo.

yum install perl-DBIx-Safe
or
apt-get install libdbix-safe-perl

Install the below components from CPAN.

DBI
DBD::Pg
Test::Simple
boolean (Bucardo 5.0 and higher)

Download the latest tarball from here.

tar xvfz Bucardo-4.4.8.tar.gz
cd Bucardo-4.4.8
perl Makefile.PL
make
sudo make install
2. We need the plperl language in the databases. For this, download the required ActivePerl to set it up, or simply install it as below.
apt-get install postgresql-plperl-9.3
or
yum install postgresql-plperl-9.3

Make a copy of Old $PGLIBPATH/plperl.so
Move the new plperl.so to $PGLIBPATH/plperl.so
3. Create the plperl extension.
CREATE EXTENSION plperl;
4. Create 3 databases like "node1", "node2", "node3".
CREATE DATABASE node1;
CREATE DATABASE node2;
CREATE DATABASE node3;
5. Execute below statements on 2 databases (node1, node2).
CREATE TABLE test1(t INT PRIMARY KEY);
INSERT INTO test1 VALUES(generate_series(1, 10));
6. Install the Bucardo catalog database using the below command.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl install
Postgres version is: 9.3
Attempting to create and populate the bucardo database and schema
Database creation is complete
Connecting to database 'bucardo' as user 'bucardo'
Updated configuration setting "piddir"
Installation is now complete.
7. Add the databases that will be part of the multi-master replication.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add db db1 dbname="node1"
Added database "db1"
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add db db2 dbname="node2"
Added database "db2"
8. Creating a herd of db1 tables.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add all tables db=db1 herd=db1_herd
Creating herd: db1_herd
New tables added: 1
Already added: 0
9. Creating a sync of this herd from db1->db2.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add sync db1_to_db2 type=pushdelta source=db1_herd targetdb=db2
Added sync "db1_to_db2"
10. Creating a herd of db2 tables.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add all tables db=db2 herd=db2_herd
Creating herd: db2_herd
New tables added: 1
Already added: 0
11. Creating a sync of this herd from db2->db1.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add sync db2_to_db1 type=pushdelta source=db2_herd targetdb=db1
Added sync "db2_to_db2"
12. Start the Bucardo process. Default log location for the Bucardo instance is SYSLOG.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl start
Checking for existing processes
Removing /tmp/fullstopbucardo
Starting Bucardo

postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ps -ef|grep bucardo
postgres 2740 1128 0 21:26 ? 00:00:00 postgres: bucardo bucardo [local] idle
postgres 2743 1128 0 21:26 ? 00:00:00 postgres: bucardo node1 [local] idle
13. Check the DML activities between these two nodes.
node1=# INSERT INTO test1 VALUES(-1);
INSERT 0 1
node1=# \c node2
node2=# SELECT * FROM test1 WHERE t=-1;
 t
---
-1
(1 row)

node2=# INSERT INTO test1 VALUES(-2)
INSERT 0 1
node2=# \c node1
node1=# SELECT * FROM test1 WHERE t=-2;
 t
---
-2
(1 row)
14. Check the status from the below command.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl status
Days back: 3  User: bucardo  Database: bucardo  PID of Bucardo MCP: 2739
Name       Type  State PID  Last_good Time  I/U/D Last_bad Time
==========+=====+=====+====+=========+=====+=====+========+====
db1_to_db2| P   |idle |2746|7m49s    |0s   |1/0/0|unknown |    
db2_to_db1| P   |idle |2745|6m11s    |0s   |1/0/0|unknown |    
 
It seems everything is perfect, and data is replicating from node1 to node2 and vice versa. Now, let's try to add a new node, "node3", to this multi-master replication.

15. Add the new replication database to Bucardo.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add db db3 dbname="node3"
Added database "db3"
16. Dump and restore the test1 table into node3, from either node1 or node2.

17. Create a herd of db3 tables
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add all tables db=db3 herd=db3_herd
Creating herd: db3_herd
New tables added: 1
Already added: 0
18. Create a new sync from db2->db3, db3->db2.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add sync db2_to_db3 type=pushdelta source=db2_herd targetdb=db3
Added sync "db2_to_db3"
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl add sync db3_to_db2 type=pushdelta source=db3_herd targetdb=db2
Added sync "db3_to_db2"

19. Restart Bucardo, and check the latest node status.
postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl stop
Creating /tmp/fullstopbucardo ... Done

postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl start
Checking for existing processes
Removing /tmp/fullstopbucardo
Starting Bucardo
 

postgres@dinesh-VirtualBox:~/Bucardo-4.5.0$ ./bucardo_ctl status
Days back: 3  User: bucardo  Database: bucardo  PID of Bucardo MCP: 2553
Name       Type  State PID  Last_good Time  I/U/D Last_bad Time
==========+=====+=====+====+=========+=====+=====+========+====
db1_to_db2| P   |idle |2563|1s       |0s   |0/0/0|unknown |    
db2_to_db1| P   |idle |2560|1s       |0s   |0/0/0|unknown |    
db2_to_db3| P   |idle |2561|0s       |0s   |1/0/1|unknown |    
db3_to_db2| P   |idle |2562|0s       |0s   |0/0/0|unknown |    
20. Here is the small hack which lets you embed db3 into the multi-master replication. As the Bucardo replication system works based on triggers, we need to set all the triggers it created on the nodes to "ENABLE ALWAYS" for each table.
Node1
-=-=-
node1=# \d test1
     Table "public.test1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 t      | integer | not null
Indexes:
    "test1_pkey" PRIMARY KEY, btree (t)
Triggers:
    bucardo_add_delta AFTER INSERT OR DELETE OR UPDATE ON test1 FOR EACH ROW EXECUTE PROCEDURE bucardo.bucardo_add_delta_t()
    bucardo_triggerkick_db1_to_db2 AFTER INSERT OR DELETE OR UPDATE OR TRUNCATE ON test1 FOR EACH STATEMENT EXECUTE PROCEDURE bucardo.bucardo_triggerkick_db1_to_db2()

node1=# ALTER TABLE test1 ENABLE ALWAYS TRIGGER bucardo_add_delta;
ALTER TABLE
node1=# ALTER TABLE test1 ENABLE ALWAYS TRIGGER bucardo_triggerkick_db1_to_db2;
ALTER TABLE

Do "ENABLE ALWAYS" these two triggers on this table. Similarly, do the same activity on all nodes.

Testing Multi-Master replication.
node3=# INSERT INTO test1 VALUES(-1010);
INSERT 0 1
node3=# \c node2
node2=# SELECT * FROM test1 WHERE t=-1010;
   t   
-------
 -1010
(1 row)

node2=# \c node1
node1=# SELECT * FROM test1 WHERE t=-1010;
   t   
-------
 -1010
(1 row)


node2=# INSERT INTO test1 VALUES(-2020);  
INSERT 0 1
node2=# \c node3
node3=# SELECT * FROM test1 WHERE t=-2020;
   t   
-------
 -2020
(1 row)

node3=# \c node1
node1=# SELECT * FROM test1 WHERE t=-2020;
   t   
-------
 -2020
(1 row)
It seems Bucardo is doing 3-node multi-master replication. Thanks to the Bucardo team. :)

--Dinesh Kumar

Friday, April 4, 2014

Normal User As Super User


Recently I faced a problem with some catalog views which do not give you complete information as a normal user; for example, take pg_stat_activity, pg_stat_replication, pg_settings, etc. If we run the above catalog views as a non-superuser, we don't get the result we get as a superuser. This is really good security separation between the superuser and a normal user.

What do we need to do if we want to collect these metrics as a normal user? I think the possible solution is to write a wrapper function with SECURITY DEFINER, as below, and grant/revoke the required privileges to the user/public.

CREATE OR REPLACE FUNCTION pg_stat_activity()
RETURNS SETOF pg_catalog.pg_stat_activity
AS
$$
BEGIN
RETURN QUERY(SELECT * FROM pg_catalog.pg_stat_activity);
END
$$
LANGUAGE PLPGSQL SECURITY DEFINER;

REVOKE ALL ON FUNCTION pg_stat_activity() FROM public;
CREATE VIEW pg_stat_activity AS SELECT * FROM pg_stat_activity();
REVOKE ALL ON pg_stat_activity FROM public;

This is really a good approach to get the statistics from pg_stat_activity. But what if I need the values from pg_stat_replication, pg_settings, or some tablespace information as a normal user? Do we need to create a wrapper function for each catalog view? {I assume this is the only way to get these metrics: creating the required wrapper function for each catalog view.}
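For instance, a pg_stat_replication wrapper would follow exactly the same pattern:

CREATE OR REPLACE FUNCTION pg_stat_replication()
RETURNS SETOF pg_catalog.pg_stat_replication
AS
$$
BEGIN
RETURN QUERY(SELECT * FROM pg_catalog.pg_stat_replication);
END
$$
LANGUAGE PLPGSQL SECURITY DEFINER;

REVOKE ALL ON FUNCTION pg_stat_replication() FROM public;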

Rather than creating these multiple wrapper functions, here is a simple hack we can do without them. Here I am going to update the pg_authid catalog through a single function, as below. I know this is against security policy; I just want to share one possible and simple way.
Function
CREATE OR REPLACE FUNCTION make_me_superuser(isSuper bool)
RETURNS VOID
AS $$
BEGIN
UPDATE pg_catalog.pg_authid SET rolsuper=$1::boolean WHERE rolname='<role name>';
END;
$$
LANGUAGE PLPGSQL SECURITY DEFINER;

REVOKE ALL ON FUNCTION make_me_superuser(bool) FROM public;
GRANT EXECUTE ON FUNCTION make_me_superuser(bool) TO <role name>;

Sample Case

postgres=> BEGIN WORK;
BEGIN
postgres=> SELECT make_me_superuser(true);
 make_me_superuser
-------------------

(1 row)

postgres=> SHOW data_directory;
               data_directory
--------------------------------------------
 C:/Program Files (x86)/PostgreSQL/9.2/data
(1 row)

postgres=> SELECT make_me_superuser(false);
 make_me_superuser
-------------------

(1 row)

postgres=> END WORK;
COMMIT

Since we run this inside a transaction, the uncommitted superuser flag is not visible to the same user's other sessions, which restricts any superuser activity to this session alone.

**Don't implement this on production servers where security is crucial.

--Dinesh Kumar