Shell script to export all APEX applications as SQL scripts

Environment: Oracle 11gR2 EE, Application Express 4.2.2, Oracle Linux 6.4 64-bit, Java JDK 7u21

The APEX software contains a Java class that can be used to script or schedule the export of one or more APEX applications. In this post, I will show you how to do this.

First, I unzipped the APEX software to /u01/app/apex. The Java class file is called “APEXExport.class” and can be found in the folder /u01/app/apex/utilities/oracle/apex.

You also need a Java JDK. I installed Java 7 JDK (update 21) under /u01/app/java/java7.

You might need to unlock the APEX schema owner account (in my case, APEX_040200) and set a password for it. I’m not sure whether it’s possible to export the applications as another user.

This is what my script looks like:

#!/bin/sh
# script to export all APEX applications as SQL scripts

CLASSPATH=$CLASSPATH:/u01/app/oracle/product/11.2.0/db_1/oui/jlib/classes12.jar; export CLASSPATH
BACKUPDIR=/u05/apex_backup

cd /u01/app/apex/utilities || exit 1

/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -instance

mv f*.sql "$BACKUPDIR"

if [ -f "$BACKUPDIR/apex_export.log" ]
then
    mail -s "APEX export - `hostname -s`" matthiash@mydomain.com < "$BACKUPDIR/apex_export.log"
fi

You need to include the file classes12.jar in your CLASSPATH or the JDBC connection won’t work.
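Since a missing driver jar only surfaces later as a JDBC connection error, a small guard before the export call can make the failure obvious. This is a minimal sketch; the `require_jar` helper is hypothetical, and the demonstration uses a temporary stand-in file instead of a real jar:

```shell
# Hypothetical guard: verify the JDBC driver jar exists before it is
# appended to CLASSPATH, so a missing jar fails fast with a clear message.
require_jar() {
    [ -f "$1" ] || { echo "JDBC driver not found: $1" >&2; return 1; }
    CLASSPATH=$CLASSPATH:$1; export CLASSPATH
}

# Demonstration with a temporary stand-in for classes12.jar:
demo_jar=$(mktemp)
require_jar "$demo_jar" && echo "jar found, classpath updated"
rm -f "$demo_jar"
```

In the real script you would call `require_jar` with the full path to classes12.jar under your ORACLE_HOME before invoking APEXExport.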

The -instance setting will export all your applications. To limit the number of applications, you can also use -workspaceid or -applicationid. Check the readme.txt file under the utilities folder for more information.
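To show how the three scoping switches relate, here is a hedged sketch of a helper that assembles the APEXExport command line for each mode. The wrapper function, the password, and the application/workspace IDs are placeholders, not part of the original script:

```shell
# Hypothetical helper: assemble the APEXExport command line for the three
# scoping switches (-instance, -workspaceid, -applicationid).
build_export_cmd() {
    base="/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password secret"
    case "$1" in
        instance)    echo "$base -instance" ;;           # every application
        workspace)   echo "$base -workspaceid $2" ;;     # all apps in one workspace
        application) echo "$base -applicationid $2" ;;   # a single application
    esac
}

build_export_cmd application 104
```

The function only echoes the command so it can be reviewed; pipe it to `sh` (or use `eval`) to actually run the export.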

To schedule the script with cron, I added the following line to my crontab file:

00 22 * * * /home/oracle/scripts/apex_export.sh 1>/u05/apex_backup/apex_export.log 2>&1

This will execute the script each night at 22:00 and mail the log file to the email address inside the script.

Happy scripting! :-)

Matthias


ORA-06550 “‘WWV_FLOW_API#G_ID_OFFSET’ must be declared” importing workspace into APEX 4.2.2.00.11

Environment: Oracle 11gR2, Application Express 4.2.2.00.11

I was trying to export a workspace from Oracle 10g to Oracle 11g (both with APEX 4.2.2.00.11 installed) when I received the following error:

 

Execution of the statement was unsuccessful.
ORA-06550: line 3, column 4: PLS-00201: identifier 'WWV_FLOW_API#G_ID_OFFSET' must be declared
ORA-06550: line 3, column 4: PL/SQL: Statement ignored

begin
   wwv_flow_api#g_id_offset := 0;
   wwv_flow_hint.g_schema := 'MATTHIASH';
   wwv_flow_hint.check_schema_privs;
end;

 

I checked the OTN forum and replied to a post on the subject. Apparently, it’s a known bug in APEX 4.2.2.00.11. The bug number is 16846844. For those with access to My Oracle Support, there’s a patch out with patch ID 16781538. This bug fix will be included in the forthcoming APEX 4.2.3 patch set.

However, there is a workaround: directly executing the SQL file from SQL*Plus connected as the APEX schema owner (APEX_040200) works fine!
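The workaround boils down to a single SQL*Plus invocation. This is a sketch only; the export filename, password, and connect identifier below are placeholders, and the helper echoes the command instead of running it so it can be reviewed first:

```shell
# Hypothetical wrapper for the workaround: run the workspace export script
# in SQL*Plus as the APEX schema owner. Echoes the command for review;
# drop the echo to execute it for real.
run_as_apex_owner() {
    echo sqlplus apex_040200/"$1"@ora11g @"$2"
}

run_as_apex_owner secret workspace_export.sql
```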

 

HTH,

Matthias

 

 

Getting APEX version number from data dictionary

Environment: Oracle 10gR2, Oracle 11gR2, APEX 4.2.2.00.11

If you don’t have access to the development environment, and you need to find the version of APEX installed on your system, there are several ways to do that:

 

1) Query APEX_RELEASE

APEX_RELEASE is a public view which is present in each APEX schema.

To query the currently enabled version:

SELECT VERSION_NO FROM APEX_RELEASE;

4.2.2.00.11

 

To query other installed versions of APEX:

SELECT VERSION_NO FROM APEX_040100.APEX_RELEASE;

4.1.1.00.23

 

2) Query DBA_REGISTRY

The DBA_REGISTRY view keeps track of all components installed in the database.

SELECT VERSION FROM DBA_REGISTRY WHERE COMP_NAME = 'Oracle Application Express';

4.2.2.00.11

 

HTH,

Matthias

Configuration of HugePages for Oracle database 11.2.0.3 on Oracle Linux 6.4

Environment: Oracle database 11.2.0.3 Enterprise Edition, Oracle Linux 6.4 64-bit

HugePages can give a performance boost on Linux systems with large SGA sizes. However, this is not set up by default, and the configuration can sometimes be a bit tricky.

This is how I did it on a system with a 4GB SGA size:

1) Disable Automatic Memory Management (AMM)

AMM is not compatible with HugePages. I disabled it by setting the following memory-related initialization parameters:

ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;

ALTER SYSTEM SET memory_target=0 SCOPE=BOTH;

ALTER SYSTEM SET sga_max_size=4016M SCOPE=SPFILE;

ALTER SYSTEM SET sga_target=4016M SCOPE=BOTH;

Then restart the instance.

2) Calculate the number of HugePages needed

The size of one HugePage can be found as follows:

$ cat /proc/meminfo|grep Hugepagesize
Hugepagesize: 2048 kB

The amount of HugePages that you need can be found with the following script:

#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does its calculation for all shared memory
# segments available when the script is run, whether or not they
# are Oracle RDBMS shared memory segments.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done
# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End

$ ./hugepages_settings.sh
Recommended setting: vm.nr_hugepages = 2013

3) Set the number of HugePages in /etc/sysctl.conf

Add the following to /etc/sysctl.conf:

vm.nr_hugepages = 2013

4) Set the memory limits for user “oracle”

The memory limits for user “oracle” can be calculated by multiplying the number of HugePages by the HugePage size in kB (so, in my case, 2013 * 2048 = 4122624 kB).

Add this number to /etc/security/limits.conf:

oracle soft memlock 4122624
oracle hard memlock 4122624
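The calculation above can be scripted so the limits.conf lines are always derived from the actual configuration. A minimal sketch, with the two input values hard-coded to this system's figures:

```shell
# Sketch: derive the memlock value (in kB) for limits.conf from the
# configured HugePages count and the HugePage size.
nr_hugepages=2013
hugepagesize_kb=2048    # from /proc/meminfo on this system

memlock=$((nr_hugepages * hugepagesize_kb))
echo "oracle soft memlock $memlock"
echo "oracle hard memlock $memlock"
```

On a live system you would read the inputs from `sysctl vm.nr_hugepages` and `/proc/meminfo` instead of hard-coding them.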

5) Set USE_LARGE_PAGES

By setting the initialization parameter USE_LARGE_PAGES, you can force the instance to only start up when enough HugePages are available.

ALTER SYSTEM SET USE_LARGE_PAGES=only SCOPE=SPFILE;

6) Reboot the system

Now reboot the server. You can check whether large pages are being used in two ways:

$ cat /proc/meminfo|grep HugePages

AnonHugePages: 237568 kB
HugePages_Total: 2013
HugePages_Free: 1532
HugePages_Rsvd: 1528
HugePages_Surp: 0

Or check the alert.log file for the database instance:

Starting ORACLE instance (normal)
****************** Large Pages Information *****************
Parameter use_large_pages = ONLY

Total Shared Global Region in Large Pages = 4018 MB (100%)

Large Pages used by this instance: 2009 (4018 MB)
Large Pages unused system wide = 4 (8192 KB) (alloc incr 16 MB)
Large Pages configured system wide = 2013 (4026 MB)
Large Page size = 2048 KB
***********************************************************
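The /proc/meminfo counters shown above can also be summarized with a short sketch. The sample data below mirrors my output; on a live system you would read from the real /proc/meminfo instead:

```shell
# Sketch: report how many HugePages are actually in use from the
# HugePages_Total and HugePages_Free counters (sample values from above).
meminfo_sample="HugePages_Total:    2013
HugePages_Free:     1532
HugePages_Rsvd:     1528"

total=$(echo "$meminfo_sample" | awk '/HugePages_Total/ {print $2}')
free=$(echo "$meminfo_sample"  | awk '/HugePages_Free/  {print $2}')
echo "HugePages in use: $((total - free)) of $total"
```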

Good luck! :-)

 

Matthias

Oracle 11g Flashback examples

Environment: Oracle database 11.2.0.3.0 Enterprise Edition.

This is a quick overview (mostly for myself ;-)) of the flashback features that are available with Oracle 11g.

-- query table data from a specific point in time

SELECT * FROM matthiash
AS OF TIMESTAMP
TO_TIMESTAMP('2013-05-03 14:00:00', 'YYYY-MM-DD HH24:MI:SS');

-- query table data from one hour ago

SELECT * FROM matthiash
AS OF TIMESTAMP
(SYSTIMESTAMP - INTERVAL '60' MINUTE);

-- query all versions of one or more records

SELECT versions_startscn, versions_starttime,
       versions_endscn, versions_endtime,
       versions_xid, versions_operation, m_id
FROM matthiash
VERSIONS BETWEEN TIMESTAMP
TO_TIMESTAMP('2013-05-03 13:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2013-05-03 14:45:00', 'YYYY-MM-DD HH24:MI:SS');

-- get the name of the user who modified records

GRANT SELECT ANY TRANSACTION TO matthiash;

SELECT xid, logon_user
FROM flashback_transaction_query
WHERE xid IN (
SELECT versions_xid FROM matthiash VERSIONS BETWEEN TIMESTAMP
TO_TIMESTAMP('2013-05-03 13:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_TIMESTAMP('2013-05-03 14:45:00', 'YYYY-MM-DD HH24:MI:SS')
)
AND xid = HEXTORAW('08000800C6070000');

-- flash back an entire table to a specific point in time

ALTER TABLE matthiash ENABLE ROW MOVEMENT;

FLASHBACK TABLE matthiash
TO TIMESTAMP TO_TIMESTAMP('2013-05-03 13:00:00', 'YYYY-MM-DD HH24:MI:SS');

-- create a restore point and flash back a table to it

CREATE RESTORE POINT before_table_update;

SELECT NAME, SCN, TIME
FROM V$RESTORE_POINT;

UPDATE matthiash SET m_id=0;

FLASHBACK TABLE matthiash.matthiash
TO RESTORE POINT before_table_update;

-- recover a dropped table from the recycle bin (note: does not work for all types of tables)

DROP TABLE matthiash;

SELECT * FROM recyclebin;

FLASHBACK TABLE matthiash TO BEFORE DROP;

FLASHBACK TABLE "BIN$29FWNr5ICjbgQ68BEqzYLw==$0" TO BEFORE DROP RENAME TO matthiash;

-- recover the entire database to a restore point, SCN or point in time

CREATE RESTORE POINT before_table_drop GUARANTEE FLASHBACK DATABASE;

DROP TABLE matthiash PURGE;

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

FLASHBACK DATABASE TO RESTORE POINT BEFORE_TABLE_DROP;

FLASHBACK DATABASE TO SCN 46963;

FLASHBACK DATABASE TO TIME "TO_TIMESTAMP('2013-05-03 15:00:00', 'YYYY-MM-DD HH24:MI:SS')";

ALTER DATABASE OPEN RESETLOGS;

Matthias

Measuring Oracle I/O performance using DBMS_RESOURCE_MANAGER.CALIBRATE_IO

Environment: Oracle database 11.2.0.3.0, Oracle Linux 6.4

Oracle 11g has a procedure named “CALIBRATE_IO” which you can use to measure the I/O performance of your Oracle system. You can use this procedure to evaluate disk subsystem layout or configuration changes.

The procedure is part of the DBMS_RESOURCE_MANAGER PL/SQL package. Only users with the SYSDBA privilege can execute it. You also need asynchronous I/O enabled for your datafiles.

Note: do not run this on a live production system. For optimal results, use this on a quiet system with no other users connected.

This is an example with user SYS as SYSDBA (enable server output, e.g. SET SERVEROUTPUT ON in SQL*Plus, to see the results):

DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      1 /* # of disks */
      , 10 /* maximum tolerable latency in milliseconds */
      , iops /* I/O rate per second */
      , mbps /* throughput, MB per second */
      , lat  /* actual latency in milliseconds */
  );
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || mbps);
END;
/

On my system, the total execution time was about 8 minutes.
The output is as follows:

max_iops = 2588
latency = 8
max_mbps = 241

The results can also be found in the DBA_RSRC_IO_CALIBRATE view.

HTH,
Matthias

OPatch warnings after installation of PSU 11.2.0.3.6 on Oracle 11.2.0.3.0

Environment: Oracle Database 11.2.0.3.0, Oracle Linux 6.4, OPatch v. 11.2.0.3.4

I was installing Patch Set Update 11.2.0.3.6 (as part of the CPU for April 2013) on a freshly installed Oracle 11.2.0.3.0 database, when I noticed the following warnings in the OPatch log file:

OPatch found the word "warning" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'

The database seemed fine, though. I contacted Oracle Support just in case, and it was confirmed as a known issue; the document with ID 1448337.1 has more information on it. So basically, these warnings can safely be ignored.

 

Matthias