Shell script to export all APEX applications as SQL scripts

Environment: Oracle 11gR2 EE, Application Express 4.2.2, Oracle Linux 6.4 64-bit, Java JDK 7u21

The APEX software contains a Java class that can be used to script or schedule the export of one or more APEX applications. In this post, I will show you how to do this.

First, I unzipped the APEX software to /u01/app/apex. The Java class file is called “APEXExport.class” and can be found in the folder /u01/app/apex/utilities/oracle/apex.

You also need a Java JDK. I installed Java 7 JDK (update 21) under /u01/app/java/java7.

You might need to unlock the APEX schema owner (in my case, APEX_040200) and set a password for it. I'm not sure whether it's possible to export the applications with another user.
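Unlocking the schema and setting a password can be done as SYS, for example (a sketch; replace the placeholder password with your own):

$ sqlplus / as sysdba

SQL> alter user apex_040200 account unlock;
SQL> alter user apex_040200 identified by <password>;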

This is what my script looks like:

#!/bin/sh
# script to export all ApEx applications as sql scripts

# the Oracle JDBC driver (classes12.jar) must be on the CLASSPATH
CLASSPATH=$CLASSPATH:/u01/app/oracle/product/11.2.0/db_1/oui/jlib/classes12.jar; export CLASSPATH
BACKUPDIR=/u05/apex_backup

cd /u01/app/apex/utilities

# export all applications of the instance as f<application_id>.sql files
/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -instance

# move the generated export files to the backup directory
mv f*.sql $BACKUPDIR

# mail the log file (written by the cron redirection, see below) to the DBA
if [ -f $BACKUPDIR/apex_export.log ]
then
    cat $BACKUPDIR/apex_export.log | mail -s "APEX export - `hostname -s`" matthiash@mydomain.com
fi

You need to include the file classes12.jar in your CLASSPATH or the JDBC connection won’t work.

The -instance setting will export all your applications. To limit the number of applications, you can also use -workspaceid or -applicationid. Check the readme.txt file under the utilities folder for more information.
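For example, the calls below would export a single workspace or a single application (a sketch; the workspace and application IDs are placeholder values you would look up in your own instance):

# export all applications of one workspace
/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -workspaceid 1234567890

# export one single application
/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -applicationid 100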

To schedule the script with cron, I added the following line to my crontab file:

00 22 * * * /home/oracle/scripts/apex_export.sh 1>/u05/apex_backup/apex_export.log 2>&1

This will execute the script each night at 22:00 and mail the log file to the email address specified inside the script.

Happy scripting! :-)

Matthias


Example Data Pump Export script for Linux

Environment: Oracle database 11.2.0.3, Oracle Linux 6.2

Data Pump is a command-line utility for importing and exporting objects like user tables and PL/SQL source code from an Oracle database. It was introduced in Oracle 10g as a better alternative to the “old” exp/imp utilities. However, do not use Data Pump to replace a full physical database backup with RMAN: complete point-in-time recovery is not possible with Data Pump. Therefore, it should only be used for data migrations or in conjunction with RMAN.

In this blog post, I will show you how you can create a script to execute and schedule a full Data Pump export on Linux.

First, we need to define a directory object. This is an alias for a file system folder that we will need in the Data Pump script. Execute with user SYS as SYSDBA:

create directory expdir as '/expdir';
select * from dba_directories;

Note: make sure user “oracle” has write permissions on the file system folder used for the Data Pump export files (in this case: /expdir).
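For example, as root (a sketch, assuming the database software owner is oracle with group oinstall):

# mkdir -p /expdir
# chown oracle:oinstall /expdir
# chmod 750 /expdir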

Next, we will create a database user that will be used for the export. This user will at least need the “EXP_FULL_DATABASE” role, CREATE SESSION and CREATE TABLE rights, and read/write access on the directory object we previously created.

Note: do NOT use “SYS as SYSDBA” for the export; the SYS user should only be used when requested by Oracle Support!

-- note: "UNLIMITED PASSWORD EXPIRATION" is a custom profile in this database;
-- use DEFAULT or your own profile if it does not exist
CREATE USER EXPORT
IDENTIFIED BY password
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
PROFILE "UNLIMITED PASSWORD EXPIRATION"
ACCOUNT UNLOCK;

GRANT CREATE SESSION TO EXPORT;
GRANT CREATE TABLE TO EXPORT;
ALTER USER EXPORT QUOTA UNLIMITED ON USERS;
GRANT EXP_FULL_DATABASE TO EXPORT;
GRANT READ, WRITE ON DIRECTORY EXPDIR TO EXPORT;

Now we will create the Linux shell script:

$ vi ora_expdp_full.sh

#!/bin/sh

# script to make full export of Oracle db using Data Pump

export ORACLE_SID=oratst
export ORACLE_HOME=`cat /etc/oratab|grep ^${ORACLE_SID}:|cut -d':' -f2`
export EXPLOG=expdp_${ORACLE_SID}.log
export EXPDIR=/expdir
export PATH=$PATH:$ORACLE_HOME/bin
DATEFORMAT=`date +%Y%m%d`
STARTTIME=`date`

# Data Pump export
expdp export/password content=ALL directory=expdir dumpfile=expdp_${ORACLE_SID}_%U_${DATEFORMAT}.dmp filesize=2G full=Y logfile=$EXPLOG nologfile=N parallel=2

# mail the last line of the Data Pump log file as the subject, start and end time as the body
ENDTIME=`date`
SUBJECT=`hostname -s`:$ORACLE_SID:`tail -1 $EXPDIR/$EXPLOG`
echo -e "Start time:" $STARTTIME "\nEnd time:" $ENDTIME | mail -s "$SUBJECT" dba@mydomain.com

This script will create 2GB export files and dynamically append the date to them. So, in this case, the first file will be “expdp_oratst_01_20120503.dmp”, the second one “expdp_oratst_02_20120503.dmp”, and so on. Finally, an email is sent to the DBA with the last line of the export log file as the subject and the start and end time as the body.

Note: you need to replace the ORACLE_SID and EXPDIR variables in the script with values suitable for your environment.

Make sure only the owner of the script can read, write, and execute it (it contains a database password):

$ chmod 700 ora_expdp_full.sh

Finally, we can schedule the script with the Linux utility cron:

$ crontab -e

Add the following lines:

# Daily logical export
00 23 * * * /home/oracle/scripts/export/ora_expdp_full.sh 1>/dev/null 2>&1

 

Enjoy :-)

Matthias

ORA-02374, ORA-12899, ORA-02372 during Data Pump export/import from 10g WE8MSWIN1252 into 11g AL32UTF8

Environment: Oracle 10.2.0.5, Oracle 11.2.0.3, Oracle Linux 6.2

During a Data Pump export/import from 10g into 11g, I received several errors in the Data Pump import log file:

ORA-02374: conversion error loading table "COMPOUND"."COMPOUND"
ORA-12899: value too large for column COMPMP (actual: 21, maximum: 20)
ORA-02372: data for row: COMPMP : 0X'3E323030B043206465636F6D706F736974696F6E'

The cause of these errors is related to the fact that I am migrating data from a database with a single-byte character set (WE8MSWIN1252) into one with a multi-byte character set (AL32UTF8). This means that some single-byte characters will be “expanded” into multi-byte characters, and if the column was already filled completely, ORA-12899 will be raised, showing the maximum allowed column value and the actual value needed to fit all the characters into the multi-byte column.

There are several solutions to this problem: increase the size of the source column or truncate the data before the import, stick to the same character set, pre-create the tables with modified column sizes in the target database before the import, … But I won't elaborate too much on this.
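As a single illustration (a sketch, assuming the column COMPMP is defined as VARCHAR2(20)): when pre-creating the table in the AL32UTF8 target database, you could switch the column to character length semantics, so it stores 20 characters instead of 20 bytes:

-- run in the target database on the pre-created table, before importing the data
alter table compound.compound modify (compmp varchar2(20 char));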

What if you want to see the actual values or ROWIDs of the affected rows? If you have only a few rows with problems, you could query the original source table based on the column value from the import log file, cast back from its raw hex representation:

select rowid, compmp
from compound.compound
where compmp = utl_raw.cast_to_varchar2('3E323030B043206465636F6D706F736974696F6E');

Output:

ROWID,COMPMP
AAAQAYAAKAABjDuAAd,>200°C decomposition

Here we see that the character for degrees (°) is causing the problem during the import, since 1 byte is converted into 2 bytes and the column already has 20 characters.
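If you want to verify this expansion in the source database, you could compare the byte length of the value before and after conversion to AL32UTF8 with LENGTHB and CONVERT (a sketch; the ROWID is the one returned by the query above, and for this row it should show 20 bytes in WE8MSWIN1252 versus 21 bytes in AL32UTF8):

select lengthb(compmp) as bytes_1252,
       lengthb(convert(compmp, 'AL32UTF8')) as bytes_utf8
from compound.compound
where rowid = 'AAAQAYAAKAABjDuAAd';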

However, if you have a lot of rows with conversion errors in your import log file, there’s another solution. This requires the installation of the Database Character Set Scanner utility (csscan) into your source database.

Csscan is installed by running the csminst.sql script under $ORACLE_HOME/rdbms/admin. This script will create a user “csmig”. It’s a good idea to first modify the following line so the user csmig doesn’t write into the SYSTEM tablespace:

alter user csmig default tablespace SYSTEM quota unlimited on SYSTEM

In my case, I replaced SYSTEM by SYSAUX.
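The modified line in csminst.sql then looks like this:

alter user csmig default tablespace SYSAUX quota unlimited on SYSAUX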

Let’s install csscan into our source database:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog

SQL*Plus: Release 10.2.0.5.0 - Production on Wed Apr 11 13:21:33 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

SQL> conn / as sysdba
Connected.
SQL> set TERMOUT ON
SQL> set ECHO ON
SQL> spool csminst.log
SQL> @csminst.sql

After this you should check the csminst.log file for errors. In my case, there were some errors because the script tried to drop public synonyms that didn’t exist. Nothing to worry about.

Now that csscan is installed, you can use it to check specific schemas or tables for data conversion problems during the migration to another character set. I executed it as follows (you will need the password of the SYS user):

$ csscan \"sys as sysdba\" LOG=/tmp/csscan.log USER=compound CAPTURE=Y TOCHAR=AL32UTF8 ARRAY=1024000 PROCESS=3

This will analyze all character data stored in the tables of the schema “compound”. The script will create 3 log files. The log file with the extension “.err” contains the list of all affected rows with their corresponding ROWIDs. You can count the affected rows from that log file using the UNIX “grep” command:

$ grep -i "exceed column size" /tmp/csscan.log.err | wc -l

568

So, in my case, 568 rows from the “compound” schema will have conversion problems during the migration to the AL32UTF8 character set.
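Note that csscan is not limited to a single schema: with the FULL parameter you can scan the character data of the entire database (a sketch, using a different log base name as an example):

$ csscan \"sys as sysdba\" FULL=Y CAPTURE=Y TOCHAR=AL32UTF8 LOG=/tmp/csscan_full ARRAY=1024000 PROCESS=3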

You can remove csscan by dropping the user “csmig”. More information regarding csscan can be found on My Oracle Support; see document ID 1297961.1.
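Dropping the user can be done as SYS (CASCADE also removes the objects owned by csmig):

SQL> drop user csmig cascade;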

Matthias