Getting APEX version number from data dictionary

Environment: Oracle 10gR2, Oracle 11gR2, APEX 4.2.2.00.11

If you don’t have access to the development environment and you need to find out which version of APEX is installed on your system, there are several ways to do so:


1) Query APEX_RELEASE

APEX_RELEASE is a public view that is present in each APEX schema.

To query the currently enabled version:

SELECT VERSION_NO FROM APEX_RELEASE;

4.2.2.00.11


To query another installed version of APEX, reference its schema explicitly:

SELECT VERSION_NO FROM APEX_040100.APEX_RELEASE;

4.1.1.00.23
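If you are not sure which other APEX schemas are present in your database, you can list them from DBA_USERS. This is a small sketch that assumes the default APEX_0xxxxx schema naming convention:

SELECT USERNAME FROM DBA_USERS WHERE USERNAME LIKE 'APEX_0%' ORDER BY USERNAME;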


2) Query DBA_REGISTRY

The DBA_REGISTRY view keeps track of all components installed in the database.

SELECT VERSION FROM DBA_REGISTRY WHERE COMP_NAME = 'Oracle Application Express';

4.2.2.00.11
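DBA_REGISTRY also has a STATUS column, which is useful to verify that the component is VALID. A variation on the query above (the COMP_ID value 'APEX' is what my registry shows; treat it as an assumption for other versions):

SELECT COMP_NAME, VERSION, STATUS FROM DBA_REGISTRY WHERE COMP_ID = 'APEX';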


HTH,

Matthias


Finding duplicate BLOBs using DBMS_CRYPTO.HASH and analytic functions

Environment: Oracle Database EE 10.2.0.5

Suppose you have a table with a BLOB column and you want to know whether it contains duplicate BLOBs. A GROUP BY query with COUNT doesn’t work on BLOB columns… One way would be to write a PL/SQL or Perl program that extracts all BLOBs and compares the values across all rows. So, is there no simple way to do this in SQL? Yes, there is! You can use the DBMS_CRYPTO package to hash the values of the BLOB column and then group the hashed values using an analytic function.

Let’s work out an example.

First, we create a simple table with one BLOB column:

CREATE TABLE BLOBS
(
  BLOBID       NUMBER                           NOT NULL,
  BLOBNAME     VARCHAR2(50)                     NOT NULL,
  BLOBCONTENT  BLOB                             NOT NULL
)
;

ALTER TABLE BLOBS ADD (
  CONSTRAINT BLOBS_PK
 PRIMARY KEY
 (BLOBID))
;

Next, let’s fill the table with some records. In this example, records 1, 4 and 5 have the same BLOB values, as do records 2 and 6.

insert into blobs(blobid,blobname,blobcontent)
values(1,'blob 1',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(2,'blob 2',utl_raw.cast_to_raw('contents of blob 2'));

insert into blobs(blobid,blobname,blobcontent)
values(3,'blob 3',utl_raw.cast_to_raw('contents of blob 3'));

insert into blobs(blobid,blobname,blobcontent)
values(4,'blob 4',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(5,'blob 5',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(6,'blob 6',utl_raw.cast_to_raw('contents of blob 2'));

commit;

Now we need to grant our user execute rights on the DBMS_CRYPTO package.
Execute the following SQL as user SYS, connected as SYSDBA:

grant execute on dbms_crypto to matthiash;

Now comes the difficult part: building a query that hashes the contents of the BLOBs using DBMS_CRYPTO, groups identical hashes together, and shows the record IDs so that it’s clear which records have identical BLOB values. I managed to do this using window partitioning: the results are grouped into windows, where each window contains the records whose hashed BLOB contents are the same.

This is the query:

-- 2 = DBMS_CRYPTO.HASH_MD5 (the package constant cannot be referenced directly in SQL)
select blobid,
       blobname,
       blobcontent,
       row_number() over (partition by dbms_crypto.hash(blobcontent, 2)
                          order by blobid asc) row_number,
       count(blobid) over (partition by dbms_crypto.hash(blobcontent, 2)) blobcount
from blobs
order by blobcount desc, dbms_crypto.hash(blobcontent, 2) asc, row_number asc;

And these are the results (the BLOBCONTENT column is omitted for readability):

BLOBID,BLOBNAME,ROW_NUMBER,BLOBCOUNT
1,blob 1,1,3
4,blob 4,2,3
5,blob 5,3,3
2,blob 2,1,2
6,blob 6,2,2
3,blob 3,1,1

So, BLOBs 1, 4 and 5 are grouped, next we have BLOBs 2 and 6, and finally BLOB 3. Mission accomplished! :-) Mind you, this is a very small table, so the query runs very fast. Running DBMS_CRYPTO.HASH on tables with millions of rows or with very large BLOBs may need some CPU power!
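On larger tables you may also want to compute each hash only once instead of three times per row. This is a minimal, untested sketch of the same query using a factored subquery; note that the optimizer may still merge it, so I added a MATERIALIZE hint:

with hashed as (
  select /*+ MATERIALIZE */
         blobid,
         blobname,
         dbms_crypto.hash(blobcontent, 2) blobhash -- 2 = DBMS_CRYPTO.HASH_MD5
  from blobs
)
select blobid,
       blobname,
       row_number() over (partition by blobhash order by blobid asc) rn,
       count(*) over (partition by blobhash) blobcount
from hashed
order by blobcount desc, blobhash asc, rn asc;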

HTH,
Matthias


Script to monitor a process on Linux

Environment: Oracle Linux 6.2

Sometimes you want to monitor a process on your Linux server and send out an alert when the process is no longer running.

The following shell script can be used as an example. It monitors the “dbconsole” process (Database Control) and sends an e-mail message when the process is no longer running.

$ vi check_db_control.sh

#!/bin/sh

# Name of the process to look for in the process list
PROCESSFILE='dbconsole'

# Count matching processes, excluding the grep command itself
if [ `ps ax | grep -v grep | grep -ic "$PROCESSFILE"` -gt 0 ]
then
        echo "$PROCESSFILE is running, everything is fine"
else
        echo "$PROCESSFILE is not running on `hostname -s`!" | mail -s "$PROCESSFILE down!" matthiash@mydomain.com
fi

exit 0

$ chmod u+x check_db_control.sh
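As a side note: on most Linux distributions you can replace the ps/grep pipeline with pgrep. This is an equivalent sketch of the test, assuming pgrep is available (it is on Oracle Linux 6):

# pgrep returns success (0) when at least one process matches the pattern
if pgrep -f "$PROCESSFILE" > /dev/null
then
        echo "$PROCESSFILE is running, everything is fine"
else
        echo "$PROCESSFILE is not running on `hostname -s`!" | mail -s "$PROCESSFILE down!" matthiash@mydomain.com
fi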

To schedule the script, you can add it to cron. For example, this crontab entry runs the script every day at 10:00 AM:

$ crontab -e

# Monitoring dbconsole process
00 10 * * * /home/oracle/scripts/check_db_control.sh 1>/dev/null 2>&1


HTH,
Matthias

Extracting hour and minute from a TIMESTAMP or DATE value

Environment: Oracle 10g (10.2.0.5), Oracle 11gR2

To extract the HOUR and MINUTE parts from a TIMESTAMP column, Oracle offers the EXTRACT(HOUR FROM …) and EXTRACT(MINUTE FROM …) functions.

This is an example using SYSTIMESTAMP:

SELECT EXTRACT(HOUR FROM SYSTIMESTAMP) AS CURRENT_HOUR FROM DUAL;

SELECT EXTRACT(MINUTE FROM SYSTIMESTAMP) AS CURRENT_MINUTE FROM DUAL;


To extract the HOUR and MINUTE parts from a DATE column, Oracle offers TO_CHAR(date, 'HH24') and TO_CHAR(date, 'MI').

This is an example using SYSDATE:

SELECT TO_CHAR(SYSDATE,'HH24') AS CURRENT_HOUR FROM DUAL;

SELECT TO_CHAR(SYSDATE,'MI') AS CURRENT_MINUTE FROM DUAL;
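Note that EXTRACT(HOUR FROM …) cannot be applied directly to a DATE; it raises ORA-30076, because a DATE only supports the YEAR, MONTH and DAY fields. If you prefer EXTRACT over TO_CHAR anyway, you can cast the DATE to a TIMESTAMP first:

SELECT EXTRACT(HOUR FROM CAST(SYSDATE AS TIMESTAMP)) AS CURRENT_HOUR FROM DUAL;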


Matthias

ORA-02374, ORA-12899, ORA-02372 during Data Pump export/import from 10g WE8MSWIN1252 into 11g AL32UTF8

Environment: Oracle 10.2.0.5, Oracle 11.2.0.3, Oracle Linux 6.2

During a data pump export/import from 10g into 11g, I received several errors in the data pump import log file:

ORA-02374: conversion error loading table "COMPOUND"."COMPOUND"
ORA-12899: value too large for column COMPMP (actual: 21, maximum: 20)
ORA-02372: data for row: COMPMP : 0X'3E323030B043206465636F6D706F736974696F6E'

The cause of these errors is that I am migrating data from a database with a single-byte character set (WE8MSWIN1252) into one with a multi-byte character set (AL32UTF8). This means that some single-byte characters will be “expanded” into multi-byte characters, and if the column was already filled completely, ORA-12899 is raised, showing the maximum column length allowed and the actual length needed to fit all the characters into the multi-byte column.

There are several solutions to this problem: increase the size of the column or truncate the data before the import, stick to the same character set, pre-create the tables with modified column sizes in the target database before the import, … I won’t elaborate too much on this, but a sketch of one option follows below.
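As an illustration of the pre-create option: a common fix is to switch the affected column to character-length semantics in the target database, so it can hold 20 characters instead of 20 bytes, and then import the data using Data Pump’s TABLE_EXISTS_ACTION parameter. A sketch using the table from the log above (not something I tested for this post):

ALTER TABLE COMPOUND.COMPOUND MODIFY (COMPMP VARCHAR2(20 CHAR));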

What if you want to see the actual values or ROWIDs of the affected rows? If only a few rows have problems, you can query the original source table based on the column value from the import log file, converted back from its raw representation:

select rowid, compmp from compound.compound where compmp = utl_raw.cast_to_varchar2('3E323030B043206465636F6D706F736974696F6E');

Output:

ROWID,COMPMP
AAAQAYAAKAABjDuAAd,>200°C decomposition

Here we see that the degree character (°) is causing the problem during the import: it expands from 1 byte to 2 bytes, and the column is already filled to its maximum of 20 bytes.
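You can see the expansion for yourself by comparing character length with byte length. A quick illustration, assuming you run it in the AL32UTF8 target database (where ° takes two bytes):

SELECT LENGTH('°C') AS CHAR_COUNT, LENGTHB('°C') AS BYTE_COUNT FROM DUAL;

This returns CHAR_COUNT = 2 and BYTE_COUNT = 3.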

However, if you have a lot of rows with conversion errors in your import log file, there’s another solution. This requires the installation of the Database Character Set Scanner utility (csscan) into your source database.

Csscan is installed by running the csminst.sql script under $ORACLE_HOME/rdbms/admin. This script creates a user “csmig”. It’s a good idea to first modify the following line, so that the csmig user doesn’t write into the SYSTEM tablespace:

alter user csmig default tablespace SYSTEM quota unlimited on SYSTEM

In my case, I replaced SYSTEM with SYSAUX.

Let’s install csscan into our source database:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog

SQL*Plus: Release 10.2.0.5.0 - Production on Wed Apr 11 13:21:33 2012

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.

SQL> conn / as sysdba
Connected.
SQL> set TERMOUT ON
SQL> set ECHO ON
SQL> spool csminst.log
SQL> @csminst.sql

After this, you should check the csminst.log file for errors. In my case, there were some errors because the script tried to drop public synonyms that didn’t exist; nothing to worry about.

Now that csscan is installed, you can use it to check specific schemas or tables for data conversion problems during the migration to another character set. I executed it as follows (you will need the password of the SYS user):

$ csscan \"sys as sysdba\" LOG=/tmp/csscan.log USER=compound CAPTURE=Y TOCHAR=AL32UTF8 ARRAY=1024000 PROCESS=3

This will analyze all character data stored in the tables of the “compound” schema. Csscan creates three log files; the one with the extension “.err” lists all affected rows with their corresponding ROWIDs. You can count the affected rows in that file using the UNIX “grep” command:

$ grep -i "exceed column size" /tmp/csscan.log.err | wc -l

568

So, in my case, 568 rows from the “compound” schema will have conversion problems during the migration to the AL32UTF8 character set.

You can remove csscan by dropping the user “csmig”, as shown below. More information about csscan can be found on My Oracle Support; see document ID 1297961.1.
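The cleanup itself is a one-liner, to be executed as SYS:

SQL> DROP USER csmig CASCADE;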

Matthias