Measuring Oracle I/O performance using DBMS_RESOURCE_MANAGER.CALIBRATE_IO

Environment: Oracle Database 11.2.0.3.0, Oracle Linux 6.4

Oracle 11g has a procedure named “CALIBRATE_IO” that you can use to measure the I/O performance of your Oracle system, for example to evaluate disk subsystem layout or configuration changes.

The procedure is part of the DBMS_RESOURCE_MANAGER PL/SQL package. Only users with the SYSDBA privilege can execute it. You also need asynchronous I/O enabled for your datafiles.
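
A quick way to verify this is to query V$IOSTAT_FILE; the ASYNCH_IO column should show ASYNC_ON for every datafile (a minimal check, assuming you are connected as a privileged user):

SELECT f.name, i.asynch_io
FROM   v$datafile f, v$iostat_file i
WHERE  f.file# = i.file_no
AND    i.filetype_name = 'Data File';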

Note: do not run this on a live production system. For optimal results, use this on a quiet system with no other users connected.

This is an example with user SYS as SYSDBA (you have to enable DBMS_OUTPUT, e.g. with SET SERVEROUTPUT ON in SQL*Plus, to see the results):

DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      1 /* # of disks */
      , 10 /* maximum tolerable latency in milliseconds */
      , iops /* I/O rate per second */
      , mbps /* throughput, MB per second */
      , lat  /* actual latency in milliseconds */
  );
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || mbps);
END;
/

On my system, the total execution time was about 8 minutes.
The output is as follows:

max_iops = 2588
latency = 8
max_mbps = 241

The results are also stored in the dictionary view DBA_RSRC_IO_CALIBRATE.
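
For example (a minimal query; I am assuming the standard 11.2 column names of this view):

SELECT start_time, max_iops, max_mbps, max_pmbps, latency, num_physical_disks
FROM   dba_rsrc_io_calibrate;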

HTH,
Matthias

APEX 4.2.2 released!

Oracle released a new minor version of Application Express: 4.2.2, or to be complete: 4.2.2.00.11.

Those who are already on version 4.2.x and have a support contract can download a patch from My Oracle Support. The patch number is 16277995. Other users can download the full version here.

Interesting documents to read are the known issues and the list of bugs fixed in this release. Unfortunately, the problem with APEX_LDAP.AUTHENTICATE and accented user names (see my earlier blog post) is not mentioned in the list of bug fixes.

Matthias

OPatch warnings after installation of PSU 11.2.0.3.6 on Oracle 11.2.0.3.0

Environment: Oracle Database 11.2.0.3.0, Oracle Linux 6.4, OPatch v. 11.2.0.3.4

I was installing Patch Set Update 11.2.0.3.6 (as part of the CPU for April 2013) on a freshly installed Oracle 11.2.0.3.0 database, when I noticed the following warnings in the OPatch log file:

OPatch found the word "warning" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'

The database seemed fine, though. I contacted Oracle Support just in case, and it was confirmed as a known issue; the My Oracle Support document with ID 1448337.1 has more information on it. So basically, these warnings can be safely ignored.

Matthias

ORA-20001 when dropping DBMS_SCHEDULER job with white spaces in name

Environment: Oracle Database 11g EE 11.2.0.3.0

When I tried to drop a scheduled job using DBMS_SCHEDULER.DROP_JOB, I received the following error message:

BEGIN
  DBMS_SCHEDULER.DROP_JOB(
      job_name => 'MATTHIASH.gather schema stats SCOTT'
  );
END;
/

ORA-20001: comma-separated list invalid near T
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 623
ORA-06512: at line 2

I was puzzled for a while… and then I saw what the problem was: the job name (“gather schema stats SCOTT”) contains spaces, and that causes DBMS_SCHEDULER.DROP_JOB to throw an error.
However, there’s a workaround: if you enclose the job name in double quotes, DROP_JOB works fine:

BEGIN
  DBMS_SCHEDULER.DROP_JOB(
      job_name => 'MATTHIASH."gather schema stats SCOTT"'
  );
END;
/

By the way, the same issue occurs with other DBMS_SCHEDULER procedures, such as CREATE_JOB and RUN_JOB.
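
For example, creating such a job in the first place needs the same double quoting (a minimal sketch; the job type and action here are just an illustration):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'MATTHIASH."gather schema stats SCOTT"',
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''SCOTT''); END;',
      enabled    => FALSE
  );
END;
/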

HTH,
Matthias

Finding duplicate BLOBS using DBMS_CRYPTO.HASH and analytic functions

Environment: Oracle Database EE 10.2.0.5

Suppose that you have a table with a BLOB column, and you want to know whether there are duplicate BLOBs in that table. A GROUP BY query with COUNT doesn’t work on BLOB columns. One way would be to write a PL/SQL or Perl program that extracts all BLOBs and compares the values between all rows. So, is there no simple way to do this in SQL? Yes, there is! You can use the DBMS_CRYPTO package to hash the values of the BLOB column and then group the hashed values using an analytic function.

Let’s work out an example.

First, we create a simple table with one BLOB column:

CREATE TABLE BLOBS
(
  BLOBID       NUMBER                           NOT NULL,
  BLOBNAME     VARCHAR2(50)                     NOT NULL,
  BLOBCONTENT  BLOB                             NOT NULL
)
;

ALTER TABLE BLOBS ADD (
  CONSTRAINT BLOBS_PK
 PRIMARY KEY
 (BLOBID))
;

Next, let’s fill the table with some records. In this example, records 1, 4 and 5 have the same BLOB contents, as do records 2 and 6.

insert into blobs(blobid,blobname,blobcontent)
values(1,'blob 1',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(2,'blob 2',utl_raw.cast_to_raw('contents of blob 2'));

insert into blobs(blobid,blobname,blobcontent)
values(3,'blob 3',utl_raw.cast_to_raw('contents of blob 3'));

insert into blobs(blobid,blobname,blobcontent)
values(4,'blob 4',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(5,'blob 5',utl_raw.cast_to_raw('contents of blob 1'));

insert into blobs(blobid,blobname,blobcontent)
values(6,'blob 6',utl_raw.cast_to_raw('contents of blob 2'));

commit;

Now we need to grant our user execute rights on the DBMS_CRYPTO package.
Execute the following SQL with user SYS as SYSDBA:

grant execute on dbms_crypto to matthiash;

Now comes the difficult part: create a query so that the contents of the BLOBs are hashed using DBMS_CRYPTO, hashes with the same value are grouped together, and the record IDs are shown so that it’s clear which records have identical BLOB values. I managed to do this using window partitioning. Basically, we group the results into different windows, where each window contains the records that have the same hashed BLOB contents.

This is the query (the literal 2 is the value of the constant DBMS_CRYPTO.HASH_MD5; package constants cannot be referenced directly in SQL):

select blobid,
       blobname,
       blobcontent,
       row_number() over (partition by dbms_crypto.hash(blobcontent, 2) order by blobid asc) row_number,
       count(blobid) over (partition by dbms_crypto.hash(blobcontent, 2)) blobcount
from   blobs
order  by blobcount desc, dbms_crypto.hash(blobcontent, 2) asc, row_number asc;

And these are the results (the BLOBCONTENT column is omitted for readability):

BLOBID,BLOBNAME,ROW_NUMBER,BLOBCOUNT
1,blob 1,1,3
4,blob 4,2,3
5,blob 5,3,3
2,blob 2,1,2
6,blob 6,2,2
3,blob 3,1,1

So, BLOBs 1, 4 and 5 are grouped, next we have BLOBs 2 and 6, and finally BLOB 3. Mission accomplished! :-) Mind you, this is a very small table, so the query runs very fast. Using DBMS_CRYPTO.HASH on tables with millions of rows or on very large BLOBs might need some CPU power!
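
If you only want to see the rows that actually have duplicates, you can wrap the analytic count in an inline view and filter on it (a variation on the query above, under the same assumptions):

select blobid, blobname, blobcount
from (select blobid,
             blobname,
             count(*) over (partition by dbms_crypto.hash(blobcontent, 2)) blobcount
      from   blobs)
where  blobcount > 1
order  by blobcount desc, blobid;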

HTH,
Matthias

APEX Listener 2.0.1 released

This had escaped my attention: since the beginning of March, a new bug-fix version of the APEX Listener 2.0 has been available for download. The latest version is now 2.0.1 and it contains the following bug fixes:

  • Support the handling of responses around 32 KB in size on Oracle 10.2 databases
  • Handling of large identifiers that prevented some RESTful Services being found or dispatched
  • Change the behaviour of BDB storage: each restart of the listener gets its own folder for BDB storage, allowing multiple (e.g. clustered) nodes to share a single configuration folder without getting an error
  • JSON streams not being parsed as UTF-8 encoded text
  • Dispatching of RESTful Services was causing a 404 not found exception when more than one database was configured
  • Improved the handling of Media Resources that are generated from large XMLType values
  • Fixed 503 Service Unavailable statuses generated by the PL/SQL Gateway to be mapped to a 404 Not Found status
  • Improve the error message generated when the icap.server/icap.port settings are misconfigured
  • PDF printing not handling results & XSL that totalled over 32766 bytes in size
  • RESTful Service Media Resource source type, that was preventing CLOBs over 4000 bytes in size from being returned without error
  • Handling of the icap.server and icap.port configuration settings
  • Virus scanning of empty and very small files
  • Debug.debugger configuration setting not working properly
  • Class loading in Standalone Mode was preventing FOP (PDF) printing working
  • Reporting of connection pool usage statistics

It’s cool that PDF printing now supports results over 32 KB in size, and that reporting on connection pool usage is now available.

Maybe it’s time to upgrade the APEX Listener on our production environment :-)

Matthias