Configuration of HugePages for Oracle database 11.2.0.3 on Oracle Linux 6.4

Environment: Oracle database 11.2.0.3 Enterprise Edition, Oracle Linux 6.4 64-bit

HugePages can give a performance boost on Linux systems with large SGA sizes. However, this is not set up by default, and the configuration can sometimes be a bit tricky.

This is how I did it on a system with a 4GB SGA size:

1) Disable Automatic Memory Management (AMM)

AMM is not compatible with HugePages. I disabled it by setting the following memory-related initialization parameters:

ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;

ALTER SYSTEM SET memory_target=0 SCOPE=BOTH;

ALTER SYSTEM SET sga_max_size=4016M SCOPE=SPFILE;

ALTER SYSTEM SET sga_target=4016M SCOPE=BOTH;

+ restart the instance
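
Since memory_max_target and sga_max_size are static parameters, the changes only take effect after a restart. A minimal restart sequence from SQL*Plus could look like this (assuming OS authentication as SYSDBA):

$ sqlplus -s / as sysdba <<'EOF'
-- stop and start the instance so that the SPFILE changes take effect
SHUTDOWN IMMEDIATE
STARTUP
EOF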

2) Calculate the number of HugePages needed

The size of one HugePage can be found as follows:

$ cat /proc/meminfo|grep Hugepagesize
Hugepagesize: 2048 kB

The number of HugePages that you need can be found with the following script:

#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does the calculation for all shared memory
# segments available when the script is run, whether or not
# they are Oracle RDBMS shared memory segments.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 page to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done
# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End

$ ./hugepages_settings.sh
Recommended setting: vm.nr_hugepages = 2013

3) Set the number of HugePages in /etc/sysctl.conf

Add the following to /etc/sysctl.conf:

vm.nr_hugepages = 2013
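
The new value is picked up at boot time. You can also try to apply it immediately with sysctl, but since HugePages must be allocated from contiguous free memory, the reboot in step 6 is the more reliable way:

# sysctl -p
# grep HugePages_Total /proc/meminfo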

4) Set the memory limits for user “oracle”

The memlock limit (in kB) for user “oracle” can be calculated by multiplying the number of HugePages by the Hugepagesize in kB (so, in my case, 2013 * 2048 = 4122624).

Add this number to /etc/security/limits.conf:

oracle soft memlock 4122624
oracle hard memlock 4122624
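
After the next login of the oracle user (or after the reboot in step 6), ulimit should report the new limit in kB:

$ su - oracle -c 'ulimit -l'
4122624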

5) Set USE_LARGE_PAGES

By setting the initialization parameter USE_LARGE_PAGES to “only”, you can force the instance to start up only when enough HugePages are available.

ALTER SYSTEM SET USE_LARGE_PAGES=only SCOPE=SPFILE;
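
Because of SCOPE=SPFILE, this parameter only takes effect after the restart in step 6. Afterwards, you can verify it from SQL*Plus:

$ sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER use_large_pages
EOF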

6) Reboot the system

Now reboot the server. You can check in two ways whether large pages are being used:

$ cat /proc/meminfo|grep HugePages

AnonHugePages: 237568 kB
HugePages_Total: 2013
HugePages_Free: 1532
HugePages_Rsvd: 1528
HugePages_Surp: 0
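
Note that HugePages_Free alone can be misleading: reserved pages (HugePages_Rsvd) are committed to the SGA but have not been touched yet. The number of pages actually in use or reserved is Total - Free + Rsvd, which here gives 2013 - 1532 + 1528 = 2009 and matches the alert.log output below:

$ awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2} /^HugePages_Rsvd/ {r=$2} END {print t-f+r}' /proc/meminfo
2009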

Or check the alert.log file for the database instance:

Starting ORACLE instance (normal)
****************** Large Pages Information *****************
Parameter use_large_pages = ONLY

Total Shared Global Region in Large Pages = 4018 MB (100%)

Large Pages used by this instance: 2009 (4018 MB)
Large Pages unused system wide = 4 (8192 KB) (alloc incr 16 MB)
Large Pages configured system wide = 2013 (4026 MB)
Large Page size = 2048 KB
***********************************************************

Good luck! :-)

Matthias

Measuring Oracle I/O performance using DBMS_RESOURCE_MANAGER.CALIBRATE_IO

Environment: Oracle database 11.2.0.3.0, Oracle Linux 6.4

Oracle 11g has a procedure named “CALIBRATE_IO” that you can use to measure the I/O performance of your Oracle system, for example to evaluate the impact of disk subsystem layout or configuration changes.

The procedure is part of the DBMS_RESOURCE_MANAGER PL/SQL package. Only users with the SYSDBA privilege can execute it. You also need asynchronous I/O enabled for your datafiles (for example, with FILESYSTEMIO_OPTIONS set to ASYNCH or SETALL).
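
If you are not sure whether asynchronous I/O is active, you can check V$IOSTAT_FILE; the ASYNCH_IO column should show ASYNC_ON for your datafiles:

$ sqlplus -s / as sysdba <<'EOF'
SELECT file_no, asynch_io
FROM   v$iostat_file
WHERE  filetype_name = 'Data File';
EOF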

Note: do not run this on a live production system. For optimal results, use this on a quiet system with no other users connected.

This is an example with user SYS as SYSDBA (you have to enable DBMS_OUTPUT with SET SERVEROUTPUT ON to see the results):

SET SERVEROUTPUT ON

DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
      1 /* # of disks */
      , 10 /* maximum tolerable latency in milliseconds */
      , iops /* I/O rate per second */
      , mbps /* throughput, MB per second */
      , lat  /* actual latency in milliseconds */
  );
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || mbps);
END;
/

On my system, the total execution time was about 8 minutes.
The output is as follows:

max_iops = 2588
latency = 8
max_mbps = 241

The results can also be found in the view DBA_RSRC_IO_CALIBRATE.
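
For example, to see the latest calibration results together with the time the run was started:

$ sqlplus -s / as sysdba <<'EOF'
SELECT start_time, max_iops, max_mbps, latency
FROM   dba_rsrc_io_calibrate;
EOF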

HTH,
Matthias

Monitoring JVM memory performance of GlassFish 3.1.2

Environment: GlassFish OSE 3.1.2.2, Oracle Linux 6.3

In this blog post I will explain how you can enable and monitor the memory statistics of the Java Virtual Machine in GlassFish 3.1.2.

1) Enable monitoring for the JVM module

There are 2 ways to enable JVM monitoring: by using the graphical admin console, or from the command line by running the asadmin command.

From the graphical admin console

Start up the admin console (by default this runs on port 4848), log in as admin and click on “Monitoring Data” in the left menu. Now click on “Configure Monitoring” in the right menu for the instance you want to monitor. You should now see the screen with the list of configurable components (see screenshot below). Set the Monitoring Level of the JVM component to “Low” and click on Save. No instance restart is needed for this change.

From the command line

Start up the asadmin command (under the ../glassfish/bin folder). Use the subcommand “enable-monitoring” or “disable-monitoring” to enable/disable monitoring for a specific component. To specify the instance that you want to configure, add the “--target” option.

asadmin> enable-monitoring --modules jvm=LOW --target myinstance
Command enable-monitoring executed successfully.
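
You can verify the setting with the “get” subcommand. Note that the exact dotted name depends on the configuration your instance uses (server-config below is the default for the DAS; a standalone instance typically has its own <instance-name>-config):

asadmin> get configs.config.server-config.monitoring-service.module-monitoring-levels.jvm
configs.config.server-config.monitoring-service.module-monitoring-levels.jvm=LOW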

2) View JVM memory statistics

From the graphical admin console

Click on “Monitoring Data” in the left menu and then on “Server” (under “View Monitoring Data”) in the right menu. You should now see the Server Monitoring screen (see screenshot below). To view the JVM memory statistics, click on the blue arrow next to “JVM: Memory Statistics”.

From the command line

Start up the asadmin command. Use the “get --monitor” subcommand to retrieve the required monitoring statistics.

asadmin> get --monitor myinstance.jvm.memory.usedheapsize.*
myinstance:
myinstance.jvm.memory.dotted-name = myinstance.jvm.memory
myinstance.jvm.memory.usedheapsize-count-count = 119150920
myinstance.jvm.memory.usedheapsize-count-description = Amount of used memory in bytes
myinstance.jvm.memory.usedheapsize-count-lastsampletime = 1346166888774
myinstance.jvm.memory.usedheapsize-count-name = UsedHeapSize
myinstance.jvm.memory.usedheapsize-count-starttime = 1346144878093
myinstance.jvm.memory.usedheapsize-count-unit = bytes

Command get executed successfully.

3) Schedule monitoring

To schedule the monitoring of memory statistics, you can simply add an entry to your crontab file. The following example executes every 10 minutes and appends the date/time and the used heap size to a log file.

$ crontab -l
*/10 * * * * echo `date`>>/tmp/glassfish_memory.log;/u01/app/glassfish/bin/asadmin get --monitor myinstance.jvm.memory.usedheapsize-count-count>>/tmp/glassfish_memory.log
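
If the one-liner becomes hard to read, you could move the logic into a small wrapper script and call that from cron instead (a sketch; the script name is arbitrary, the paths and instance name are taken from the example above):

#!/bin/bash
# log_gf_heap.sh - append a timestamp and the current heap usage to a log file
LOG=/tmp/glassfish_memory.log
ASADMIN=/u01/app/glassfish/bin/asadmin

{
  date
  "$ASADMIN" get --monitor myinstance.jvm.memory.usedheapsize-count-count
} >> "$LOG"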

HTH,
Matthias

Oracle 11g listener causing failed TCP connection attempts on a non-RAC system

Environment: Oracle database 11.2.0.3, Oracle Linux 6.2

I was debugging a possible network problem on an Oracle Linux server with a fresh installation of Oracle 11gR2 when I noticed the following:

$ netstat -s

Tcp:
1178980 active connections openings
273496 passive connection openings
911657 failed connection attempts

The number of “failed connection attempts” increased by one every 2 seconds. So, something was clearly trying to connect to something else, and the connection failed for some reason. I couldn’t find a way to debug this further with netstat. However, I had full access to the server, so I shut down the Oracle services one by one to see if this would stop the growing number of failed connection attempts. When I stopped the Oracle listener service, it did…

Listener logging was on and there were a lot of warnings in the listener.log file:

WARNING: Subscription for node down event still pending

I did a search on My Oracle Support and found more information in the MOS document with ID 372959.1. Apparently, the listener tries to contact a RAC service, but, since this is a non-RAC installation, the connection fails. The solution is to add SUBSCRIBE_FOR_NODE_DOWN_EVENT_<listener name>=OFF to your listener.ora file and then restart the listener. This should remove the warnings and stop the failed connection attempts.

This is how the listener.ora file now looks for my listener with the name “LISTENER”:

# listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-tst.mydomain.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

# see metalink note 372959.1
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF
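
After restarting the listener, you can quickly confirm that the counter has stopped growing by taking two samples a few seconds apart; the count should now stay constant:

$ netstat -s | grep "failed connection"
911657 failed connection attempts
$ sleep 10
$ netstat -s | grep "failed connection"
911657 failed connection attempts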

Matthias

APEX and the Result Cache in Oracle 11gR2 EE

Environment: Oracle database 11.2.0.3 EE, APEX 4.1.1.00.23

I was exploring the Result Cache features of Oracle 11g when I noticed that APEX already uses the result cache:

SELECT NAME FROM V$RESULT_CACHE_OBJECTS WHERE rownum < 20;

NAME
APEX_040100.WWV_FLOW_MENUS
APEX_040100.WWV_FLOW_WORKSHEET_COLUMNS
APEX_040100.WWV_FLOW_WORKSHEETS
APEX_040100.WWV_FLOW_REGION_REPORT_COLUMN
APEX_040100.WWV_FLOW_STEP_BRANCHES
APEX_040100.WWV_FLOW_STEP_COMPUTATIONS
APEX_040100.WWV_FLOW_JAVASCRIPT_EVENTS
APEX_040100.WWV_FLOW_NATIVE_PLUGINS
APEX_040100.WWV_FLOW_PAGE_DA_EVENTS
APEX_040100.WWV_FLOW_STANDARD_EVENTS
APEX_040100.WWV_FLOW_PLUGIN_EVENTS
APEX_040100.WWV_FLOW_LANG
APEX_040100.WWV_FLOW_LISTS_OF_VALUES$
APEX_040100.WWV_FLOW_FLASH_CHARTS_5
APEX_040100.WWV_FLOW_PLUGIN_ATTR_VALUES
APEX_040100.WWV_FLOW
APEX_040100.APEX_APPLICATION_PAGE_ITEMS
APEX_040100.WWV_FLOW_STEP_ITEM_HELP

Apparently, the result cache for APEX is not enabled at the table level, but through the /*+ result_cache */ hint in the SQL queries.
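
For illustration, this is roughly what such a query looks like (a minimal sketch; SCOTT.EMP is just a demo table):

$ sqlplus -s scott/tiger <<'EOF'
-- the result set is stored in the result cache, so repeated
-- executions can be answered without re-reading the table
SELECT /*+ result_cache */ deptno, COUNT(*)
FROM   emp
GROUP BY deptno;
EOF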

This could be one of the reasons why APEX performs better on 11g than on 10g.

However, I now wonder whether this requires any tuning of the database’s Result Cache initialization parameters, for example setting result_cache_max_size to a higher value than the default (which depends on the memory settings). I don’t remember reading anything about this in the APEX documentation, and I couldn’t find anything relevant on OTN either. Has anyone done this kind of tuning?
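
Before changing result_cache_max_size, you can at least check how much of the result cache is actually in use with DBMS_RESULT_CACHE.MEMORY_REPORT:

$ sqlplus -s / as sysdba <<'EOF'
SET SERVEROUTPUT ON
-- prints the configured size and current usage of the result cache
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT
EOF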

Note: Result cache is not available on Oracle 11g XE (Express Edition)…

Matthias