How to export AWS CLI CloudWatch data to Oracle using SQL*Loader

Environment: AWS CLI 1.17.7, Linux CentOS 7, Oracle XE 18c

In my previous blog post I explained how to load AWS CLI CloudWatch data into a MySQL database. In this blog post I will show how to do something similar with Oracle as the database. The goal is to retrieve CPU CloudWatch metrics for an Oracle database on RDS and load the data into a local Oracle database using the SQL*Loader utility.

Note: I assume that you already installed the AWS CLI tools and also have access and connectivity to an Oracle database (for this example I used Oracle XE 18c, installed on the same host).

Let’s first create a table to store the AWS CloudWatch data:

CREATE TABLE MATTHIAS.AWS_CLOUDWATCH_CPU_MON
(
AC_ID NUMBER GENERATED ALWAYS AS IDENTITY,
INSTANCE_ID VARCHAR(50) NOT NULL,
CPU_MAX_PERCENT NUMBER NOT NULL,
MON_TIMESTAMP TIMESTAMP NOT NULL,
PRIMARY KEY (AC_ID)
);

ALTER TABLE MATTHIAS.AWS_CLOUDWATCH_CPU_MON
ADD CONSTRAINT aws_cl_uc1 UNIQUE (INSTANCE_ID,MON_TIMESTAMP);

Some explanation about the columns in the database table:

  • AC_ID: auto-generated ID and primary key constraint
  • INSTANCE_ID: the name of the RDS database instance
  • CPU_MAX_PERCENT: the max CPU CloudWatch metric (a decimal number)
  • MON_TIMESTAMP: date and time for the CPU metric

Similar to the MySQL example in my previous blog post, we will first get the data using AWS CLI, print the data to a text file, filter out the unneeded columns, and finally load the text file into Oracle using SQL*Loader.

Let’s start by creating two files to store the SQL*Loader configuration data: the parameter file (sqlldr.par) and the control file (sqlldr.ctl).

In the parameter file we will store the Oracle database connection details, the name of the control file, and the name of the log file:

$ cd /home/mhoys/oracle-scripts

$ vi sqlldr.par

USERID=MATTHIAS/<password>@xepdb1
CONTROL=sqlldr.ctl
LOG=sqlldr.log

$ chmod 600 sqlldr.par

MATTHIAS is the database user used to connect to the Oracle database with the connect identifier "xepdb1".

Don’t forget to change the file permissions to 600, so that only the file owner can read the file.
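
Before you continue, you can quickly verify that the connection details in the parameter file work, for example with a manual SQL*Plus login (just a sanity check; replace <password> with your real password):

$ sqlplus MATTHIAS/<password>@xepdb1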

Now let’s create the SQL*Loader control file:

$ cd /home/mhoys/oracle-scripts

$ vi sqlldr.ctl

OPTIONS (SKIP=1)
LOAD DATA
INFILE 'aws_cloudwatch_cpu_mon.txt'
INTO TABLE aws_cloudwatch_cpu_mon
APPEND
FIELDS TERMINATED BY "\t"
TRAILING NULLCOLS
(INSTANCE_ID CONSTANT "rdsoradb001",
CPU_MAX_PERCENT DECIMAL EXTERNAL,
MON_TIMESTAMP "to_timestamp(:MON_TIMESTAMP,'YYYY-MM-DD\"T\"hh24:mi:ss\"Z\"')" )

Some explanation:

  • OPTIONS (SKIP=1): the first row in the file aws_cloudwatch_cpu_mon.txt (the output of the AWS CLI command) is empty, so we're going to skip it
  • INFILE 'aws_cloudwatch_cpu_mon.txt': the name of the AWS CLI output file where we are reading the data from
  • APPEND: this means we are going to append new data to the existing data in our table
  • FIELDS TERMINATED BY "\t": the fields in the text file are delimited by the TAB character
  • INSTANCE_ID CONSTANT "rdsoradb001": the name of the RDS database, treated as a constant here
  • CPU_MAX_PERCENT DECIMAL EXTERNAL: the max CPU CloudWatch metric is a decimal value (for example: 25.00)
  • MON_TIMESTAMP "to_timestamp(:MON_TIMESTAMP,'YYYY-MM-DD\"T\"hh24:mi:ss\"Z\"')": the date value from AWS CloudWatch is returned in the format "2020-02-05T12:46:00Z", so we are converting it here to the Oracle TIMESTAMP format.

For your info, the full SQL*Loader documentation can be found here.
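
If you want to see the date conversion in action before running a load, you can test the format mask from the control file against the sample value mentioned above, straight from the shell (just a quick check; replace <password> with your real password):

$ sqlplus -s MATTHIAS/<password>@xepdb1 <<EOF
select to_timestamp('2020-02-05T12:46:00Z','YYYY-MM-DD"T"hh24:mi:ss"Z"') from dual;
EOF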

Now that we have the SQL*Loader parameter file and control file in place, let’s create the main shell script to call the AWS CLI and SQL*Loader commands:

$ cd /home/mhoys/oracle-scripts

$ vi aws_cw_cpu_ora_load.sh

#!/bin/bash

# change to the script directory
cd /home/mhoys/oracle-scripts

# set Oracle environment variables (needed for cron)
ORACLE_BASE=/opt/oracle; export ORACLE_BASE
ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE;export ORACLE_HOME

start_time=`date --utc +%FT%TZ -d "12 hours ago"`
end_time=`date --utc +%FT%TZ`
instance_id="rdsoradb001"
file_name="aws_cloudwatch_cpu_mon.txt"

# run aws cloudwatch script and print output to text file
/home/mhoys/.local/bin/aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time "$start_time" --end-time "$end_time" --period 300 --namespace AWS/RDS --statistics Maximum --dimensions Name=DBInstanceIdentifier,Value="$instance_id" --output text|awk '{print $2 "\t" $3}' > $file_name

# run sqlloader to load text file into Oracle database
/opt/oracle/product/18c/dbhomeXE/bin/sqlldr parfile=sqlldr.par

Here we get the maximum CPU metrics for our RDS database instance "rdsoradb001" for the last 12 hours, with an interval of 5 minutes (300 seconds), so there will be 144 records in total (12 hours x 12 data points per hour). We only need the second and third columns of the output, which is why we pipe the results through the "awk" command.
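
After a run, the generated text file should contain an empty first line followed by tab-separated value/timestamp pairs. You can take a quick look at it like this (the two data lines shown are purely illustrative):

$ head -3 aws_cloudwatch_cpu_mon.txt

25.00	2020-02-05T12:46:00Z
25.00	2020-02-05T12:51:00Z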

In the last line we call the SQL*Loader executable (sqlldr), passing the parameter file that we created earlier, which contains the connection details and the name of the control file to be used.

Once you have saved the script, let’s try to run it manually:

$ cd /home/mhoys/oracle-scripts

$ ./aws_cw_cpu_ora_load.sh

SQL*Loader: Release 18.0.0.0.0 - Production on Wed Feb 5 17:43:07 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 144

Table AWS_CLOUDWATCH_CPU_MON:
144 Rows successfully loaded.

Check the log file:
sqlldr.log
for more information about the load.

Victory! 144 records were loaded into the table AWS_CLOUDWATCH_CPU_MON. Check the sqlldr.log and aws_cloudwatch_cpu_mon.txt files that should have been created in the same directory. If there are rejected records, there should also be a file called "aws_cloudwatch_cpu_mon.bad".
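
If you want to double-check from the command line, you can count the rows directly (a quick sanity check; replace <password> with your real password):

$ sqlplus -s MATTHIAS/<password>@xepdb1 <<EOF
select count(*) from aws_cloudwatch_cpu_mon;
EOF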

We can now schedule the script using the cron scheduler:

$ crontab -e

0 8,20 * * * /home/mhoys/oracle-scripts/aws_cw_cpu_ora_load.sh > /dev/null

In this example we scheduled the script to run twice daily (at 08:00 and 20:00). The standard output is redirected to /dev/null to prevent cron from sending an e-mail on every execution of the script.
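
If you would rather keep a trace of each run instead of discarding the output, you could redirect both standard output and standard error to a log file instead (the log file path below is just an example):

0 8,20 * * * /home/mhoys/oracle-scripts/aws_cw_cpu_ora_load.sh >> /home/mhoys/oracle-scripts/aws_cw_cpu_ora_load.log 2>&1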

That’s it!

HTH

Matthias

How to export AWS CLI CloudWatch data to MySQL

Environment: AWS CLI 1.17.7, Linux CentOS 7, MySQL 5.6

AWS CLI (AWS Command Line Interface) is a set of tools to query and manage Amazon Web Services from the command line. These tools export their results in text format (clear text, table format or JSON format), and there is no direct way to store the output in a database table. In this blog post I will show you how you can use a shell script to load the data from AWS CLI into a table in MySQL.

If you don't have the AWS CLI tools yet, you can find some installation instructions here. You also need to install the MySQL client libraries if you are not running the AWS CLI tools on the same server as your MySQL database (in my case, they are both on the same system).

I now want to capture the CloudWatch CPU metrics (processor usage) of an EC2 server on AWS, output the results in text format and then load these results into a MySQL database.

Let’s first create the table in MySQL:

USE matthias;

CREATE TABLE aws_cloudwatch_cpu_mon
( 
AC_ID INT NOT NULL AUTO_INCREMENT, 
INSTANCE_ID VARCHAR(50) NOT NULL, 
CPU_MAX_PERCENT DECIMAL(5,2) NOT NULL,
MON_TIMESTAMP TIMESTAMP NOT NULL,
PRIMARY KEY (AC_ID)
);

ALTER TABLE aws_cloudwatch_cpu_mon
ADD CONSTRAINT aws_cl_uc1 UNIQUE (instance_id,mon_timestamp);

Here I created a table named "aws_cloudwatch_cpu_mon" in my database "matthias" with the following four columns:

  • AC_ID: autonumber ID column, also the primary key
  • INSTANCE_ID: the instance ID of the EC2 server in AWS
  • CPU_MAX_PERCENT: the maximum percentage CPU CloudWatch metric
  • MON_TIMESTAMP: a timestamp field capturing the date/time of the CPU metric

I also added a constraint so that the combination INSTANCE_ID/MON_TIMESTAMP is UNIQUE (no duplicate values allowed).

I’m now going to create a small file to store the MySQL password needed to connect to the database. We want to be able to schedule the script using cron, so the password must be read from a file during the execution of the script (replace <password> with your real password).

$ vi mysql-opts

[client]
password="<password>"

$ chmod 600 mysql-opts

I also changed the file access rights to 600 so that only the owner of the file can read it.
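
You can verify that the password file works before wiring it into the script, for example with a quick test query (note that --defaults-file must be the first option on the mysql command line; the path below assumes the file lives in /home/mhoys/mysql-scripts):

$ mysql --defaults-file=/home/mhoys/mysql-scripts/mysql-opts -u matthias -e "SELECT 1;"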

Let’s create the full script now:

$ vi aws_cw_cpu_mysql_load.sh

#!/bin/bash

start_time=`date --utc +%FT%TZ -d "12 hours ago"`
end_time=`date --utc +%FT%TZ`
instance_id="i-123456789"
file_name="/home/mhoys/mysql-scripts/aws_cloudwatch_cpu_mon.txt"

/home/mhoys/.local/bin/aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time "$start_time" --end-time "$end_time" --period 300 --namespace AWS/EC2 --statistics Maximum --dimensions Name=InstanceId,Value="$instance_id" --output text|awk '{print $2 "\t" $3}' > $file_name

mysql --defaults-file=/home/mhoys/mysql-scripts/mysql-opts -u matthias<<EOF
use matthias;
LOAD DATA LOCAL INFILE '$file_name'
INTO TABLE aws_cloudwatch_cpu_mon
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(CPU_MAX_PERCENT,MON_TIMESTAMP)
SET INSTANCE_ID = '$instance_id';
EOF

Some explanation:

I want to capture the metrics every 5 minutes (period = 300 seconds) between the time of execution and 12 hours before that time (so there will be 144 data points); that's why I used the --start-time and --end-time options of the aws cloudwatch command.

The "instance_id" variable is the AWS EC2 server's unique instance identifier.

The "file_name" variable is the name of the text file that the AWS CLI output is written to. Before writing to the text file, I pipe the output through the "awk" command to keep only the second and third columns of the output of the aws cloudwatch command (we don't need the other columns).

The data is then loaded into the MySQL database using the LOAD DATA LOCAL INFILE command. IGNORE 1 LINES is needed to skip the first line, which contains some header information. With SET INSTANCE_ID we populate the INSTANCE_ID column with the value of the instance_id variable.
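
One thing to keep in mind: the table has a unique constraint on (INSTANCE_ID, MON_TIMESTAMP), so overlapping runs could produce duplicate rows. With LOCAL, MySQL already continues as if IGNORE were specified and skips duplicate-key rows with a warning, but you can make that behavior explicit by adding the IGNORE keyword to the load statement (just a sketch, adapt it to your needs):

LOAD DATA LOCAL INFILE '$file_name'
IGNORE INTO TABLE aws_cloudwatch_cpu_mon
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(CPU_MAX_PERCENT,MON_TIMESTAMP)
SET INSTANCE_ID = '$instance_id';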

Let’s run the script:

$ ./aws_cw_cpu_mysql_load.sh
$ ls -la
aws_cloudwatch_cpu_mon.txt
aws_cw_cpu_mysql_load.sh
mysql-opts

If the script executed successfully, there should be no errors, you should see a text file called “aws_cloudwatch_cpu_mon.txt” and some data in your database table:

use matthias;
select count(*) from aws_cloudwatch_cpu_mon;

144

select * from aws_cloudwatch_cpu_mon order by cpu_max_percent desc;

3727 i-123456789 8.00 2020-01-27 07:18:00.0
3728 i-123456789 5.00 2020-01-27 13:48:00.0
3837 i-123456789 5.00 2020-01-27 13:53:00.0
3758 i-123456789 4.00 2020-01-27 15:18:00.0

You can now schedule the script using cron, for example to run it every day at 08:00 and 20:00:

$ crontab -e

0 8,20 * * * /home/mhoys/mysql-scripts/aws_cw_cpu_mysql_load.sh

That’s it!

HTH

Matthias

Shell script to export all APEX applications as SQL scripts

Environment: Oracle 11gR2 EE, Application Express 4.2.2, Oracle Linux 6.4 64-bit, Java JDK 7u21

The APEX software contains a Java class which can be used to script or schedule the export of one or more APEX applications. I will show you in this post how to do this.

First, I unzipped the APEX software to /u01/app/apex. The Java class file is called “APEXExport.class” and can be found in the folder /u01/app/apex/utilities/oracle/apex.

You also need a Java JDK. I installed Java 7 JDK (update 21) under /u01/app/java/java7.

You might need to unlock the APEX schema owner (in my case, APEX_040200) and set a password for it. I'm not sure if it's possible to export the applications with another user.

This is what my script looks like:

#!/bin/sh
# script to export all ApEx applications as sql scripts

CLASSPATH=$CLASSPATH:/u01/app/oracle/product/11.2.0/db_1/oui/jlib/classes12.jar; export CLASSPATH
BACKUPDIR=/u05/apex_backup

cd /u01/app/apex/utilities

/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -instance

mv f*.sql $BACKUPDIR

if [ -f $BACKUPDIR/apex_export.log ]
then
    cat $BACKUPDIR/apex_export.log|mail -s "APEX export - `hostname -s`" matthiash@mydomain.com
fi

You need to include the file classes12.jar in your CLASSPATH or the JDBC connection won’t work.

The -instance setting will export all your applications. To limit the number of applications, you can also use -workspaceid or -applicationid. Check the readme.txt file under the utilities folder for more information.
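
For example, to export a single application instead of the whole instance, you could replace -instance with -applicationid (the application ID 100 below is just a placeholder, use your own):

/u01/app/java/java7/bin/java oracle.apex.APEXExport -db localhost:1521:ora11g01 -user apex_040200 -password <password> -applicationid 100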

To schedule the script with cron, I added the following line to my crontab file:

00 22 * * * /home/oracle/scripts/apex_export.sh 1>/u05/apex_backup/apex_export.log 2>&1

This will execute the script each night at 22:00 and mail the log file to the mail address inside the script.

Happy scripting! :-)

Matthias

Posting source code in WordPress blogs

In my previous blog post, I published the source code of a bash shell script. I noticed that WordPress kept on messing around with the code (like replacing some of the double quotes), and at first I didn’t know how to solve this.

And then I found the following support article:

https://en.support.wordpress.com/wordpress-editor/blocks/syntax-highlighter-code-block/2/

So, basically, you need to wrap your source code into “code” blocks, using the HTML editor.

There are also some nice options, like setting the language, enabling highlighting or line wrapping etc.

This is an example with code language="bash" as option:

#!/bin/bash

# description: source code example

echo "hello world!"

exit 0

Matthias

Automatically start GlassFish 3.1.2 on Oracle Linux 6.2 after server reboot

Environment: GlassFish 3.1.2 Open Source Edition (Web Profile), Oracle Linux 6.2

When you have installed GlassFish on Oracle Linux, you probably want to make sure that your Domain Administration Server (DAS) and/or any standalone instances that you created are automatically started after a server reboot. You can do this with a shell script that you add as a service under Linux.

The first step is to create a script under /etc/init.d (with user root):

$ vi /etc/init.d/glassfish

#!/bin/bash

# description: glassfish start stop restart
# processname: glassfish
# chkconfig: 2345 20 80

GLASSFISH_HOME=/u01/app/glassfish3
GLASSFISH_USER=oracle
RETVAL=0

case "$1" in
start)
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin start-domain domain1"
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin start-local-instance instance1"
;;
stop)
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin stop-local-instance instance1"
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin stop-domain domain1"
;;
restart)
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin restart-local-instance instance1"
su - $GLASSFISH_USER -c "$GLASSFISH_HOME/bin/asadmin restart-domain domain1"
;;
*)
echo $"Usage: $0 {start|stop|restart}"
RETVAL=1
esac
exit $RETVAL

This script will start, stop or restart the DAS with name "domain1" and the local instance with name "instance1" on your Linux server. You should adapt the script for your specific GlassFish instances.

GLASSFISH_HOME is the folder where GlassFish was installed (in my case: /u01/app/glassfish3). GLASSFISH_USER is the name of the user account under which GlassFish is running (in my case: user oracle).
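
If you are not sure about the exact names of your domain and instances, you can list them with asadmin first (run as the GlassFish user; the paths match my setup):

su - oracle -c "/u01/app/glassfish3/bin/asadmin list-domains"
su - oracle -c "/u01/app/glassfish3/bin/asadmin list-instances"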

Note: GlassFish needs a JAVA_HOME variable pointing to the folder of your Java binaries. In my case, JAVA_HOME is defined in the .bash_profile of the user "oracle". You could also add it to the /etc/init.d/glassfish script.
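
If you prefer to set it in the init script itself, you could add something like the following near the top of /etc/init.d/glassfish (the Java path below is only an example, adjust it to your own installation):

JAVA_HOME=/usr/java/default
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH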

Next, set the correct file permissions on the script:

$ chmod 755 /etc/init.d/glassfish

You can test the script as follows:

$ sh /etc/init.d/glassfish start

Now we need to create a service from the script:

$ chkconfig --add glassfish

This registers the service and enables it for runlevels 2, 3, 4 and 5 (as defined by the chkconfig line in the script). To check this, run the following command:

$ chkconfig|grep glassfish
glassfish 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Let’s check the service:

$ service glassfish stop

Waiting for the instance to stop …
Command stop-local-instance executed successfully.
Waiting for the domain to stop ….
Command stop-domain executed successfully.

Looks ok! The last thing you should test is rebooting your server to see if GlassFish is automatically started after the reboot. Have fun!

Matthias