Deploying Oracle Database XE 18c as a container on Play with Docker (PWD)

Play with Docker (PWD) is a free Docker environment that you can use for up to 4 hours – you only need a web browser and a Docker account! It uses the “Docker-in-Docker” technology.

Now let’s see if we can use this environment for the installation of Oracle XE 18c…

First, you will need a (free) Docker account – go to https://www.docker.com/ to create one if you have not done this already.

Next, go to https://labs.play-with-docker.com/, log in with your Docker account and click on Start.

Once you are logged in click on the “wrench” icon and click on “1 Manager and 1 Worker” to create a manager node and a worker node – this should only take a couple of seconds.

We are now going to use the manager node to deploy Oracle XE 18c as a container.

In the “manager1” command window, enter the following command to clone the Oracle Docker Git project:

$ git clone https://github.com/oracle/docker-images.git

You will now see a folder with the name “docker-images”. Enter this folder and execute the build script for Oracle XE 18c (version = 18.4.0, Express Edition = x, see here for more information):

$ cd docker-images/OracleDatabase/SingleInstance/dockerfiles
$ ./buildDockerImage.sh -v 18.4.0 -x

The download and installation of the necessary software libraries and packages will now start, and this can take a while (somewhere between 15 and 30 minutes), because it will also download the 5.2 GB RPM software binaries file for Oracle XE 18c.

After the installation, the build script prints a summary confirming that the image was built successfully.

Let’s check the creation of the docker image with the following command:

$ docker images
REPOSITORY        TAG         IMAGE ID       CREATED         SIZE
oracle/database   18.4.0-xe   67a09c069d53   2 minutes ago   5.87GB

We can now use this Docker image to start a “containerised” Oracle XE 18c database on the “manager1” host, but before we do this, let’s create some local folders so that the container mount points can be mapped to local host storage. You also need to change the ownership of the local host folders to UID 54321, which is the UID of the “oracle” user within the container:

$ mkdir -p /home/xedb1/oradata
$ mkdir -p /home/xedb1/scripts/setup
$ mkdir -p /home/xedb1/scripts/startup
$ chown 54321:54321 /home/xedb1/
$ chown 54321:54321 /home/xedb1/oradata
$ chown 54321:54321 /home/xedb1/scripts/setup
$ chown 54321:54321 /home/xedb1/scripts/startup
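By the way, chown accepts raw numeric IDs even when no matching local user account exists (there is no "oracle" user on the PWD host). Here is a quick sketch you can run without root, using your own UID/GID on a scratch directory instead of 54321 (changing ownership to another UID requires root):

```shell
# chown takes numeric IDs directly -- no local "oracle" account is needed.
# We use our own UID/GID on a scratch directory, because chown'ing to
# 54321 requires root privileges.
tmp=$(mktemp -d)
chown "$(id -u)":"$(id -g)" "$tmp"
owner=$(stat -c '%u:%g' "$tmp")
echo "$owner"
rmdir "$tmp"
```

After the chown commands above, `stat -c '%u' /home/xedb1/oradata` on the manager node should print 54321.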

Now that the local folders have been created and the permissions have been set, let's start a container using the "docker run" command. Please note that you need to replace <password> with a proper password in the command below:

docker run --name xedb1 \
-d \
-p 51521:1521 \
-p 55500:5500 \
-e ORACLE_PWD=<password> \
-e ORACLE_CHARACTERSET=AL32UTF8 \
-v /home/xedb1/oradata:/opt/oracle/oradata \
-v /home/xedb1/scripts/setup:/opt/oracle/scripts/setup \
-v /home/xedb1/scripts/startup:/opt/oracle/scripts/startup \
oracle/database:18.4.0-xe

After a couple of minutes, check the status of the container with the command “docker container ls -a”:

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b95cd801cdc oracle/database:18.4.0-xe "/bin/sh -c 'exec $O…" 22 minutes ago Up About a minute (healthy) 0.0.0.0:51521->1521/tcp, 0.0.0.0:55500->5500/tcp xedb1

You should now be able to connect to the container database “xedb1” using the following commands:

$ docker exec -it --user=oracle xedb1 bash
[oracle@7b95cd801cdc /]$ . oraenv
ORACLE_SID = [XE] ? 
The Oracle base remains unchanged with value /opt/oracle
[oracle@7b95cd801cdc /]$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Tue Jun 16 16:15:15 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.

Victory! Finally, let’s try to connect to the pluggable database XEPDB1 and create a new user account:

SQL> alter session set container=XEPDB1;

Session altered.

SQL> create user Matthias identified by Matthias123;

User created.

Nice!

This concludes my blog post, in which I have shown how you can use the free Play with Docker environment to deploy Oracle XE 18c as a Docker container.

HTH

Matthias


How to export AWS CLI CloudWatch data to MySQL

Environment: AWS CLI 1.17.7, Linux CentOS 7, MySQL 5.6

AWS CLI (AWS Command Line Interface) is a set of tools to query and manage Amazon Web Services from the command line. These tools print their results as plain text, in table format or in JSON format, and there is no direct way to store the output in a database table. In this blog post I will show you how you can use a shell script to load the data from AWS CLI into a table in MySQL.

If you don’t have the AWS CLI tools yet, you can find some installation instructions here. You also need to install the MySQL client libraries if you are not running the AWS CLI tools on the same server as your MySQL database (in my case, they are both on the same system).

I now want to capture the CloudWatch CPU metrics (processor usage) of an EC2 server on AWS, output the results in text format and then load these results into a MySQL database.

Let’s first create the table in MySQL:

USE matthias;

CREATE TABLE aws_cloudwatch_cpu_mon
( 
AC_ID INT NOT NULL AUTO_INCREMENT, 
INSTANCE_ID VARCHAR(50) NOT NULL, 
CPU_MAX_PERCENT DECIMAL(5,2) NOT NULL,
MON_TIMESTAMP TIMESTAMP NOT NULL,
PRIMARY KEY (AC_ID)
);

ALTER TABLE aws_cloudwatch_cpu_mon
ADD CONSTRAINT aws_cl_uc1 UNIQUE (instance_id,mon_timestamp);

Here I created a table with name “aws_cloudwatch_cpu_mon” in my database “matthias” with the following 4 columns:

  • AC_ID: autonumber ID column, also the primary key
  • INSTANCE_ID: the instance ID of the EC2 server in AWS
  • CPU_MAX_PERCENT: the maximum percentage CPU CloudWatch metric
  • MON_TIMESTAMP: a timestamp field capturing the date/time of the CPU metric

I also added a constraint so that the combination INSTANCE_ID/MON_TIMESTAMP is UNIQUE (no duplicate values allowed).

I’m now going to create a small file to store the MySQL password needed to connect to the database. We want to be able to schedule the script using cron, so the password must be read from a file during the execution of the script (replace <password> with your real password).

$ vi mysql-opts

[client]
password="<password>"

$ chmod 600 mysql-opts

I also changed the file access rights to 600 so that only the owner of the file can read it.
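If you want to double-check the result, stat can print the octal permission bits. A tiny sketch on a throwaway file:

```shell
# stat -c '%a' prints the octal permission bits of a file
tmp=$(mktemp)
chmod 600 "$tmp"
mode=$(stat -c '%a' "$tmp")
echo "$mode"   # 600
rm -f "$tmp"
```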

Let’s create the full script now:

$ vi aws_cw_cpu_mysql_load.sh

#!/bin/bash

start_time=`date --utc +%FT%TZ -d "12 hours ago"`
end_time=`date --utc +%FT%TZ`
instance_id="i-123456789"
file_name="/home/mhoys/mysql-scripts/aws_cloudwatch_cpu_mon.txt"

/home/mhoys/.local/bin/aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time "$start_time" --end-time "$end_time" --period 300 --namespace AWS/EC2 --statistics Maximum --dimensions Name=InstanceId,Value="$instance_id" --output text|awk '{print $2 "\t" $3}' > $file_name

mysql --defaults-file=/home/mhoys/mysql-scripts/mysql-opts -u matthias<<EOF
use matthias;
LOAD DATA LOCAL INFILE '$file_name'
INTO TABLE aws_cloudwatch_cpu_mon
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(CPU_MAX_PERCENT,MON_TIMESTAMP)
SET INSTANCE_ID = '$instance_id';
EOF

Some explanation:

I want to capture the metrics every 5 minutes (period = 300 seconds) over the 12 hours preceding the time of execution (so there will be 144 data points); that’s why I used the --start-time and --end-time flags of the aws cloudwatch command.
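You can reproduce the time window and the expected sample count on the command line (this assumes GNU date, which is what CentOS 7 ships):

```shell
# Reproduce the script's 12-hour UTC window...
start_time=$(date --utc +%FT%TZ -d "12 hours ago")
end_time=$(date --utc +%FT%TZ)
echo "$start_time -> $end_time"

# ...and count the 300-second samples it contains
expected_points=$(( 12 * 3600 / 300 ))
echo "$expected_points"   # 144
```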

The “instance_id” variable is the AWS EC2 server’s unique instance identifier.

The “file_name” variable is the name of the text file that we are going to write the aws output to. Before writing to the text file, I pipe the output through the command “awk” to print only the second and third columns of the output from the aws cloudwatch command (we don’t need the other columns).
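To see what the awk filter does, here are two fabricated datapoint lines mimicking the tab-separated layout of the text output (the exact column layout is an assumption for illustration, not real CloudWatch data):

```shell
# Two made-up datapoint lines in `--output text` style (tab-separated);
# awk keeps only the value ($2) and timestamp ($3) columns
sample=$'DATAPOINTS\t8.0\t2020-01-27T07:18:00Z\tPercent\nDATAPOINTS\t5.0\t2020-01-27T13:48:00Z\tPercent'
printf '%s\n' "$sample" | awk '{print $2 "\t" $3}'
```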

The data is then loaded into the MySQL database using the command LOAD DATA LOCAL INFILE. IGNORE 1 LINES is needed to skip the first line which contains some header information. Using SET INSTANCE_ID we are updating the INSTANCE_ID column with the value from the instance_id variable.

Let’s run the script:

$ ./aws_cw_cpu_mysql_load.sh
$ ls -la
aws_cloudwatch_cpu_mon.txt
aws_cw_cpu_mysql_load.sh
mysql-opts

If the script executed successfully, there should be no errors: you should see a text file called “aws_cloudwatch_cpu_mon.txt” and some data in your database table:

use matthias;
select count(*) from aws_cloudwatch_cpu_mon;

144

select * from aws_cloudwatch_cpu_mon order by cpu_max_percent desc;

3727 i-123456789 8.00 2020-01-27 07:18:00.0
3728 i-123456789 5.00 2020-01-27 13:48:00.0
3837 i-123456789 5.00 2020-01-27 13:53:00.0
3758 i-123456789 4.00 2020-01-27 15:18:00.0

You can now schedule the script using cron, for example to run it every day at 08:00 and 20:00:

$ crontab -e

0 8,20 * * * /home/mhoys/mysql-scripts/aws_cw_cpu_mysql_load.sh

That’s it!

HTH

Matthias


How to upgrade AWS CLI on Linux

AWS CLI (AWS Command Line Interface) is a set of tools to manage Amazon Web Services from the command line. New features are being added almost daily, so if you previously installed a version of the tool, it’s worthwhile to upgrade to the latest version. In this blog post I’m going to explain how you can upgrade to the latest version of AWS CLI on Linux.

Note: I’m only covering AWS CLI version 1 here, version 2 is currently provided as evaluation software.

The latest available version of AWS CLI can be found here: https://github.com/aws/aws-cli/blob/master/CHANGELOG.rst. At the time of writing, version 1.17.7 is the most recent one.

Let’s check the current version and installation location on my system (CentOS 7 Linux):

$ aws --version
aws-cli/1.16.248 Python/3.6.8 Linux/3.10.0-1062.9.1.el7.x86_64 botocore/1.12.238
$ which aws
~/.local/bin/aws

So you can see that my current version is 1.16.248 and it’s installed in my home folder in .local/bin/aws.

You also need to check the version of Python. Python is a prerequisite for the installation of AWS CLI version 1, and you need at least Python 2 version 2.7 or Python 3 version 3.4 to support recent versions of the AWS CLI tool set. If you have a lower version of Python, please upgrade your Python installation first.

Let’s check my installed versions of Python 2 and 3:

$ python --version
Python 2.7.5
$ python3 --version
Python 3.6.8

Looks like I’m good! So let’s continue with the upgrade.
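If you want to script this check rather than eyeball it, sort -V compares version strings properly. A quick sketch (it assumes python3 is on your PATH):

```shell
# If the required version sorts first (or equal), the installed one is new enough
required="3.4"
current=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "Python $current meets the minimum of $required"
fi
```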

Basically there are two ways to upgrade an installation of AWS CLI: by using the Python package manager “pip” (or pip3 if you are using Python 3), or, if you don’t have access to the Python package manager, by downloading and using the AWS CLI Bundled Installer.

1) Upgrade using pip (or pip3)

First, check if you have outdated versions of packages using the following command:

$ pip3 list -o|grep awscli
awscli 1.16.248 1.17.7 wheel
WARNING: You are using pip version 19.2.3, however version 20.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

I’m getting a warning that my version of pip is outdated, so let’s upgrade this first using the command provided in the warning. I installed pip only for my user, so I added the --user flag to the command:

$ pip install --upgrade pip --user
Collecting pip
Installing collected packages: pip
Successfully installed pip-20.0.1

Now that pip has been upgraded, let’s upgrade the package awscli:

$ pip3 install awscli --upgrade --user

Installing collected packages: botocore, s3transfer, awscli
Attempting uninstall: botocore
Found existing installation: botocore 1.12.238
Uninstalling botocore-1.12.238:
Successfully uninstalled botocore-1.12.238
Attempting uninstall: s3transfer
Found existing installation: s3transfer 0.2.0
Uninstalling s3transfer-0.2.0:
Successfully uninstalled s3transfer-0.2.0
Attempting uninstall: awscli
Found existing installation: awscli 1.16.248
Uninstalling awscli-1.16.248:
Successfully uninstalled awscli-1.16.248
Successfully installed awscli-1.17.7 botocore-1.14.7 s3transfer-0.3.1

You will notice that a number of dependent packages have also been upgraded.

Now let’s check the version of AWS CLI again:

$ aws --version
aws-cli/1.17.7 Python/3.6.8 Linux/3.10.0-1062.9.1.el7.x86_64 botocore/1.14.7

Done!

2) Upgrade using the Bundled Installer

If you cannot upgrade using the pip package manager, there’s another way to upgrade your installation of AWS CLI using the Bundled Installer.

First download and unzip the Bundled Installer from https://s3.amazonaws.com/aws-cli/awscli-bundle.zip onto the local system:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip

Now run the “install” script inside the unzipped Bundled Installer. The “install” script accepts two parameters:

$ ./install --help
Usage: install [options]

Options:
-h, --help show this help message and exit
-i INSTALL_DIR, --install-dir=INSTALL_DIR
The location to install the AWS CLI. The default
value is ~/.local/lib/aws
-b BIN_LOCATION, --bin-location=BIN_LOCATION
If this argument is provided, then a symlink will be
created at this location that points to the aws
executable. This argument is useful if you want to put
the aws executable somewhere already on your path,
e.g. -b /usr/local/bin/aws. This is an optional
argument. If you do not provide this argument you will
have to add INSTALL_DIR/bin to your PATH.

Our installation of AWS CLI is located in ~/.local, so we are going to use this as the value for --install-dir:

$ cd awscli-bundle/
$ ./install --install-dir=~/.local

More installation options and documentation can be found here: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html


HTH

Matthias

 

Using APEX as front-end for MySQL databases REST-enabled with Node.js

Environment: Oracle XE 18c, CentOS Linux 7, APEX 19.2, MySQL 5.6, Node.js 12.14.1

Using APEX (Oracle Application Express) you can quickly build nice-looking web applications on top of an Oracle database. More recent versions of APEX also support connectivity to non-Oracle databases through RESTful APIs.

In this blog post I will show how you can build reports in APEX 19.2 with data coming from a MySQL database that is REST-enabled using Node.js.

I assume that you have already installed MySQL and APEX. In my case, MySQL, Oracle XE 18c and APEX are all running on the same host, a CentOS 7 Linux server. For the installation of APEX 19.2 on Oracle XE 18c I refer to one of my previous blog posts.

First, I created the following demo table in my MySQL database. The goal of this blog post is to create an APEX report using the data from this table:

CREATE TABLE DEMO_PEOPLE 
( ID INT NOT NULL AUTO_INCREMENT, 
RATING INT, 
NAME VARCHAR(255), 
COUNTRY VARCHAR(255), 
FROM_YR INT, 
TO_YR INT, 
LINK VARCHAR(255), 
CATEGORY VARCHAR(10), 
GENDER VARCHAR(1), 
FLEX1 VARCHAR(1000), 
FLEX2 VARCHAR(1000), 
FLEX3 VARCHAR(1000),
PRIMARY KEY (ID)
) ;

Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100293,30.17526,'Christopher Columbus','Italy',1451,1506,'https://en.wikipedia.org/wiki/Christopher_Columbus','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100294,30.167197,'Johann Sebastian Bach','Germany',1685,1750,'https://en.wikipedia.org/wiki/Johann_Sebastian_Bach','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100295,30.109331,'Ludwig van Beethoven','Germany',1770,1827,'https://en.wikipedia.org/wiki/Ludwig_van_Beethoven','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100296,30.036677,'Cleopatra VII of Egypt','Egypt',-69,-30,'https://en.wikipedia.org/wiki/Cleopatra','P','F',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100297,29.96003,'Galileo Galilei','Italy',1564,1642,'https://en.wikipedia.org/wiki/Galileo_Galilei','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100298,29.884558,'Euclid','Unknown',-325,-265,'https://en.wikipedia.org/wiki/Euclid','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100299,29.859917,'Nero','Italy',37,68,'https://en.wikipedia.org/wiki/Nero','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100300,29.842897,'Karl Marx','Germany',1818,1883,'https://en.wikipedia.org/wiki/Karl_Marx','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100301,29.771886,'Marco Polo','Italy',1254,1324,'https://en.wikipedia.org/wiki/Marco_Polo','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100302,29.742594,'Cicero','Italy',-106,-43,'https://en.wikipedia.org/wiki/Cicero','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100303,29.738438,'Vincent van Gogh','Netherlands',1853,1890,'https://en.wikipedia.org/wiki/Vincent_van_Gogh','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100304,29.636065,'Hippocrates','Greece',-460,-375,'https://en.wikipedia.org/wiki/Hippocrates','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100305,29.629269,'Sigmund Freud','Czech Republic',1856,1939,'https://en.wikipedia.org/wiki/Sigmund_Freud','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100306,29.595729,'Pablo Picasso','Spain',1881,1973,'https://en.wikipedia.org/wiki/Pablo_Picasso','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100307,29.519164,'Qin Shi Huang','China',-259,-210,'https://en.wikipedia.org/wiki/Qin_Shi_Huang','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100308,29.395324,'Tutankhamun','Egypt',-1341,-1323,'https://en.wikipedia.org/wiki/Tutankhamun','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100309,29.330534,'Marcus Aurelius','Italy',121,180,'https://en.wikipedia.org/wiki/Marcus_Aurelius','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100310,29.254359,'Blaise Pascal','France',1623,1662,'https://en.wikipedia.org/wiki/Blaise_Pascal','S_T','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100311,29.241855,'David','Palestine',-1040,-970,'https://en.wikipedia.org/wiki/David','P','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100312,29.210766,'Raphael','Italy',1483,1520,'https://en.wikipedia.org/wiki/Raphael','A','M',null,null,null);
Insert into DEMO_PEOPLE (ID,RATING,NAME,COUNTRY,FROM_YR,TO_YR,LINK,CATEGORY,GENDER,FLEX1,FLEX2,FLEX3) values (100313,29.197855,'Charles Darwin','United Kingdom',1809,1882,'https://en.wikipedia.org/wiki/Charles_Darwin','S_T','M',null,null,null);

To be able to share data between MySQL and APEX we need to build a RESTful API on top of the MySQL database. You can do this with Node.js. Node.js is (simply put) a server-side version of JavaScript. It comes with many different modules, for example to connect to databases, to build web services and so on. For the installation of Node.js on CentOS 7 I refer to one of my previous blog posts. It’s actually pretty straightforward.

I’m going to put all the files for this project in a new folder called /home/mhoys/nodejs/rest_api. For this example you need to install the following Node.js modules: express, body-parser, mysql and dotenv. The “npm init” command creates a package.json file that will hold your project dependencies and metadata.

$ mkdir -p /home/mhoys/nodejs/rest_api
$ cd /home/mhoys/nodejs/rest_api
$ npm init
$ npm install express body-parser mysql dotenv
$ npm list

First we are going to create a file called “.env“. This file will hold the environment variables for the database connection and the web server.

$ vi .env
DB_HOST=localhost
DB_USER=matthias
DB_PASSWORD=<password>
DB_DATABASE=matthias
SERVER_PORT=3000

“localhost” is the hostname of my MySQL database, “matthias” is the name of the database account used to connect to the database (with the password listed in DB_PASSWORD), and “matthias” is also the name of the MySQL database holding the table DEMO_PEOPLE (see above).

Next, create a file called “mysql_config.js“. In this file the MySQL configuration variables will be read from the .env file and exported:

$ vi mysql_config.js

const dotenv = require('dotenv').config({path: './.env'});

if (dotenv.error) {
console.log(dotenv.error);
}
//console.log(dotenv.parsed);

// Get db_host variable from .env file
const host = process.env.DB_HOST;

// Get db_user variable from .env file
const user = process.env.DB_USER;

// Get db_password variable from .env file
const password = process.env.DB_PASSWORD;

// Get db_database variable from .env file
const database = process.env.DB_DATABASE;

// export the variables for use in other scripts
module.exports.host = host;
module.exports.user = user;
module.exports.password = password;
module.exports.database = database;

Now create another file “server_config.js” which will hold the web server configuration variables:

$ vi server_config.js

const dotenv = require('dotenv').config({path: './.env'});

if (dotenv.error) {
console.log(dotenv.error);
}
//console.log(dotenv.parsed);

// Get server_port variable from .env file
const port = process.env.SERVER_PORT;

module.exports.port = port;

Create another file “queries.js” that will be used to store the database queries for this demo.

$ vi queries.js

const demo_people_query = 'select ID, NAME, GENDER, COUNTRY, FROM_YR, TO_YR, LINK from DEMO_PEOPLE';
const demo_people_by_country_query = 'select COUNTRY, count(*) as people_count from DEMO_PEOPLE group by COUNTRY order by count(*) desc';

module.exports.demo_people_query = demo_people_query;
module.exports.demo_people_by_country_query = demo_people_by_country_query;

The first query selects all records from the DEMO_PEOPLE table. The second query counts the number of persons in the DEMO_PEOPLE table grouped by country.

Next, create the main file “server.js“:

$ vi server.js

const mysql = require('mysql');
const mysql_config = require('./mysql_config');
const queries = require('./queries');
const express = require('express');
const app = express();
const server_config = require('./server_config');
const bodyParser = require('body-parser');

const conn = mysql.createConnection({
host : mysql_config.host,
user : mysql_config.user,
password : mysql_config.password,
database: mysql_config.database
});

conn.connect();

app.use(bodyParser.json({ type: 'application/*+json' }));

app.get('/api/demo_people', function(request, response){
     conn.query(queries.demo_people_query, function(err, results){
          if ( err ){
               response.status(400).send(err);
          } else {
               response.send(results);
          }
     });
});

app.get('/api/demo_people_by_country', function(request, response){
     conn.query(queries.demo_people_by_country_query, function(err, results){
          if ( err ){
               response.status(400).send(err);
          } else {
               response.send(results);
          }
     });
});

app.listen(server_config.port, function () {
console.log('Express server is listening on port ' + server_config.port);
});

You should now have the following files in the rest_api folder:

$ ls -la /home/mhoys/nodejs/rest_api/

.env
mysql_config.js
node_modules
package.json
package-lock.json
queries.js
server_config.js
server.js

Let’s see if the server starts without errors and if you can access the two APIs from a web browser:

$ node ./server.js
Express server is listening on port 3000

Now open a web browser and browse to:

http://<hostname>:3000/api/demo_people

http://<hostname>:3000/api/demo_people_by_country

You should see data from the MySQL table returned in JSON format.

If you cannot reach the node server, it could be that the CentOS firewall is blocking port 3000. You can open it with the following command:

$ sudo firewall-cmd --permanent --add-port=3000/tcp
success
$ sudo firewall-cmd --reload
success
$ sudo firewall-cmd --list-ports

Another way to test the APIs is by using Postman:

  • Start Postman
  • Click on New > Request
  • Fill in a name and click on Save to My requests
  • Click on the new request, copy the URL http://hostname:3000/api/demo_people to the GET window and click on Send
  • Check if you see the correct data in the Body window:

Now that we have our APIs in place, let’s create a new APEX application with two reports using data from these APIs.

Log in to your APEX 19.2 workspace using a developer account:

Click on App Builder > Create > New Application.

Enter a Name and click on Create Application. Do not add additional pages yet.

Now in the App Builder for your new application, click on Shared Components > Web Source Modules (under Data Sources):

Click on Create to create a new Web Source Module.

Click on “from scratch”.

Now select Simple HTTP, fill in a name and the URL for the demo people API:

Click on Next and Discover (leave the Authentication disabled).

If all goes well, you should see the DEMO_PEOPLE data in the Web Source Discovery window:

Click on Create Web Source.

Repeat these steps to create another web source module for the “demo people by country” API.

Now we are ready to build two reports in our APEX application using the two web source modules that we just defined.

Click on App Builder > your application > Create Page.

Select Report > Interactive Report.

Give the new page a name and click on Next.

Create a new navigation menu entry and click on Next:

As Data Source, choose Web Source and select the first of the two web source modules that you created earlier. Drag all columns to the right and rearrange them if needed:

Click on Create and run the page or the application; you should see the report page with data from the DEMO_PEOPLE table:

If you need to rearrange columns (the order from the database query is not always respected), you can do this from the Actions > Columns menu in the report, then click on Actions > Report > Save Report to overwrite the default report settings.

Repeat these steps to add another interactive report to your APEX application, now using the “demo people by country” web source module:

Now let’s create a chart instead of a report.

Click on Actions > Chart inside the report.

Select Pie as chart type, Country as Label, People Count as Value and Value – Descending as Sort option:

Nice! :-)

Finally click on Actions > Report > Save Report > As Default Report Settings > Primary to save the chart as default report view.

That’s it! This very simple demo shows how you can use Node.js to REST-enable a MySQL database so that the data can be visualised in APEX using Web Source Modules.

Some additional things that can be done to build further on this demo:

  • use HTTPS instead of HTTP to better secure the data stream
  • add a layer of API authentication
  • use express.Router() to build more complex routes to GET, POST, PUT and DELETE requests (only GET is covered in this demo)
  • ….


HTH

Matthias


How to upgrade Node.js from v6 to v12 on CentOS Linux 7

In this blog post I will explain how you can upgrade Node.js to the most recent version using the “yum” package manager utility on CentOS Linux 7.

On my machine, I had previously installed Node.js using the following command:

$ sudo yum install nodejs

Let’s check the current version and location of Node.js and npm (the Node.js package manager):

$ node --version
v6.17.1
$ npm --version
3.10.10

$ which node
/bin/node
$ which npm
/bin/npm

I can see from https://nodejs.org/en/about/releases/ that the latest Active LTS version is v12, so I would like to upgrade to that version.

The upgrade steps for CentOS are explained here: https://github.com/nodesource/distributions#rpminstall

First, you need to add the NodeSource yum repository to your system. This can be done by using the following command as user root:

$ curl -sL https://rpm.nodesource.com/setup_12.x | bash -

This command will first use the curl tool to download the “setup_12.x” script and then execute it with bash.

This is the output from the script:

## Installing the NodeSource Node.js 12.x repo...


## Inspecting system...

+ rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release
+ uname -m

## Confirming "el7-x86_64" is supported...

+ curl -sLf -o /dev/null 'https://rpm.nodesource.com/pub_12.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'

## Downloading release setup RPM...

+ mktemp
+ curl -sL -o '/tmp/tmp.kNjfOcFMcS' 'https://rpm.nodesource.com/pub_12.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'

## Installing release setup RPM...

+ rpm -i --nosignature --force '/tmp/tmp.kNjfOcFMcS'

## Cleaning up...

+ rm -f '/tmp/tmp.kNjfOcFMcS'

## Checking for existing installations...

+ rpm -qa 'node|npm' | grep -v nodesource

## Your system appears to already have Node.js installed from an alternative source.
Run `sudo yum remove -y nodejs npm` to remove these first.


## Run `sudo yum install -y nodejs` to install Node.js 12.x and npm.
## You may also need development tools to build native addons:
sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install yarn

Let’s check if the new yum repository has been added correctly:

$ ls -la /etc/yum.repos.d/|grep nodesource
-rw-r--r--. 1 root root 474 Apr 23 2019 nodesource-el7.repo

Based on the output from the “setup_12.x” script, we first need to remove the existing version of Node.js:

$ sudo yum remove -y nodejs npm
...
Removed:
  nodejs.x86_64 1:6.17.1-1.el7                                   npm.x86_64 1:3.10.10-1.6.17.1.1.el7

Complete!

Now let’s install the new version of Node.js:

$ sudo yum list available nodejs

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Available Packages
nodejs.x86_64 2:12.14.1-1nodesource nodesource

$ sudo yum install nodejs

Total download size: 22 M
Installed size: 65 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/nodesource/packages/nodejs-12.14.1-1nodesource.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 34fa74dd: NOKEY
Public key for nodejs-12.14.1-1nodesource.x86_64.rpm is not installed
nodejs-12.14.1-1nodesource.x86_64.rpm                                                                           |  22 MB  00:00:02
Retrieving key from file:///etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Importing GPG key 0x34FA74DD:
 Userid     : "NodeSource <gpg-rpm@nodesource.com>"
 Fingerprint: 2e55 207a 95d9 944b 0cc9 3261 5ddb e8d4 34fa 74dd
 Package    : nodesource-release-el7-1.noarch (installed)
 From       : /etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:nodejs-12.14.1-1nodesource.x86_64                                                                                 1/1
  Verifying  : 2:nodejs-12.14.1-1nodesource.x86_64                                                                                 1/1

Installed:
  nodejs.x86_64 2:12.14.1-1nodesource

Complete!

Let’s check the versions again:

$ node --version
v12.14.1
$ npm --version
6.13.4
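If you script this installation, you may prefer to assert a minimum major version instead of eyeballing the output. A minimal sketch (the `require_major` helper is mine; it assumes the `vMAJOR.MINOR.PATCH` format that `node --version` prints):

```shell
#!/bin/sh
# require_major VERSION MIN: succeed only if the major part of a
# "vX.Y.Z" version string is at least MIN.
require_major() {
  major=$(echo "$1" | sed 's/^v//' | cut -d. -f1)
  [ "$major" -ge "$2" ]
}

# In a real provisioning script: require_major "$(node --version)" 12 || exit 1
require_major "v12.14.1" 12 && echo "Node.js version OK"
require_major "v6.17.1" 12 || echo "Node.js too old"
```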

All done!

HTH

Matthias

Patching APEX 19.2 on Oracle Database 18c XE (Express Edition)

In my previous blog post I explained how to install APEX 19.2 on Oracle Database 18c XE.

Since the release of APEX 19.2 there have been a number of bugs and known issues. The complete list of all known issues for APEX 19.2 can be found here.

There is a patch set bundle available to fix a number of these issues, and I will explain in this blog post how to patch an installation of APEX 19.2 on Oracle 18c XE.

The patch set bundle is updated regularly and can be reapplied repeatedly. The official name of the patch set bundle is “Bug 30392181”. The version that I will be using here is the version of December 13, 2019.

The following bugs are fixed in this version:

# 30447953 - PDBS USING APEX METRIC IN APEX_USAGE_METRICS VIEW INCONSISTENT WITH OTHER METRICS
# 30470744 - SQL WORKSHOP DATA LOADING AND APEX_DATA_PARSER DO NOT TREAT BOM IN CSV FILES CORRECTLY
# 30487644 - SQL WORKSHOP DATA LOADING / APEX_DATA_PARSER SHOULD NOT CUT COLUMN NAMES TO 30 CHARS ON 12.2 AND HIGHER DATABASES
# 30489131 - READ ONLY INTERACTIVE GRID LOV COLUMN DOES NOT RESPECT DISPLAY EXTRA=NO
# 30498697 - INTERACTIVE GRID DOWNLOAD FAILS WITH ORA-1403 WHEN SQL QUERY IS USED AND ROWID IS THE FIRST COLUMN IN THE LIST
# 30461018 - PARSING SCHEMA POPUP LOV ON APP SECURITY PAGE DISPLAYS NO VISIBLE RESULTS
# 30464422 - SQL WORKSHOP DATA LOAD WIZARD TO EXISTING TABLE COLUMN MAPPING ERROR 06502 WITH 62 COLUMNS
# 30499999 - LOAD TO EXISTING TABLE COLUMN MAPPING DOES NOT SHOW EXTRA FILE COLUMNS
# 30501399 - LOAD TO EXISTING TABLE IGNORES PARSED FORMAT MASKS
# 30504903 - APEX_REGION.OPEN_QUERY_CONTEXT DOES NOT WORK WITH INTERACT. REPORT PIVOT VIEWS
# 30505223 - TABLE/VIEW OWNER SELECT LIST ON EDIT LOV (4000:4111) PAGE CAN INITIALIZE WITH WRONG SCHEMA
# 30511566 - FACETED SEARCH: COMPUTE COUNTS DOES NOT PROCESS WEB SRC PARAMETERS CORRECTLY
# 30517479 - SAVE CHANGES FOR MASTER DETAIL INTERACT GRID FAILS WITH ORA-06502 FOR NEW MASTER WITH NEW DETAIL ROWS
# 30520385 - INTERACTIVE GRID DML PROCESS WITH CUSTOM PL/SQL CODE USING :ROWID ERRORS OUT WITH "STOP APEX ENGINE"
# 30512102 - SWITCH ITEM RETURNS EMPTY STRING IF ITS ON OR OFF VALUE IS SET TO 0
# 30512418 - APEX.ITEM('P1_SWITCH_ITEM').ISCHANGED() ALWAYS RETURNS TRUE
# 30535914 - INTERACTIVE GRID DOWNLOAD RETURNS NULL VALUES FOR CASCADING LOV COLUMNS
# 30544387 - POPUP LOV ON WEB SOURCE MODULE THROWS ORA 903 ("INVALID TABLE NAME") ON PAGE RENDERING WHEN ITEM VALUE IS NOT NULL
# 30564073 - ADW: PERMIT SETTING MAX_WEBSERVICE_REQUESTS WITH APEX_INSTANCE_ADMIN
# 30580984 - INSTALLATION THROWS ERRORS FOR WWV_FLOW_DATA_PARSER WHEN INSTALLED INTO A CDB WITH MAX_STRING_SIZE=EXTENDED
# 30589770 - FORM REGION INIT PROCESS THROWS ORA-06502 WITH MAX_STRING_SIZE=EXTENDED AND VARCHAR2 VALUES > 4000 BYTE
# 30616156 - ORA-06550 WHEN ATTEMPTING TO EXPORT SQL SCRIPT VIA SQL WORKSHOP > SQL SCRIPTS
# 30617262 - INTERACTIVE GRID FAILS WITH ORA 6502 ON 12.1 OR 11.2 IF LOV COLUMN HAS MORE THAN 26 CHARACTERS
# 30533928 - ORACLE_APEX_DICTIONARY ORA-29913: ERROR IN EXECUTING ODCIEXTTABLEOPEN CALLOUT 
# 30636867 - APEX_WEB_SERVICE.MAKE_REST_REQUEST IN APEX 19.2 DOES NOT IMPLICITLY CONVERT P_BODY TO UNICODE ANY MORE
# 26788756 - THE E-MAIL ATTACHMENT GENERATED BY APEX IS NOT IN COMPLIANCE WITH RFC 2045

As a first step you need to download the patch set bundle from the Oracle support site. The direct link is here. APEX 19.2 is free to download and install, but unfortunately the patches are not. You need a valid Oracle support contract to download them (if you don’t have this, the only option is to wait for the next major release of APEX).

Once you have downloaded the patch set bundle, unzip it into a new folder on your Oracle 18c XE Linux server.

The installation of a patch set bundle for APEX is relatively easy, but there are a number of things you need to watch out for. Everything is explained in the README.txt file in the folder where you unzipped the patch set bundle.

First, make sure that no one can access your installation of APEX while you are installing the patch set bundle. In my case, I’m using ORDS (Oracle REST Data Services) 19.2 as a standalone service, so I’m going to stop it by killing the running process:

$ ps -ef|grep ords
oracle 16016 15324 0 16:40 pts/0 00:00:00 grep --color=auto ords
oracle 24921 1 0 Dec12 ? 00:11:00 java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=20971520 -jar ords.war standalone
$ kill 24921
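If you stop ORDS this way regularly, the lookup can be scripted. A small sketch (not from the ORDS documentation, just shell plumbing); the `[o]` bracket in the grep pattern stops grep from matching its own process:

```shell
#!/bin/sh
# Find the PID of the standalone ORDS java process and stop it.
# The [o] in the pattern prevents grep from matching itself in ps output.
ords_pid=$(ps -ef | grep '[o]rds.war standalone' | awk '{print $2}')
if [ -n "$ords_pid" ]; then
  echo "Stopping ORDS (PID $ords_pid)"
  kill "$ords_pid"
else
  echo "ORDS is not running"
fi
```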

Once you have stopped your web server, you can proceed with the installation of the patch set bundle. This is done by running one of the following three scripts as user SYS AS SYSDBA:

  • catpatch.sql: for Oracle Database 11.2 and earlier, for non-CDB, and for CDB where APEX is not installed in the root
  • catpatch_con.sql: for CDB where APEX is installed in the root
  • catpatch_appcon.sql: for installations where APEX is installed in an application container

In our case APEX is installed in the pluggable database XEPDB1, so we are going to switch to XEPDB1 first and then execute the catpatch.sql script with SYS AS SYSDBA:

$ sudo su - oracle
$ cd /home/oracle/apex192_patch/30392181

$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Mon Dec 16 16:56:03 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> alter session set container=XEPDB1;

Session altered.

SQL> @catpatch.sql

The script will only take a few minutes to run, and at the end you should see the following output:

...Validating APEX
...(16:59:28) Starting validate_apex for APEX_190200
...(16:59:29) Checking missing sys privileges
...(16:59:29) Re-generating APEX_190200.wwv_flow_db_version
... wwv_flow_db_version is up to date
...(16:59:29) Recompiling APEX_190200 ... with dbms_utility.compile_schema
...(17:00:21) Checking for objects that are still invalid
...(17:00:21) Key object existence check
...(17:00:22) Setting DBMS Registry for APEX to valid
...(17:00:22) Exiting validate_apex

PL/SQL procedure successfully completed.

...Recompiling invalid public synonyms

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

timing for: Complete Patch 30392181
Elapsed: 00:01:51.70

Don’t forget to copy the changed “images” (images, JavaScript files etc) from the patch folder to your ORDS images directory:

$ cp -Rf /home/oracle/apex192_patch/30392181/images/* /home/oracle/ords192/images/
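If you want that copy step to be self-checking, here is a hedged sketch (the `copy_images` helper is mine, the paths are the ones used above): it copies recursively and then reports any source file that did not arrive in the destination.

```shell
#!/bin/sh
# copy_images SRC DST: copy the patch images recursively, then list any
# source file that is missing from the destination afterwards.
copy_images() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  cp -Rf "$src"/. "$dst"/
  ( cd "$src" && find . -type f ) | while read -r f; do
    [ -e "$dst/$f" ] || echo "MISSING: $f"
  done
}

# For this patch set the call would be:
# copy_images /home/oracle/apex192_patch/30392181/images /home/oracle/ords192/images
```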

We are done! Start ORDS again and check if your APEX installation is working correctly.

Update:

To find out if and when the patch set bundle was installed on your system, execute the following query with user SYS:

select name, value from APEX_190200.wwv_flow_platform_prefs where name = 'APEX_19_2_0_PATCH_30392181'; 

APEX_19_2_0_PATCH_30392181 2019-12-18_08-34-25

Or:

select APEX_INSTANCE_ADMIN.GET_PARAMETER('APEX_19_2_0_PATCH_30392181') from dual;

2019-12-18_08-34-25
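The stored value is just a timestamp string in the form YYYY-MM-DD_HH24-MI-SS. Since that format sorts lexicographically, a shell script can compare recorded values (for example, across environments) with a plain text sort; a small illustration using values from this post:

```shell
#!/bin/sh
# Timestamps of the form YYYY-MM-DD_HH-MI-SS sort lexicographically,
# so plain text sorting yields chronological order.
newest=$(printf '%s\n' "2019-12-18_08-34-25" "2019-12-20_10-29-45" | sort | tail -n 1)
echo "most recent patch install: $newest"
```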

See the following thread on the APEX forum.

Update 2:

Oracle has released a new version of patch set bundle 30392181 on December 18th; it contains two new bug fixes:

# 30675656 - ORACLE_APEX_DICTIONARY_CACHE FAIL WITH ORA-00001: UNIQUE CONSTRAINT (APEX_190200.WWV_DICTIONARY_CACHE_OBJ_IDX1) 
# 30675432 - INTERACTIVE GRID VIRTUAL COLUMN WITH SOURCE=NONE DISPLAY NO ROWS

I just installed it on top of the patch set bundle from December 13th (you don’t need to uninstall the previous patch set), and I had no issues:

select APEX_INSTANCE_ADMIN.GET_PARAMETER('APEX_19_2_0_PATCH_30392181') from dual;
2019-12-20_10-29-45

Unfortunately there is no easy way to see from the database which version of the patch set bundle has been installed. You can check the date from the README.txt file included with the patch set bundle, but the database just shows the installation date and the patch set number, not the release date for the version that was installed.

Update 3:

After the installation of the December 18th patch set bundle I did see an error at the end of the catpatch.log file:

...Recompiling invalid public synonyms
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
begin
*
ERROR at line 1:
ORA-20001: Stop further processing. This error can be ignored.
ORA-06512: at line 3

The custom error message says “This error can be ignored”, so maybe it doesn’t mean anything… I didn’t see this error when I installed the December 13th patch set bundle (but that was the first version that I installed on my system, so maybe the error only occurs with repeated installs of the patch set).

Update 4:

Oracle has released a new version of Patch Set Bundle 30392181 on January 10th, 2020. The following additional bug is fixed in this version of the patch set:

# 30748465 - CREATE NEW SESSION AS PART OF "BECOME USER" DOES NOT WORK

I managed to successfully patch my installation of APEX 19.2 (on top of the December 18th patch set). Don’t forget to copy the contents of the images folder because some static files have been modified with this patch set.

Update 5:

Oracle has released a new version of Patch Set Bundle 30392181 on January 24th, 2020. The following additional bugs are fixed in this version of the patch set:

# 30646890 - ORACLE_APEX_DICTIONARY_CACHE JOB FAILING IN 18C DB - ORA-06502 
# 30783722 - ORA-904 OR ORA-911 WHEN CHANGING MIXED CASE COLUMN NAME ATTRIBUTE IN INTERACTIVE GRID
# 30798575 - IR PDF, XLSX OR RTF DOWNLOADS FAIL WITH ORA-904 WHEN ONE COLUMN HAS A FORMAT MASK
# 30808874 - 19.2 PSE BUNDLE FAILS IN CDB INSTALLS

The total number of bugs fixed in Patch Set Bundle 30392181 is now 33. I installed the new version of the patch set on top of the January 10th version, and everything went fine.

Update 6:

Oracle has released a new version of Patch Set Bundle 30392181 on January 29th, 2020. The following additional bugs are fixed in this version of the patch set:

# 30824026 - REGRESSION - TEXT FIELD WITH AUTOCOMPLETE: MAXIMUM VALUES IN LIST IGNORED

The total number of bugs fixed in Patch Set Bundle 30392181 is now 34. I installed the new version of the patch set on top of the January 24th version, and there were no issues.

Update 7:

Oracle has released a new version of Patch Set Bundle 30392181 on February 5th, 2020. The following additional bugs are fixed in this version of the patch set:

# 30837064 - APPLICATION IMPORT INVALIDATES PUBLIC AND PRIVATE REPORTS OF INTERACTIVE GRIDS
# 30741141 - 19.2 REGRESSION - LOV JSON RETURNED BY AJAX (CASCADING) HAS CHANGED

The total number of bugs fixed in Patch Set Bundle 30392181 is now 36. I installed the new version of the patch set on top of the January 29th version, and I did not encounter any issues.

Update 8:

Oracle has released a new version of Patch Set Bundle 30392181 on February 11th, 2020. The following two additional bugs are fixed in this version of the patch set:

# 30762484 - ER: CAPTURE NUMBER OF APEX APPS CREATED ON ADB
# 30875996 - INTERACTIVE REPORT THROWS ORA-904 WHEN SINGLE ROW VIEW IS ENABLED AND USES ROWID

The total number of bugs fixed in Patch Set Bundle 30392181 is now 38. I installed the new version of the patch set on top of the February 5th version, and I did not see any issues.

Update 9:

Oracle has released a new version of Patch Set Bundle 30392181 on March 12th, 2020. The following three additional bugs are fixed in this version of the patch set:

# 30895841 - FACETED SEARCH FILTERS ARE NOT APPLIED CORRECTLY ON TRANSLATED APPLICATIONS
# 30924852 - CREATE APP FROM A SPREADSHEET FAILS TO CREATE DASHBOARD
# 30537256 - POPUP LOV: ADDITIONAL OUTPUT ITEMS GET CLEARED ON PAGE LOAD

The total number of bugs fixed in Patch Set Bundle 30392181 is now 41. I installed the new version of the patch set on top of the February 11th version, and I did not encounter any issues.

Update 10:

Oracle has released a new version of Patch Set Bundle 30392181 on April 10th, 2020. The following four additional bugs are fixed in this version of the patch set:

# 31086829 - OPTIMIZE QUERIES OF NLS_DATABASE_PARAMETERS AND NLS_SESSION_PARAMETERS
# 31067197 - INTERACTIVE GRID: AGGREGATION LINE DUPLICATED FOR MORE THAN 50 ROWS
# 31135735 - RESTRICT ACCESS TO THE DEV ENV WHEN X-DOOR REQUEST HEADER EXISTS
# 31014579 - ORACLE_APEX_DICTIONARY_CACHE JOB ERRORS OUT WITH ORA-20901: MAXIMUM EXECUTION TIME EXCEEDED

The total number of bugs fixed in Patch Set Bundle 30392181 is now 45. I installed the new version of the patch set on top of the March 12th version, and as before I did not encounter any issues.

HTH

Matthias

Installation of Oracle Database Express Edition (Oracle XE) 18c on Linux

It’s all Cloud these days, but having a local installation of Oracle software to play with can still be worthwhile (and cheaper ;-)).

In this post I will explain how to install Oracle Database Express Edition (Oracle XE) version 18c on Linux (to be more specific, CentOS 7.6).

First, let’s check the recommended system requirements:

  • 2 GB RAM (check with free command)
  • 10 GB free disk space (check with df -h command)

Let’s see what system I have:


$ cat /etc/redhat-release

$ uname -a

This is the output:

CentOS Linux release 7.6.1810 (Core)

Linux myhost.mydomain.com 3.10.0-957.12.1.el7.x86_64 #1 SMP Mon Apr 29 14:59:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Now download the following two files from https://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html (note that you will need to register) and put them into a local directory on your Linux computer (you can safely remove them after installation):

  • oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
  • oracle-database-xe-18c-1.0-1.x86_64.rpm

Become root and run the preinstall rpm package. This will download and install or upgrade the required software libraries, create a new user (oracle) and groups and set some kernel parameters. Also, a backup is taken of configuration files that are changed during the installation.


$ sudo su

$ yum localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm

Transaction Summary
=================================================
Install 1 Package (+23 Dependent packages)
Upgrade ( 2 Dependent packages)
Total size: 29 M
Total download size: 29 M
Is this ok [y/d/N]: y
Installed:
oracle-database-preinstall-18c.x86_64 0:1.0-1.el7
Dependency Installed:
compat-libcap1.x86_64 0:1.10-7.el7 compat-libstdc++-33.x86_64 0:3.2.3-72.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5
glibc-headers.x86_64 0:2.17-260.el7_6.5 gssproxy.x86_64 0:0.7.0-21.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7
keyutils.x86_64 0:1.5.8-3.el7 ksh.x86_64 0:20120801-139.el7 libXinerama.x86_64 0:1.1.3-2.1.el7
libXmu.x86_64 0:1.1.2-2.el7 libXrandr.x86_64 0:1.5.1-2.el7 libXt.x86_64 0:1.1.5-3.el7
libXv.x86_64 0:1.0.11-1.el7 libXxf86dga.x86_64 0:1.1.4-2.1.el7 libXxf86misc.x86_64 0:1.0.3-7.1.el7
libXxf86vm.x86_64 0:1.1.4-1.el7 libaio-devel.x86_64 0:0.3.109-13.el7 libdmx.x86_64 0:1.1.3-3.el7
libstdc++-devel.x86_64 0:4.8.5-36.el7_6.2 libverto-tevent.x86_64 0:0.2.5-4.el7 nfs-utils.x86_64 1:1.3.0-0.61.el7
xorg-x11-utils.x86_64 0:7.5-23.el7 xorg-x11-xauth.x86_64 1:1.0.9-1.el7
Dependency Updated:
glibc.x86_64 0:2.17-260.el7_6.5 glibc-common.x86_64 0:2.17-260.el7_6.5
Complete!

Now install the database software:


$ yum localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm

Examining oracle-database-xe-18c-1.0-1.x86_64.rpm: oracle-database-xe-18c-1.0-1.x86_64
Marking oracle-database-xe-18c-1.0-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================
Package Arch Version Repository Size
============================
Installing:
oracle-database-xe-18c x86_64 1.0-1 /oracle-database-xe-18c-1.0-1.x86_64 5.2 G
Transaction Summary
============================
Install 1 Package
Total size: 5.2 G
Installed size: 5.2 G
Is this ok [y/d/N]: y
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.
Verifying : oracle-database-xe-18c-1.0-1.x86_64 1/1
Installed:
oracle-database-xe-18c.x86_64 0:1.0-1
Complete!

Check the log files in the folders /var/log/oracle-database-preinstall-18c and /var/log/oracle-database-xe-18c to make sure there were no errors during installation.
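That check can be scripted as well; a minimal sketch (the `check_logs` helper is mine) that greps the two log directories mentioned above for error lines:

```shell
#!/bin/sh
# check_logs DIR...: print error lines from every existing log directory
# and report success when nothing was found.
check_logs() {
  found=0
  for d in "$@"; do
    if [ -d "$d" ] && grep -ri 'error' "$d" 2>/dev/null; then
      found=1
    fi
  done
  if [ "$found" -eq 0 ]; then
    echo "no errors found"
  fi
  return "$found"
}

check_logs /var/log/oracle-database-preinstall-18c /var/log/oracle-database-xe-18c
```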

Now create and configure a new database. If you want to change a configuration parameter or the location of the data files, edit the file /etc/sysconfig/oracle-xe-18c.conf first.


$ sudo su

$ /etc/init.d/oracle-xe-18c configure

Enter passwords:
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
/opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.
Connect to Oracle Database using one of the connect strings:
Pluggable database: myhost.mydomain.com/XEPDB1
Multitenant container database: myhost.mydomain.com
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

Check the XE.log file in the folder /opt/oracle/cfgtoollogs/dbca/XE.

To check if the database and listener are running, execute the following commands:


$ ps -ef|grep XE

$ ps -ef|grep LIST

To manually stop/start the database and listener:


$ sudo su

$ systemctl stop oracle-xe-18c

$ systemctl start oracle-xe-18c

To auto-start the database and listener after system reboot:


$ sudo su

$ systemctl daemon-reload

$ systemctl enable oracle-xe-18c

Shut down and reboot the Linux machine to verify that all services are auto-started correctly.

Add the following environment variables to the .bash_profile file of the user that will run query tools (in my case, user oracle):

# User specific environment and startup programs
export ORACLE_SID=XE
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
export PATH
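If you provision machines like this more than once, the profile edit can be made idempotent. A small sketch (the `append_oracle_env` helper and its grep guard are mine):

```shell
#!/bin/sh
# append_oracle_env PROFILE: append the Oracle environment variables to
# the given profile file, but only if they are not already present.
append_oracle_env() {
  profile="$1"
  if ! grep -q 'ORACLE_HOME' "$profile" 2>/dev/null; then
    cat >> "$profile" <<'EOF'
export ORACLE_SID=XE
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export PATH=$PATH:$ORACLE_HOME/bin
EOF
  fi
}

# Real usage would be: append_oracle_env /home/oracle/.bash_profile
```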

Test this with:


$ su - oracle

$ sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Wed May 22 20:10:30 2019

Version 18.4.0.0.0

If you want to connect to the database remotely and have a firewall running, you will need to open port 1521 (the listener port):


$ sudo su

$ firewall-cmd --permanent --add-port=1521/tcp

$ firewall-cmd --reload

As an example, you can now connect from SQL Developer 18.4 using hostname myhost.mydomain.com, port 1521 and service name XEPDB1 (see the connect strings shown above).

To use Enterprise Manager Express (EM) from remote clients, connect to the database using sqlplus and execute the following code:


$ su - oracle

$ sqlplus sys as sysdba

Enter password:

SQL> EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);

SQL> exec dbms_xdb_config.SetGlobalPortEnabled(TRUE);

Now open the firewall for port 5500:


$ sudo su

$ firewall-cmd --permanent --add-port=5500/tcp

$ systemctl reload firewalld

Check if you can connect to the EM console using a web browser (note: Flash player is required):

https://myhost.mydomain.com:5500/em/login

To connect to the Container Database (XE), log in on the EM login page with a DBA user (for example SYS) and leave the "Container Name" field empty.

To connect to the Pluggable Database (XEPDB1), log in with the "Container Name" field set to XEPDB1.

More documentation here:

https://docs.oracle.com/en/database/oracle/oracle-database/18/xeinl/installation-guide.html#GUID-31891F22-B1FA-4489-A1C5-195E6B3D89C8

Please let me know if something is unclear or missing!

Matthias