Sunday 11 June 2017

Oracle Database Appliance (ODA) Verify Cabling


Once the ODA cabling is complete, it is very important to verify the cabling connections. Oracle recommends that you run oakcli to validate the cabling before starting the ODA network configuration.

The following oakcli command is used to validate the cabling:

# /opt/oracle/oak/bin/oakcli validate -c storagetopology

If the oakcli validate -c storagetopology command reports any issues, one possible cause is that the cables to all the nodes were not fully connected. Verify that the cables to all the nodes are fully seated, and unplug and replug them as needed for verification.

You should also run the command regularly to ensure that disk connectivity remains problem-free.
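One way to script such a periodic check is to save the command's output and scan it for failures. The helper below is purely a sketch of mine, not part of ODA; the sample log lines are illustrative stand-ins, not real oakcli output, so adjust the pattern to match your oakcli version:

```shell
#!/bin/sh
# Hypothetical helper: scan a saved "oakcli validate -c storagetopology"
# log and flag anything marked as a failure.
check_topology_log() {
    if grep -qi 'FAILURE' "$1"; then
        echo "CABLING PROBLEM: review $1 and reseat the affected cables"
    else
        echo "storage topology log $1 looks clean"
    fi
}

# Sample log content (illustrative only); in real use you would run:
#   /opt/oracle/oak/bin/oakcli validate -c storagetopology > /tmp/topology.log
cat > /tmp/topology.log <<'EOF'
INFO   : Check external SAS cable connections ...... SUCCESS
INFO   : Check JBOD power and connectivity ......... SUCCESS
EOF

check_topology_log /tmp/topology.log
```

Run from cron on each node, this turns an occasional manual check into a routine one.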

Let's take a look at an example of running the storage topology validation.
In this example, I am running the command on an ODA X4-2.


You should run the validate storage topology command on both nodes as follows:




Conclusion

The oakcli validate -c storagetopology command is used to verify the cabling connections. Make sure the system and storage cabling matches the documentation and that the cables are firmly plugged in. If there is any issue, double-check the physical cabling and rerun "oakcli validate -c storagetopology" to verify it.



Friday 9 June 2017

Oracle Database Appliance (ODA) Cleanup Utility


Oracle Database Appliance provides a utility called "cleanupDeploy.pl" that is useful when:
  • You encounter deployment issues and want to clean up in order to perform a fresh deployment
  • You want to wipe out everything on your ODA
  • You want to redeploy the ODA base image
To perform an ODA cleanup we make use of the "cleanupDeploy.pl" utility. It is located in the /opt/oracle/oak/onecmd directory.

[root@odanode1 ~]# locate cleanupDeploy.pl
/opt/oracle/oak/onecmd/cleanupDeploy.pl


The cleanupDeploy.pl utility wipes out everything on your ODA, after which you can rerun the deployment process using the oakcli utility.


Execute ODA cleanup utility:

For a Bare Metal setup, execute the utility on the first node (node 0); in a Virtualized Platform setup, execute it on Dom0.

Connect to the first node of the ODA and execute the cleanupDeploy.pl utility as the root user.

[root@odanode1 ~]# cd /opt/oracle/oak/onecmd
[root@odanode1 ~]# ./cleanupDeploy.pl


The cleanupDeploy.pl utility performs the following series of steps:

  • SetupSSHroot
  • SetupPrivSSHroot
  • deinstallASR
  • CleanupDBConsolefiles
  • DeinstallGI
  • resetmultipathconf
  • resetstoragenickname
  • deleteudevrulefile
  • DropUsersGroups
  • resetnetwork
  • resetpassword
  • reboot
At the end of the cleanupDeploy.pl execution, both ODA nodes are rebooted.
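Because the cleanup is destructive and ends in a reboot of both nodes, it can be worth wrapping the invocation in a confirmation guard. The wrapper below is purely a sketch of mine, not part of ODA; only the cleanupDeploy.pl path comes from the article:

```shell
#!/bin/sh
# Sketch of a confirmation guard around the destructive cleanup.
CLEANUP_DIR=/opt/oracle/oak/onecmd   # location from the article

confirm_cleanup() {
    # $1 is the operator's answer; only a literal YES proceeds
    if [ "$1" = "YES" ]; then
        echo "proceeding with cleanup"
        # a real run would do: cd "$CLEANUP_DIR" && ./cleanupDeploy.pl
    else
        echo "Cleanup aborted"
    fi
}

confirm_cleanup "no"    # demonstration: anything but YES aborts
```

Requiring a literal YES (rather than a bare Enter) makes it harder to wipe an appliance by accident.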

Practical cleanupDeploy.pl session
  • Connect as the root user to ODA node 1 and navigate to /opt/oracle/oak/onecmd
[root@odanode1 ~]# cd /opt/oracle/oak/onecmd
[root@odanode1 ~]# ./cleanupDeploy.pl

Enter the root password and re-enter it to confirm.
Conclusion:
In this article we have learned how to clean up an ODA setup using cleanupDeploy.pl. If you encounter issues on the ODA or want to wipe out everything on it, this utility comes in handy, and you can rerun the deployment process again using the oakcli utility.


Thursday 8 June 2017

Oracle Database Appliance (ODA) Firstnet Utility In Action

About Firstnet Utility

The "oakcli configure firstnet" command is used to configure an initial network on the Oracle Database Appliance (ODA), which enables you to copy the ODA End User Bundle software to the ODA server.

oakcli configure firstnet allows you to establish the network interfaces for the Oracle Database Appliance on first use.

Once you have completed reimaging the ODA servers, log on to the ILOM console of one of the nodes and perform the initial configuration of the Oracle Database Appliance network using the firstnet command.

The bond0 interface is configured when you execute the firstnet utility.

Before running firstnet, ensure that you have allocated the IP addresses and verify that their DNS entries are in place.
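That DNS precheck can be scripted with a small loop. This is a sketch: the hostname list is an assumption (odanode1 is a placeholder for your node, VIP and SCAN names, and localhost is included only so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch: check that each planned name resolves before running firstnet.
for host in localhost odanode1; do
    if getent hosts "$host" > /dev/null 2>&1; then
        echo "OK:   $host resolves"
    else
        echo "FAIL: $host does not resolve -- fix DNS before firstnet"
    fi
done > /tmp/dns_precheck.out
cat /tmp/dns_precheck.out
```

Any FAIL line means the DNS entries are not ready and firstnet should wait.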

Execute firstnet utility

  • Connect to the ILOM console using a browser or PuTTY to configure the initial IP address for the node you are connected to.

The default password for the root user on the ILOM is "changeme".

 
login as: root
Using keyboard-interactive authentication.
Password:

Oracle(R) Integrated Lights Out Manager

Version 3.2.4.46.a r101689

Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.

Warning: password is set to factory default.

Hostname: ORACLESP-1113NML13N


-> 

The (->) prompt means you are now connected to the ILOM.
  • Start the serial console by executing the following command. When prompted, enter y, then root and its password. The default root password on ODA servers is "welcome1".

-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y

Serial console started.  To stop, type ESC (


Oracle Linux Server release 6.8
Kernel 2.6.39-400.290.1.el6uek.x86_64 on an x86_64

oak1 login: root
Password:

Last login: Wed Jun  7 11:32:57 on tty1


  • oakcli is used to execute the firstnet utility:
[root@oak1 ~]# locate oakcli
/opt/oracle/oak/bin/oakcli 


  • Now execute the firstnet utility as follows:
[root@oak1 ~]# /opt/oracle/oak/bin/oakcli configure firstnet
Select the Interface to configure the network on (bond0 bond1) [bond0]:
Configure DHCP on bond0 (yes/no) [no]:
INFO: You have chosen Static configuration
Enter the IP address to configure : 10.10.17.11
Enter the Netmask address to configure : 255.255.255.0
Enter the Gateway address to configure[10.10.17.1] : 10.10.17.1

INFO: Plumbing the IPs now
INFO: Restarting the network
Shutting down interface bond0:  bonding: bond0: Removing slave eth2.
bonding: bond0: Warning: the permanent HWaddr of eth2 - 00:10:e0:62:3f:f2 - is still in use by bond0. Set the

HWaddr of eth2 to a different address to avoid conflicts.
bonding: bond0: releasing active interface eth2
bonding: bond0: Removing slave eth3.
bonding: bond0: releasing backup interface eth3
[  OK  ]
Shutting down interface bond1:  bonding: bond1: Removing slave eth4.
bonding: bond1: Warning: the permanent HWaddr of eth4 - 00:10:e0:62:3f:f4 - is still in use by bond1. Set the

HWaddr of eth4 to a different address to avoid conflicts.
bonding: bond1: releasing backup interface eth4
bonding: bond1: Removing slave eth5.
bonding: bond1: releasing backup interface eth5
[  OK  ]
Shutting down interface eth0:  [  OK  ]
Shutting down interface eth1:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface bond0:  bonding: bond0: setting mode to active-backup (1).
bonding: bond0: Setting MII monitoring interval to 100.
bonding: bond0: Adding slave eth2.
bonding: bond0: enslaving eth2 as a backup interface with a down link.
bonding: bond0: Adding slave eth3.
bonding: bond0: enslaving eth3 as a backup interface with a down link.
bonding: bond0: Setting eth2 as primary slave.
Determining if ip address 10.10.17.11 is already in use for device bond0...
ixgbe 0000:20:00.0: eth2: NIC Link is Up 1 Gbps, Flow Control: RX
bonding: bond0: link status definitely up for interface eth2, 1000 Mbps full duplex.
bonding: bond0: making interface eth2 the new active one.
bonding: bond0: first active interface up!
[  OK  ]
Bringing up interface bond1:  bonding: bond1: setting mode to active-backup (1).
bonding: bond1: Setting MII monitoring interval to 100.
bonding: bond1: Adding slave eth4.
bonding: bond1: enslaving eth4 as a backup interface with a down link.
bonding: bond1: Adding slave eth5.
bonding: bond1: enslaving eth5 as a backup interface with a down link.
bonding: bond1: Setting eth4 as primary slave.
[  OK  ]
Bringing up interface eth0:  Determining if ip address 192.168.16.24 is already in use for device eth0...
ixgbe 0000:a0:00.0: eth0: detected SFP+: 3
ixgbe 0000:a0:00.0: eth0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  OK  ]
Bringing up interface eth1:  Determining if ip address 192.168.17.24 is already in use for device eth1...
ixgbe 0000:a0:00.1: eth1: detected SFP+: 4
ixgbe 0000:a0:00.1: eth1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  OK  ]
[root@oak1 ~]#


Once the network interfaces have been configured, test them by using the ping command on the node.

[root@oak1 ~]# ping 10.10.17.11
PING 10.10.17.11 (10.10.17.11) 56(84) bytes of data.
64 bytes from 10.10.17.11: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 10.10.17.11: icmp_seq=2 ttl=64 time=0.008 ms
64 bytes from 10.10.17.11: icmp_seq=3 ttl=64 time=0.008 ms
64 bytes from 10.10.17.11: icmp_seq=4 ttl=64 time=0.006 ms
64 bytes from 10.10.17.11: icmp_seq=5 ttl=64 time=0.005 ms

--- 10.10.17.11 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4544ms
rtt min/avg/max/mdev = 0.005/0.010/0.027/0.009 ms
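The single ping above can be extended to a loop over every configured address. This is a sketch; the address list is an assumption, with 127.0.0.1 standing in for appliance addresses such as 10.10.17.11:

```shell
#!/bin/sh
# Sketch: ping each address configured by firstnet once and report the result.
for ip in 127.0.0.1; do
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
        echo "reachable:   $ip"
    else
        echo "UNREACHABLE: $ip"
    fi
done > /tmp/ping_check.out
cat /tmp/ping_check.out
```

Add the node, VIP and gateway addresses to the list to check the whole public network in one pass.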


[root@oak1 ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F2
          inet addr:10.10.17.11  Bcast:10.10.17.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:762 (762.0 b)  TX bytes:546 (546.0 b)

bond1     Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F4
          UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

eth0      Link encap:Ethernet  HWaddr 90:E2:BA:81:2B:B4
          inet addr:192.168.16.24  Bcast:192.168.16.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:138257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:138436 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11200490 (10.6 MiB)  TX bytes:11134648 (10.6 MiB)

eth1      Link encap:Ethernet  HWaddr 90:E2:BA:81:2B:B5
          inet addr:192.168.17.24  Bcast:192.168.17.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:138217 errors:0 dropped:0 overruns:0 frame:0
          TX packets:138434 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11197894 (10.6 MiB)  TX bytes:11134800 (10.6 MiB)

eth2      Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F2
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:762 (762.0 b)  TX bytes:546 (546.0 b)

eth3      Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F2
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

eth4      Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F4
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

eth5      Link encap:Ethernet  HWaddr 00:10:E0:62:3F:F4
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:512371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:512371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:38908072 (37.1 MiB)  TX bytes:38908072 (37.1 MiB)


Conclusion:
In this short article we have learned about the firstnet utility and seen how to use it to configure the initial network needed to copy software to the ODA servers.


Sunday 4 June 2017

AWS - export and import using datapump utility.


Introduction:
AWS Import/Export is a service that accelerates transferring large amounts of data into and out of AWS using physical storage devices, bypassing the Internet. AWS Import/Export transfers your data directly onto and off of storage devices you own using Amazon's high-speed internal network.

Refer http://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html

Scenario 1: Take an export dump on the AWS database server for some of the schemas.

Use the following parfile to take the export on the AWS database server:

directory=<directory name>
dumpfile=<dumpfile name>
logfile=<logfile name>
schemas=<schema name>
compression=all


For example, we are using the AWSNSMP database to take a schema export and keep the dumpfile on the database server.

We need to use the DATA_PUMP_DIR directory for keeping the export dumpfile and logfiles.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
AWSNSMP   READ WRITE

SQL> select * from dba_directories;

OWNER      DIRECTORY_NAME           DIRECTORY_PATH                 ORIGIN_CON_ID
---------- ------------------------ ------------------------------ -------------
SYS        BDUMP                    /rdsdbdata/log/trace                       0
SYS        ADUMP                    /rdsdbdata/log/audit                       0
SYS        OPATCH_LOG_DIR           /rdsdbbin/oracle/Opatch                   0
SYS        OPATCH_SCRIPT_DIR        /rdsdbbin/oracle/Opatch                   0
SYS        DATA_PUMP_DIR            /rdsdbdata/datapump                        0
SYS        OPATCH_INST_DIR          /rdsdbbin/oracle/OPatch                    0

6 rows selected.


1. Create a parfile containing all the parameters.

vi AWSNSMP_WNL.par

directory=DATA_PUMP_DIR
dumpfile=bkp_AWSNSMP.dpdmp
logfile=bkp_AWSNSMP.log
schemas=STG_MTS_APP,STG_MTS_V,
STG_MTS,
STG_ENTRP_ADM2,
STG_ENTRP_ADM2_V,
STG_ENTRP_ADM2_RO,
STG_ENTRP_ADM2_APP,
STG_ENTRP_ADM1_APP,
STG_ENTRP_ADM1,
STG_ENTRP_ADM1_V,
STG_ENTRP_ADM1_RO,
STG_ENTRP_V,
STG_ENTRP_RO,
STG_ENTRP,
STG_ENTRP_APP
compression=all
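A parfile like this can also be generated from a script, which keeps a long schema list maintainable. This is a sketch using a shortened schema list and the article's file names; /tmp stands in for wherever you keep parfiles:

```shell
#!/bin/sh
# Sketch: generate the export parfile from a schema list.
# The schema list here is shortened; extend it with the full set.
SCHEMAS="STG_MTS_APP,STG_MTS_V,STG_MTS"

cat > /tmp/AWSNSMP_WNL.par <<EOF
directory=DATA_PUMP_DIR
dumpfile=bkp_AWSNSMP.dpdmp
logfile=bkp_AWSNSMP.log
schemas=$SCHEMAS
compression=all
EOF

cat /tmp/AWSNSMP_WNL.par
```

Keeping the schema list in one variable makes it easy to reuse the same list for the later network-link export and the import.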
 


2. Start the export as follows.

$ expdp parfile=AWSNSMP_WNL.par

Export: Release 12.1.0.2.0 - Production on Wed Apr 26 02:28:27 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system@AWSNSMP
Password:
 


3. Monitor the export log from the database.

Find out the file name by querying the RDSADMIN.RDS_FILE_UTIL.LISTDIR function from the database.

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

FILENAME                                 TYPE         FILESIZE MTIME
---------------------------------------- ---------- ---------- ---------
bkp_AWSNSMP.dpdmp                         file         3051520  18-APR-17
bkp_AWSNSMP.log                           file         12118    18-APR-17
datapump/                                directory     4096     18-APR-17

Use the following function to view the export logfile.

SQL> select * from table
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'datapump/bkp_AWSNSMP.log'));   2    3    4

TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
;;;
Export: Release 12.1.0.2.0 - Production on Tue Apr 18 10:16:29 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  SYSTEM/********@AWSNSMP directory=DATA_PUMP_DIR dumpfile=bkp_AWSNSMP.dpdmp logfile=bkp_AWSNSMP.log schemas=STG_MTS_APP,STG_MTS_V,STG_
MTS,STG_ENTRP_ADM2,STG_ENTRP_ADM2_V,STG_ENTRP_ADM2_RO,STG_ENTRP_ADM2_APP,STG_ENTRP_ADM1_APP,STG_ENTRP_ADM1,STG_ENTRP_ADM1_V,STG_ENTRP_ADM1_RO,STG_ENTRP_V,STG_ENTRP_RO,STG_ENTRP,STG_ENTRP_APP compression=all encryption_password=********

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 28.62 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT


. . exported "STG_ENTRP_ADM2"."RECORD_SEQ_POSITION"           0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."REVINFO"                       0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SCHEDULED_FORM"                0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SIGNATURE"                     0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SIGNATURE_AUD"                 0 KB       0 rows
. . exported "STG_MTS"."MTS_DEPLOYMENT"                      0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /rdsdbdata/datapump/bkp_AWSNSMP.dpdmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Tue Apr 18 14:16:51 2017 elapsed 0 00:00:16
 


Scenario 2: Perform the export on a local server from the AWS database.

Connect to any local database server.

1.    Create a directory to accommodate the dumpfile and logfile.

SQL> select * from dba_directories where directory_name like '%AWS%';

OWNER      DIRECTORY_NAME       DIRECTORY_PATH                         ORIGIN_CON_ID
---------- -------------------- -------------------------------------- -------------
SYS        AWS_PUMP_DIR         /mounts/nsgpr01_exports/exp/AWS                     0


2.    Create database link.


SQL> CREATE DATABASE LINK AWSNSMP_SYSTEM_LINK CONNECT TO SYSTEM IDENTIFIED BY  <password>  USING 'AWSNSMP';
Database link created.


3.    Create a parfile and include the network link parameter and the encryption option.

vi AWSNSMP_NL.par
directory=AWS_PUMP_DIR
dumpfile=bkp_AWSNSMP.dpdmp
logfile=bkp_AWSNSMP.log
schemas=STG_MTS_APP,
STG_MTS_V,
STG_MTS,
STG_ENTRP_ADM2,
STG_ENTRP_ADM2_V,
STG_ENTRP_ADM2_RO,
STG_ENTRP_ADM2_APP,
STG_ENTRP_ADM1_APP,
STG_ENTRP_ADM1,
STG_ENTRP_ADM1_V,
STG_ENTRP_ADM1_RO,
STG_ENTRP_V,
STG_ENTRP_RO,
STG_ENTRP,
STG_ENTRP_APP
compression=all
NETWORK_LINK=AWSNSMP_SYSTEM_LINK encryption_password=*******


4.    Initiate the export.

$ expdp parfile=AWSNSMP_NL.par

Export: Release 12.1.0.2.0 - Production on Wed Apr 26 03:10:15 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:


5.    Monitor the export log.

$ tail -f bkp_AWSNSMP.log


;;;
Export: Release 12.1.0.2.0 - Production on Tue Apr 18 10:12:00 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA parfile=AWSNSMP_NL.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 17.62 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

. . exported "STG_ENTRP_ADM2"."REVINFO"                       0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SCHEDULED_FORM"                0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SIGNATURE"                     0 KB       0 rows
. . exported "STG_ENTRP_ADM2"."SIGNATURE_AUD"                 0 KB       0 rows
. . exported "STG_MTS"."MTS_DEPLOYMENT"                      0 KB       0 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
  /mounts/nsgpr01_exports/exp/AWS/bkp_AWSNSMP.dpdmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Tue Apr 18 10:14:21 2017 elapsed 0 00:02:18


Scenario 3: Perform a data load into the AWS database using import.

1.    Create a database link on the local server to communicate with the AWS database.

SQL>CREATE DATABASE LINK AWSNSTN_SYSTEM_LINK CONNECT TO SYSTEM IDENTIFIED BY   ***** USING 'AWSNSTN';
Database link created.

 
2.    Transfer the dumpfile to the AWS database server:

Refer to the link below for transferring files to AWS.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html

Use DBMS_FILE_TRANSFER.PUT_FILE as follows to transfer the file to the AWS database server.
 

set timing on
set time on
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE (
source_directory_object        => 'AWS_PUMP_DIR',
source_file_name               => 'expdp_NSMPAC_schema.dmp',
destination_directory_object   => 'DATA_PUMP_DIR',
destination_file_name          => 'expdp_NSMPAC_schema.dmp',
destination_database           => 'AWSNSTN_SYSTEM_LINK');
END;
/


source_directory_object --> local database directory where your dumpfile is located
source_file_name --> dumpfile name
destination_directory_object --> AWS database directory where you need to place the dumpfile
destination_file_name --> name to give the dumpfile on the AWS database server
destination_database --> database link

3.    Monitor the transfer of files from the database.

Issue the following query to view the filename:

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Wed May 3 01:02:25 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: SYSTEM@AWSNSTN
Enter password:


Last Successful login time: Tue May 02 2017 01:13:16 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
AWSNSTN   READ WRITE

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

FILENAME                                           TYPE         FILESIZE MTIME
-------------------------------------------------- ---------- ---------- ------------------
expdp_NSMPAC_schema.dmp                             file       2.0437E+10  18-APR-17
impdp_NSMPAC_schema.log                             file       164046      18-APR-17
datapump/                                          directory     4096      19-APR-17


4.    Create a parfile for the import with the parallel parameter to speed up the import process.

vi impdp_AWSNSTN_schemas.ctl
directory=DATA_PUMP_DIR dumpfile=expdp_NSMPAC_schema.dmp logfile=impdp_NSMPAC_schema.log schemas=BRMGR01 transform=oid:n parallel=16

  
5.    Start the import with nohup in the background.

nohup impdp SYSTEM/*****@AWSNSTN parfile=impdp_AWSNSTN_schemas.ctl &
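Since the job runs in the background, a polling loop can watch a locally readable copy of the logfile for the Data Pump completion message. This is a sketch; the echo line plants stand-in log content so the example runs anywhere (on RDS itself the log must be read via rdsadmin.rds_file_util.read_text_file, as shown in the next step):

```shell
#!/bin/sh
# Sketch: poll a Data Pump logfile until the job reports completion.
LOG=/tmp/impdp_demo.log
# Stand-in log content so the sketch runs anywhere; in real use the
# logfile is written by impdp.
echo 'Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed' > "$LOG"

until grep -q 'successfully completed' "$LOG"; do
    sleep 10
done
echo "Import finished"
```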

6.    Monitor the import log from the database.

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

FILENAME                                           TYPE         FILESIZE MTIME
-------------------------------------------------- ---------- ---------- ------------------
expdp_NSMPAC_schema.dmp                             file       2.0437E+10   18-APR-17
impdp_NSMPAC_schema.log                             file       164046       18-APR-17
datapump/                                          directory     4096       19-APR-17



From the above results you have the file names.
Use rdsadmin.rds_file_util.read_text_file to read the import logfile:

SQL> select * from table
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'datapump/impdp_NSMPAC_schema.log'));  2    3    4

TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
;;;
Import: Release 12.1.0.2.0 - Production on Tue Apr 18 13:07:42 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  SYSTEM/********@AWSNSTN parfile=impdp_AWSNSTN_schemas.ctl
Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/AUDIT_OBJ
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed at Tue Apr 18 19:46:36 2017 elapsed 0 02:38:50

2462 rows selected.


Conclusion:
In this scenario-based article we have learned how to perform exports from AWS databases and imports into AWS databases, moving data from on-premises to AWS and vice versa.

BY
Name: Omer
Designation: Senior Database Engineer
Organization: Netsoftmate IT Solutions.
Email: info@netsoftmate.com

Saturday 3 June 2017

Oracle Database Appliance (ODA) Standalone Configurator

Oracle Database Appliance (ODA) Configurator: The standalone Oracle Appliance Manager (ODA) Configurator is a Java-based tool that enables you to generate your ODA deployment plan and validate your network settings before the actual ODA deployment.

ODA Configurator option:
 

The ODA Configurator can be run in offline or online mode.

Offline: Use the standalone ODA configurator to create an offline deployment plan and validate your network settings before the actual deployment.

Online: You can configure your system at the time of deployment. In that case, you can use the online ODA Configurator which is part of the base Oracle Database Appliance software.

You can run the ODA Configurator before deployment by downloading it to your local system, which can be a Linux/UNIX, Windows or Mac system.

If you choose to run the ODA Configurator in offline mode, you can save your deployment plan in a configuration file at the end of the offline configurator session. When the time comes for the actual deployment, copy the configuration file to the ODA, run the online Oracle Appliance Manager Configurator, and import the file to deploy your saved plan.


Download ODA Configurator:
  • Enter the following URL in a web browser
http://www.oracle.com/technetwork/database/database-appliance/overview/oda-configurator-1928685.html
  • Accept the license and click the desired ODA Configurator version. Here I am downloading ODA Manager Configurator 12.1.2.10.


  • Enter the login credentials to begin the download


  • If you are using Chrome, the download status is shown at the bottom of the browser.

Running ODA Configurator:

If you wish to use the ODA Configurator before deployment, download and run it on a system running a Linux/UNIX, Windows or Mac operating system.
  • Extract the ODA Configurator software
On Windows: right-click and select Extract Here
On Linux/UNIX: use the unzip command to extract the files



  • To start the ODA Configurator
On Windows: double-click the batch file config.bat
On Linux/UNIX: run ./config.sh

  • The ODA Configurator welcome/home page looks like this:

Steps to Run ODA Configurator in Offline mode:
  • Start the ODA Configurator by double-clicking config.bat on a Windows system (./config.sh on Linux).


  • On the welcome screen, click Next.

  • On the Configuration Type screen you have various options:
In Select Configuration Type, you can choose from "Typical (Recommended)", "Custom" or "SAP Application".
Custom gives you more flexibility to provide additional information.

In Select Environment, choose "Bare Metal" (physical) if you want to deploy a physical setup, or "Virtualized Platform" for a virtualization setup.

In Select Hardware, choose the hardware you purchased. For example: ODA-X5-2.

Use the "Browse" button if you have an existing configuration file that was created earlier using the ODA Configurator.

Click Next





  • On the System Information screen, you can enter:
"System Name" - this will be the prefix for the hostnames, interfaces, SCAN name and ILOMs

Select the appropriate "Region" and "Timezone"

"Database Backup Location" - you have "Internal" and "External" options. If you choose "Internal", the space will be used from the DATA and RECO disk groups. Oracle will internally size the DATA and RECO ASM disk groups to 40% and 60% respectively.

"Root password" - enter the root password and confirm it

Disk Group Redundancy - select HIGH or NORMAL

Accelerator Disk Group - select REDO or FLASH

Click Next





  • On the Generic Network screen:
Enter the "Domain name" to be used.
Check the box if no DNS server is available.
If you are using DNS and NTP servers, enter their IPs.

Click Next



  • On the Public Network screen:
Enter the Node0 and Node1 IPs, VIPs, SCAN IPs, Netmask and Gateway IP.
Select bond0 for the interface; this is the only choice.
Enter the ILOM IPs for ILOM0 and ILOM1, plus the Netmask and Gateway IP.

Click Next





  • On the Other Network screen:
Enter the IPs for Node0 and Node1 for net1, or click Next to continue.

Click Next





  • On the Database Information screen:
If you don't want to create an initial database, uncheck the box and click Next.
If you wish to create an initial database, check the box and fill in the required information.
"Database Name" - name for the database to be created
"Is Container Database" - choose true if you want to create a container database, or false to create a regular (non-container) database
"Database Type" - choose from OLTP, DSS or In-Memory
"Database Class" - choose from the 10 different database templates that come with ODA
"Database Deployment" - choose from RAC, RAC One Node or Enterprise Edition (standalone)
"Database Files on Flash Storage" - if you wish to store the database on flash disks, select true

Click Next





  • On the second Database Information screen, select:
the Database Language, Block Size, Character Set, Territory and National Character Set.

Click Next





  • On the ASR Configuration screen, select:
"Configure Internal Oracle ASR" - if you wish to configure ASR on the ODA server; enter the proxy server name, port number and MOS credentials

"Configure External Oracle ASR" - if you wish to configure ASR on a separate server; enter the ASR Manager hostname or IP and the port number

Select SNMP v2 for configuring ASR

Note: If you don't choose to configure ASR now, you can configure it after ODA deployment using oakcli commands

Click Next





  • On the CloudFS Information screen:
You can choose to configure an ACFS file system called /cloudfs.

Click Next





  • On the Network Validation screen:
Click the "Run Validation" button to perform the network validation. It pings every IP that has been entered and attempts to resolve the names using DNS or the /etc/hosts file.
If you are running the ODA Configurator on a server that is not an ODA or is not part of the same subnet, select "Skip Network Validation".

Click Next





  • On the Summary page:
Click the Save button to store the configuration in a text file.






  • Click Finish to exit the ODA Configurator application.


Conclusion:
In this article we have learned how to run the standalone Oracle Database Appliance Configurator in offline mode. The ODA Configurator is a Java-based tool that enables you to generate your ODA deployment plan and validate your network settings before the actual ODA deployment.


