Recently I had to add some storage to an ODA X8-2M that I had deployed in early February. At that time, the latest available release was ODA 18.7. In this post I would like to share my experience and the challenge I faced.

ODA X8-2M storage extension

As per the Oracle datasheet, the appliance initially comes with 2 NVMe SSDs installed, providing a usable capacity of 5.8 TB. The storage can be extended in increments of 2 disks, up to 12 NVMe SSDs in total, which brings the usable ASM capacity up to 29.7 TB.
In my configuration we already had 4 NVMe SSDs installed and we wanted to add 2 more.
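
As a quick sanity check on these datasheet figures (assuming the 6.4 TB of raw capacity per NVMe drive given in the same datasheet, and double mirroring through ASM NORMAL redundancy):

12 disks x 6.4 TB = 76.8 TB of raw capacity
76.8 TB / 2 (double mirroring) = 38.4 TB of mirrored capacity

The quoted usable figure of 29.7 TB is lower than that because ASM also keeps free space in reserve to be able to re-mirror the contents of a failed disk (more on that below).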

Challenge

While adding the disks, I was surprised to see that with release 18.7 the usual expand storage command was not supported.

[root@ODA01 ~]# odaadmcli expand storage -ndisk 2
Command 'odaadmcli expand storage' is not supported

What the hell is going on here? This was always possible on previous ODA generations and releases!
Looking more closely at the documentation, I found the following note:
Note: In this release, you can add storage as per your requirement, or deploy the full storage capacity for Oracle Database Appliance X8-2HA and X8-2M hardware models at the time of initial deployment of the appliance. You can only utilize whatever storage you configured during the initial deployment of the appliance (before the initial system power ON and software provisioning and configuration). You cannot add additional storage after the initial deployment of the X8-2HA and X8-2M hardware models, in this release of Oracle Database Appliance, even if the expandable storage slots are present as empty.

Hmmm, release 18.5 still allowed it. Fortunately, release 18.8 had just come out at that time, and post-installation storage expansion is possible again with that release.
I therefore first had to patch my ODA to release 18.8. A good blog post about ODA 18.8 patching from one of my colleagues can be found here: Patching ODA from 18.3 to 18.8. Coming from 18.3, 18.5, or 18.7, the process is the same.
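
If you are unsure which release your appliance is currently running, you can check the installed and latest available component versions before patching:

[root@ODA01 ~]# odacli describe-component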

Adding disks on the ODA

Checking ASM usage

Let’s first check the current ASM usage:

grid@ODA01:/home/grid/ [+ASM1] asmcmd
 
ASMCMD> lsdg
State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 512 4096 4194304 12211200 7550792 3052544 2248618 0 Y DATA/
MOUNTED NORMAL N 512 512 4096 4194304 12209152 2848956 3052032 -102044 0 N RECO/
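
As a side note, for NORMAL redundancy disk groups Usable_file_MB is computed as (Free_MB - Req_mir_free_MB) / 2, which we can verify against the output above:

DATA: (7550792 - 3052544) / 2 = 2249124 MB, close to the 2248618 MB reported
RECO: (2848956 - 3052032) / 2 = -101538 MB, close to the -102044 MB reported

The negative value for RECO means that, at this point, a disk failure could not be fully re-mirrored without freeing up space, one more reason to add disks.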

Check the state of the disks

Before adding a new disk, all current disks need to be healthy.

[root@ODA01 ~]# odaadmcli show disk
NAME PATH TYPE STATE STATE_DETAILS
 
pd_00 /dev/nvme0n1 NVD ONLINE Good
pd_01 /dev/nvme1n1 NVD ONLINE Good
pd_02 /dev/nvme3n1 NVD ONLINE Good
pd_03 /dev/nvme2n1 NVD ONLINE Good

We are using 2 ASM disk groups:
[root@ODA01 ~]# odaadmcli show diskgroup
DiskGroups
----------
DATA
RECO
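
The same information can be cross-checked directly against the ASM instance, for example:

SQL> select name, state, type from v$asm_diskgroup;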

Run orachk

It is recommended to run orachk to make sure the ODA is healthy before adding new disks:

[root@ODA01 ~]# cd /opt/oracle.SupportTools/orachk/oracle.ahf/orachk
[root@ODA01 orachk]# ./orachk -nordbms

Physical disk installation

My configuration already has 4 disks, so the 2 additional disks will be installed in slots 4 and 5. After a disk is plugged in, we need to power it on:

[root@ODA01 orachk]# odaadmcli power disk on pd_04
Disk 'pd_04' already powered on

It is recommended to wait at least one minute before plugging in the next disk. The disk LED should also light up green. Similarly, we can power on the next disk once it is plugged into slot 5 of the server:

[root@ODA01 orachk]# odaadmcli power disk on pd_05
Disk 'pd_05' already powered on
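
A single disk can also be queried by name to verify it is visible to the appliance (at this stage the new disks are expected to still show up as uninitialized):

[root@ODA01 orachk]# odaadmcli show disk pd_04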

Expand the storage

The following command expands the storage with the 2 new disks:
[root@ODA01 orachk]# odaadmcli expand storage -ndisk 2
Precheck passed.
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish ...
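
The expansion takes a while. As the output suggests, the progress can be followed with 'odaadmcli show disk', for instance refreshed every minute:

[root@ODA01 ~]# watch -n 60 'odaadmcli show disk'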

Check expansion

At the beginning of the expansion, we can see that the 2 new disks have been detected and are being initialized:
[root@ODA01 ~]# odaadmcli show disk
NAME PATH TYPE STATE STATE_DETAILS
 
pd_00 /dev/nvme0n1 NVD ONLINE Good
pd_01 /dev/nvme1n1 NVD ONLINE Good
pd_02 /dev/nvme3n1 NVD ONLINE Good
pd_03 /dev/nvme2n1 NVD ONLINE Good
pd_04 /dev/nvme4n1 NVD UNINITIALIZED NewDiskInserted
pd_05 /dev/nvme5n1 NVD UNINITIALIZED NewDiskInserted

Once the expansion is finished, we can check that all our disks, including the new ones, are OK:
[root@ODA01 ~]# odaadmcli show disk
NAME PATH TYPE STATE STATE_DETAILS
 
pd_00 /dev/nvme0n1 NVD ONLINE Good
pd_01 /dev/nvme1n1 NVD ONLINE Good
pd_02 /dev/nvme3n1 NVD ONLINE Good
pd_03 /dev/nvme2n1 NVD ONLINE Good
pd_04 /dev/nvme4n1 NVD ONLINE Good
pd_05 /dev/nvme5n1 NVD ONLINE Good

We can also query the ASM instance and see that the 2 new disks in slots 4 and 5 are online:
SQL> col PATH format a50
SQL> set line 300
SQL> set pagesize 500
SQL> select mount_status, header_status, mode_status, state, name, path, label from v$ASM_DISK order by name;
 
MOUNT_S HEADER_STATU MODE_ST STATE NAME PATH LABEL
------- ------------ ------- -------- ------------------------------ -------------------------------------------------- -------------------------------
CACHED MEMBER ONLINE NORMAL NVD_S00_PHLN9440011FP1 AFD:NVD_S00_PHLN9440011FP1 NVD_S00_PHLN9440011FP1
CACHED MEMBER ONLINE NORMAL NVD_S00_PHLN9440011FP2 AFD:NVD_S00_PHLN9440011FP2 NVD_S00_PHLN9440011FP2
CACHED MEMBER ONLINE NORMAL NVD_S01_PHLN94410040P1 AFD:NVD_S01_PHLN94410040P1 NVD_S01_PHLN94410040P1
CACHED MEMBER ONLINE NORMAL NVD_S01_PHLN94410040P2 AFD:NVD_S01_PHLN94410040P2 NVD_S01_PHLN94410040P2
CACHED MEMBER ONLINE NORMAL NVD_S02_PHLN9490009MP1 AFD:NVD_S02_PHLN9490009MP1 NVD_S02_PHLN9490009MP1
CACHED MEMBER ONLINE NORMAL NVD_S02_PHLN9490009MP2 AFD:NVD_S02_PHLN9490009MP2 NVD_S02_PHLN9490009MP2
CACHED MEMBER ONLINE NORMAL NVD_S03_PHLN944000SQP1 AFD:NVD_S03_PHLN944000SQP1 NVD_S03_PHLN944000SQP1
CACHED MEMBER ONLINE NORMAL NVD_S03_PHLN944000SQP2 AFD:NVD_S03_PHLN944000SQP2 NVD_S03_PHLN944000SQP2
CACHED MEMBER ONLINE NORMAL NVD_S04_PHLN947101TZP1 AFD:NVD_S04_PHLN947101TZP1 NVD_S04_PHLN947101TZP1
CACHED MEMBER ONLINE NORMAL NVD_S04_PHLN947101TZP2 AFD:NVD_S04_PHLN947101TZP2 NVD_S04_PHLN947101TZP2
CACHED MEMBER ONLINE NORMAL NVD_S05_PHLN947100BXP1 AFD:NVD_S05_PHLN947100BXP1 NVD_S05_PHLN947100BXP1
CACHED MEMBER ONLINE NORMAL NVD_S05_PHLN947100BXP2 AFD:NVD_S05_PHLN947100BXP2 NVD_S05_PHLN947100BXP2

CACHED MEMBER ONLINE DROPPING SSD_QRMDSK_P1 AFD:SSD_QRMDSK_P1 SSD_QRMDSK_P1
CACHED MEMBER ONLINE DROPPING SSD_QRMDSK_P2 AFD:SSD_QRMDSK_P2 SSD_QRMDSK_P2
 
14 rows selected.

The operating system recognizes the new disks as well:
grid@ODA01:/home/grid/ [+ASM1] cd /dev
 
grid@ODA01:/dev/ [+ASM1] ls -l nvme*
crw-rw---- 1 root root 246, 0 May 14 10:31 nvme0
brw-rw---- 1 grid asmadmin 259, 0 May 14 10:31 nvme0n1
brw-rw---- 1 grid asmadmin 259, 1 May 14 10:31 nvme0n1p1
brw-rw---- 1 grid asmadmin 259, 2 May 14 10:31 nvme0n1p2
crw-rw---- 1 root root 246, 1 May 14 10:31 nvme1
brw-rw---- 1 grid asmadmin 259, 5 May 14 10:31 nvme1n1
brw-rw---- 1 grid asmadmin 259, 10 May 14 10:31 nvme1n1p1
brw-rw---- 1 grid asmadmin 259, 11 May 14 14:38 nvme1n1p2
crw-rw---- 1 root root 246, 2 May 14 10:31 nvme2
brw-rw---- 1 grid asmadmin 259, 4 May 14 10:31 nvme2n1
brw-rw---- 1 grid asmadmin 259, 7 May 14 14:38 nvme2n1p1
brw-rw---- 1 grid asmadmin 259, 9 May 14 14:38 nvme2n1p2
crw-rw---- 1 root root 246, 3 May 14 10:31 nvme3
brw-rw---- 1 grid asmadmin 259, 3 May 14 10:31 nvme3n1
brw-rw---- 1 grid asmadmin 259, 6 May 14 10:31 nvme3n1p1
brw-rw---- 1 grid asmadmin 259, 8 May 14 10:31 nvme3n1p2
crw-rw---- 1 root root 246, 4 May 14 14:30 nvme4
brw-rw---- 1 grid asmadmin 259, 15 May 14 14:35 nvme4n1
brw-rw---- 1 grid asmadmin 259, 17 May 14 14:38 nvme4n1p1
brw-rw---- 1 grid asmadmin 259, 18 May 14 14:38 nvme4n1p2
crw-rw---- 1 root root 246, 5 May 14 14:31 nvme5
brw-rw---- 1 grid asmadmin 259, 16 May 14 14:35 nvme5n1
brw-rw---- 1 grid asmadmin 259, 19 May 14 14:38 nvme5n1p1
brw-rw---- 1 grid asmadmin 259, 20 May 14 14:38 nvme5n1p2
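
Each NVMe device carries two partitions: on the ODA the p1 partitions are typically given to the DATA disk group and the p2 partitions to RECO. This mapping can be double-checked from asmcmd, for example:

ASMCMD> lsdsk -G DATA
ASMCMD> lsdsk -G RECO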

Check ASM space

Querying the ASM disk groups, we can see that both disk groups received additional space according to the percentage assigned to DATA and RECO during the appliance creation. In my case the split was 50-50 between DATA and RECO; the quick calculation after the output below confirms this.

grid@ODA01:/dev/ [+ASM1] asmcmd
 
ASMCMD> lsdg
State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL Y 512 512 4096 4194304 18316288 13655792 3052544 5301118 0 Y DATA/
MOUNTED NORMAL Y 512 512 4096 4194304 18313216 8952932 3052032 2949944 0 N RECO/
ASMCMD>
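
Comparing the Total_MB values before and after the expansion confirms the 50-50 split: each disk group received half of the roughly 11.6 TiB of raw capacity brought by the 2 new disks. Note also that Rebal shows Y, meaning the rebalance operation was still running when this output was taken.

DATA: 18316288 MB - 12211200 MB = 6105088 MB (~5.8 TiB added)
RECO: 18313216 MB - 12209152 MB = 6104064 MB (~5.8 TiB added)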

Conclusion

Adding new disks to an ODA is quite easy and fast. Surprisingly, with ODA release 18.7 you cannot expand the ASM storage once the appliance has been deployed. This is a real regression, as you lose the ability to extend the storage. Fortunately, this has been solved in ODA release 18.8.