Introduction

When you work on an ODA, you sometimes struggle with free space on the local filesystems. An ODA has terabytes of space on its data disks, but the local disks are limited to a RAID-1 array of two 480GB disks, and only a few tens of GB are dedicated to the / and /u01 filesystems. You do not need hundreds of GB there, but you probably prefer to keep at least 20-30% free. And if you plan to patch your ODA, you will surely need more space to get through all the steps without reaching a dangerous fill level. Here is how to reclaim free space on these filesystems.

Use the additional purgeLogs script

The purgeLogs script is provided by Oracle as an additional tool. It should have been bundled with oakcli/odacli, but it’s not: download it from MOS note 2081655.1. As this tool is not part of the official ODA tooling, please test it before using it on a production environment. It’s quite easy to use: put the zip in a folder, unzip it, and run the script as root. A single parameter is enough to clean up the logfiles of all the Oracle products older than a given number of days:


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 29G 23G 56% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 43G 45G 50% /u01

cd /tmp/
unzip purgeLogs.zip
du -hs /opt/oracle/oak/log/*
11G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
232K /opt/oracle/oak/log/test
./purgeLogs -days 1

--------------------------------------------------------
purgeLogs version: 1.43
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc.
--------------------------------------------------------

2018-12-20 09:20:06: I adrci GI purge started
2018-12-20 09:20:06: I adrci GI purging diagnostic destination diag/asm/+asm/+ASM1
2018-12-20 09:20:06: I ... purging ALERT older than 1 days

2018-12-20 09:20:47: S Purging completed succesfully!
du -hs /opt/oracle/oak/log/*
2.2G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
28K /opt/oracle/oak/log/test


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 18G 34G 35% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 41G 47G 48% /u01

In this example, you just freed up about 13GB. If your ODA is composed of two nodes, don’t forget to run the same script on the other node.
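
For convenience, the second node can be handled over ssh. A minimal sketch, assuming root access between the nodes and a hypothetical second-node name oda1-node1:

# copy the tool to the second node and run the same purge there
scp /tmp/purgeLogs.zip root@oda1-node1:/tmp/
ssh root@oda1-node1 'cd /tmp && unzip -o purgeLogs.zip && ./purgeLogs -days 1'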

Truncate hardware log traces

Hardware-related traces quietly fill up the filesystem if your ODA has been running for a long time. These traces are located under /opt/oracle/oak/log/`hostname`/adapters. I don’t know if each model has this kind of behaviour, but here is an example on an old X4-2 that has been running for 3 years now.

cd /opt/oracle/oak/log/aprhodap02db0/adapters
ls -lrth
total 2.2G
-rw-r--r-- 1 root root 50M Dec 20 09:26 ServerAdapter.log
-rw-r--r-- 1 root root 102M Dec 20 09:27 ProcessorAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:28 MemoryAdapter.log
-rw-r--r-- 1 root root 110M Dec 20 09:28 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 318M Dec 20 09:30 NetworkAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:30 CoolingAdapter.log
head -n 3 CoolingAdapter.log
[Mon Apr 27 18:02:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
In CoolingAdapter.scr
[Mon Apr 27 18:07:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
head -n 3 MemoryAdapter.log
[Mon Apr 27 18:02:26 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery
In MemoryAdapter.scr
[Mon Apr 27 18:07:25 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery

Let’s purge the oldest lines in these files, keeping only the last 200 of each. Note that writing back through cat, instead of moving the temp file over the log, preserves the inode, so a process that still has the file open keeps writing to it:

for a in *.log ; do tail -n 200 "$a" > tmpfile ; cat tmpfile > "$a" ; rm -f tmpfile ; done
ls -lrth
total 176K
-rw-r--r-- 1 root root 27K Dec 20 09:32 CoolingAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ProcessorAdapter.log
-rw-r--r-- 1 root root 30K Dec 20 09:32 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 29K Dec 20 09:32 NetworkAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 MemoryAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ServerAdapter.log

That’s 2GB of traces you’ll never use! Don’t forget the second node on an HA ODA.
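
If you’d rather not redo this by hand, a small monthly cron script can keep these files trimmed. A minimal sketch, assuming a hypothetical /etc/cron.monthly/trim-adapter-logs script run by root:

#!/bin/bash
# hypothetical /etc/cron.monthly/trim-adapter-logs
# keep only the last 200 lines of each hardware adapter log;
# writing back through cat preserves the inode (see above)
cd /opt/oracle/oak/log/$(hostname)/adapters || exit 1
for a in *.log ; do
  tail -n 200 "$a" > tmpfile && cat tmpfile > "$a"
  rm -f tmpfile
done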

Purge old patches in the repository: simply because they are useless

If you have successfully patched your ODA at least twice, you can remove the oldest patches from the ODA repository. As you may know, patches are quite big because they include a lot of things. So it’s good practice to remove the old patches once your ODA is successfully patched. To check whether old patches are still on your ODA, you can dig into the folder /opt/oracle/oak/pkgrepos/orapkgs/.
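
A couple of read-only commands help to check what is installed and how much space each version in the repository takes. A sketch, the du target simply following the folder layout above:

# currently installed component versions, to know what is still needed
oakcli show version -detail
# size of each component/version kept in the repository
du -sh /opt/oracle/oak/pkgrepos/orapkgs/*/*

The purge itself is easy: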

df -h / >> /tmp/dbi.txt
oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base
df -h / >> /tmp/dbi.txt
cat /tmp/dbi.txt
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 28G 24G 54% /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 21G 31G 41% /

Increase /u01 filesystem with remaining space

This only concerns bare-metal ODAs. You may have noticed that not all of the disk space is allocated to your ODA’s local filesystems. On modern ODAs, the system sits on two 480GB M.2 SSDs in a RAID-1 configuration, and only half of the space is allocated. As the appliance uses Logical Volume Manager (LVM), you can very easily extend the size of your /u01 filesystem.
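
If you just want the numbers, the terse LVM commands summarize the situation before you dig into the detailed output. A sketch:

# one-line summaries: free space in the VG and current LV sizes
vgs VolGroupSys
lvs VolGroupSys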

This is an example on an X7-2M:


vgdisplay
--- Volume group ---
VG Name VolGroupSys
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 446.00 GiB
PE Size 32.00 MiB
Total PE 14272
Alloc PE / Size 7488 / 234.00 GiB
Free PE / Size 6784 / 212.00 GiB
VG UUID wQk7E2-7M6l-HpyM-c503-WEtn-BVez-zdv9kM


lvdisplay
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRoot
LV Name LogVolRoot
VG Name VolGroupSys
LV UUID icIuHv-x9tt-v2fN-b8qK-Cfch-YfDA-xR7y3W
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:00 +0100
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 960
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:0
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolU01
LV Name LogVolU01
VG Name VolGroupSys
LV UUID ggYNkK-GfJ4-ShHm-d5eG-6cmu-VCdQ-hoYzL4
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:07 +0100
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 3200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:2
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolOpt
LV Name LogVolOpt
VG Name VolGroupSys
LV UUID m8GvKZ-zgFF-2gXa-NSCG-Oy9l-vTYd-ALi6R1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:30 +0100
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:3
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolSwap
LV Name LogVolSwap
VG Name VolGroupSys
LV UUID 9KWiYw-Wwot-xCmQ-uzCW-mILq-rsPz-t2X2pr
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:44 +0100
LV Status available
# open 2
LV Size 24.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:1
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolDATA
LV Name LogVolDATA
VG Name VolGroupSys
LV UUID oTUQsd-wpYe-0tiA-WBFk-719z-9Cgd-ZjTmei
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:4
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRECO
LV Name LogVolRECO
VG Name VolGroupSys
LV UUID mJ3yEO-g0mw-f6IH-6r01-r7Ic-t1Kt-1rf36j
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:5

212GB are still available. Let’s take 100GB of them to extend /u01:


lvextend -L +100G /dev/mapper/VolGroupSys-LogVolU01
Size of logical volume VolGroupSys/LogVolU01 changed from 100.00 GiB (3200 extents) to 200.00 GiB.
Logical volume LogVolU01 successfully resized.

The filesystem then needs to be resized:

resize2fs /dev/mapper/VolGroupSys-LogVolU01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 13
Performing an on-line resize of /dev/mapper/VolGroupSys-LogVolU01 to 52428800 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 52428800 blocks long.
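
Note that recent lvm2 releases can combine both steps: the -r (--resizefs) option of lvextend calls the filesystem resize for you. A sketch of the equivalent one-liner:

# extend the LV and resize its filesystem in one step
lvextend -r -L +100G /dev/mapper/VolGroupSys-LogVolU01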

Now /u01 is bigger:

df -h /dev/mapper/VolGroupSys-LogVolU01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
197G 77G 111G 41% /u01

Conclusion

Don’t hesitate to clean up your ODA before having to deal with space pressure.