Today, after we did a fresh setup of a Grid Infrastructure cluster (12.1.0.2.170814), we faced two issues reported in the alert.log of the ASM instances (in fact, you would see the same in the alert log of any instance in that configuration, but we did not have any other instance up and running at that time):

This:

ORA-00700: soft internal error, arguments: [dbgrfrbf_1], [/disk00/app/grid/diag/asm/+asm/+ASM2/metadata/INC_METER_SUMMARY.ams], [0], [4], [], [], [], [], [], [], [], []
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

… and this:

ERROR: create the ADR schema in the specified ADR Base directory [/disk00/app/grid]
ERROR: The ORA-48178 error is caused by the ORA-48101 error. 
ORA-48101: error encountered when attempting to read a file [block] [/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ADR_INTERNAL.mif] [0]
ORA-27072: File I/O error
Linux-x86_64 Error: 22: Invalid argument
Additional information: 4
Additional information: 1
Additional information: 4294967295

As it turned out, this was an issue with the xfs block size. In the configuration we had, the block size was set to 4096 (this was chosen by default when the file system was created):

$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] xfs_info /disk00
meta-data=/dev/mapper/vg_root-lv_disk00 isize=512    agcount=4, agsize=13107200 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=25600, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ [+ASM1] 
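
The ORA-27072 with Linux error 22 (EINVAL) looks like a direct I/O request that is smaller than the sector size of the device. Assuming the ADR code really does read these files in 512-byte chunks with O_DIRECT (an assumption based on the error details, not something documented here), the OS-level behaviour can be reproduced with dd against one of the affected files:

# 512-byte direct read against one of the ADR files on the 4096-byte file system;
# with a 4k sector size this is expected to fail with "Invalid argument" (EINVAL),
# matching the Linux-x86_64 Error: 22 shown above
dd if=/disk00/app/grid/diag/asm/+asm/+ASM1/metadata/ADR_INTERNAL.mif of=/dev/null bs=512 count=1 iflag=direct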

After changing that to 512, all was fine again:

$:/home/ [+ASM1] xfs_info /disk00/
meta-data=/dev/mapper/disk00     isize=256    agcount=4, agsize=104857600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=512    blocks=419430400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=512    blocks=204800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$:/home/ [+ASM1] 
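
For reference, a file system with the geometry shown above can be created with a mkfs.xfs call along these lines (a sketch, not necessarily the exact command that was used; the device name is taken from the output above). The -m crc=0 matches the crc=0 shown by xfs_info; as far as I know, metadata CRCs cannot be combined with a 512-byte block size:

# sector size, block size and CRC setting matching the xfs_info output above
mkfs.xfs -s size=512 -m crc=0 -b size=512 /dev/mapper/disk00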

Indeed, this was not so easy, because the Cisco UCS systems we used came with internal disks that have a 4k sector size, so we could not even create a new xfs file system with the required settings on them:

[root@xxxx ~]# mkfs.xfs -s size=512 -m crc=0 -b size=512 /dev/mapper/vg_root-lv_disk00_test
illegal sector size 512; hwsector is 4096
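
The "hwsector is 4096" part can be verified by asking the block layer for the logical and physical sector sizes of the device (device names below are just examples):

# logical (--getss) and physical (--getpbsz) sector size of the device
blockdev --getss --getpbsz /dev/mapper/vg_root-lv_disk00_test

# or, for an overview of all block devices
lsblk -o NAME,LOG-SEC,PHY-SEC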

The solution was to get a LUN from the storage and use that instead. Another important note from the above linked data sheet:

NOTE: 4K format drives are supported and qualified as bootable with Cisco UCS Manager Release 3.1(2b) and later versions. However, 4K sector format drives do not support VMware and require UEFI boot.

Be careful …