In my previous blog post, I talked about SQL Server on Linux and high availability. During my tests, I used an NFS server to share disk resources between my cluster nodes, as described in the Microsoft documentation. A couple of days ago, I decided to add a fourth node (LINUX04) to my cluster infrastructure, expecting it to be an easy task. But no such luck: I faced a problem I had never hit before on this infrastructure.
Switching over to this last node left the SQL Server FCI resource in a failed state. After digging into the problem, I found the root cause in the SQL Server error log, as shown below:
$ sudo cat /var/opt/mssql/log/errorlog
2017-02-12 18:55:15.89 Server      Microsoft SQL Server vNext (CTP1.2) - 14.0.200.24 (X64)
	Jan 10 2017 19:15:28
	Copyright (C) 2016 Microsoft Corporation. All rights reserved.
	on Linux (CentOS Linux 7 (Core))
2017-02-12 18:55:15.89 Server      UTC adjustment: 0:00
2017-02-12 18:55:15.89 Server      (c) Microsoft Corporation.
2017-02-12 18:55:15.89 Server      All rights reserved.
2017-02-12 18:55:15.89 Server      Server process ID is 4116.
2017-02-12 18:55:15.89 Server      Logging SQL Server messages in file 'C:\var\opt\mssql\log\errorlog'.
2017-02-12 18:55:15.89 Server      Registry startup parameters:
	 -d C:\var\opt\mssql\data\master.mdf
	 -l C:\var\opt\mssql\data\mastlog.ldf
	 -e C:\var\opt\mssql\log\errorlog
2017-02-12 18:55:15.91 Server      Error: 17113, Severity: 16, State: 1.
2017-02-12 18:55:15.91 Server      Error 2(The system cannot find the file specified.) occurred while opening file 'C:\var\opt\mssql\data\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
2017-02-12 18:55:15.91 Server      SQL Server shutdown has been initiated
Well, the error speaks for itself: it looks like a file access permission issue in my case. My first reflex was to check the permissions on the corresponding NFS folder.
$ sudo ls -lu /var/opt/mssql/data
total 53320
drwxr-----. 2 995 993     4096 Feb 14 23:12 lost+found
-rwxr-----. 1 995 993  4194304 Feb 14 23:19 master.mdf
-rwxr-----. 1 995 993  2097152 Feb 14 23:19 mastlog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 modellog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 model.mdf
-rwxr-----. 1 995 993 13959168 Feb 14 23:19 msdbdata.mdf
-rwxr-----. 1 995 993   786432 Feb 14 23:19 msdblog.ldf
drwxr-----. 2 995 993     4096 Feb 14 23:08 sqllinuxfci
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 tempdb.mdf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 templog.ldf
According to the output above, this looks like a mismatch between the uids/gids of the mssql user across the cluster nodes. At this stage, I remembered performing some tests that included creating some Linux users before adding my fourth node, which shifted the next available ids and created the mismatch for the mssql user. Just keep in mind that the SQL Server installation creates an mssql user by default with the next available uid/gid, so on LINUX04 the installer picked different values than on the existing nodes.
Let’s compare the mssql user’s uid/gid with those of the other existing nodes LINUX01, LINUX02 and LINUX03:
$ id mssql
uid=997(mssql) gid=995(mssql) groups=995(mssql)
$ ssh linux01 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
$ ssh linux02 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
$ ssh linux03 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
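This comparison can also be scripted. Below is a minimal sketch of such a drift check; the `id` outputs are hardcoded to mirror the values captured above, whereas on a real cluster they would come from something like `ssh <node> id mssql`:

```shell
# Sketch: detect uid/gid drift for the mssql account between two nodes.
# Sample outputs hardcoded for illustration (normally collected over ssh).
linux04="uid=997(mssql) gid=995(mssql) groups=995(mssql)"
linux01="uid=995(mssql) gid=993(mssql) groups=993(mssql)"

# Extract the numeric uid and gid from an `id`-style line.
uid_of() { echo "$1" | sed -n 's/^uid=\([0-9]*\).*/\1/p'; }
gid_of() { echo "$1" | sed -n 's/.* gid=\([0-9]*\).*/\1/p'; }

if [ "$(uid_of "$linux04")" != "$(uid_of "$linux01")" ] ||
   [ "$(gid_of "$linux04")" != "$(gid_of "$linux01")" ]; then
  echo "uid/gid mismatch detected"
fi
```

Run periodically across all nodes, a check like this would have caught the drift before the failover failed.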
Ok, this explains why I faced this permission issue. After investing some time figuring out how to get rid of it without changing the mssql user’s uid/gid, I read some discussions about NFSv4, which is supposed to address this uid/gid mapping issue. It seemed perfect for my case! But first, let’s confirm which NFS version I am actually using:
$ mount -v | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.5.14:/mnt/sql_data_nfs on /var/opt/mssql/data type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.5.14,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.5.14)
Well, the vers=3 option confirms that my current configuration is not leveraging NFSv4 yet, so some configuration changes are required to get there.
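As a side note, the protocol version can also be extracted programmatically from the mount options. A small sketch (the mount line is hardcoded here for illustration; on a live system it could come from `mount -v` or /proc/mounts):

```shell
# Sketch: extract the NFS protocol version from a mount(8) output line.
# Sample line hardcoded; normally taken from `mount -v | grep ' type nfs'`.
line="192.168.5.14:/mnt/sql_data_nfs on /var/opt/mssql/data type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536)"
vers=$(echo "$line" | sed -n 's/.*[(,]vers=\([0-9.]*\).*/\1/p')
echo "NFS protocol version: $vers"
case "$vers" in
  4*) echo "NFSv4 in use" ;;
  *)  echo "not NFSv4: idmapd name translation will not apply" ;;
esac
```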
First, let’s change the fstype parameter of my FS resource to nfs4 so that the NFS share gets mounted with NFSv4.
$ sudo pcs resource show FS
 Resource: FS (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=192.168.5.14:/mnt/sql_data_nfs directory=/var/opt/mssql/data fstype=nfs
  Operations: start interval=0s timeout=60 (FS-start-interval-0s)
              stop interval=0s timeout=60 (FS-stop-interval-0s)
              monitor interval=20 timeout=40 (FS-monitor-interval-20)
$ sudo pcs resource update FS fstype=nfs4
$ sudo pcs resource restart FS
FS successfully restarted
Then I had to update the idmap configuration on both sides (NFS server and client) to make the mapping work correctly. The main steps were as follows:
- Enabling idmap with NFSv4 (disabled by default in my case)
- Changing some parameters in /etc/idmapd.conf
- Verifying idmap is running correctly.
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
# grep ^[^#\;] /etc/idmapd.conf
[General]
Domain = dbi-services.test
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = static
[Static]
mssql@dbi-services.test = mssql
testp@dbi-services.test = testp
# systemctl status nfs-idmap
● nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2017-02-15 20:29:57 CET; 1h 39min ago
  Process: 3362 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
 Main PID: 3364 (rpc.idmapd)
   CGroup: /system.slice/nfs-idmapd.service
           └─3364 /usr/sbin/rpc.idmapd
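One classic pitfall worth guarding against here: if the Domain value differs between client and server, every owner maps to nobody. A small sketch of such a sanity check (the configuration file is written to a hypothetical local path for illustration; on a real system you would read /etc/idmapd.conf on each side):

```shell
# Sketch: read the idmapd Domain setting and compare it against the expected
# cluster-wide value. A client/server Domain mismatch is a common reason why
# every file shows up as nobody:nobody under NFSv4.
conf=./idmapd.conf.sample                                    # hypothetical path
printf '[General]\nDomain = dbi-services.test\n' > "$conf"   # sample content

domain=$(sed -n 's/^Domain[[:space:]]*=[[:space:]]*//p' "$conf")
expected="dbi-services.test"
if [ "$domain" = "$expected" ]; then
  echo "idmap domain OK: $domain"
else
  echo "idmap domain mismatch: got '$domain', expected '$expected'"
fi
rm -f "$conf"
```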
At this point, listing file ownership shows nobody/nobody, meaning the translation is not performed yet.
# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 nobody nobody 4096 Feb 15 19:00 data
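Detecting this squashed ownership can also be scripted by parsing the owner and group columns of the ls output; a minimal sketch, with the sample line hardcoded:

```shell
# Sketch: flag entries whose owner/group were squashed to nobody by idmapd.
# Sample line hardcoded; normally taken from `ls -l /var/opt/mssql`.
entry="drwxr-----. 2 nobody nobody 4096 Feb 15 19:00 data"
owner=$(echo "$entry" | awk '{print $3}')
group=$(echo "$entry" | awk '{print $4}')
if [ "$owner" = "nobody" ] && [ "$group" = "nobody" ]; then
  echo "idmap translation not effective for: ${entry##* }"
fi
```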
Indeed, I had forgotten to create a corresponding mssql user on the NFS server side. Let’s do it:
# groupadd -g 993 mssql
# useradd -u 995 -g 993 -M mssql
After remounting the NFS share, I finally got the expected output as shown below:
# mount -o remount -t nfs4 192.168.5.14:/mnt/sql_data_nfs/sqllinuxfci /mnt/testp/
# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 mssql mssql 4096 Feb 15 19:00 data
# ls -lu /var/opt/mssql/data/*
-rwxr-----. 1 mssql mssql  4194304 Feb 15 19:53 /var/opt/mssql/data/master.mdf
-rwxr-----. 1 mssql mssql  2097152 Feb 15 19:53 /var/opt/mssql/data/mastlog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/modellog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/model.mdf
-rwxr-----. 1 mssql mssql 13959168 Feb 15 19:53 /var/opt/mssql/data/msdbdata.mdf
-rwxr-----. 1 mssql mssql   786432 Feb 15 19:53 /var/opt/mssql/data/msdblog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/tempdb.mdf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/templog.ldf
This time the translation is effective, but let’s perform another test by running the previous command as the mssql user:
# runuser -l mssql -c 'ls -lu /var/opt/mssql/data/*'
ls: cannot access /var/opt/mssql/data/*: Permission denied
So the names translate correctly, and yet access as the mssql user is still denied despite the mapping configuration. I spent some time understanding this, and it turns out some misconceptions persist about NFSv4 magically fixing uid/gid mismatches. I admit the main documentation is not clear about it, but please feel free to comment if I missed something. After digging into further pointers, I understood that NFS itself does not perform authentication: it delegates it down to the RPC layer. And if we look at RPC security, the default flavor has never been updated to support such name-based matching. It still uses the historic AUTH_SYS flavor, which sends raw numeric uids/gids over the network; the idmap translation only applies to the names displayed by tools such as ls, while the server’s permission checks are still done against the numeric ids sent by the client. The only way to get rid of this issue is to switch to another security flavor such as RPCSEC_GSS, which provides real user authentication, typically through Kerberos.
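For illustration only, a Kerberos-secured NFS export on the server might look like the fragment below. These are hypothetical options: they assume a working KDC, host keytabs on every node, and a matching sec= option on the client mount, all of which are out of scope for this post.

```
# /etc/exports (server side): require RPCSEC_GSS with Kerberos authentication
/mnt/sql_data_nfs  *(rw,sync,sec=krb5)

# client side: mount with the matching security flavor
# mount -t nfs4 -o sec=krb5 192.168.5.14:/mnt/sql_data_nfs /var/opt/mssql/data
```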
The bottom line here is that SQL Server on Linux is, of course, no exception. If we want to keep using basic Unix authentication (AUTH_SYS), keeping uids and gids synchronized across the cluster nodes seems to be the way to go. Using Kerberos authentication instead? That is another story, which I will try to tell in a future blog post!
Happy clustering on Linux!