Let’s continue this blog post series about SQL Server and Docker. A couple of days ago, I was at a customer site that had already implemented SQL Server 2017 on Linux as Docker containers. It was definitely a very interesting day with a lot of customer experience and feedback, and we discussed many architecture scenarios.
The interesting point here is that I was able to compare with a previous customer who had been using Docker containers for a while in a completely different way. Indeed, my new customer implemented a Docker infrastructure exclusively based on SQL Server containers, whereas the older one had already containerized its applications, which were connected to an external, non-containerized SQL Server environment.
- Use case 1 – Containerized apps and virtualized SQL Server environments
- Use case 2 – SQL Server containers and virtualized applications
In this blog post I want to focus on the first use case from a network perspective.
Connecting to an outside SQL Server (from a Docker perspective) is probably an intermediate solution for many customers who already deal with mission-critical environments with very restrictive high-availability requirements and where very high performance is needed as well. Don’t get me wrong: I’m not saying Docker is not designed for mission-critical scenarios, but fear of the unknown, as with virtualization before, is still predominant, at least for this kind of scenario. I always keep in mind the recurring customer question: is Docker ready for production and for databases? Connecting to a non-containerized SQL Server environment may make sense here, at least to speed up container adoption. That’s my guess, but feel free to comment with your thoughts!
So, in this context we may use different Docker network topologies. I spent some time studying and discussing with customers the network topologies they implemented in their context. For simple Docker infrastructures (without orchestrators like Swarm or Kubernetes), Docker bridges seem to be predominant, either the default Docker0 bridge or user-defined bridges.
- Docker default bridge (Docker0)
For very limited Docker topologies, the default network settings will probably be sufficient with the Docker0 bridge. This is probably the case for my latest customer, with only 5 SQL Server containers on top of one Docker engine. By default, each container created without any network specification (and without any Docker engine setting customization) will have one network interface sitting on the docker0 bridge, with an IP from the 172.17.0.0/16 CIDR or whichever CIDR you have configured Docker to use. But have you ever wondered what exactly a bridge is in the Docker world?
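As a side note, you can reason about this address pool with a few lines of Python. A minimal sketch using the standard ipaddress module; the 172.17.0.0/16 pool is only the Docker default and may differ on your host if the daemon is configured otherwise:

```python
import ipaddress

# Default docker0 pool; adjust if your daemon configuration overrides it
pool = ipaddress.ip_network("172.17.0.0/16")

gateway = next(pool.hosts())              # first usable address, owned by the docker0 bridge
usable_hosts = pool.num_addresses - 2     # minus the network and broadcast addresses

print(gateway)                                       # 172.17.0.1
print(usable_hosts)                                  # 65534
print(ipaddress.ip_address("172.17.0.2") in pool)    # True
```

This is only a convenience for capacity planning: it shows why the first container you start typically ends up at 172.17.0.2, right after the bridge gateway.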
Let’s have a deeper look at it with a very simple example: one Docker engine that hosts two containers, each based on microsoft/mssql-tools, and one outside SQL Server instance that runs on top of a Hyper-V virtual machine. The picture below shows some network details that I will explain later in this blog post.
My 2 containers can communicate with each other because they sit on the same network bridge, and they are also able to communicate with my database server through the NAT mechanism: IP masquerading and IP forwarding are enabled on my Docker host.
$ sudo docker run -tid --name docker1 microsoft/mssql-tools
77b501fe29af322dd2d1da2824d339a60ba3080c1e61a2332b3cf563755dd3e3
$ sudo docker run -tid --name docker2 microsoft/mssql-tools
3f2ba669591a1889068240041332f02faf970e3adc85619adbf952d5c135d3f4
$ sudo docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED          STATUS          PORTS   NAMES
3f2ba669591a        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   7 seconds ago    Up 6 seconds            docker2
77b501fe29af        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   11 seconds ago   Up 10 seconds           docker1
Let’s take a look at the network configuration of each container. As a reminder, each network object represents a layer 2 broadcast domain with a layer 3 subnet as shown below. Each container is attached to a network through a specific endpoint.
$ sudo docker inspect docker1
[
    "Gateway": "172.17.0.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "172.17.0.2",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "MacAddress": "02:42:ac:11:00:02",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
            "EndpointID": "bd82669031ad87ddcb61eaa2dad823d89ca86cae92c4034d4925009aae634c14",
            "Gateway": "172.17.0.1",
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:11:00:02",
            "DriverOpts": null
        }
    }
]
$ sudo docker inspect docker2
[
    "Gateway": "172.17.0.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "172.17.0.3",
    "IPPrefixLen": 16,
    "IPv6Gateway": "",
    "MacAddress": "02:42:ac:11:00:03",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
            "EndpointID": "140cd8764506344958e9a9725d1c2513f67e56b2c4a1fc67f317c3e555764c1e",
            "Gateway": "172.17.0.1",
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:11:00:03",
            "DriverOpts": null
        }
    }
]
To summarize, two IP addresses have been assigned by the Docker internal IPAM module from the range defined by the Docker0 bridge: 172.17.0.2 for the docker1 container and 172.17.0.3 for the docker2 container. Each network interface is created with its own MAC address, and the gateway IP address (172.17.0.1) for both containers corresponds to the Docker0 bridge interface.
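You may also notice that the MAC addresses above mirror the IP addresses: by default the bridge driver builds a container's MAC from the fixed 02:42 prefix followed by the four IPv4 octets in hex. A small Python sketch to check this against the inspect output (this is a convention of the current bridge driver, not something to rely on programmatically):

```python
import ipaddress

def docker_mac(ip: str) -> str:
    """Rebuild the default bridge-driver MAC: 02:42 + the IPv4 octets in hex."""
    octets = ipaddress.ip_address(ip).packed
    return "02:42:" + ":".join(f"{b:02x}" for b in octets)

print(docker_mac("172.17.0.2"))  # 02:42:ac:11:00:02 (matches docker1 above)
print(docker_mac("172.17.0.3"))  # 02:42:ac:11:00:03 (matches docker2 above)
```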
$ sudo ip a show docker0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:2a:d0:7e:76 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fed0:7e76/64 scope link
       valid_lft forever preferred_lft forever
Let’s try to connect from both containers to my SQL Server database:
$ sudo docker exec -it docker1 …
$ sudo docker exec -it docker2 ...
Then on each container let’s run the following sqlcmd command:
sqlcmd -S 192.168.40.30,1450 -Usa -Ptoto
Finally, let’s switch to the SQL Server instance and get a picture of the existing connections (IP address 192.168.40.30 and port 1450).
SELECT
    c.client_net_address,
    c.client_tcp_port,
    c.local_net_address,
    c.protocol_type,
    c.auth_scheme,
    s.program_name,
    s.is_user_process
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON c.session_id = s.session_id
WHERE client_net_address <> '<local machine>'
We may notice that the client IP address is the same for both connections (192.168.40.50), indicating that we are using NAT to connect from each container.
Let’s go back to the Docker engine network configuration. After creating my 2 containers, we may notice the creation of 2 additional network interfaces.
$ ip a show | grep veth*
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
14: veth46a8316@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
What are they? At this point, we are entering the Linux network namespace world. You can find further technical details on the internet, but to keep the concept simple, I would say network namespaces allow running different, separate network instances (including routing tables) that operate independently of each other. In other words, they are a way to isolate different networks from each other while relying on the same physical network device. Assuming we are using Docker bridge networks, when we create a container, in the background Docker creates a dedicated network namespace that includes a virtual ethernet interface, and virtual ethernet interfaces come in interconnected pairs. In fact, a virtual ethernet interface acts as a tube that connects a Docker container namespace (in this context) to the outside world via the default / global namespace where the physical interface exists.
Before digging further into details about virtual interfaces, note that by default Docker doesn’t expose network namespace information because it uses its own libcontainer, and the microsoft/mssql-tools Docker image is based on a stripped-down Linux image that doesn’t include the network tools needed to easily show virtual interface information. So, a workaround is to expose a Docker container namespace on the host.
First we have to find out the process id of the container, and then link its corresponding proc namespace to the /var/run/netns host directory, as shown below:
$ sudo docker inspect --format '{{.State.Pid}}' docker1
2094
$ sudo ln -s /proc/2094/ns/net /var/run/netns/ns-2094
Then we may use the ip netns command to extract the network information:
$ sudo ip netns
ns-2094 (id: 0)
$ sudo ip netns exec ns-2094 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Here we go. The interesting information is the container network interface: 11: eth0@if12.
So, one end of the pair is the eth0 interface inside the Docker container, and the “outside” end corresponds to interface number 12. On the host, interface 12 corresponds to the virtual ethernet adapter veth45297ff. Note that we may also find out the peer corresponding to the container interface (@if11).
$ ip a | grep "^12"
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
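This index pairing can also be checked mechanically. The sketch below is hypothetical helper code that parses the two ip link lines shown above and confirms that each end of the veth pair references the other end's interface index:

```python
import re

# Abridged "ip link" lines from the host and from the container namespace,
# copied from the transcripts above
host_link = "12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"
container_link = "11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500"

def parse(line):
    """Extract (ifindex, name, peer ifindex) from an 'ip link' line."""
    m = re.match(r"(\d+): ([^@]+)@if(\d+):", line)
    return int(m.group(1)), m.group(2), int(m.group(3))

h_idx, h_name, h_peer = parse(host_link)
c_idx, c_name, c_peer = parse(container_link)

# Each end of a veth pair references the other's interface index
assert h_peer == c_idx and c_peer == h_idx
print(f"{c_name} (if{c_idx}) <-> {h_name} (if{h_idx})")
# eth0 (if11) <-> veth45297ff (if12)
```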
Finally, let’s take a look at the bridge used by the virtual ethernet adapter veth45297ff
$ sudo brctl show
bridge name     bridge id           STP enabled     interfaces
docker0         8000.02422ad07e76   no              veth45297ff
                                                    veth46a8316
The other veth interface (veth46a8316) corresponds to my second container, docker2.
- User-defined network bridges
As said previously, using the Docker0 bridge is only suitable for very limited scenarios. User-defined bridges are more prevalent in more complex scenarios, such as microservice applications, because they offer better isolation between containers and the outside world, as well as better manageability and customization. At this stage we could also introduce macvlan networks, but that will probably be for the next blog post…
For example, let’s say you want to create 2 isolated network bridges for a 3-tier application. The users will access the web server (on its exposed port) through the first network (frontend-server). At the same time, you also want to prevent containers that sit on this network from making connections to the outside world. The second network (backend-server) will host containers that must have access to both the outside SQL Server database and the web server.
User-defined networks are a good solution to address these requirements, so let’s create two of them. Note that by default containers may make connections to the outside world, but the outside world is not able to make connections to the containers without exposed listen ports. This is why I disabled IP masquerading (com.docker.network.bridge.enable_ip_masquerade=false) for the frontend-server network, to meet the above requirements.
$ sudo docker network create \
    --driver bridge \
    --subnet 172.20.0.0/16 \
    --gateway 172.20.0.1 \
    backend-server
$ sudo docker network create \
    --driver bridge \
    --subnet 172.19.0.0/16 \
    --gateway 172.19.0.1 \
    --opt com.docker.network.bridge.enable_ip_masquerade=false \
    frontend-server
$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5c6f48269d2b        backend-server      bridge              local
985f25500e3d        bridge              bridge              local
b1fbde4f4674        frontend-server     bridge              local
ad52b859e3f9        host                host                local
1beda56f93d3        none                null                local
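As a side note, if you deploy with docker-compose, the same two networks could be declared in the compose file instead of being created by hand. A sketch, assuming a compose file format recent enough to support gateway settings under ipam (the option names are taken from the commands above):

```yaml
networks:
  backend-server:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
  frontend-server:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_ip_masquerade: "false"
    ipam:
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
```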
Let’s now take a look at the corresponding iptables masquerading rules on my host machine:
$ sudo iptables -t nat -L -n | grep -i "masquerade"
MASQUERADE  all  --  172.20.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
You may notice that only the Docker0 (172.17.0.0/16) and backend-server (172.20.0.0/16) bridges are allowed to perform IP masquerading.
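If you want to verify this from a script, a hypothetical Python helper could parse the iptables output shown above and list the masqueraded source subnets:

```python
# Output of: iptables -t nat -L -n | grep -i "masquerade" (copied from above)
nat_rules = """\
MASQUERADE  all  --  172.20.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
"""

def masqueraded_subnets(output):
    """Return the source subnets (4th column) of MASQUERADE rules."""
    return [line.split()[3] for line in output.splitlines()
            if line.startswith("MASQUERADE")]

subnets = masqueraded_subnets(nat_rules)
print(subnets)                      # ['172.20.0.0/16', '172.17.0.0/16']
print("172.19.0.0/16" in subnets)   # False: frontend-server cannot NAT out
```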
Then let’s create 3 containers: the first two (docker1 and docker2) will sit on the frontend-server network, and the third one (docker3) on the backend-server network. For convenience, I set up a fixed hostname for each container. I also used a different Ubuntu image that this time provides all the necessary network tools, including the ping command.
$ sudo docker run -d --rm --name=docker1 --hostname=docker1 --net=frontend-server -it smakam/myubuntu:v6 bash
$ sudo docker run -d --rm --name=docker2 --hostname=docker2 --net=frontend-server -it smakam/myubuntu:v6 bash
$ sudo docker run -d --rm --name=docker3 --hostname=docker3 --net=backend-server -it smakam/myubuntu:v6 bash
$ sudo docker ps
CONTAINER ID        IMAGE                COMMAND   CREATED         STATUS         PORTS   NAMES
225ee13c38f7        smakam/myubuntu:v6   "bash"    2 minutes ago   Up 2 minutes           docker3
d95014602fe2        smakam/myubuntu:v6   "bash"    4 minutes ago   Up 4 minutes           docker2
1d9645f61245        smakam/myubuntu:v6   "bash"    4 minutes ago   Up 4 minutes           docker1
First, probably one of the biggest advantages of using user-defined networks (unlike the Docker0 bridge) is automatic DNS resolution between containers on the same user-defined subnet on the same host (this is the default behavior, but you can override the DNS settings by specifying the --dns parameter at container creation time). Since Docker 1.10, this name resolution is served by a DNS server embedded in the Docker engine, rather than by updates to each container’s /etc/hosts file.
As expected, I can ping the docker2 container from the docker1 container and vice versa, but the same doesn’t apply between docker1 and docker3, nor between docker2 and docker3, because they are not sitting on the same network bridge.
$ sudo docker exec -ti docker1 ping -c2 docker2
PING docker2 (172.19.0.3) 56(84) bytes of data.
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=2 ttl=64 time=0.058 ms
…
$ sudo docker exec -ti docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.054 ms
...
$ sudo docker exec -ti docker1 ping -c2 docker3
ping: unknown host docker3
...
From a network perspective, on the host we may notice the creation of two additional bridge interfaces and 3 virtual Ethernet adapters after the creation of the containers.
$ brctl show
bridge name         bridge id           STP enabled     interfaces
br-5c6f48269d2b     8000.0242ddad1660   no              veth79ae355
br-b1fbde4f4674     8000.02424bebccdd   no              vethb66deb8
                                                        vethbf4ab2d
docker0             8000.02422ad07e76   no
$ ip a | egrep "^[1-9][1-9]"
25: br-5c6f48269d2b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
28: br-b1fbde4f4674: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
58: vethb66deb8@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-b1fbde4f4674 state UP
64: veth79ae355@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5c6f48269d2b state UP
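As a quick sanity check, brctl output like the above can be parsed to map each bridge to its veth interfaces. A hypothetical Python sketch; note that continuation lines, which start with whitespace, list additional interfaces of the previous bridge:

```python
# Sample "brctl show" output, abridged from the transcript above
brctl_output = """\
bridge name         bridge id           STP enabled     interfaces
br-5c6f48269d2b     8000.0242ddad1660   no              veth79ae355
br-b1fbde4f4674     8000.02424bebccdd   no              vethb66deb8
                                                        vethbf4ab2d
docker0             8000.02422ad07e76   no
"""

def bridges(output):
    """Map bridge name -> list of attached interfaces."""
    result, current = {}, None
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if line[0].isspace():              # continuation: one more interface
            result[current].append(fields[0])
        else:
            current = fields[0]
            result[current] = fields[3:]   # [] when the bridge has no ports
    return result

print(bridges(brctl_output))
# {'br-5c6f48269d2b': ['veth79ae355'],
#  'br-b1fbde4f4674': ['vethb66deb8', 'vethbf4ab2d'],
#  'docker0': []}
```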
If I want to make the docker3 container reachable from the docker2 container, I may simply connect the latter to the backend-server network as shown below:
$ sudo docker network connect backend-server docker2
$ sudo docker inspect docker2
[
    "Networks": {
        "backend-server": {
            "IPAMConfig": {},
            "Links": null,
            "Aliases": [
                "d95014602fe2"
            ],
            "NetworkID": "5c6f48269d2b752bf1f43efb94437957359c6a72675380c16e11b2f8c4ecaaa1",
            "EndpointID": "4daef42782b22832fc98485c27a0f117db5720e11d806ab8d8cf83e844ca6b81",
            "Gateway": "172.20.0.1",
            "IPAddress": "172.20.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:14:00:03",
            "DriverOpts": null
        },
        "frontend-server": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": [
                "d95014602fe2"
            ],
            "NetworkID": "b1fbde4f4674386a0e01b7ccdee64ed8b08bd8505cd7f0021487d32951035570",
            "EndpointID": "651ad7eaad994a06658941cda7e51068a459722c6d10850a4b546382c44fff86",
            "Gateway": "172.19.0.1",
            "IPAddress": "172.19.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "02:42:ac:13:00:03",
            "DriverOpts": null
        }
    }
]
You may notice that the container is now connected to both the frontend-server and backend-server networks, thanks to an additional network interface created at the same time.
$ sudo docker exec -it docker2 ip a show | grep eth
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth0
68: eth2@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth2
Pinging both the docker1 container and the docker3 container from the docker2 container now succeeds.
$ sudo docker exec -it docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.052 ms
…
$ sudo docker exec -it docker2 ping -c2 docker3
PING docker3 (172.20.0.2) 56(84) bytes of data.
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=2 ttl=64 time=0.054 ms
…
In this blog post, we scratched the surface of Docker network bridges and of the use cases we may face with SQL Server instances, depending on the context. As a reminder, user-defined networks allow defining fine-grained policy rules to interconnect containers on different subnets. This is basically what we may want to achieve with microservices applications: such applications include components that need to span multiple networks (backend and frontend), whereas others should be isolated (even from the outside), depending on their role.
Happy containerization!
By David Barbarin