Exploring Default Docker Networking Part 2


Well, hello everyone. I'm back to continue my exploration of default Docker networking. If you haven't already read (or it has been a while since you read) Exploring Default Docker Networking Part 1, I'd recommend checking it out. In that post, I explain what "default Docker networking" means, then put on my headlamp and climbing gear as I go deep down into the layer 1 (physical) and layer 2 (data link) aspects of container networking and how Linux networking concepts like virtual ethernet links and bridges provide the "magic."

In this post, I'm going to continue the exploration into layer 3 (network), shining the light on how container network traffic is routed, NATed, and filtered. So finish that protein bar, take a drink from your canteen, and let's shed some light on this next part of our journey!

Our default Docker bridge network exploration map

Part 1 ended with a network topology drawing of the container networking discussed throughout that post. I've expanded that topology to include details beyond the Linux bridge, veths, and containers we discussed and explored there by adding network details and information from outside the layer 2 domain of the containers we'll use to explore today.

Linux and Container Networking Topology
Here we see the docker0 network and how it connects to the Linux host network and external hosts.

The additions to this topology drawing include:

  1. A fourth container called web has been added. This container is running a web server on port 80 that has been "published" (made available outside the container network) on port 81.
  2. The Linux host's primary network link, ens160, with its IP address of 172.16.211.128, has been added.
  3. The network processing layer from Linux that provides routing, filtering, and other network functions has been added.
  4. Two additional hosts in the lab network outside of the Linux host running the containers are shown, along with basic connectivity indications.

A ping in the dark…

We're going to start with a simple test to determine whether we can ping from one container to the primary IP address of the Linux host that is hosting the container. Specifically, from C1 to ens160's IP address of 172.16.211.128.

root@c1:/# ping 172.16.211.128
PING 172.16.211.128 (172.16.211.128) 56(84) bytes of data.
64 bytes from 172.16.211.128: icmp_seq=1 ttl=64 time=1.80 ms
64 bytes from 172.16.211.128: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from 172.16.211.128: icmp_seq=3 ttl=64 time=0.092 ms
^C
--- 172.16.211.128 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2021ms
rtt min/avg/max/mdev = 0.051/0.646/1.796/0.813 ms

The success is probably not all that surprising — or even feel all that satisfying. I mean, we network engineers ping between two hosts all the time, right? So, why bother? Let me break it down for you.

First up, remember that the container and the Linux host are on two different layer 2 networks. The container is on 172.17.0.0/16, and the host is on 172.16.211.0/24. We would assume this type of traffic involves routing. So let's check the routing table on the devices involved.

The container's table below shows us that the default route is used to reach the 172.16.211.0/24 network.

root@c1:/# ip route
default via 172.17.0.1 dev eth0 
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2

And the host's table shows that there is actually a route for 172.17.0.0/16 through the docker0 interface.

root@expert-cws:~# ip route
default via 172.16.211.2 dev ens160 proto dhcp src 172.16.211.128 metric 100 
172.16.211.0/24 dev ens160 proto kernel scope link src 172.16.211.128 
172.16.211.2 dev ens160 proto dhcp scope link src 172.16.211.128 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1

Nothing seems out of the ordinary, but don't leave just yet. I swear there's a point to this part of the exploration.
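(A quick side note: if you ever want to confirm exactly which route will be chosen for a particular destination, the standard iproute2 command ip route get will do the lookup for you. A rough sketch from C1, assuming the same addresses as above, would look something like this.)

# Ask the container which route it would use to reach the host's address
root@c1:/# ip route get 172.16.211.128
172.16.211.128 via 172.17.0.1 dev eth0 src 172.17.0.2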

Let's look at the packets themselves using the Linux tool tcpdump to monitor the ICMP traffic on ens160.

root@expert-cws:~# tcpdump -n -i ens160 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes

I started up the capture and then issued another ping from C1 to the Linux host. But I didn't see any packets captured on interface ens160, despite the pings being successful. See, I told you it would get interesting. 🙂

Let's change our capture to the docker0 interface instead.

root@expert-cws:~# tcpdump -n -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:51:53.427337 IP 172.17.0.2 > 172.16.211.128: ICMP echo request, id 38, seq 1, length 64
14:51:53.427373 IP 172.16.211.128 > 172.17.0.2: ICMP echo reply, id 38, seq 1, length 64

Okay, that looks better. But why are we seeing the traffic on the docker0 interface when the destination was the address assigned to the ens160 interface? Well, because the traffic never actually reaches the ens160 interface. The networking stack within the Linux system processes the traffic, and since it's all internal to the system, there is no need for the traffic to make it to the network link/adapter.

Then why does it show up on the docker0 interface at all? Why can't the networking stack just process it directly and leave all "interfaces" out of it? This is due to the network isolation that is used as part of Docker networking. Recall back to Part 1, how when we ran ip link from inside the container, we only saw the container interface and not the other interfaces from the host. And when we ran the command from the host, we did NOT see the container interfaces in the list. We only saw the host side of the veth pair. As a reminder, here is the command from the container host.

root@expert-cws:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff
97: veth…@if96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 46:10:b9:df:52:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
99: veth…@if…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 52:07:4f:3e:11:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
101: veth…@if…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 9e:51:13:75:53:52 brd ff:ff:ff:ff:ff:ff link-netnsid 2
105: veth…@if…: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether d2:86:8f:ab:75:0b brd ff:ff:ff:ff:ff:ff link-netnsid 3

The blue link number 97 represents the host side of the veth that connects to link number 96 on C1. But where is link number 96 in the list?

Linux network namespaces enter the story

The answer is that link number 96, the eth0 interface for C1, is in a different network namespace from the default one where the other links on the host live.

Linux namespaces are an abstraction within Linux that allows system resources to be isolated from each other. Namespaces can be set up for many different types of resources, including processes, mount points, and networks. In fact, these namespaces are key to how Docker containers run as isolated instances from each other and from the host they run on.

We can view the network namespaces on our host with the list namespaces command.

root@expert-cws:~# lsns --type=net
NS         TYPE  NPROCS PID    USER   NETNSID    NSFS COMMAND
4026531992 net   375    1      root   unassigned /run/docker/netns/default /sbin/init maybe-ubiquity
4026532622 net   1      81590  uuidd  unassigned /usr/sbin/uuidd --socket-a
4026532675 net   1      1090   rtkit  unassigned /usr/libexec/rtkit-daemon
4026532749 net   2      134263 expert unassigned /usr/share/code/code --typ
4026532808 net   1      267673 root   0          /run/docker/netns/74fa6636a15f bash
4026532872 net   1      267755 root   1          /run/docker/netns/e12672b07df8 bash
4026532921 net   6      133573 expert unassigned /opt/google/chrome/chrome 
4026532976 net   1      133575 expert unassigned /opt/google/chrome/nacl_he
4026533050 net   1      267840 root   2          /run/docker/netns/5cab1255c9ae bash
4026533115 net   1      268958 root   3          /run/docker/netns/c54dcb1bd674 /bin/bash

Each of the entries in the list colored blue represents one of the four containers we're running now, with the PID column identifying the specific process tied to the unique container. We can determine the PID for a container by inspecting it.

root@expert-cws:~# docker inspect c1 | jq .[0].State.Pid
267673
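As a quick aside, if jq isn't installed on your host, docker inspect can pull the same value on its own with its --format option; something like this should return the same PID.

# Same PID lookup using docker inspect's built-in Go template formatting instead of jq
root@expert-cws:~# docker inspect --format '{{.State.Pid}}' c1
267673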

And with that, we can now run a command to view the network links from within the container's network namespace.

root@expert-cws:~# nsenter -t 267673 -n ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
96: eth0@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

And BAM! There we have link number 96.

And this brings us back to the question: why didn't the host network stack process the ping directly from the container? Why do we see the traffic on the docker0 interface? Because the networking "stack" is really the network namespace. And the container's network namespace, where the ping originated, is different from the default network namespace, where the IP address for the ens160 interface resides. It is the virtual ethernet "cable" that allows traffic from the container namespace to reach the default namespace, through the docker0 interface. And once the traffic arrives at the docker0 interface, the networking stack can process the request and send the reply, all through the docker0 interface.
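If you'd like to see this namespace-plus-veth pattern without Docker in the picture at all, you can build a tiny version of it by hand with iproute2. This is only a minimal sketch for illustration — the namespace name, veth names, and 10.99.99.0/24 addresses below are mine, not anything Docker creates.

# Create a new network namespace and a veth pair (the virtual "cable")
ip netns add testns
ip link add veth-host type veth peer name veth-ns

# Move one end of the cable into the new namespace
ip link set veth-ns netns testns

# Address and bring up both ends
ip addr add 10.99.99.1/24 dev veth-host
ip link set veth-host up
ip netns exec testns ip addr add 10.99.99.2/24 dev veth-ns
ip netns exec testns ip link set veth-ns up

# Traffic now crosses between the two namespaces over the veth pair
ip netns exec testns ping -c 1 10.99.99.1

# Clean up when done (removing the namespace also removes the veth pair)
ip netns del testns

Docker automates exactly this kind of plumbing — plus attaching the host side of the veth to the docker0 bridge — every time a container is started on the default bridge network.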

Pinging beyond the gates… er, host

So we've now seen how network isolation is done with Linux namespaces and the impact it has on the interfaces involved in the network processing of traffic. For our next test, let's send traffic outside of the Linux host where our containers are running, and send a ping to another host, Host01, from the network topology.

root@c1:/# ping -c 1 172.16.211.1
PING 172.16.211.1 (172.16.211.1) 56(84) bytes of data.
64 bytes from 172.16.211.1: icmp_seq=1 ttl=63 time=0.271 ms

--- 172.16.211.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms

This time I sent just a single ping packet from the container, and we can see that it was successful. Before sending the ping I started up a packet capture on both the docker0 and ens160 interfaces to capture the traffic along the way and compare the differences as the traffic arrived in the default network namespace from the container and as it was sent out from the host towards its destination (as well as the return trip).

# Capture on the docker0 interface 
root@expert-cws:~# tcpdump -n -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes

17:11:13.047823 IP 172.17.0.2 > 172.16.211.1: ICMP echo request, id 41, seq 1, length 64
17:11:13.048061 IP 172.16.211.1 > 172.17.0.2: ICMP echo reply, id 41, seq 1, length 64

# Capture on the ens160 interface
root@expert-cws:~# tcpdump -n -i ens160 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes

17:11:13.047856 IP 172.16.211.128 > 172.16.211.1: ICMP echo request, id 41, seq 1, length 64
17:11:13.048024 IP 172.16.211.1 > 172.16.211.128: ICMP echo reply, id 41, seq 1, length 64

Take a look at the output above. The blue lines are the echo requests sent from the container, and the green lines are the echo replies from the other host. The bold red addresses in the requests represent the source addresses from the container, and the bold orange addresses indicate the destination addresses of the reply packets. On the docker0 capture, the addresses shown are the IP addresses assigned to the C1 interface — this would be expected. However, on the ens160 capture, the addresses have been translated to the IP address of the Linux host's ens160 interface.

That's right, our old friend Network Address Translation (NAT) shows up in container networking as well. In fact, so does NAT's very useful cousin PAT (Port Address Translation), but I'm getting ahead of myself.

Entering the Docker networking story… iptables!

The networks created by Docker to support a bridge-type network are built to be private and to use IP address space that is NOT reachable from outside the Docker-managed network. However, many services deployed and managed with Docker do require connectivity beyond the small number of containers making up the service and running on the host. Docker leverages the same network concept used elsewhere to solve this problem: Network (and Port) Address Translation (NAT/PAT). And similar to how we've seen Docker leveraging Linux elements like bridges and namespaces, Docker uses iptables to perform the address translation and filtering involved here as well.
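If you want a quick look at everything Docker has programmed into iptables, dumping the full rule set and filtering for the Docker-related chains is an easy way to get the lay of the land (iptables-save is part of the standard iptables package).

# Dump every table in iptables-save format and show only the Docker-related rules and chains
root@expert-cws:~# iptables-save | grep -i docker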

Before we dig into how iptables is involved in these traffic flows, I wanted to offer a quick caveat. Network traffic processing and flow through the underbelly of Linux is a complicated topic, and iptables is both a powerful and complicated tool. I plan to break down the topic here on the blog to describe and explain what is going on under the hood of Docker networking in as simple and clear a way as possible. But a thorough exploration of iptables and Linux networking would be worthy of several blog posts on their own.

With iptables, rules are created and applied to the processing of network traffic as it is handled by Linux. These rules are applied at different points in the processing of traffic to accomplish a number of different tasks. Rules can be applied:

  1. Before any routing decision is made (PREROUTING).
  2. As traffic destined for the local host arrives (INPUT).
  3. As traffic created by the local host is sent (OUTPUT).
  4. As traffic "passing through" the local host is processed (FORWARD).
  5. After the routing decision is made (POSTROUTING).

And the rules that are written can do a number of things to the traffic.

  • Traffic can be blocked/denied.
  • Traffic can be allowed/permitted.
  • Traffic can be redirected elsewhere.
  • Traffic can have its source or destination addresses modified (NAT/PAT).

Rules are added to one of the "tables" that iptables manages. The two tables worth mentioning now are the filter table and the nat table. The filter table holds rules primarily concerned with whether traffic is allowed or blocked, while the nat table has rules related to address translation. Let's look at the nat table and see if we can find what caused the translation of the ICMP traffic from our example.

root@expert-cws:~# iptables -L -v -t nat
Chain PREROUTING (policy ACCEPT 251 packets, 67083 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   18  1292 DOCKER     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 247 packets, 66747 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 19468 packets, 1264K bytes)
 pkts bytes target     prot opt in     out     source               destination         
    3   252 DOCKER     all  --  any    any     anywhere            !localhost/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 19472 packets, 1264K bytes)
 pkts bytes target     prot opt in     out     source               destination         
   31  1925 MASQUERADE  all  --  any    !docker0  172.17.0.0/16        anywhere            

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    7   588 RETURN     all  --  docker0 any     anywhere             anywhere            

Look at the rule in the POSTROUTING chain colored blue. This is the rule that caused the translation we saw. Now, let's break down the parts of the rule that are used to match traffic for processing.

  • protocol = all
    • Match traffic of any protocol type
  • in = any / out = !docker0
    • Match traffic coming IN any interface and going OUT any interface other than docker0
    • Traffic going OUT docker0 would be headed towards a container
  • source = anywhere / destination = anywhere
    • Match traffic from or to any address

The "target = MASQUERADE" part describes the action this rule will take. You might be more familiar with actions like DROP or ACCEPT that show up in the filter table, but the nat table has a different set of targets that indicate the type of translation that will occur. MASQUERADE is a type of source address translation (SNAT) that translates the source network address of the traffic to the address assigned to the interface the traffic is routed OUT of.
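For reference, a rule with the same effect could be written by hand like this. This is only a sketch to make the syntax concrete — on a Docker host this rule already exists, so there is no need to add it yourself.

# Source-NAT anything from the bridge subnet that is NOT heading back out the docker0 interface
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE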

Consider the echo request sent from the container against this rule.

  1. An echo request matches the "all" protocol.
  2. The packet came in the docker0 interface (in = any) and will be going out the ens160 interface (out = !docker0).
  3. The source and destination are indeed "anywhere."

When the traffic was processed against this rule, the MASQUERADE target/action was taken to SNAT the source address to the IP address of the ens160 interface — which is exactly what we saw happen.

Look out! There's a web (server) ahead!

So far we've used some ICMP traffic with ping to look at how containers can reach external networks and hosts. But what about when a container is running a service, like a web server, that is designed to be accessible to external users? Let's end our discussion with this example. In order to get started, we're going to need a web server.

There is a multitude of web servers that can be run as Docker containers, but for our exploration here I'm going to keep it very simple and use the HTTP server that is included with Python and the standard "python:3" Docker image maintained by the Python Software Foundation and Docker.

# Start the container in the background 
root@expert-cws:~# docker run -tid --rm \
    --name web --hostname web \
    -p 172.16.211.128:81:80 \
    python:3 /bin/bash

# Attach to the running container 
root@expert-cws:~# docker attach web

# Start a basic web server 
root@web:/# python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...

The "docker run" command should be familiar from when we ran commands in Part 1, but there is a new option included. We need to "publish" the container's ports that must be made available to external hosts. A container can have no ports published, or many dozens of ports, depending on the unique needs of that service.

In the command above, I'm publishing port 80 from the container to port 81 on the host server's IP address of 172.16.211.128.

If I had left off the IP address to publish the service to, Docker would have made the web server available on any/all IP addresses on the underlying host. Leaving off an explicit IP address for publishing a service is common; however, I find being explicit a better strategy. This is somewhat of a personal preference in application design.
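Once the container is up, you can confirm the published mapping from the Docker CLI itself with docker port, which should report something like the following.

# Show the published port bindings for the web container
root@expert-cws:~# docker port web
80/tcp -> 172.16.211.128:81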

I can now attempt to access the web server from Host01.

Basic Python Web Server

Excellent — by browsing to the IP address of the Linux host on port 81, I'm greeted with a directory listing from the container where the Python web server is running.
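If you would rather test from a shell than a browser, a plain curl from Host01 (assuming curl is available there) against the published address and port exercises the same path and returns the same directory listing as HTML.

# Request the directory listing through the port published on the Docker host
curl http://172.16.211.128:81/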

Tracing the web traffic with packets and tables

Let's finish our exploration today by examining the incoming web traffic and the translation rules that tie things together.

We need to change up our packet capture commands to capture the web traffic on both the ens160 and docker0 interfaces. As traffic arrives at the Linux host it will be destined to TCP port 81, and it will be translated to TCP port 80 before it is sent on to the container.

# Capture traffic from the Linux host interface
root@expert-cws:~# tcpdump -n -i ens160 'tcp port 81'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes

18:34:59.147085 IP 172.16.211.1.64534 > 172.16.211.128.81: Flags [SEW], seq 3761281905, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1727838954 ecr 0,sackOK,eol], length 0
18:34:59.147191 IP 172.16.211.128.81 > 172.16.211.1.64534: Flags [S.E], seq 3294439992, ack 3761281906, win 65160, options [mss 1460,sackOK,TS val 3650251894 ecr 1727838954,nop,wscale 7], length 0
.
.

# Capture traffic being sent to the containers 
root@expert-cws:~# tcpdump -n -i docker0 'tcp port 80'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes

18:34:59.147133 IP 172.16.211.1.64534 > 172.17.0.5.80: Flags [SEW], seq 3761281905, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1727838954 ecr 0,sackOK,eol], length 0
18:34:59.147178 IP 172.17.0.5.80 > 172.16.211.1.64534: Flags [S.E], seq 3294439992, ack 3761281906, win 65160, options [mss 1460,sackOK,TS val 3650251894 ecr 1727838954,nop,wscale 7], length 0
.
.
I've limited the output above to just the start of the request, where we can see the translation at work.

In the output above, the blue lines represent the initial request packet from the web browser to the server, and the green lines are the first packet sent back to establish the session. By looking at the bold red and orange addresses, you can see the destination address translation (DNAT) at work in the communications. The source addresses are left unchanged, and in fact, in the logs from the container below, you can see the IP address from Host01.

root@web:/# python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
172.16.211.1 - - [07/Sep/2022 18:30:07] "GET / HTTP/1.1" 200 -
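The connection tracking table is where Linux remembers this translation for the life of the flow. If the conntrack utility happens to be installed on the host (it isn't always there by default), you can see the entry that ties the outside 172.16.211.128:81 side of the conversation to the inside 172.17.0.5:80 side.

# List tracked TCP connections and filter for the web container's address
root@expert-cws:~# conntrack -L -p tcp | grep 172.17.0.5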

We can once again look at the nat table using iptables and find the rule that provides this behavior.

root@expert-cws:~# iptables -L -v -t nat -n
Chain PREROUTING (policy ACCEPT 42 packets, 4004 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   38  2712 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 37 packets, 3584 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 15523 packets, 1010K bytes)
 pkts bytes target     prot opt in     out     source               destination         
   17  1092 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 15530 packets, 1010K bytes)
 pkts bytes target     prot opt in     out     source               destination         
   42  2849 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.5           172.17.0.5           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
   14  1176 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    7   448 DNAT       tcp  --  !docker0 *       0.0.0.0/0            172.16.211.128       tcp dpt:81 to:172.17.0.5:80

The rule in blue has the target set to DNAT, along with a destination of 172.16.211.128 and a translation of "tcp dpt:81 to:172.17.0.5:80". The rule is applied during both the PREROUTING and OUTPUT stages of network processing by using the ability within iptables to send traffic from one chain to another — here, both PREROUTING and OUTPUT point at the DOCKER chain.
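Written out by hand, a rule with the same effect as the DNAT rule Docker added to the DOCKER chain would look roughly like the sketch below. Again, this is only for illustration — Docker created and manages this rule for you when the port was published.

# Destination-NAT traffic arriving for 172.16.211.128:81 (from anywhere except docker0) to the web container on port 80
iptables -t nat -A DOCKER ! -i docker0 -p tcp -d 172.16.211.128 --dport 81 -j DNAT --to-destination 172.17.0.5:80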

A quick stop at the filter table

There is one final stop I want to make in our exploration of these traffic flows before finishing up. So far our iptables commands have targeted the nat table (-t nat). Let's take a look at the filter table, where the ACCEPT/DROP rules are found.

root@expert-cws:~# iptables -L -v -t filter -n
Chain INPUT (policy ACCEPT 231K packets, 26M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  141 15676 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  141 15676 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
39461  118M ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   13   952 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
30852 1266K ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    6   504 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 215K packets, 18M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    7   448 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.5           tcp dpt:80

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
30852 1266K DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
70326  120M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
30852 1266K RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
70326  120M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0   

Most of the DOCKER-related aspects found in the filter table are there to ensure the network isolation of containers. However, the rule in blue that I've indicated above is essential to how services are exposed from a container to the outside world. This rule will ACCEPT TCP port 80 traffic destined for 172.17.0.5 (the web container) that arrives on any interface other than docker0 and goes out the docker0 interface. The rule uses the container's actual IP address and port because the filtering happens after the DNAT from the nat table.
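One practical note from this table: the DOCKER-USER chain at the top of FORWARD is the place Docker's documentation tells you to put your own filtering rules, since rules there are evaluated before Docker's own rules and are left alone when Docker restarts. For example, a sketch of blocking a single outside host (the 172.16.211.50 address here is just a made-up example) from reaching any published container ports might look like this.

# Drop traffic from one outside host before Docker's own forwarding rules can accept it
iptables -I DOCKER-USER -s 172.16.211.50 -j DROP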

The light at the end of the default Docker networking journey

And so we find ourselves at the end of this exploration of default Docker networking. Looking around the group, I'm glad to see that we didn't lose anyone along the way, but I know it was a close one. And you might not believe me, but even after another 3,500 words on the topic of Docker networking (a total of over 7,000 between Parts 1 and 2), there's lots more to explore on the subject. Overlay networks, how DNS works for containers, custom network plugins, and (gasp) Kubernetes networking are all out there for you to explore!

My goal for this short series was to help give you a foundation on which you can continue to build your knowledge around container networking, and to make the topic less mysterious or daunting for network engineers new to it. It can be very easy to become intimidated when working through container introductions that "just work" but don't explain "why they work" or "how they work." If I did my job right, the magic isn't so magical anymore.

Here are a few links to other resources worth checking out for more information on the topic.

  • In Season 2 of NetDevOps Live, Matt Johnson joined me to do a deep dive into container networking. His session was fantastic, and I reviewed it while getting ready for this post. I highly recommend it as another great resource.
  • The Docker documentation on networking is excellent. I referenced it quite often when putting this post together.
  • The man pages for Linux namespaces and iptables are excellent resources on these important technologies that enable Docker networking.
  • And check out the man page for tcpdump if you'd like to do more packet capturing.

And as always, please let me know what you thought of this post in the comments or over on Twitter. What should I "explore" next here on the blog? Thanks for reading!

 
