As is well known, Docker has three network modes: bridge, host, and none. The default is bridge: Docker creates a virtual bridge (essentially a software switch) on the host, usually visible with ifconfig as docker0. Each container is assigned an IP, and its packets are forwarded through docker0, so containers on the same host can communicate directly this way. Cross-host communication, however, requires mapping host ports, which limits scalability.

In host mode the container shares the host's network configuration; in none mode the container gets no network configuration at all and you must set it up yourself.
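The three modes can be selected per container with the --net flag. A quick sketch (the image and container names here are just examples, and these commands of course need a running Docker daemon):

```shell
# Default bridge mode: the container gets an IP on docker0's subnet
docker run -itd --name c-bridge busybox

# Host mode: the container shares the host's network namespace,
# so anything it binds listens directly on the host's interfaces
docker run -itd --name c-host --net host busybox

# None mode: only a loopback interface; networking is up to you
docker run -itd --name c-none --net none busybox
```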

Cross-host communication

Docker did not pay much attention to networking at first, so demand produced many solutions, including Libnetwork, Flannel, and others. At my previous company, the networking for our Marathon cluster was implemented with Calico. Since Docker 1.9 there is finally an official solution: overlay. Using overlay requires a KV store; the officially supported ones are Consul, etcd, ZooKeeper, and the local KV store BoltDB. The official example is based on Swarm and Consul. Because we already use etcd for service discovery and as a configuration center, this article configures overlay on top of etcd, without Swarm.

Prerequisites

  1. A KV store such as etcd, reachable from every node in the cluster;
  2. The Docker Engine daemon on every host must be configurable;
  3. Hosts must run Linux with kernel 3.16 or newer;
  4. Hosts must have the required ports open, including the KV store's ports and the port the Docker daemon listens on for remote clients.
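The kernel requirement and the ports involved can be checked up front. A small sketch (the port list assumes the defaults used later in this article):

```shell
# Kernel must be 3.16+ (overlay networking relies on VXLAN support)
uname -r

# Ports that must be reachable between hosts (defaults):
#   2379/tcp       etcd client API
#   2375/tcp       Docker remote API
#   4789/udp       VXLAN data plane
#   7946/tcp+udp   gossip-based network discovery
```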

Installing etcd

Since this is only a test, install a single-instance etcd. Download the prebuilt binary package from GitHub and unpack it:

wget https://github.com/coreos/etcd/releases/download/v3.2.1/etcd-v3.2.1-linux-arm64.tar.gz && tar -xvf etcd-v3.2.1-linux-arm64.tar.gz

Run etcd:

cd etcd-v3.2.1-linux-arm64
nohup ./etcd > etcd.log 2>&1 &
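Before going further, it is worth checking that etcd is actually up. Its HTTP health endpoint on the client port is handy for this (assuming the default port 2379; this requires the etcd instance started above):

```shell
# etcd exposes a simple health endpoint on the client port;
# a healthy instance reports "health": "true"
curl -s http://127.0.0.1:2379/health
```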

Configuring the Docker Engine daemon

On older Docker versions the configuration file may be at /etc/default/docker; edit it and add the following options:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://127.0.0.1:2379/network --cluster-advertise=ens160:2375"

The two -H options tell Docker where to listen for remote clients, --cluster-store is the address of the KV store, and --cluster-advertise is the network interface and port this daemon advertises to the cluster.

The latest version installed from the Docker docs is now 17.06.0-ce, i.e. the post-rename Docker. Its configuration file is /etc/docker/daemon.json; add the following:

{
    "registry-mirrors": ["http://a0596535.m.daocloud.io"],
    "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"],
    "cluster-store": "etcd://127.0.0.1:2379",
    "cluster-advertise": "ens160:2375"
}

After configuring Docker, restart the daemon:

systemctl restart docker
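Whether the daemon actually picked up the cluster settings can be checked with docker info, which on this generation of Docker reports the cluster store and advertise address (the exact values depend on your configuration):

```shell
# The daemon should now report the KV store it is using
docker info 2>/dev/null | grep -i 'cluster'
# Expected lines look roughly like:
#   Cluster Store: etcd://127.0.0.1:2379
#   Cluster Advertise: <ip-of-ens160>:2375
```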

Creating the overlay network

First look at the current default networks:

#  docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6d7af496fd0f        bridge              bridge              local
e136d9020834        host                host                local
1afff9cac664        none                null                local

These are the network modes described above. With the daemon reconfigured and successfully restarted, create the network:

docker network create -d overlay --subnet=10.10.10.0/24 multi-host

The -d option selects the network driver to use, --subnet specifies the subnet to create, and multi-host is the name of the new network. Now list the networks again:
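Because the network's scope is global, its definition lives in etcd rather than on any single host. You can peek at the keys libnetwork writes there. A sketch, with two assumptions: the key prefix below matches a --cluster-store URL with no extra path, and the etcdctl shipped with etcd 3.2 speaks the v2 API by default:

```shell
# List the keys Docker's libnetwork stored in etcd (v2 API)
./etcdctl ls --recursive /docker
```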

#  docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6d7af496fd0f        bridge              bridge              local
e136d9020834        host                host                local
c4940941351f        multi-host          overlay             global
1afff9cac664        none                null                local

multi-host has been created successfully.

Running containers on the overlay network

Use busybox containers for testing; create two of them:

# docker run -itd --name c1 --net multi-host  busybox
f5d8ed1ba5a0d7226ca87536918a5e39741a353688f45e5ed2b0d34f590ab010
# docker run -itd --name c2 --net multi-host busybox
9cd34b0154c31f47f4c74617fe17f72c41fb6d0f909c0a2bbcbe9921ac5e6e2c

The --net option selects the network we just created. Now inspect the multi-host network:

# docker network inspect multi-host
[
    {
        "Name": "multi-host",
        "Id": "c4940941351ffcbb27f1e10641931d51baf50c30fb92d3daa71fe578a20600ac",
        "Created": "2017-06-30T10:19:51.145669408+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.10.10.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9cd34b0154c31f47f4c74617fe17f72c41fb6d0f909c0a2bbcbe9921ac5e6e2c": {
                "Name": "c2",
                "EndpointID": "16c6f2cd9f65d3a8297770bbe0aab95ba7543d024924ef3fe9f4ecfd2c280190",
                "MacAddress": "02:42:0a:0a:0a:03",
                "IPv4Address": "10.10.10.3/24",
                "IPv6Address": ""
            },
            "f5d8ed1ba5a0d7226ca87536918a5e39741a353688f45e5ed2b0d34f590ab010": {
                "Name": "c1",
                "EndpointID": "879130bb02fca09463242e0ace5c8714b20484fcdad2de69b66228c378fd9d81",
                "MacAddress": "02:42:0a:0a:0a:02",
                "IPv4Address": "10.10.10.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Under Containers you can see that both containers were assigned IPs in the multi-host network we just created. Looking at the network configuration inside a container, the eth0 interface carries the overlay IP (note its MTU of 1450, which leaves room for the VXLAN header):

# docker exec c1 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:0A:0A:02
          inet addr:10.10.10.2  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Now ping the c2 container directly from c1. You can ping it by container name, and communication works:

docker exec c1 ping -w 3 c2

With that, an overlay network is configured and working. It is also possible to configure multiple overlay networks; that is something to explore next time.
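Creating additional overlay networks just repeats the same command with a different name and subnet. A sketch (the names and the subnet here are arbitrary):

```shell
# A second, isolated overlay network
docker network create -d overlay --subnet=10.10.20.0/24 multi-host-2

# Containers on different overlay networks cannot reach each other
# unless a container is attached to both networks
docker run -itd --name c3 --net multi-host-2 busybox
```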