Linux network namespace usage guide

Environment

  • CentOS 7, kernel 5.4.239-1

Linux namespaces isolate kernel resources. The following namespaces are currently implemented:

  • mount namespace - filesystem mount points
  • UTS namespace - hostname
  • IPC namespace - System V IPC and POSIX message queues
  • PID namespace - process ID number space
  • network namespace - network devices, protocol stacks, and ports
  • user namespace - user and group ID number spaces

Of these, all namespaces except the network namespace are normally manipulated by calling the kernel's namespace system calls (clone, unshare, setns) from C. The create/delete/modify/query operations for network namespaces, however, have been integrated into the netns subcommand of the Linux ip tool suite.

A namespace in Linux gives the processes inside it two illusions:

  1. They are the only processes on the system.
  2. They have exclusive use of all of the system's resources.

By default, every process in Linux runs in the same namespaces as the host, i.e. the initial namespaces, and therefore sees the global system resources.
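
You can see which namespaces a process belongs to by listing /proc/<PID>/ns, where each entry is a symlink to a namespace object; processes that show the same inode number share that namespace. A quick check (commands only):

# List the namespace handles of the current shell; entries look like
# net:[4026531956], mnt:[...], uts:[...], and so on
ls -l /proc/$$/ns

# Compare with PID 1 - in the default case both processes point at the
# same (initial) namespaces
ls -l /proc/1/ns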

Common network namespace operations

Since the create/delete/modify/query operations for network namespaces are integrated into the netns subcommand of the ip tool suite, network namespaces on Linux are managed mainly with the ip netns command:

$ ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id

Creating and viewing a network namespace

Create a network namespace named netns1 with the following command:

ip netns add netns1

List the network namespaces on the system:

$ ip netns list
netns1

After a new network namespace is created, the system creates a mount point with the same name under /var/run/netns/:

$ ls -l /var/run/netns/
total 0
-r--r--r-- 1 root root 0 Apr 3 13:33 netns1

This mount point has two purposes: it makes namespaces easier to manage, and it keeps a namespace alive even when no process is running inside it.

Once a new network namespace has been created, you can use the ip netns exec command to enter it and do network configuration or query work.

The ip netns exec command can only enter a namespace by the network namespace's name.
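
Any command can be run this way; for example, to get an interactive shell whose entire network view is that of netns1:

# Start a shell inside netns1; every network command run from it
# (ip, ping, ss, ...) only sees the devices and routes of netns1
ip netns exec netns1 bash

# Exit the shell to return to the host's network namespace
exit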

The following command queries the IP address information inside the netns1 network namespace:

$ ip netns exec netns1 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

A freshly created network namespace contains no network devices other than a lo interface, and that lo interface is DOWN, so even the loopback address is unreachable:

$ ip netns exec netns1 ping 127.0.0.1
connect: Network is unreachable

In this example, to enable the local loopback address you first need to enter the namespace and bring the loopback interface UP:

$ ip netns exec netns1 ip link set dev lo up

$ ip netns exec netns1 ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

$ ip netns exec netns1 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
^C
--- 127.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms

Now the lo interface inside the namespace works, but because the namespace has no other network devices it cannot communicate with any other network. To do that, additional techniques such as a veth pair are needed.

Deleting a network namespace

To delete a network namespace, use the following command:

ip netns delete netns1

This command does not actually destroy the netns1 network namespace; it only removes the namespace's mount point (/var/run/netns/netns1). As long as processes are still running inside it, the network namespace keeps existing.
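
A quick way to observe this, using a sleep process as a stand-in for any long-running workload:

# Recreate a namespace for the demonstration and keep a process running in it
ip netns add netns1
ip netns exec netns1 sleep 3600 &
SLEEP_PID=$!

# Remove the named mount point; the namespace object itself lives on
ip netns delete netns1

# The namespace is still reachable through the process that holds it
ls -l /proc/$SLEEP_PID/ns/net

# Once the last process exits, the network namespace is finally freed
kill $SLEEP_PID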

veth pair

veth is short for virtual Ethernet. veth devices always come in pairs, which is why they are called a veth pair. Data sent on one end of a veth pair is received on the other end. Because of this property, veth pairs are commonly used for communication across network namespaces, with the two ends of the pair placed in different network namespaces.

Creating and using a veth pair

Create a veth pair whose two ends are named veth0 and veth1:

ip link add veth0 type veth peer name veth1

Looking at the NICs on the host, the newly created veth pair shows up as two NICs, veth0 and veth1, with an MTU of 1500 and an initial state of DOWN:

$ ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff
6: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff

Bring both ends of the veth pair up and configure IP addresses with the following commands:

$ ip link set dev veth0 up

$ ip link set dev veth1 up

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff
6: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff

$ ifconfig veth0 192.168.10.10/24
$ ifconfig veth1 192.168.10.11/24
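
The two ifconfig calls above come from the legacy net-tools package; with iproute2, which the rest of this document uses, the equivalent would be roughly:

# Assign the same addresses using ip instead of ifconfig
ip addr add 192.168.10.10/24 dev veth0
ip addr add 192.168.10.11/24 dev veth1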


$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

5: veth1@veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.11/24 brd 192.168.10.255 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::44d5:d3ff:feda:b880/64 scope link
valid_lft forever preferred_lft forever
6: veth0@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.10/24 brd 192.168.10.255 scope global veth0
valid_lft forever preferred_lft forever
inet6 fe80::58b4:83ff:fe22:2b99/64 scope link
valid_lft forever preferred_lft forever

The following example creates two network namespaces and moves the two ends of the veth pair created above into them, so that the two network namespaces can communicate with each other.

$ ip netns add newnetns1
$ ip netns add newnetns2

$ ip netns list
newnetns2
newnetns1

Move veth0 into newnetns1 and veth1 into newnetns2:

ip link set veth0 netns newnetns1
ip link set veth1 netns newnetns2

When putting one end of a veth pair into a network namespace, besides referring to the network namespace by name you can also use the PID of a process, which makes it possible to operate on a network namespace even when you don't know its name. For example, to move veth0 into the network namespace of the process with PID 84040:

ip link set veth0 netns 84040 
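
Two related tricks, shown here only for illustration and not executed in this walkthrough: ip netns identify reports which named network namespace a given PID runs in, and since PID 1 lives in the initial namespaces, a device can be moved back to the root network namespace by referring to PID 1.

# Report the named network namespace that PID 84040 belongs to (if any)
ip netns identify 84040

# Move a device back to the root network namespace via PID 1
# (illustration only; the rest of this walkthrough keeps veth0 in newnetns1)
ip netns exec newnetns1 ip link set veth0 netns 1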

Viewing the NIC and IP information on the host, veth0 and veth1 no longer appear in the list, because they are no longer in the root network namespace:

$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

$ ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff
inet 192.168.142.10/24 brd 192.168.142.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee7:c027/64 scope link
valid_lft forever preferred_lft forever

The network namespace information of newnetns1 and newnetns2 is as follows:

$ ip netns exec newnetns1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: veth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff link-netnsid 1

$ ip netns exec newnetns2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0

$ ip netns exec newnetns1 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: veth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff link-netnsid 1


$ ip netns exec newnetns2 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Configure IP addresses and test whether the two network namespaces can ping each other:

$ ip netns exec newnetns1 ifconfig veth0 up
$ ip netns exec newnetns2 ifconfig veth1 up


$ ip netns exec newnetns1 ifconfig veth0 192.168.10.10/24
$ ip netns exec newnetns2 ifconfig veth1 192.168.10.11/24

$ ip netns exec newnetns1 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 5a:b4:83:22:2b:99 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 192.168.10.10/24 brd 192.168.10.255 scope global veth0
valid_lft forever preferred_lft forever
inet6 fe80::58b4:83ff:fe22:2b99/64 scope link
valid_lft forever preferred_lft forever


$ ip netns exec newnetns2 ip add
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 46:d5:d3:da:b8:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.10.11/24 brd 192.168.10.255 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::44d5:d3ff:feda:b880/64 scope link
valid_lft forever preferred_lft forever


$ ip netns exec newnetns2 ping 192.168.10.10
PING 192.168.10.10 (192.168.10.10) 56(84) bytes of data.
64 bytes from 192.168.10.10: icmp_seq=1 ttl=64 time=0.071 ms

--- 192.168.10.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms

Linux bridge

A veth pair can connect two network namespaces; to connect more network namespaces, a bridge device is needed.

Creating a bridge

Create a bridge device named br0 with the following commands:

$ ip link add name br0 type bridge
$ ip link set br0 up

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether ca:80:61:e4:98:94 brd ff:ff:ff:ff:ff:ff

Besides the ip link command, bridge devices can also be managed with the brctl command from the bridge-utils package, for example to create a bridge named br0:

$ yum install -y bridge-utils

$ brctl help
never heard of command [help]
Usage: brctl [commands]
commands:
addbr <bridge> add bridge
delbr <bridge> delete bridge
addif <bridge> <device> add interface to bridge
delif <bridge> <device> delete interface from bridge
hairpin <bridge> <port> {on|off} turn hairpin on/off
setageing <bridge> <time> set ageing time
setbridgeprio <bridge> <prio> set bridge priority
setfd <bridge> <time> set bridge forward delay
sethello <bridge> <time> set hello time
setmaxage <bridge> <time> set max message age
setpathcost <bridge> <port> <cost> set path cost
setportprio <bridge> <port> <prio> set port priority
show [ <bridge> ] show a list of bridges
showmacs <bridge> show a list of mac addrs
showstp <bridge> show bridge stp info
stp <bridge> {on|off} turn stp on/off

$ brctl addbr br0

The following commands create a veth pair and attach one end of it to br0:

ip link add veth0 type veth peer name veth0_p
ip add add 172.17.0.2/24 dev veth0
ip add add 172.17.0.3/24 dev veth0_p
ip link set veth0 up
ip link set veth0_p up

ip link set dev veth0 master br0
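
To actually connect several network namespaces through a bridge, each namespace gets its own veth pair: one end goes into the namespace and receives an IP address, while the peer end stays in the root namespace and is enslaved to the bridge. A minimal sketch, assuming hypothetical names nsa, nsb, vetha/vetha_p, vethb/vethb_p and the 172.18.0.0/24 subnet:

# One namespace and one veth pair per simulated host
ip netns add nsa
ip netns add nsb
ip link add vetha type veth peer name vetha_p
ip link add vethb type veth peer name vethb_p

# Put one end of each pair into its namespace and give it an address
ip link set vetha netns nsa
ip link set vethb netns nsb
ip netns exec nsa ip addr add 172.18.0.11/24 dev vetha
ip netns exec nsb ip addr add 172.18.0.12/24 dev vethb
ip netns exec nsa ip link set vetha up
ip netns exec nsb ip link set vethb up
ip netns exec nsa ip link set lo up
ip netns exec nsb ip link set lo up

# Enslave the peer ends to br0 and bring them up
ip link set vetha_p master br0
ip link set vethb_p master br0
ip link set vetha_p up
ip link set vethb_p up

# brctl show br0 should now list vetha_p and vethb_p as ports,
# and the two namespaces can reach each other through the bridge
ip netns exec nsa ping -c 1 172.18.0.12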

Common problems

An IP configured on a NIC inside a network namespace cannot be pinged locally

The steps are as follows: create a network namespace, add a NIC to it, and configure an IP address:

$ ip netns add testns

$ ip link add veth0 type veth peer name veth0_p

$ ip link set veth0 netns testns

$ ip netns exec testns ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: veth0@if8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether ae:c7:2f:73:31:9e brd ff:ff:ff:ff:ff:ff link-netnsid 0

$ ip netns exec testns ip add add 10.10.1.1/24 dev veth0
$ ip netns exec testns ip link set veth0 up

$ ip netns exec testns ping 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
^C
--- 10.10.1.1 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3054ms

With the operations above, a NIC was created in the network namespace and assigned an IP, and the NIC is UP, yet the configured IP cannot be pinged from inside the namespace.

The cause is that the loopback interface lo is DOWN. The kernel delivers packets destined for any locally owned IP address through lo, so while lo is down even addresses configured on other interfaces cannot be reached locally. Bring lo up and ping again, and the local IP responds normally:

$ ip netns exec testns ip link set lo up

$ ip netns exec testns ping 10.10.1.1
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.023 ms
^C
--- 10.10.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1060ms
rtt min/avg/max/mdev = 0.023/0.027/0.031/0.004 ms
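
One way to confirm that local delivery goes through lo is to ask the kernel which route it would use for the address:

# Should report a "local" route whose output device is lo, showing that
# packets to 10.10.1.1 are delivered via the loopback interface
ip netns exec testns ip route get 10.10.1.1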

Two network namespaces cannot ping each other after configuration

The example operations are as follows:

  1. Create the network namespaces

    ip netns add ns1
    ip netns add ns2
  2. Create the veth pairs and assign them to the network namespaces

    ip link add veth1 type veth peer name veth1_p
    ip link add veth2 type veth peer name veth2_p

    ip link set veth1 netns ns1
    ip link set veth2 netns ns2
  3. Configure IPs on the veth NICs and bring them up

    ip addr add 10.10.10.1/24 dev veth1_p
    ip link set veth1_p up

    ip addr add 10.10.20.1/24 dev veth2_p
    ip link set veth2_p up

    ip netns exec ns1 ip addr add 10.10.10.2/24 dev veth1
    ip netns exec ns1 ip link set veth1 up

    ip netns exec ns2 ip addr add 10.10.20.2/24 dev veth2
    ip netns exec ns2 ip link set veth2 up
  4. In each network namespace, add a route to the other namespace's subnet

    ip netns exec ns1 route add -net 10.10.20.0 netmask 255.255.255.0 gw 10.10.10.1
    ip netns exec ns2 route add -net 10.10.10.0 netmask 255.255.255.0 gw 10.10.20.1

    View the routing tables of the two network namespaces:

    $ ip netns exec ns1 route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 veth1
    10.10.20.0      10.10.10.1      255.255.255.0   UG    0      0        0 veth1

    $ ip netns exec ns2 route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.10.10.0      10.10.20.1      255.255.255.0   UG    0      0        0 veth2
    10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 veth2

    The network topology of this example: ns1 (veth1, 10.10.10.2/24) <-> veth1_p (10.10.10.1/24) on the host, and ns2 (veth2, 10.10.20.2/24) <-> veth2_p (10.10.20.1/24) on the host, with the host forwarding traffic between the two /24 subnets.

  5. Make sure IP forwarding is enabled on the system (net.ipv4.ip_forward must be 1; if it is 0, enable it with sysctl -w net.ipv4.ip_forward=1)

    $ cat /proc/sys/net/ipv4/ip_forward
    1

    Try pinging from one namespace to the other:

    $ ip netns exec ns1 ping 10.10.20.2
    PING 10.10.20.2 (10.10.20.2) 56(84) bytes of data.
    ^C
    --- 10.10.20.2 ping statistics ---
    7 packets transmitted, 0 received, 100% packet loss, time 6133ms


    The ping fails. Checking the iptables firewall shows that the firewall is not enabled, but the default policy of the FORWARD chain in the filter table is DROP, and while the NICs in the two network namespaces ping each other, the dropped-packet counter shown in "policy DROP 17 packets, 1428 bytes" keeps increasing. So the pings fail because the packets are being dropped here.

    $ iptables -L -v -n
    Chain INPUT (policy ACCEPT 6292 packets, 425K bytes)
    pkts bytes target prot opt in out source destination

    Chain FORWARD (policy DROP 17 packets, 1428 bytes)
    pkts bytes target prot opt in out source destination

    Chain OUTPUT (policy ACCEPT 4532 packets, 328K bytes)
    pkts bytes target prot opt in out source destination

    Run the following command to change the default policy of the FORWARD chain in the iptables filter table to ACCEPT (a narrower alternative is sketched after this list):

    iptables -P FORWARD ACCEPT

    Ping again; it now works:

    $ ip netns exec ns1 ping 10.10.20.2
    PING 10.10.20.2 (10.10.20.2) 56(84) bytes of data.
    64 bytes from 10.10.20.2: icmp_seq=1 ttl=63 time=0.084 ms
    64 bytes from 10.10.20.2: icmp_seq=2 ttl=63 time=0.037 ms
    ^C
    --- 10.10.20.2 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1062ms
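
Setting the FORWARD policy to ACCEPT affects all forwarded traffic. A narrower alternative (a sketch using standard iptables rules with the interface names from this example) is to allow forwarding only between the two veth peer interfaces:

# Allow forwarding only between the two peer interfaces of this example,
# leaving the default FORWARD policy untouched
iptables -A FORWARD -i veth1_p -o veth2_p -j ACCEPT
iptables -A FORWARD -i veth2_p -o veth1_p -j ACCEPT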

Viewing NICs of a specific type

To see NICs of a given type on the system, use commands like the following:

$ ip link show type vxlan
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether ce:8c:56:84:e1:7a brd ff:ff:ff:ff:ff:ff

$ ip link show type veth
2350: veth2fd77154@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 5a:02:6c:65:c8:5f brd ff:ff:ff:ff:ff:ff link-netnsid 0
2352: vethfef50370@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue master cni0 state UP mode DEFAULT group default
link/ether 3a:1c:0b:4f:af:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 2

$ ip link show type bridge
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:48:54:ca:75 brd ff:ff:ff:ff:ff:ff
2349: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 26:4e:46:b2:2a:4c brd ff:ff:ff:ff:ff:ff

The following commands show detailed NIC information, including the NIC type:

$ ip -d link show veth1_p
4: veth1_p@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 72:f1:d6:f6:c9:53 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

$ ip -d link show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:f2:1b:dc:ea brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:f2:1b:dc:ea designated_root 8000.2:42:f2:1b:dc:ea root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 48.83 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

$ ip -d add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 192.168.142.10/24 brd 192.168.142.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee7:c027/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:f2:1b:dc:ea brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:f2:1b:dc:ea designated_root 8000.2:42:f2:1b:dc:ea root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 147.42 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: veth1_p@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 72:f1:d6:f6:c9:53 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.10.10.1/24 scope global veth1_p
valid_lft forever preferred_lft forever
inet6 fe80::70f1:d6ff:fef6:c953/64 scope link
valid_lft forever preferred_lft forever
6: veth2_p@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:17:a8:92:57:09 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 0
veth numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.10.20.1/24 scope global veth2_p
valid_lft forever preferred_lft forever
inet6 fe80::b417:a8ff:fe92:5709/64 scope link
valid_lft forever preferred_lft forever

$ ip -d link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:e7:c0:27 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:f2:1b:dc:ea brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:f2:1b:dc:ea designated_root 8000.2:42:f2:1b:dc:ea root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 67.56 vlan_default_pvid 1 vlan_stats_enabled 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
4: veth1_p@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 72:f1:d6:f6:c9:53 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
veth addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
6: veth2_p@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether b6:17:a8:92:57:09 brd ff:ff:ff:ff:ff:ff link-netnsid 1 promiscuity 0
veth addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535