Configure Aggregated Network Links on RHEL 7: Bonding and Teaming

Configure network bonding and teaming on RHEL 7.

Aggregated Network Links

There are two ways to configure aggregated network links on RHEL 7: network bonding and network teaming.

Network bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.

Network teaming implements the same concept in a different way: a small kernel driver handles the fast packet flows, while various user-space applications do everything else in user space. The existing bonding driver is unaffected; network teaming is offered as an alternative and does not replace bonding in RHEL 7.

Before We Begin

We have two virtual machines in our lab, with two network interfaces each. One machine will be configured for network bonding, and one for network teaming. Basic IPv6 configuration for network teaming will be covered.

To avoid problems, we are going to configure networking from the console and not from Secure Shell (SSH). If we do something wrong, at least we won’t lose connectivity.

Caveat for VirtualBox

For those using VirtualBox to configure bonding or teaming, ensure that the network adapters have Promiscuous Mode set to “Allow All”, and then enable promiscuous mode on the links, for example:

# ip link set eth0 promisc on
# ip link set eth1 promisc on

The above works OK for testing, but won’t persist after a reboot. Either add the commands to /etc/rc.local, or write a systemd service to handle it.
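A minimal sketch of such a systemd service, assuming the interfaces are named eth0 and eth1 as above (the unit name promisc.service is our own choice): save this as /etc/systemd/system/promisc.service and run systemctl enable promisc.service.

```ini
# /etc/systemd/system/promisc.service (hypothetical unit name)
[Unit]
Description=Enable promiscuous mode on eth0 and eth1
After=network.target

[Service]
# Type=oneshot allows multiple ExecStart lines, one per interface
Type=oneshot
ExecStart=/sbin/ip link set eth0 promisc on
ExecStart=/sbin/ip link set eth1 promisc on

[Install]
WantedBy=multi-user.target
```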

Configure Network Bonding

Ensure that the bonding module is loaded:

# modprobe bonding

While not required, you may want to check the nmcli examples man page, where example 6 covers adding a bonding master and two slaves:

# man 5 nmcli-examples

A bonding interface can be created with either the nmcli or the nmtui utility. We use nmcli since we find it faster and easier to use.

We are going to delete any existing network configuration to save ourselves some headache:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
enp0s8   00cb8299-feb9-55b6-a378-3fdc720e0bc6  802-3-ethernet  enp0s8
enp0s17  8512e951-6012-c639-73b1-5b4d7b469f7f  802-3-ethernet  enp0s17

We see that we have two network cards with predictable network interface names configured. Delete the existing configuration:

# nmcli c del enp0s8 enp0s17

Create a bonding interface named mybond0 with an active-backup mode:

# nmcli c add type bond ifname mybond0 con-name mybond0 mode active-backup

Add two slaves to the mybond0 interface:

# nmcli c add type bond-slave ifname enp0s17 con-name slave1 master mybond0
# nmcli c add type bond-slave ifname enp0s8 con-name slave2 master mybond0

Now, if we don’t specify any IP configuration, the server will get its IP address and gateway through DHCP by default.

In the lab that we use today, we have our gateway on 10.8.8.2 and the DNS (FreeIPA) server on 10.8.8.70, so we want to reflect these details in the config.

If on RHEL 7.0, do the following:

# nmcli con mod mybond0 ipv4.addresses "10.8.8.71/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local

If on RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli con mod mybond0 ipv4.addresses 10.8.8.71/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual

In order to bring up a bond, the slaves must be brought up first. Note that starting the master interface does not automatically start the slave interfaces. However, starting a slave interface always starts the master interface, and stopping the master interface also stops the slave interfaces.

# nmcli c up slave1; nmcli c up slave2
# nmcli c up mybond0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave2   fd9b7775-044a-47d2-8745-0a326ebc4df1  802-3-ethernet  enp0s17
slave1   12a7ee7f-070c-4366-b80a-a06b6fcbd8fc  802-3-ethernet  enp0s8
mybond0  fd19f953-1aaa-4f32-8246-58a2c0e60514  bond            mybond0

Check bonding status:

# cat /proc/net/bonding/mybond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp0s17
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s17
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:71:00
Slave queue ID: 0

Slave Interface: enp0s8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:81:00
Slave queue ID: 0

Let us see the routing table:

# ip ro
default via 10.8.8.2 dev mybond0  proto static  metric 1024
10.8.8.0/24 dev mybond0  proto kernel  scope link  src 10.8.8.71

Ensure the DNS settings were set up correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

This is purely for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-mybond0
DEVICE=mybond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=mybond0
UUID=12a7ee7f-070c-4366-b80a-a06b6fcbd8fc
ONBOOT=yes
IPADDR0=10.8.8.71
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
BONDING_OPTS=mode=active-backup
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

We can test bonding by disabling and enabling the slave interfaces; the network connection should not be dropped.
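To make the failover visible, we can watch which slave is currently active while taking slaves down and up. A minimal sketch of the parsing, run here against a sample capture rather than the live /proc/net/bonding/mybond0 file:

```shell
# Sketch: extract the currently active slave from bonding status output.
# On a live system, use instead: status=$(cat /proc/net/bonding/mybond0)
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: enp0s17
MII Status: up'

# Split each line on ": " and print the value of the active-slave line.
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"
```

After taking the active slave down (here nmcli c down slave1), re-reading the file should show the other interface as the currently active slave.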

Configure Network Teaming

We’ll need the teamd package:

# yum install -y teamd

As with bonding, while not required, you may want to check the nmcli examples man page, where example 7 covers adding a team master and two slaves.

# man 5 nmcli-examples

Now, to create a team master, we can either try to remember some JSON config, or use the teamd example files that are available on a RHEL 7 server.

Copy one of the example files, open it for editing, and keep only the “runner” part, deleting everything else:

# cp /usr/share/doc/teamd-1.9/example_configs/loadbalance_1.conf /root/

We use the loadbalance runner; however, feel free to pick activebackup or any other. Make sure there is no trailing comma “,” left at the end of the runner line, otherwise you may get a connection activation failure.

# cat /root/loadbalance_1.conf
{
        "runner":               {"name": "loadbalance"}
}
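Since the trailing comma mentioned above is an easy mistake to make, it may be worth validating the JSON before handing it to nmcli. A minimal sketch using Python’s json.tool module (we call python3 here; on stock RHEL 7, the unversioned python binary works the same way):

```shell
# Recreate the config and check that it parses as valid JSON;
# a trailing comma after the runner line would make this step fail.
cat > /tmp/loadbalance_1.conf <<'EOF'
{
        "runner":               {"name": "loadbalance"}
}
EOF
python3 -m json.tool /tmp/loadbalance_1.conf && echo "valid JSON"
```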

Create a load balanced teaming interface named myteam0:

# nmcli c add type team con-name myteam0 ifname myteam0 config /root/loadbalance_1.conf

As with bonding, we have our gateway on 10.8.8.2 and the DNS (FreeIPA) server on 10.8.8.70, therefore we want to reflect these details in the config.

We also want to assign a unique local IPv6 address fc00::10:8:8:72/7 to the teamed interface. IPv6 on a teamed interface requires some extra kernel configuration to handle duplicate address detection, which we also cover in this article.

If on RHEL 7.0, do the following:

# nmcli c mod myteam0 ipv4.addresses "10.8.8.72/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

If on RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli c mod myteam0 ipv4.addresses 10.8.8.72/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

Add two network devices to the myteam0 interface:

# nmcli c add type team-slave ifname enp0s8 con-name slave1 master myteam0
# nmcli c add type team-slave ifname enp0s17 con-name slave2 master myteam0

Note that starting the master interface does not automatically start the port interfaces. However, starting a port interface always starts the master interface, and stopping the master interface also stops the port interfaces.

# nmcli c up myteam0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave1   c5551395-06d4-482b-9cb1-b73decf6f68c  802-3-ethernet  enp0s8
myteam0  05880fe0-38a8-43ca-83c0-e420e84dde9a  team            myteam0
slave2   a527b772-6cec-4c8a-bc7e-f104433c8eeb  802-3-ethernet  enp0s17
# teamdctl myteam0 state
setup:
  runner: loadbalance
ports:
  enp0s17
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  enp0s8
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
# teamnl myteam0 ports
 3: enp0s17: up 1000Mbit FD
 2: enp0s8: up 1000Mbit FD

Check the routing tables for IPv4 and IPv6:

# ip ro
default via 10.8.8.2 dev myteam0  proto static  metric 1024
10.8.8.0/24 dev myteam0  proto kernel  scope link  src 10.8.8.72
# ip -6 ro|grep -v error
fc00::/7 dev myteam0  proto kernel  metric 256
fe80::/64 dev myteam0  proto kernel  metric 256

Ensure the DNS settings were set up correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

The ifcfg file configuration, for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-myteam0
DEVICE=myteam0
TEAM_CONFIG="{  \"runner\":             {\"name\": \"loadbalance\"} }"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=myteam0
UUID=05880fe0-38a8-43ca-83c0-e420e84dde9a
ONBOOT=yes
IPADDR0=10.8.8.72
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

As with bonding, we can test teaming by disabling and enabling the slave interfaces; the network connection should not be dropped.

IPv6 and Duplicate Address Detection (DAD)

You may notice that after a server reboot, the IPv6 addresses go into the dadfailed state:

# ip ad show myteam0
8: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

You may be unable to ping6 IPv6 addresses. To fix this, check the kernel configuration:

# sysctl -a | grep accept_dad
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.enp0s17.accept_dad = 1
net.ipv6.conf.enp0s8.accept_dad = 1
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.myteam0.accept_dad = 1
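As a side note, a quick way to see which entries still have DAD enabled is to filter output of this form. The sketch below parses a sample capture; on a live system you would pipe sysctl -a | grep accept_dad instead:

```shell
# Sketch: print the interface part of each entry where accept_dad is 1.
sample='net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.myteam0.accept_dad = 0
net.ipv6.conf.enp0s8.accept_dad = 1'

# Split on dots and spaces: field 4 is the interface, the last field the value.
enabled=$(printf '%s\n' "$sample" | awk -F'[. ]' '$NF == 1 {print $4}')
echo "$enabled"
```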

Disable DAD on the teamed interface:

# sysctl -w net.ipv6.conf.myteam0.accept_dad=0

The meaning of accept_dad is as follows:

accept_dad - INTEGER
    Whether to accept DAD (Duplicate Address Detection).
        0: Disable DAD
        1: Enable DAD (default)
        2: Enable DAD, and disable IPv6 operation if MAC-based duplicate
            link-local address has been found.

To make the change persistent, add the following line to a new file /etc/sysctl.d/accept_dad.conf:

net.ipv6.conf.myteam0.accept_dad=0

Restart the teamed interface:

# nmcli c down myteam0; nmcli c up slave1; nmcli c up slave2

Check IPv6 status:

# ip ad show myteam0
4: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link
       valid_lft forever preferred_lft forever

15 thoughts on “Configure Aggregated Network Links on RHEL 7: Bonding and Teaming”

  1. Question: if I perform a con down of slave1 and slave2, I can still ping and access the IP address I set on myteam0? I would have thought that the team would be unreachable with both slaves down? How does this work?

    Basically if the team is still up even with both interfaces down, how can I actually really test that it’s working properly, short of disabling the network interfaces on my VM? Is this normal?

    • You likely have something misconfigured. When I take both slave interfaces down, I lose network connection:

      # teamdctl myteam0 state
      setup:
        runner: activebackup
      runner:
        active port:
      # host 8.8.8.8
      ;; connection timed out; no servers could be reached
  2. Hi Tomas,

    This objective is a bit ambiguous for me, and what you have shared works perfectly.

    Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems

    Does it mean that 2 different machines as below

    Machine A with 2 network interfaces(teamed together)
    Machine B with 2 network interfaces(teamed together)

    then machine A communicates with machine B using their respective IPs

    or something else I don’t understand entirely

    • Yes, the above is the way that I understand it.

      An example would be a web server with a teamed network link (two interfaces) and a database server with a teamed network link (two interfaces), so they have an aggregated (redundant) network link between them.

  3. Hi Tomas,
    For your reply to Martin on this subject:
    Machine A with 2 network interfaces(teamed together)
    Machine B with 2 network interfaces(teamed together)
    then machine A communicates with machine B using their respective IPs

    Do we also need to add any bridge over the network after the team configuration? I see it in some other examples and am very confused about it! Thanks

    • It depends, does the task ask you to add a bridge? If it does, then you do need one, if it doesn’t, then you don’t.

      Bridge is not required to get a teamed interface to work.

  4. I’m trying to practice this lab on my VMware Player with RHEL 7.2 and 2 NICs and I can’t make it work, the team interface won’t come up; I tried bonding with the same results. Is there anything I need to set up in the network config of VMware?

    • Hi Glenn, I tried on VMware ESXi 5.1 (I don’t use VMware player), and it works fine. You need to have two network interfaces attached.
