Configure Aggregated Network Links on RHEL 7: Bonding and Teaming

Configure network bonding and teaming on RHEL 7.

Aggregated Network Links

There are two ways to configure aggregated network links on RHEL 7: via bonding or via teaming.

Network bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.

Network teaming implements the concept in a different way: a small kernel driver handles the fast path of packet flows, while various user-space applications do everything else in user space. The existing bonding driver is unaffected; network teaming is offered as an alternative and does not replace bonding in RHEL 7.

Before We Begin

We have two virtual machines in our lab, with two network interfaces each. One machine will be configured for network bonding, and one for network teaming. Basic IPv6 configuration for network teaming will be covered.

To avoid problems, we are going to configure networking from the console and not from Secure Shell (SSH). If we do something wrong, at least we won’t lose connectivity.

Caveat for VirtualBox

For those using VirtualBox to configure bonding or teaming, ensure that the network adapters have promiscuous mode set to “Allow All”, and then enable promiscuous mode on the links, for example:

# ip link set eth0 promisc on
# ip link set eth1 promisc on

The above works fine for testing, but won’t persist after a reboot. Either add the commands to /etc/rc.local, or write a systemd service to handle it.
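
For example, a minimal one-shot unit could look like the following (a sketch only; the unit name and interface names are placeholders, adjust them for your system). Save it as /etc/systemd/system/promisc.service and run systemctl enable promisc:

[Unit]
Description=Enable promiscuous mode on the aggregated links
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set eth0 promisc on
ExecStart=/usr/sbin/ip link set eth1 promisc on

[Install]
WantedBy=multi-user.target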

Configure Network Bonding

Ensure that the bonding module is loaded:

# modprobe bonding
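
Verify that the module is loaded:

# lsmod | grep bonding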

While not required, feel free to check the nmcli examples man page, where example 6 covers adding a bonding master and two slaves. You may find it useful.

# man 5 nmcli-examples

A bond interface can be created with either the nmcli or the nmtui utilities. We use nmcli since we find it faster and easier to use.

We are going to delete any existing network configuration to save ourselves some headache:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
enp0s8   00cb8299-feb9-55b6-a378-3fdc720e0bc6  802-3-ethernet  enp0s8
enp0s17  8512e951-6012-c639-73b1-5b4d7b469f7f  802-3-ethernet  enp0s17

We see that we have two network cards with predictable network interface names configured. Delete the existing configuration:

# nmcli c del enp0s8 enp0s17

Create a bonding interface named mybond0 in active-backup mode:

# nmcli c add type bond ifname mybond0 con-name mybond0 mode active-backup
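
If you want to double-check the mode, query the connection profile (optional; the exact output formatting may differ between RHEL 7 minor releases):

# nmcli c show mybond0 | grep bond.options
bond.options:                           mode=active-backup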

Add two slaves to the mybond0 interface:

# nmcli c add type bond-slave ifname enp0s17 con-name slave1 master mybond0
# nmcli c add type bond-slave ifname enp0s8 con-name slave2 master mybond0

Now, if we don’t specify any IP configuration, the server will get its IP address and gateway through DHCP by default.

In the lab that we use today, we have our gateway on 10.8.8.2, and DNS (FreeIPA) server on 10.8.8.70, so we want to reflect these details in the config.

On RHEL 7.0, do the following:

# nmcli con mod mybond0 ipv4.addresses "10.8.8.71/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local

On RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli con mod mybond0 ipv4.addresses 10.8.8.71/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual

In order to bring up a bond, the slaves must be brought up first. Note that starting the master interface does not automatically start the slave interfaces. However, starting a slave interface always starts the master interface, and stopping the master interface also stops the slave interfaces.

# nmcli c up slave1; nmcli c up slave2
# nmcli c up mybond0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave2   fd9b7775-044a-47d2-8745-0a326ebc4df1  802-3-ethernet  enp0s17
slave1   12a7ee7f-070c-4366-b80a-a06b6fcbd8fc  802-3-ethernet  enp0s8
mybond0  fd19f953-1aaa-4f32-8246-58a2c0e60514  bond            mybond0

Check bonding status:

# cat /proc/net/bonding/mybond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp0s17
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s17
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:71:00
Slave queue ID: 0

Slave Interface: enp0s8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:81:00
Slave queue ID: 0

Let us see the routing table:

# ip ro
default via 10.8.8.2 dev mybond0  proto static  metric 1024
10.8.8.0/24 dev mybond0  proto kernel  scope link  src 10.8.8.71

Ensure the DNS settings were set up correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

This is purely for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-mybond0
DEVICE=mybond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=mybond0
UUID=fd19f953-1aaa-4f32-8246-58a2c0e60514
ONBOOT=yes
IPADDR0=10.8.8.71
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
BONDING_OPTS=mode=active-backup
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

We can test bonding by disabling and enabling the slave interfaces; the network connection should not be dropped.
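
For example, start a continuous ping to the gateway, take the currently active slave down, and watch the bond fail over (a sketch based on our lab above, where enp0s17 was the active slave; yours may be the other interface):

# ping 10.8.8.2 > /dev/null &
# nmcli c down slave2
# grep "Currently Active Slave" /proc/net/bonding/mybond0
Currently Active Slave: enp0s8
# nmcli c up slave2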

Configure Network Teaming

We’ll need the teamd package:

# yum install -y teamd
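
The package also ships the example runner configs that we use below; confirm the installed version and locate them if needed:

# rpm -q teamd
# rpm -ql teamd | grep example_configs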

As with bonding, while not required, you may want to check the nmcli examples man page, where example 7 covers adding a team master and two slaves.

# man 5 nmcli-examples

Now, to create a team master, we can try to remember some JSON config, or we can use the teamd example files that are available on a RHEL 7 server.

Copy one of the example files, open it for editing, and keep only the “runner” part, deleting everything else:

# cp /usr/share/doc/teamd-1.9/example_configs/loadbalance_1.conf /root/

We use the loadbalance runner; however, feel free to pick activebackup or any other (see the sketch after the list below).

The following runners are available at the time of writing:

  1. broadcast (data is transmitted over all ports),
  2. round-robin (data is transmitted over all ports in turn),
  3. active-backup (one port or link is used while others are kept as a backup),
  4. loadbalance (with active Tx load balancing and BPF-based Tx port selectors),
  5. lacp (implements the 802.3ad Link Aggregation Control Protocol).
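
For reference, an equivalent activebackup config differs only in the runner name (a sketch matching the format of the example file shown below):

{
        "runner":               {"name": "activebackup"}
}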

Make sure there is no comma “,” left at the end of the runner line; otherwise you may get a connection activation failure.

# cat /root/loadbalance_1.conf
{
        "runner":               {"name": "loadbalance"}
}

Create a load balanced teaming interface named myteam0:

# nmcli c add type team con-name myteam0 ifname myteam0 config /root/loadbalance_1.conf

As with bonding, we have our gateway on 10.8.8.2 and the DNS (FreeIPA) server on 10.8.8.70, so we want to reflect these details in the config.

We also want to assign a unique local IPv6 address fc00::10:8:8:72/7 to the teamed interface. IPv6 on a teamed interface requires some extra kernel configuration to handle duplicate address detection, which we cover later in this article.

On RHEL 7.0, do the following:

# nmcli c mod myteam0 ipv4.addresses "10.8.8.72/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

On RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli c mod myteam0 ipv4.addresses 10.8.8.72/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

Add two network devices to the myteam0 interface:

# nmcli c add type team-slave ifname enp0s8 con-name slave1 master myteam0
# nmcli c add type team-slave ifname enp0s17 con-name slave2 master myteam0

Note that starting the master interface does not automatically start the port interfaces. However, starting a port interface always starts the master interface, and stopping the master interface also stops the port interfaces.

# nmcli c up myteam0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave1   c5551395-06d4-482b-9cb1-b73decf6f68c  802-3-ethernet  enp0s8
myteam0  05880fe0-38a8-43ca-83c0-e420e84dde9a  team            myteam0
slave2   a527b772-6cec-4c8a-bc7e-f104433c8eeb  802-3-ethernet  enp0s17
# teamdctl myteam0 state
setup:
  runner: loadbalance
ports:
  enp0s17
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  enp0s8
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
# teamnl myteam0 ports
 3: enp0s17: up 1000Mbit FD
 2: enp0s8: up 1000Mbit FD

Check the routing tables for IPv4 and IPv6:

# ip ro
default via 10.8.8.2 dev myteam0  proto static  metric 1024
10.8.8.0/24 dev myteam0  proto kernel  scope link  src 10.8.8.72
# ip -6 ro|grep -v error
fc00::/7 dev myteam0  proto kernel  metric 256
fe80::/64 dev myteam0  proto kernel  metric 256

Ensure the DNS settings were set up correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

The ifcfg file configuration, for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-myteam0
DEVICE=myteam0
TEAM_CONFIG="{  \"runner\":             {\"name\": \"loadbalance\"} }"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=myteam0
UUID=05880fe0-38a8-43ca-83c0-e420e84dde9a
ONBOOT=yes
IPADDR0=10.8.8.72
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

As with bonding, we can test teaming by disabling and enabling the slave interfaces; the network connection should not be dropped.
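
For example, with a continuous ping running, take one port down and confirm the team stays up on the remaining port (a sketch based on our lab output above):

# ping 10.8.8.2 > /dev/null &
# nmcli c down slave1
# teamnl myteam0 ports
 3: enp0s17: up 1000Mbit FD
# nmcli c up slave1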

IPv6 and Duplicate Address Detection (DAD)

You may notice that after a server reboot, the IPv6 addresses go into the dadfailed state:

# ip ad show myteam0
8: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

You may be unable to ping6 IPv6 addresses. To fix this, check the kernel configuration:

# sysctl -a | grep accept_dad
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.enp0s17.accept_dad = 1
net.ipv6.conf.enp0s8.accept_dad = 1
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.myteam0.accept_dad = 1

Disable DAD on the teamed interface:

# sysctl -w net.ipv6.conf.myteam0.accept_dad=0

The meaning of accept_dad is as follows:

accept_dad - INTEGER
    Whether to accept DAD (Duplicate Address Detection).
        0: Disable DAD
        1: Enable DAD (default)
        2: Enable DAD, and disable IPv6 operation if MAC-based duplicate
            link-local address has been found.

To make the change persistent, add the following line to a new file, /etc/sysctl.d/accept_dad.conf:

net.ipv6.conf.myteam0.accept_dad=0
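
The file can be loaded without a reboot, and the output confirms the applied value:

# sysctl -p /etc/sysctl.d/accept_dad.conf
net.ipv6.conf.myteam0.accept_dad = 0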

Restart the teamed interface:

# nmcli c down myteam0; nmcli c up slave1; nmcli c up slave2

Check IPv6 status:

# ip ad show myteam0
4: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link
       valid_lft forever preferred_lft forever
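
As a quick sanity check, the unique local address should now respond to ping6 (run locally here; substitute the address of a second teamed machine to test across the network):

# ping6 -c 3 fc00::10:8:8:72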

78 thoughts on “Configure Aggregated Network Links on RHEL 7: Bonding and Teaming”

  1. Question: if I perform a con down of slave1 and slave2, I can still ping and access the IP address I set on myteam0. I would have thought that the team would be unreachable with both slaves down? How does this work?

    Basically if the team is still up even with both interfaces down, how can I actually really test that it’s working properly, short of disabling the network interfaces on my VM? Is this normal?

    • You likely have something misconfigured. When I take both slave interfaces down, I lose network connection:

      # teamdctl myteam0 state
      setup:
        runner: activebackup
      runner:
        active port:
      # host 8.8.8.8
      ;; connection timed out; no servers could be reached
    • Hi Tomas,
      Facing a similar problem: even though the output of ‘teamdctl team0 state’ is similar to the one you provided, the team0 connection still holds the IP even after both slaves are down, and it is ping-able on the network.

    • Did you ever figure this out?

      CentOS Linux release 7.5.1804 in use here, and even though both of my backend ports are offline, I can still ping the team interface IP. If I bring down the team interface itself, it is no longer ping-able.
      Is this a sign of a misconfiguration?

      # teamdctl team01 state
      setup:
        runner: activebackup
      runner:
        active port:
  2. Hi Tomas

    This objective is a bit ambiguous for me, and what you have shared works perfectly:

    Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems

    Does it mean that 2 different machines as below

    Machine A with 2 network interfaces(teamed together)
    Machine B with 2 network interfaces(teamed together)

    then machine A communicates with machine B using their respective IPs

    or something else I don’t understand entirely.

    • Yes, the above is the way that I understand it.

      An example would be a web server with a teamed network link (two interfaces) and a database server with a teamed network link (two interfaces), so they have an aggregated (redundant) network link between them.

  3. Hi Tomas,
    For your reply to Martin on this subject:
    Machine A with 2 network interfaces(teamed together)
    Machine B with 2 network interfaces(teamed together)
    then machine A communicates with machine B using their respective IPs

    Do we also need to add any bridge over the network after the team configuration? I see it in some other examples and am very confused about it! Thanks

    • It depends, does the task ask you to add a bridge? If it does, then you do need one, if it doesn’t, then you don’t.

      Bridge is not required to get a teamed interface to work.

  4. I’m trying to practice this lab on VMware Player installed with RHEL 7.2 with 2 NICs and I can’t make it work; the team interface won’t come up. I tried bonding, same results. Is there anything I need to set up in the network config of VMware?

    • Hi Glenn, I tried on VMware ESXi 5.1 (I don’t use VMware player), and it works fine. You need to have two network interfaces attached.

  5. Hi Tomas,

    Does the RHCE exam blueprint want us to use teamd or bonding on the exam? Or is any method applicable? I think bonding is straightforward and I’m curious to know if it’s cool to use that.

    • RHCE exam objectives require you to know how to use network teaming or bonding to configure aggregated network links. You need to know both, but can pick and use the one that you like, as they achieve the same goal really.

  6. Hi Tomas,
    can we assign another IP address to an interface that is already a member of a network team?

    • I’m sorry, I don’t quite understand. Do you want to assign another IP to a teamed interface, or to one of its slave interfaces?

  7. Hi Tomas,
    I want to assign an IP address to one of the slave interfaces, is that possible? (on an interface that is already a member of a teaming interface)
    Or can we assign an IPv4 address and an IPv6 address to the same teaming interface?

    • I don’t think that assigning an IP to slave interfaces is going to work, since the loadbalance runner has to customise ARP responses sent to each peer on the Ethernet domain, such that the hosts are spread across the slave interfaces. Give it a go if you want, but I’d be surprised if that worked without unexpected consequences.

      Regarding IPv4 together with IPv6 on a teamed interface, yes, you can surely do that. There are examples in this very blog post in case you’re interested.

  8. Hi Tomas,

    If I shut down one of my slaves (nmcli con down eth0), then the second server loses connection. I can see the active port changing, but the connection is still lost. If I restore the interface and shut down the other slave interface, then nothing happens. My ping just keeps running.
    So far I have tried the activebackup and roundrobin modes.
    Did you encounter this behaviour?

    • No, I didn’t, I’m afraid.

      On second thought, I may have had something similar on VirtualBox, so I had to enable promiscuous mode. But it wasn’t the case on KVM.

    • Hi Tomas,
      It’s strange. I tried multiple guides. Maybe it’s because of VMware Workstation with NAT interfaces..
      Could you look at my config below?
      Thanks in advance.

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
      DEVICE=team0
      DEVICETYPE=Team
      BOOTPROTO=none
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0
      UUID=a950a224-9cb0-48ed-90f4-4dc019aa665b
      ONBOOT=yes
      IPADDR0=192.168.4.210
      PREFIX0=24
      GATEWAY0=192.168.4.1
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=eth0
      UUID=6648eb26-c793-44fc-8685-2b5cbaadfac5
      DEVICE=eth0
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort

      [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=eth1
      UUID=dd2231b3-3530-4d6c-a8ff-6860d003cc0a
      DEVICE=eno33554992
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort

    • You have the wrong device reference in the file ifcfg-eth1. It should be eth1, not eno33554992.

      Not sure if it helps you much, but I use teaming on VMware (ESXi) VMs and I didn’t have this issue.

  9. # nmcli c
     NAME     UUID                                  TYPE            DEVICE
     slave1   c5551395-06d4-482b-9cb1-b73decf6f68c  802-3-ethernet  enp0s8
     myteam0  05880fe0-38a8-43ca-83c0-e420e84dde9a  team            myteam0
     slave2   a527b772-6cec-4c8a-bc7e-f104433c8eeb  802-3-ethernet  enp0s17

    After we set up the teaming, how do we test that it’s working?
    e.g.
    from any machine we run a ping command to the team IP address, in this case:
    ping 10.8.8.72 (continuous ping)
    and we do the following testing...
    The team is always up.
    case 1) slave1 down, slave2 up: what should be the behaviour for ping responses?
    case 2) slave1 up, slave2 down: what should be the behaviour for ping responses?

    Thanks to add these details in this tutorial.

    • Hi, I have mentioned this in the blog post: you can test teaming by disabling and enabling slave interfaces, and the network connection should not be dropped. This is exactly what you asked for. To answer your question, in both cases (case 1 and case 2) ping responses should continue and you should have zero packet loss.

  10. Tomas, I add my compliments for your excellent tutorials. In my case, I have successfully added Teaming to RHEL 7.4 with 4 NICs. (I did have to de-select “load on startup” for the physical devices so that the Slaves would startup instead and activate the Team effectively. Is that normal? Should the “priority” settings be used instead?) More importantly, even though my Team shows UP and LOWER_UP in the ‘ip a’ command output, I cannot PING the Team IP (manually config’d) from any other box on the local network (another 10.201.* box on a /16 network). I can ping from the same box, but no other. No DNS, just using the IP address. How do I “publish” that new IP Address? Does the LACP runner have any side-effects if the network lacp is not yet activated?

    • Thank you Tim! The only runners that I used were loadbalance and active-backup, I never configured lacp I’m afraid, it’s hard to advise therefore.

  11. Well, apparently the LACP Runner does not like it when LACP is not active on the network. (I was trying to set it up in advance, since the Network Team also did not want to enable it until everything was ready to use it.) I replaced the JSON for the Team with a “loadbalance” runner config by way of the nmtui command, restarted the network, and the Team IP was immediately visible to another Server on the local network.

  12. Hello Tomas,

    During the exam I suppose that we will be asked to set up teaming and also IPv6 on interfaces.

    If this is the case, we set up teaming on the interface, and both interfaces are part of the team now.
    NOW
    There is another question to set up IPv6 on both the VMs during the exam.

    Do we need to configure IPv6 on the teaming interface (my assumption)?
    or
    do we have any method to assign IPv6 addresses separately once teaming is in place? (or is it possible to assign IPv6 separately on an interface once teaming is in place?)

    I hope I have clarified the question .

    • All the details including what IPs and which interfaces they need to be assigned to will be provided during the exam. You won’t have to assume anything.

      You can assign multiple IPs to a single interface.

  13. Just to rephrase the question:
    once both interfaces are assigned to teamX, I think we can only give an IP to the team that is bound to these interfaces to make it work..

    we can’t give an IP to an interface separately once it’s bound to the team?

    or is there any way to assign an IP to the interface once it’s attached to the team, and still have that new IP pingable?
    thanks

    • I’m going to reuse the response I provided to a person who asked a similar question some time ago: I don’t think that assigning an IP to slave interfaces is going to work, since the loadbalance runner has to customise ARP responses sent to each peer on the Ethernet domain, such that the hosts are spread across the slave interfaces. You can always give it a go and test it yourself; please let us know if that worked for you.

  14. Hi Tomas,
    Thanks for this great resource, I’m on your page every day at the mo gearing up for my RHCE.
    I’ve noticed that here on network teaming the command
    -> # man 5 nmcli-examples
    doesn’t work on RHEL 7.3 and 7.4 (haven’t tested on RHEL 7.0, 7.1, 7.2).
    Instead it’s
    -> # man nmcli-examples

    • Hi Jason,

      Thanks! The key to acquiring proficiency in any task is repetition (with certain obvious exceptions), so good luck with your studies.

      On RHEL 7.0 the nmcli-examples man page is under section 5 (File Formats), but I see that it’s been moved to section 7 in the later versions.

    • Just wondering, when I’ve set up a teamed interface (I’ve got my lab environment in VirtualBox with NICs in promiscuous mode)
      and I ping my IPA server, I get DUP! for every other ping, but not with the bonding setup.
      Do you happen to know why?

    • Try changing that to active-backup, it should fix it.

      I’m not familiar with your network set up, but it may be something to do with a switch updating its MAC forwarding tables.

  15. Your RHCE tutorials are great. Thanks for all your effort.

    I am experiencing an issue related to link aggregation, where connections are using the IP address associated with the bond, instead of the primary interface, eth0. This results in connectivity/authentication issues when, for example, attempting to mount an NFS share that has been exported to the IP address associated with eth0, obtaining a Kerberos TGT, or restricting access to certain IP addresses in Apache, to name a few. How do you ensure that eth0 is used and not the aggregated interface? Output from ip route indicates that eth0 is the default route, yet the bond is clearly making connections to services intended for the eth0 IP address. Any help would be greatly appreciated. I am totally stumped.

  16. Hey Tomas,

    Great tutorials!

    Your comment on the promisc mode is exactly what I was looking for. I have looked everywhere for what I was doing wrong. Btw is there a link to why promisc mode should be on for teaming to work?

    Thanks.

    • Thank you!

      It was a while ago I used VirtualBox, I cannot remember exactly, but I think I simply learnt this bit the hard way.

  17. Hey. On the exam, after creating an aggregated interface, I was not able to ping the neighboring system through the created interface.
    I checked the settings several times: network addresses and interface availability were enabled and in the up state.
    Should the aggregated interfaces on the two systems ping each other during the exam?
    How can one check the correctness of the aggregated interface when a ping of the neighboring system is unavailable?

    • Ping should normally work if firewalls are configured to allow ICMP traffic and net.ipv4.icmp_echo_ignore_all isn’t set to 1.

      I rarely rely on ping nowadays and almost always check with telnet or netcat. There is usually at least one TCP port open which you can use to test network connections.

  18. Hello,
    I am trying to configure a bonding interface on Red Hat Linux 7.5 64-bit on an Oracle VirtualBox virtual machine for an installation of Oracle 12c. I am following this link:
    https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_oracle_database_12c_release_2_on_red_hat_enterprise_linux_7/index#public_network_configuration
    My gateway is 10.0.2.2 and I am able to ping it, but if I create /etc/sysconfig/network-scripts/ifcfg-bond0 having:


    IPADDR=10.0.1.1
    GATEWAY=10.0.2.2


    and other parameters, then I am not able to ping the gateway. Since I am using a virtual machine, I have only one option of Bridged Networking as the network adapter to connect to the internet. Kindly tell me if I need to submit more info.

  19. Hi Tomas, I appreciate this helpful resource!
    I hope my question isn’t a repeat.
    After the team configuration I have to con up the team0 interface. The second interface doesn’t start because it is reserved by an active connection, eth0 for example. Do I have to stop the eth0 connection in order to bring up the second interface for test purposes, and then bring eth0 back online? Do I have to configure the gw and dns if the team stays as the active network?
    Sorry for the duplicate, just in case :)

  20. Thanks for the reply. But for the team to work, both of the slaves should be up. Are there 3 NICs provided during an exam?

    • It depends on the runner that you use. With active-backup, one link is used while another is kept as a backup. With loadbalance, both are used with load balancing.

  21. Hello Tomas,

    It seems only you can help!

    I’m trying to configure Teaming with two NICs. I’m running CentOS 7.5.1804 on VMware Workstation 14.1.2.

    Steps I have done:
    1) added team interface with below command

    [root@client1 ~]# nmcli c a type team ifname myteam1 con-name myteam1 config my.conf
    Connection 'myteam1' (6d95c6a4-cde9-437f-81e2-6c2cc2c0064e) successfully added.

    Content of my.conf file:

    [root@client1 ~]# cat my.conf
    {
    "device": "myteam1",
    "runner": {"name": "activebackup"}
    }

    2) I have added 2 slave interfaces:

    [root@client1 ~]# nmcli c a type ethernet ifname ens33 con-name myteam1_slave1 master myteam1
    Connection 'myteam1_slave1' (dd1d1bde-ff15-4d53-a174-96225e95f431) successfully added.
    [root@client1 ~]# nmcli c a type ethernet ifname ens37 con-name myteam1_slave2 master myteam1
    Connection 'myteam1_slave2' (bcd1319c-cb9b-4290-87ab-ef35d2a127b7) successfully added.

    3) Restarted network and NetworkManager services.
    4) Checked my team interface state:

    [root@client1 ~]# teamdctl myteam1 state
    setup:
      runner: activebackup
    ports:
      ens33
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
            down count: 0
      ens37
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
            down count: 0
    runner:
      active port: ens33

    When I check the connection, everything is fine. But when I take down myteam1_slave1 (ens33), I lose connectivity despite the fact that the runner’s active port switches to ens37.

    Please assist!

    Below information maybe helpful for you.

    [root@client1 ~]# nmcli con down myteam1_slave1
    Connection 'myteam1_slave1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
    [root@client1 ~]#
    [root@client1 ~]# teamdctl myteam1 state
    setup:
      runner: activebackup
    ports:
      ens37
        link watches:
          link summary: up
          instance[link_watch_0]:
            name: ethtool
            link: up
            down count: 0
    runner:
      active port: ens37
    [root@client1 ~]#
    [root@client1 ~]#
    [root@client1 ~]# ping google.com
    ^C
    [root@client1 ~]# nmcli c s
    NAME            UUID                                  TYPE      DEVICE
    myteam1         6d95c6a4-cde9-437f-81e2-6c2cc2c0064e  team      myteam1
    myteam1_slave2  bcd1319c-cb9b-4290-87ab-ef35d2a127b7  ethernet  ens37
    virbr0          cf294a4d-b615-4cb3-b191-2c806fd4eb9b  bridge    virbr0
    myteam1_slave1  dd1d1bde-ff15-4d53-a174-96225e95f431  ethernet  --
    [root@client1 ~]# nmcli d s
    DEVICE      TYPE      STATE         CONNECTION
    myteam1     team      connected     myteam1
    virbr0      bridge    connected     virbr0
    ens37       ethernet  connected     myteam1_slave2
    ens33       ethernet  disconnected  --
    lo          loopback  unmanaged     --
    virbr0-nic  tun       unmanaged     --
    [root@client1 ~]#

    [root@client1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-myteam1
    TEAM_CONFIG=$'{\n\t\"device\":\t\"myteam1\",\n\t\"runner\":\t{\"name\": \"activebackup\"}\n}\n'
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=dhcp
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=myteam1
    UUID=6d95c6a4-cde9-437f-81e2-6c2cc2c0064e
    DEVICE=myteam1
    ONBOOT=yes
    DEVICETYPE=Team
    [root@client1 ~]#

    [root@client1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-myteam1_slave1
    NAME=myteam1_slave1
    UUID=dd1d1bde-ff15-4d53-a174-96225e95f431
    DEVICE=ens33
    ONBOOT=yes
    TEAM_MASTER=myteam1
    DEVICETYPE=TeamPort
    [root@client1 ~]#

    [root@client1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-myteam1_slave2
    NAME=myteam1_slave2
    UUID=bcd1319c-cb9b-4290-87ab-ef35d2a127b7
    DEVICE=ens37
    ONBOOT=yes
    TEAM_MASTER=myteam1
    DEVICETYPE=TeamPort
    [root@client1 ~]#

    • What’s your VMware network config like? Did you set the network adapters into promiscuous mode? When you bring the slave interface up, does it restore the network connection? Can you ping the gateway after you take the slave interface down? Also, get Wireshark on that server and run a packet trace to see where packets get routed/dropped.

  22. Hello Tomas!
    Thanks for reply.
    What’s your VMware network config like? I have 2 bridged NICs.
    Did you set the network adapters into promiscuous mode? I set both of them, but it did not help.
    When you bring the slave interface up, does it restore the network connection? No.
    Can you ping the gateway after you take the slave interface down? No.
    Also, get Wireshark on that server and run a packet trace to see where packets get routed/dropped. Can’t install Wireshark, I’m running minimal CentOS.

    I have deleted everything and installed from scratch, and get the same issue.
    When I add the 2nd slave to the team I get this VMware notification: “mac address of adapter ethernet0 is within the reserved address range”. I’ve also noticed something interesting. On VMware Workstation, when I do “ip a s”, I see that all my interfaces (the master and 2 slaves) use the same MAC address. I have tried to add suitable hardware addresses in the ifcfg files for the slave interfaces, but after restarting the network service I get the same MAC for all interfaces. I also noticed the same value for both adapters’ virtualDev (ethernet0.virtualDev = "e1000", ethernet1.virtualDev = "e1000").

    I don’t think the issue is with VMware, because teaming does not work for me in VirtualBox either.
    There the MACs for the slaves are different, but when I take down my first slave, the active port switches to the second slave, yet the master’s (team0’s) MAC stays the same as the first slave’s. I tried the {"hwaddr_policy": "by_active"} option when setting up team0, but that did not help either.

    I’m confused. Can this be because I am using WiFi and DHCP?

    Hope you can help me. Thanks in advance for your efforts!

    • If you cannot install Wireshark, then use tcpdump. See if you get any duplicate MAC addresses.

      Did you enable promisc mode on the OS? With regards to e1000, it’s just an adapter type that you use. Try a different adapter if you can, e.g. VMXNET3.

  23. Thanks a lot for the detailed guide, it’s very helpful.

    If you run into issues in the exam where testing the failover of the interfaces results in dropped packets, and you confirm that your config is correct, would you advise changing promisc mode on your teamed interfaces? Or is it safe to assume that this has been done already?

  24. I’m practicing on RHEL 7.0 for everything and there doesn’t seem to be a ‘master’ argument for nmcli.

    Following man nmcli-examples:
    $ nmcli con add type bond ifname mybond0 mode active-backup
    $ nmcli con add type ethernet ifname eth1 master mybond0
    $ nmcli con add type ethernet ifname eth2 master mybond0

    I get
    Error: Unexpected argument 'master'

    How is it possible to complete the setup in nmcli on 7.0?

    • You’re trying to add an ethernet type connection to the master; this is not going to work because there is no master argument for that type of connection. If you’re configuring bonding, then you should use bond-slave.

    • I was referring to a man page on RHEL 7.5 when setting it up. Indeed, this is how it should be set up on 7.0:

      $ nmcli con add type bond ifname mybond0 mode active-backup
      $ nmcli con add type bond-slave ifname eth1 master mybond0
      $ nmcli con add type bond-slave ifname eth2 master mybond0

    • The man page on RHEL 7.0 says the following:

      Example 6. Adding a bonding master and two slave connection profiles

      $ nmcli con add type bond ifname mybond0 mode active-backup
      $ nmcli con add type bond-slave ifname eth1 master mybond0
      $ nmcli con add type bond-slave ifname eth2 master mybond0

      If you use RHEL 7.0, then you should use the man page that’s on RHEL 7.0 and not RHEL 7.5.

  25. Hi Tomas,
    During the exam, is it clear what the network cabling is between the servers where the teaming connection needs to be configured?
    I noticed that this team config '{ "runner": { "name": "activebackup" } }' doesn’t work if the servers are cabled directly.
    It works just fine if there is one switch (which doesn’t make sense for teaming) or two interconnected switches between the servers.

    https://imgur.com/IUxtDQp

    • Don’t worry about this too much. You will be provided with the system that can be configured to work with the exam objectives.

  26. Hi Tomas,
    If I configure aggregated network links (teamd) at the exam, do I have to configure the NFS and Samba tasks via the team0 interface?

  27. Tomas,
    Do you agree that configuring network teaming in RHEL 7 is the same as configuring bonding in RHEL 6?

    1. modify ifcfg-eth0 and ifcfg-eth1
    2. create ifcfg-team
    3. restart the network

    Now you have teaming. Exactly the same. Just touch 3 config files and restart the network and you are good to go. Please advise.

  28. Slave1’s NIC is the active runner port. I ran tcpdump on the slave2 port and saw ICMPv6 packets being sent to the switch. Slave2 has the same MAC address as slave1, so the switch updated its MAC address table and data was then sent to the slave2 port; this caused 30-50% ICMP packet loss.
