Configure Aggregated Network Links on RHEL 7: Bonding and Teaming

Configure network bonding and teaming on RHEL 7.

Aggregated Network Links

There are two ways to configure aggregated network links on RHEL 7, via bonding or via teaming.

Network bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.

Network teaming implements the same concept in a different way: a small kernel driver handles the fast packet flows, while various user-space applications do everything else in user space. The existing bonding driver is unaffected; network teaming is offered as an alternative and does not replace bonding in RHEL 7.

Before We Begin

We have two virtual machines in our lab, with two network interfaces each. One machine will be configured for network bonding, and one for network teaming. Basic IPv6 configuration for network teaming will be covered.

To avoid problems, we are going to configure networking from the console and not from Secure Shell (SSH). If we do something wrong, at least we won’t lose connectivity.

Caveat for VirtualBox

For those using VirtualBox to configure bonding or teaming, ensure that network adapters have promiscuous mode set to “Allow All”, and then enable promisc mode on the links, for example:

# ip link set eth0 promisc on
# ip link set eth1 promisc on

The above works fine for testing, but won’t persist after a reboot. Either add the commands to /etc/rc.local, or write a systemd service to handle it.
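As a minimal sketch, a systemd service could look like the following (the unit name promisc.service is hypothetical, and the interface names are taken from the example above):

# cat /etc/systemd/system/promisc.service
[Unit]
Description=Set promiscuous mode on network interfaces
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link set eth0 promisc on
ExecStart=/usr/sbin/ip link set eth1 promisc on

[Install]
WantedBy=multi-user.target

# systemctl enable promisc.service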

Configure Network Bonding

Ensure that the bonding module is loaded:

# modprobe bonding

While not required, feel free to check nmcli examples, where example 6 is about adding a bonding master and two slaves. You may find them useful.

# man 5 nmcli-examples

A bond interface can be created with either the nmcli or the nmtui utilities. We use nmcli since we find it faster and easier to use.

We are going to delete any existing network configuration to save ourselves some headache:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
enp0s8   00cb8299-feb9-55b6-a378-3fdc720e0bc6  802-3-ethernet  enp0s8
enp0s17  8512e951-6012-c639-73b1-5b4d7b469f7f  802-3-ethernet  enp0s17

We see that we have two network cards with predictable network interface names configured. Delete the existing configuration:

# nmcli c del enp0s8 enp0s17

Create a bonding interface named mybond0 with an active-backup mode:

# nmcli c add type bond ifname mybond0 con-name mybond0 mode active-backup

Add two slaves to the mybond0 interface:

# nmcli c add type bond-slave ifname enp0s17 con-name slave1 master mybond0
# nmcli c add type bond-slave ifname enp0s8 con-name slave2 master mybond0

Now, if we don’t specify any IP configuration, the server will get its IP address and gateway through DHCP by default.

In the lab that we use today, we have our gateway on 10.8.8.2, and DNS (FreeIPA) server on 10.8.8.70, so we want to reflect these details in the config.

Now, if on RHEL 7.0, do the following:

# nmcli con mod mybond0 ipv4.addresses "10.8.8.71/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local

If on RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli con mod mybond0 ipv4.addresses 10.8.8.71/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual

In order to bring up a bond, the slaves must be brought up first. Note that starting the master interface does not automatically start the slave interfaces. However, starting a slave interface always starts the master interface, and stopping the master interface also stops the slave interfaces.

# nmcli c up slave1; nmcli c up slave2
# nmcli c up mybond0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave2   fd9b7775-044a-47d2-8745-0a326ebc4df1  802-3-ethernet  enp0s17
slave1   12a7ee7f-070c-4366-b80a-a06b6fcbd8fc  802-3-ethernet  enp0s8
mybond0  fd19f953-1aaa-4f32-8246-58a2c0e60514  bond            mybond0

Check bonding status:

# cat /proc/net/bonding/mybond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp0s17
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s17
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:71:00
Slave queue ID: 0

Slave Interface: enp0s8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:ff:81:00
Slave queue ID: 0
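The currently active slave can also be queried through sysfs (interface name taken from the example above):

# cat /sys/class/net/mybond0/bonding/active_slave
enp0s17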

Let us see the routing table:

# ip ro
default via 10.8.8.2 dev mybond0  proto static  metric 1024
10.8.8.0/24 dev mybond0  proto kernel  scope link  src 10.8.8.71

Ensure the DNS settings were configured correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

This is purely for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-mybond0
DEVICE=mybond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=mybond0
UUID=12a7ee7f-070c-4366-b80a-a06b6fcbd8fc
ONBOOT=yes
IPADDR0=10.8.8.71
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
BONDING_OPTS=mode=active-backup
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

We can test bonding by disabling and enabling the slave interfaces; the network connection should not be dropped.
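For example, to simulate a failure of the currently active slave, one could run something like the following (connection names and gateway IP taken from the example above):

# nmcli c down slave1
# grep "Currently Active Slave" /proc/net/bonding/mybond0
Currently Active Slave: enp0s8
# ping -c 3 10.8.8.2
# nmcli c up slave1

The bond should fail over to the remaining slave, and the ping should complete with zero packet loss.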

Configure Network Teaming

We’ll need the teamd package:

# yum install -y teamd

As with the bonding, while not required, you may want to check nmcli examples, where example 7 is about adding a team master and two slaves.

# man 5 nmcli-examples

Now, to create a team master, we can try to remember some JSON config, or we can use teamd example files that are available on a RHEL 7 server.

Copy one of the example files, open it for editing, and keep only the “runner” part, deleting everything else:

# cp /usr/share/doc/teamd-1.9/example_configs/loadbalance_1.conf /root/

We use the loadbalance runner, however, feel free to pick the activebackup or any other.

The following runners are available at the time of writing:

  1. broadcast (data is transmitted over all ports),
  2. round-robin (data is transmitted over all ports in turn),
  3. active-backup (one port or link is used while others are kept as a backup),
  4. loadbalance (with active Tx load balancing and BPF-based Tx port selectors),
  5. lacp (implements the 802.3ad Link Aggregation Control Protocol).

Make sure there is no comma “,” left at the end of the runner line; otherwise you may get a connection activation failure.

# cat /root/loadbalance_1.conf
{
        "runner":               {"name": "loadbalance"}
}
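One quick way to catch syntax mistakes such as a trailing comma is to run the file through a JSON parser; python is available on a stock RHEL 7 install:

# python -m json.tool /root/loadbalance_1.conf
{
    "runner": {
        "name": "loadbalance"
    }
}

If the JSON is malformed, a parse error is printed instead of the pretty-printed config.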

Create a load balanced teaming interface named myteam0:

# nmcli c add type team con-name myteam0 ifname myteam0 config /root/loadbalance_1.conf

As with the bonding, we have our gateway on 10.8.8.2 and DNS (FreeIPA) server on 10.8.8.70, therefore we want to reflect these within the config.

We also want to assign a unique local IPv6 address fc00::10:8:8:72/7 to the teamed interface. IPv6 on a teamed interface requires some extra kernel configuration to handle duplicate address detection, which we cover later in the article.

If on RHEL 7.0, do the following:

# nmcli c mod myteam0 ipv4.addresses "10.8.8.72/24 10.8.8.2" \
 ipv4.method manual ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

If on RHEL 7.1 or RHEL 7.2, we need to use the ipv4.gateway property to define a gateway:

# nmcli c mod myteam0 ipv4.addresses 10.8.8.72/24 \
 ipv4.gateway 10.8.8.2 ipv4.dns 10.8.8.70 ipv4.dns-search rhce.local \
 ipv4.method manual \
 ipv6.addresses fc00::10:8:8:72/7 ipv6.method manual

Add two network devices to the myteam0 interface:

# nmcli c add type team-slave ifname enp0s8 con-name slave1 master myteam0
# nmcli c add type team-slave ifname enp0s17 con-name slave2 master myteam0

Note that starting the master interface does not automatically start the port interfaces. However, starting a port interface always starts the master interface, and stopping the master interface also stops the port interfaces.

# nmcli c up myteam0

Check connections:

# nmcli c
NAME     UUID                                  TYPE            DEVICE
slave1   c5551395-06d4-482b-9cb1-b73decf6f68c  802-3-ethernet  enp0s8
myteam0  05880fe0-38a8-43ca-83c0-e420e84dde9a  team            myteam0
slave2   a527b772-6cec-4c8a-bc7e-f104433c8eeb  802-3-ethernet  enp0s17
# teamdctl myteam0 state
setup:
  runner: loadbalance
ports:
  enp0s17
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  enp0s8
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
# teamnl myteam0 ports
 3: enp0s17: up 1000Mbit FD
 2: enp0s8: up 1000Mbit FD

Check the routing tables for IPv4 and IPv6:

# ip ro
default via 10.8.8.2 dev myteam0  proto static  metric 1024
10.8.8.0/24 dev myteam0  proto kernel  scope link  src 10.8.8.72
# ip -6 ro|grep -v error
fc00::/7 dev myteam0  proto kernel  metric 256
fe80::/64 dev myteam0  proto kernel  metric 256

Ensure the DNS settings were configured correctly:

# cat /etc/resolv.conf
# Generated by NetworkManager
search rhce.local
nameserver 10.8.8.70

The ifcfg file configuration for reference:

# cat /etc/sysconfig/network-scripts/ifcfg-myteam0
DEVICE=myteam0
TEAM_CONFIG="{  \"runner\":             {\"name\": \"loadbalance\"} }"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=myteam0
UUID=05880fe0-38a8-43ca-83c0-e420e84dde9a
ONBOOT=yes
IPADDR0=10.8.8.72
PREFIX0=24
GATEWAY0=10.8.8.2
DNS1=10.8.8.70
DOMAIN=rhce.local
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

As with bonding, we can test teaming by disabling and enabling the slave interfaces; the network connection should not be dropped.
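For example (connection names and gateway IP taken from the example above):

# nmcli c down slave1
# teamdctl myteam0 state
# ping -c 3 10.8.8.2
# nmcli c up slave1

With one port down, teamdctl should report only the remaining port as up, and the ping should continue to work with zero packet loss.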

IPv6 and Duplicate Address Detection (DAD)

You may notice that after a server reboot, the IPv6 addresses go into the dadfailed state:

# ip ad show myteam0
8: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

You may be unable to ping6 the IPv6 addresses. To fix this, check the kernel configuration:

# sysctl -a | grep accept_dad
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.enp0s17.accept_dad = 1
net.ipv6.conf.enp0s8.accept_dad = 1
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.myteam0.accept_dad = 1

Disable DAD on the teamed interface:

# sysctl -w net.ipv6.conf.myteam0.accept_dad=0

The meaning of accept_dad is as follows:

accept_dad - INTEGER
    Whether to accept DAD (Duplicate Address Detection).
        0: Disable DAD
        1: Enable DAD (default)
        2: Enable DAD, and disable IPv6 operation if MAC-based duplicate
            link-local address has been found.

To make the change persistent, add the following line to a new file /etc/sysctl.d/accept_dad.conf:

net.ipv6.conf.myteam0.accept_dad=0
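For example, the file can be created and applied without a reboot like this:

# echo "net.ipv6.conf.myteam0.accept_dad=0" > /etc/sysctl.d/accept_dad.conf
# sysctl -p /etc/sysctl.d/accept_dad.conf
net.ipv6.conf.myteam0.accept_dad = 0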

Restart the teamed interface:

# nmcli c down myteam0; nmcli c up slave1; nmcli c up slave2

Check IPv6 status:

# ip ad show myteam0
4: myteam0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:ff:82:00 brd ff:ff:ff:ff:ff:ff
    inet 10.8.8.72/24 brd 10.8.8.255 scope global myteam0
       valid_lft forever preferred_lft forever
    inet6 fc00::10:8:8:72/7 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feff:8200/64 scope link
       valid_lft forever preferred_lft forever

54 thoughts on “Configure Aggregated Network Links on RHEL 7: Bonding and Teaming”

    • Yes, and it’s mentioned within the article as you have to use the ipv4.gateway property to define a gateway :)

  1. Question, if I perform a con down of slave1 and slave2, I can still ping and access my IP address I set on the myteam0? I would have thought that the team would be unreachable with both slaves down? How does this work?

    Basically if the team is still up even with both interfaces down, how can I actually really test that it’s working properly, short of disabling the network interfaces on my VM? Is this normal?

    • You likely have something misconfigured. When I take both slave interfaces down, I lose network connection:

      # teamdctl myteam0 state
      setup:
        runner: activebackup
      runner:
        active port:
      # host 8.8.8.8
      ;; connection timed out; no servers could be reached
    • Hi Tomas,
      Facing a similar problem, even though output of ‘teamdctl team0 state’ is similar to the one you provided below, team0 connection still holds the ip even after both the slaves are down and it is ping-able in the network.

    • Did you ever figure this out?

      CentOS Linux release 7.5.1804 in-use here and even though both of my backend ports are offline, I can still ping the team interface IP. If I bring down the team interface itself, it is no longer ping-able.
      Is this a sign of a misconfiguration?

      # teamdctl team01 state
      setup:
        runner: activebackup
      runner:
        active port:
  2. Hi Tomas

    This objective is a bit ambiguous for me, and what you have shared works perfectly.

    Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems

    Does it mean 2 different machines as below:

    Machine A with 2 network interfaces (teamed together)
    Machine B with 2 network interfaces (teamed together)

    then machine A communicates with machine B using their respective IPs,

    or something else I don’t understand entirely?

    • Yes, the above is the way that I understand it.

      An example would be a web server with a teamed network link (two interfaces) and a database server with a teamed network link (two interfaces), so they have an aggregated (redundant) network link between them.

  3. Hi Tomas,
    For your reply to Martin on this subject:
    Machine A with 2 network interfaces(teamed together)
    Machine B with 2 network interfaces(teamed together)
    then machine A communicates with machine B using their respective IPs

    Do we also need to add any bridge over the network after the team configuration? I see it in some other examples and am very confused about it! Thanks

    • It depends, does the task ask you to add a bridge? If it does, then you do need one, if it doesn’t, then you don’t.

      Bridge is not required to get a teamed interface to work.

  4. I’m trying to practice this lab on my VMware Player installed with RHEL 7.2 with 2 NICs, and I can’t make it work; the team interface won’t come up, and I tried bonding with the same results. Is there anything I need to set up in the network config of VMware?

    • Hi Glenn, I tried on VMware ESXi 5.1 (I don’t use VMware player), and it works fine. You need to have two network interfaces attached.

  5. Hi Tomas,

    Does the RHCE exam blueprint want us to use teamd or bonding on the exam, or is any method applicable? I think the bonding is straightforward and I’m curious to know if it’s cool to use that.

    • RHCE exam objectives require you to know how to use network teaming or bonding to configure aggregated network. You need to know both, but can pick and use the one that you like, as they achieve the same goal really.

  6. Hi Tomas,
    can we assign another IP address on an interface that is already a member of a network team?

    • I’m sorry, I don’t quite understand. Do you want to assign another IP to a teamed interface, or to one of its slave interfaces?

  7. Hi Tomas,
    I want to assign an IP address on one of the slave interfaces, is it possible? (on an interface that is already a member of a teaming interface)
    Or, on a teaming interface, can we assign an IPv4 address and an IPv6 address on the same teaming interface?

    • I don’t think that assigning IP to slave interfaces is going to work since the loadbalance runner has to customise ARP responses sent to each peer on the Ethernet domain, such that the hosts are spread across the slave interfaces. Give it a go if you want, but I’d be surprised if that worked without unexpected consequences.

      Regarding IPv4 together with IPv6 on a teamed interface, yes, you can surely do that. There are examples on this very blog post in case you’re interested.

  8. Hi Tomas,

    If I shut down one of my slaves (nmcli con down eth0) then the second server loses connection. I can see the active port changing but the connection is still lost. If I restore the interface and shut down the other slave interface, then nothing happens. My ping just keeps running.
    So far I have tried the activebackup and roundrobin modes.
    Did you encounter this behaviour?

    • No, I didn’t I’m afraid.

      On second thought, I may have had something similar on VirtualBox, so had to enable a promiscuous mode. But it wasn’t the case on KVM.

    • Hi Tomas,
      It’s strange. I tried multiple guides. Maybe it’s because of VMware Workstation with NAT interfaces..
      Could you look at my config below?
      Thanks in advance.

      # cat /etc/sysconfig/network-scripts/ifcfg-team0
      DEVICE=team0
      DEVICETYPE=Team
      BOOTPROTO=none
      DEFROUTE=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_FAILURE_FATAL=no
      NAME=team0
      UUID=a950a224-9cb0-48ed-90f4-4dc019aa665b
      ONBOOT=yes
      IPADDR0=192.168.4.210
      PREFIX0=24
      GATEWAY0=192.168.4.1
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes

      # cat /etc/sysconfig/network-scripts/ifcfg-eth0
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=eth0
      UUID=6648eb26-c793-44fc-8685-2b5cbaadfac5
      DEVICE=eth0
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort

      # cat /etc/sysconfig/network-scripts/ifcfg-eth1
      BOOTPROTO=dhcp
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=eth1
      UUID=dd2231b3-3530-4d6c-a8ff-6860d003cc0a
      DEVICE=eno33554992
      ONBOOT=yes
      TEAM_MASTER=team0
      DEVICETYPE=TeamPort

    • You have the wrong device reference in the file ifcfg-eth1. It should be eth1, not eno33554992.

      Not sure if it helps you much, but I use teaming on VMware (ESXi) VMs and I didn’t have this issue.

  9. # nmcli c
    NAME UUID TYPE DEVICE
    slave1 c5551395-06d4-482b-9cb1-b73decf6f68c 802-3-ethernet enp0s8
    myteam0 05880fe0-38a8-43ca-83c0-e420e84dde9a team myteam0
    slave2 a527b772-6cec-4c8a-bc7e-f104433c8eeb 802-3-ethernet enp0s17

    After we setup the teaming .. how do we test its working .
    e.g.
    from any machine we run ping command to team ip address. in this case .
    ping 10.8.8.72 (continues ping)
    and we do the following testing. ..
    team is always up.
    case1 ) slave1 down , slave2 up — what should be behaviour for ping response
    case2 ) slave1 up , slave2 down — what should be behaviour for ping response

    Thanks to add these details in this tutorial.

    • Hi, I have mentioned this in the blog post: you can test teaming by disabling and enabling slave interfaces, and the network connection should not be dropped. This is exactly what you asked for. To answer your question, in both cases (case1 and case2) ping responses should continue and you should have zero packet loss.

  10. Tomas, I add my compliments for your excellent tutorials. In my case, I have successfully added Teaming to RHEL 7.4 with 4 NICs. (I did have to de-select “load on startup” for the physical devices so that the Slaves would startup instead and activate the Team effectively. Is that normal? Should the “priority” settings be used instead?) More importantly, even though my Team shows UP and LOWER_UP in the ‘ip a’ command output, I cannot PING the Team IP (manually config’d) from any other box on the local network (another 10.201.* box on a /16 network). I can ping from the same box, but no other. No DNS, just using the IP address. How do I “publish” that new IP Address? Does the LACP runner have any side-effects if the network lacp is not yet activated?

    • Thank you Tim! The only runners that I used were loadbalance and active-backup, I never configured lacp I’m afraid, it’s hard to advise therefore.

  11. Well, apparently the LACP Runner does not like it when LACP is not active on the network. (I was trying to set it up in advance, since the Network Team also did not want to enable it until everything was ready to use it.) I replaced the JSON for the Team with a “loadbalance” runner config by way of the nmtui command, restarted the network, and the Team IP was immediately visible to another Server on the local network.

  12. Hello Tomas,

    During the exam I suppose that we will be asked to set up teaming and also IPv6 on interfaces.

    If this is the case, we set up teaming on the interface, and both interfaces are part of the team now.
    NOW
    There is another question to set up IPv6 on both the VMs during the exam.

    Do we need to configure IPv6 on the teaming interface (my assumption),
    or
    do we have a method to assign IPv6 addresses separately once teaming is also in place? (Or is it possible to assign IPv6 separately on an interface once teaming is in place?)

    I hope I have clarified the question .

    • All the details including what IPs and which interfaces they need to be assigned to will be provided during the exam. You won’t have to assume anything.

      You can assign multiple IPs to a single interface.

  13. Just to rephrase the question:
    once both the interfaces are assigned to teamX, I think we can only give an IP to the team that is bound with these interfaces to make it work..

    we can’t give an IP to an interface separately once it’s bound with the team?

    or is there any way to assign an IP to the interface once it’s attached to the team, and still have that new IP pingable?
    thanks

    • I’m going to use the response I provided to a person who asked a similar question some time ago, I don’t think that assigning IP to slave interfaces is going to work since the loadbalance runner has to customise ARP responses sent to each peer on the Ethernet domain, such that the hosts are spread across the slave interfaces. You can always give it a go and test it yourself, please let us know if that worked for you.

  14. Hi Tomas,
    Thanks for this great resource I’m on your page every day at the mo gearing up for my RHCE.
    I’ve noticed that here on network teaming the command
    -> #man 5 nmcli-examples
    Doesn’t work on RHEL 7.3 and 7.4 (haven’t tested on RHEL 7.0, 7.1, 7.2)
    Instead it’s
    -> #man nmcli-examples

    • Hi Jason,

      Thanks! The key to acquiring proficiency in any task is repetition (with certain obvious exceptions), so good luck with your studies.

      On RHEL 7.0 the nmcli-examples man page is under section 5 (File Formats), but I see that it’s been moved to section 7 in the later versions.

    • Just wondering, when I’ve set up a teamed interface (I’ve got my lab environment in VirtualBox with NICs in promiscuous mode)
      and I ping my IPA server, then I get DUP! for every other ping, but not when I do the bonding setup.
      Do you happen to know why?

    • Try changing that to active-backup, it should fix it.

      I’m not familiar with your network set up, but it may be something to do with a switch updating its MAC forwarding tables.

  15. Your RHCE tutorials are great. Thanks for all your effort.

    I am experiencing an issue related to link aggregation, where connections are using the IP address associated with the bond, instead of the primary interface, eth0. This results in connectivity/authentication issues when, for example, attempting to mount an NFS share that has been exported to the IP address associated with eth0, obtaining a Kerberos TGT, or restricting access to certain IP addresses in Apache, to name a few. How do you ensure that eth0 is used and not the aggregated interface? Output from ip route indicates that eth0 is the default route, yet the bond is clearly making connections to services intended for the eth0 IP address. Any help would be greatly appreciated. I am totally stumped.

  16. Hey Tomas,

    Great tutorials!

    Your comment on the promisc mode is exactly what I was looking for. I have looked everywhere for what I was doing wrong. Btw is there a link to why promisc mode should be on for teaming to work?

    Thanks.

    • Thank you!

      It was a while ago I used VirtualBox, I cannot remember exactly, but I think I simply learnt this bit the hard way.

  17. Hey, on the exam after creating an aggregated interface I was not able to ping the neighboring system through the created interface.
    I checked the settings several times – the network addresses and interface availability were enabled and in the up state.
    Should the aggregated interfaces on the two systems ping each other during the exam?
    How do you check the correctness of the aggregated interface when a ping of the neighboring system is unavailable?

    • Ping should normally work if firewalls are configured to allow ICMP traffic and net.ipv4.icmp_echo_ignore_all isn’t set to 1.

      I rarely rely on ping nowadays and almost always check with telnet or netcat. There is usually at least one TCP port open which you can use to test network connections.

  18. Hello,
    I am trying to configure bonding interface on Red Hat Linux 7.5 64 bit on Virtual Machine Oracle VirtualBox for installation of Oracle 12c. I am following this link:
    https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_oracle_database_12c_release_2_on_red_hat_enterprise_linux_7/index#public_network_configuration
    My gateway is 10.0.2.2 and I am able to ping it, but if I create /etc/sysconfig/network-scripts/ifcfg-bond0 by having :


    IPADDR=10.0.1.1
    GATEWAY=10.0.2.2


    and other parameters, then I am not able to ping the gateway. Since, I am using Virtual Machine, so I have only one option of Bridge Network as network adapter to connect the internet. Kindly tell me, if I need to submit more info.
