Setting up a bond interface on Debian 11 with ifupdown or systemd's networkd



With my brother's new office taking shape, he wanted a more reliable and rack-mountable solution for his centralised storage. We went with the Asrock Rack 1U4LW-X570/2L2T RPSU 1U server. While this may seem like overkill for centralised SMB storage, his previous home office server suffered from instability that eventually turned out to be caused by the PSU, which meant a lot of downtime (and time is money). Not only does the Asrock Rack come with redundant PSUs, it also sports an Intel X550-AT2 dual 10GBase-T adapter, along with two Intel I210-AT 1GBase-T adapters. Redundancy is what we're going for here, so we're going to implement a bond interface: a single logical interface that aggregates multiple physical network interfaces - also referred to as 'bonding' or 'link aggregation'.

A little background on bonding

On Linux, one typically uses the bonding driver to achieve this. The driver supports multiple modes - see the kernel documentation for a more in-depth reference. I'll list the ones you see pop up most frequently. You can use the number and name interchangeably within configuration files, and if needed you can also pass those as module options through e.g. /etc/modprobe.d/ - just like all the other options listed in the kernel documentation.
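If you want to see which parameters the bonding driver accepts on your kernel before writing anything to /etc/modprobe.d/, modinfo will list them along with short descriptions. This is a generic modinfo invocation, nothing bonding-specific about it:

$ modinfo -p bonding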

The most frequently used bonding mode names and their matching numbers are listed below. Except for mode 1, all of these modes require specific configuration on the switch side.

  • mode 0: balance-rr: round robin; default.
  • mode 1: active-backup
  • mode 2: balance-xor
  • mode 4: 802.3ad: IEEE 802.3ad dynamic link aggregation. Requires specific support often only found on higher end switches, seldom on consumer devices.

For more info and detail, refer to the kernel documentation. There are seven modes in total as of 2021 (the kernel documentation was last updated in 2011). We're not interested in load balancing here - redundancy is our goal, and that's provided by mode 1: one link will be active, and the other one will be kept as backup and only brought online when the active link goes down. Since both links are never used simultaneously, you don't need any special configuration on the other end of the link.

The instructions below assume you are not using NetworkManager; if you are, disable it before proceeding. Make sure you have physical access to the system you're configuring, in case something goes wrong.
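If NetworkManager happens to be managing the box, stopping and disabling it is standard systemctl usage - just don't do it over the very link you're about to reconfigure:

$ sudo systemctl disable --now NetworkManager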

Setting up a bond interface the Debian way (with ifupdown)

Debian allows you to set up bonding through /etc/network/interfaces; interface management itself is handled by ifupdown. For it to work, you'll need to install the ifenslave package first, which also provides a few handy sample configurations in /usr/share/doc/ifenslave/examples/.
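Installing it is a single apt command, nothing special:

$ sudo apt install ifenslave

With the package in place, I configured my bond interface like this: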

auto bond0
iface bond0 inet static
    address 192.168.1.20/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bond-slaves x550-at2-up x550-at2-down
    #bond-slaves enp36s0f1 enp36s0f0
    bond-mode active-backup
    bond-miimon 100
    bond-primary x550-at2-up
    bond-downdelay 200
    bond-updelay 200
iface bond0 inet6 dhcp

The IPv4 configuration is fully static, while IPv6 is handled through DHCP. Debian 11 does not require you to configure the slaves separately in /etc/network/interfaces, so defining the bond interface itself suffices. See the first example on the Debian wiki page for bonding. If, like me, you use udev renaming to turn the newfangled persistent device names into something human-readable, you can use those names as well. If you find the interface does not come up reliably, you may want to switch to the kernel names (shown commented out above).
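As an aside: one way to do such renaming with systemd is a .link file that matches on the adapter's permanent MAC address. The sketch below is just one approach (and not necessarily how you've set up your own renaming); the MAC is masked out as a placeholder:

$ cat /etc/systemd/network/10-x550-at2-up.link
[Match]
MACAddress=xx:xx:xx:xx:xx:xx

[Link]
Name=x550-at2-up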

Ifupdown bond settings and their values

  • bond-slaves defines the slave interfaces.
  • bond-miimon sets the interval at which the kernel checks whether the link is still up (MII link monitoring).
  • bond-downdelay sets how long the kernel waits, after detecting a link failure, before it concludes the currently active interface is indeed down and disables it.
  • bond-updelay, on the other hand, sets how long the OS waits after a link has come up before switching over to that interface.
  • bond-primary specifies the primary interface and is an optional setting.

All values are by default defined in milliseconds (ms), and both delay settings need to be multiples of the bond-miimon value.
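To apply the configuration without a full reboot, bring the bond up with ifupdown and check the result (run ifdown bond0 first if it was already up):

$ sudo ifup bond0
$ ip addr show bond0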

Switching from the Debian ifupdown implementation to systemd's networkd

While I love being able to define network settings in just one file, my main niggle with Debian's /etc/network/interfaces is that it seems to bring up interfaces regardless of whether a cable is inserted. I have configured the I210-AT adapters as fallback interfaces, in case the bond interface should fail to come up completely. The fact that they are unconnected by default seems to confuse Debian, though - even with allow-hotplug set (which apparently isn't meant for this anyway, but for genuinely removable interfaces like USB-to-Ethernet adapters). Debian will configure all the interfaces defined as auto, but with the I210-AT ports unconnected, the bond interface somehow breaks as well. That leaves me with no choice but to disable the gigabit adapters. And that's really annoying.

A friend told me systemd's networkd handles these scenarios much more gracefully, so I recreated my setup in systemd-networkd. And indeed: the unconnected gigabit adapters are not brought up as long as there's no cable plugged in.

Preparing for migration to systemd-networkd

First, you need to rename /etc/network/interfaces so it won't get parsed at boot anymore:

$ sudo mv /etc/network/interfaces{,.backup}

Then enable networkd. If it wasn't active previously, you'll notice an /etc/systemd/network/ directory being created.

$ sudo systemctl enable systemd-networkd

If you want booting to pause until the network is up, enable the systemd-networkd-wait-online service as well:

$ sudo systemctl enable systemd-networkd-wait-online
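You can also start networkd straight away instead of waiting for a reboot, and it doesn't hurt to disable ifupdown's networking service so the two don't compete at boot (on my Debian installs that unit is simply called networking):

$ sudo systemctl start systemd-networkd
$ sudo systemctl disable networking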

Basic systemd-networkd configuration

Systemd's networkd uses separate network and (virtual) device configuration files. The former end in .network, while the latter end in .netdev. Let's set up one of the fallback connections first as an example. I have renamed the gigabit interfaces to i210-up and i210-down. It's a simple, regular, physical interface, so just a .network file is plenty:

$ cat /etc/systemd/network/20-i210-up-fallback.network 
[Match]
Name=i210-up

[Network]
Address=192.168.1.21/24
Gateway=192.168.1.1
DNS=192.168.1.1

As you can see, this is a fully static interface. The [Match] stanza tells networkd to match these settings to the interface called i210-up. Note the sequence number at the beginning of the file name: we'll want the bond interface to be brought up first, so the other interfaces get a higher number.
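The second fallback interface gets an analogous file. The sketch below is only an illustration: the i210-down name follows my renaming scheme, but the file name and the .22 address are hypothetical picks, not taken from my actual setup:

$ cat /etc/systemd/network/21-i210-down-fallback.network
[Match]
Name=i210-down

[Network]
Address=192.168.1.22/24
Gateway=192.168.1.1
DNS=192.168.1.1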

Defining and configuring the bond interface

For the bond interface, we need to tell networkd about the virtual interface first by creating a .netdev file. Note the [NetDev] stanza describing the interface type, with the [Bond] stanza describing the bond settings.

Note that lots of older tutorials use bond1 instead of bond0. This is a safety precaution: the kernel itself may create a bond0 interface as soon as the bonding module is loaded - apparently not a bug but a feature. It can be prevented by setting options bonding max_bonds=0 in e.g. /etc/modprobe.d/bonding.conf. I have been unable to reproduce this behaviour myself on Debian 11 'Bullseye', but if you'd like to use the bond0 name, I'd recommend setting the modprobe option to be on the safe side.
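That modprobe snippet is a single line; the file name doesn't matter as long as it ends in .conf:

$ cat /etc/modprobe.d/bonding.conf
options bonding max_bonds=0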

$ cat /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Description=Link aggregation with failover
Kind=bond

[Bond]
Mode=active-backup
PrimaryReselectPolicy=always
MIIMonitorSec=0.1s
UpDelaySec=0.5s
DownDelaySec=0.5s

Systemd-networkd bond settings and their values

Any of these settings can be read up on in detail by consulting man 5 systemd.netdev in your favourite terminal, or check it online.

  • Mode takes identical values to the ifupdown implementation; you can use the corresponding number (1) too instead of active-backup.
  • PrimaryReselectPolicy specifies the reselection policy for the primary slave.
  • MIIMonitorSec is identical to ifupdown's bond-miimon, except that a plain number is interpreted as seconds rather than milliseconds (hence the 0.1s above).
  • UpDelaySec is identical to ifupdown's bond-updelay.
  • DownDelaySec is identical to ifupdown's bond-downdelay.

With the interface defined, we can move on to the bond slaves. There is no need to define each bond slave separately; you can specify them together in one single file. We're using kernel names here, but you can obviously use the udev-defined names as explained earlier (unless you notice that causes trouble). The file below tells networkd which interfaces are part of the virtual bond interface.

$ cat /etc/systemd/network/10-bond0-slaves.network 
[Match]
Name=enp36s0f1
Name=enp36s0f0

[Network]
Bond=bond0

The alternative to this single .network file would be to define each slave interface in its own file, which gives you more fine-grained control over the settings for each interface should you need it. I don't.
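For completeness, such a per-interface split would look roughly like this - the file name is my own invention, and you'd create a second, identical file matching enp36s0f1:

$ cat /etc/systemd/network/10-bond0-slave-enp36s0f0.network
[Match]
Name=enp36s0f0

[Network]
Bond=bond0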

With the bond settings and its slaves defined, we can move on to the final interface settings:

$ cat /etc/systemd/network/10-bond0-config.network
[Match]
Name=bond0

[Network]
Address=192.168.1.20/24
Gateway=192.168.1.1
DNS=192.168.1.1
DNS=9.9.9.9
DHCP=ipv6

That's it - now you can reboot and enjoy your new bond interface with failover!
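If you'd rather not reboot, you can ask networkd to pick up the new files right away; on Debian 11's systemd, networkctl can reload the configuration and reconfigure a single link (restarting systemd-networkd achieves the same):

$ sudo networkctl reload
$ sudo networkctl reconfigure bond0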

Checking bond interface status

You can keep an eye on the 'health' of your bond interface in a few ways. Systemd's networkctl will print detailed information - example below. The monitoring and delay values seem to be ten times what's set in the networkd settings; this looks like a bug (see the /proc check next).

# networkctl status bond0
● 2: bond0                                                                        
                     Link File: /usr/lib/systemd/network/99-default.link
                  Network File: /etc/systemd/network/10-bond0-config.network
                          Type: bond
                         State: routable (configured)
                        Driver: bonding
                    HW Address: xx:xx:xx:xx:xx:xx
                           MTU: 1500 (min: 68, max: 65535)
                         QDisc: noqueue
  IPv6 Address Generation Mode: eui64
                          Mode: active-backup
                        Miimon: 1s
                       Updelay: 5s
                     Downdelay: 5s
          Queue Length (Tx/Rx): 16/16
              Auto negotiation: no
                         Speed: 10Gbps
                        Duplex: full
                       Address: 192.168.1.20
                                2a02:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
                                fd7b:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
                                fe80::xxxx:xxxx:xxxx:xxxx
                       Gateway: 192.168.1.1 (Ubiquiti Networks Inc.)
                                fe80::xxxx:xxxx:xxxx:xxxx (Ubiquiti Networks Inc.)
                           DNS: 192.168.1.1
                                9.9.9.9
                                fd7b:xxxx:xxxx::1
             DHCP6 Client IAID: 0xc1xxxxxx
             DHCP6 Client DUID: DUID-EN/Vendor:00xxxxxxxxxxxxxxxxxxxxxxxxxx

nov 28 15:28:53 porphyrion systemd-networkd[362]: bond0: netdev ready
nov 28 15:28:53 porphyrion systemd-networkd[362]: bond0: Link UP
nov 28 15:29:01 porphyrion systemd-networkd[362]: bond0: Gained carrier
nov 28 15:29:02 porphyrion systemd-networkd[362]: bond0: Gained IPv6LL

To check for any connection failures, you can query /proc/net/bonding/bond0. Notice how the polling and delay values look right here.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.0-9-amd64

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: x550-at2-down
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 500
Down Delay (ms): 500
Peer Notification Delay (ms): 0

Slave Interface: x550-at2-down
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: xx:xx:xx:xx:xx:xx
Slave queue ID: 0

Slave Interface: x550-at2-up
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:xx
Slave queue ID: 0

In case of link failures, you'll see the 'Link Failure Count' numbers above increment. Try unplugging and replugging each cable a few times to see the kernel pick these events up and switch to your backup link.
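To watch the failover happen live while you pull cables, watch(1) works nicely:

$ watch -n1 cat /proc/net/bonding/bond0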

You can see a general status of all your interfaces like this:

$ sudo networkctl 
IDX LINK            TYPE     OPERATIONAL SETUP
  1 lo              loopback carrier     unmanaged
  2 bond0           bond     routable    configured 
  3 enxxxxxxxxxxxxx ether    off         unmanaged
  4 i210-up         ether    no-carrier  configuring
  5 i210-down       ether    no-carrier  configuring
  6 x550-at2-down   ether    enslaved    configured 
  7 x550-at2-up     ether    enslaved    configured

7 links listed.