Aggregating network interfaces
Posted by opk on Fri 10 Feb 2006 at 09:42
Using more than one hard drive to achieve better performance and fault tolerance is very common. Less well known is that it's also possible to aggregate more than one network interface into a single logical interface. In Linux, this is handled by the bonding driver. The benefits are much the same as those of aggregating disks using RAID: if one device dies, your server carries on working, and by using two devices in parallel, performance can be improved.
The first thing you need is two network interfaces. It's not entirely uncommon for a server to come with two: one gigabit card on the motherboard and a separate 100 Mb PCI card. You will need to ensure that the Linux kernel has recognised both interfaces. Running /sbin/ifconfig -a lists network interfaces. Typically, you should see both eth0 and eth1 interfaces. If not, make sure that the modules for both interfaces have been compiled for your kernel and loaded. You may need to do something special if both devices use the same driver.
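If one of them isn't listed, loading the driver by hand is usually enough. As an example (the module names here are only placeholders; use whichever drivers your cards actually need):
modprobe e1000
modprobe e100
/sbin/ifconfig -a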
The one Debian package you will need to install is ifenslave, specifically ifenslave-2.6.
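On Debian, that should be no more than the following (the -2.6 suffix assumes you're running a 2.6 kernel):
apt-get install ifenslave-2.6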
The next step is to ensure that the bonding driver is compiled for your kernel. If you're lucky, running modprobe bonding will load the driver. Otherwise, you can find it in the Linux kernel configuration under "Device Drivers" followed by "Network device support". The option is "Bonding driver support". It needs to be compiled as a module, mainly so that options can be passed when it is loaded.
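A quick way to check that the module is present and loads cleanly (the exact output will vary):
modprobe bonding
lsmod | grep bonding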
There is a wide range of options; for full details, I suggest you read the file Documentation/networking/bonding.txt that comes with the Linux kernel sources. The options are passed as parameters to the bonding driver when it is loaded. The first option specifies a name for the combined interface, typically bond0. It is also necessary to specify a method by which the bonding driver can monitor the interface for failures. MII based monitoring has worked well for me.
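If you just want to experiment before making anything permanent, you can load the driver by hand with the options you intend to use, for example (these are simply the values I end up using below):
modprobe bonding mode=balance-alb miimon=100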
There is also an option for specifying a mode. This controls how the bonding driver decides which interface to transmit packets on. There are also options for handling received packets. Each of your two network cards will have a different physical (MAC) address, and on an Ethernet network, the machine transmitting a packet has to specify the physical address corresponding to the packet's destination. To use both network interfaces for received packets, you therefore either need a network switch clever enough to understand link aggregation, or you need to subvert ARP: the system by which machines convert IP addresses into MAC addresses. The mode I use is balance-alb, which uses ARP. If you're unsure, the documentation that I referred to earlier explains all the modes in detail.
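If you want to see which parameters the bonding driver on your system accepts, modinfo will list them along with a one-line description of each:
modinfo bonding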
Once you have the bonding driver installed, you need to ensure that it is loaded when your computer boots up. For this, you need to add an alias in the module-init-tools setup. This alias associates the kernel module with the network interface name (typically bond0). It is also necessary to specify the options. I added the following lines to /etc/modprobe.d/arch/i386:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
That uses bond0 as the name for the network interface, balance-alb as the mode and MII based monitoring every 100 milliseconds.
The bond0 interface is then configured, using ifconfig, in the same way as you would previously have configured your eth0 interface. For example:
ifconfig bond0 126.96.36.199 netmask 255.255.255.0 up
Once you have configured the IP address, you just need to add the slave devices:
ifenslave bond0 eth0 eth1
That should leave you with the bond0 interface working. If it works, you'll want to ensure that ifenslave is run at the next reboot. From /etc/network/interfaces you can run ifenslave by using the "up" keyword. The following is what I have, except that I've replaced the real IP addresses with 'x's.
auto bond0
iface bond0 inet static
    address xxx.xxx.xxx.xxx
    netmask 255.255.255.0
    network xxx.xxx.xxx.0
    broadcast xxx.xxx.xxx.255
    gateway xxx.xxx.xxx.1
    up /sbin/ifenslave bond0 eth0 eth1
With everything up, you'll want to test it out. Running /sbin/ifconfig -a will give you some idea. I also use bwm-ng to monitor bandwidth usage. And to really test it, try running a ping and then pull the network cables out in turn.
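While doing that, it's also worth knowing that the bonding driver reports its status under /proc, so you can watch the state of each slave change as you unplug and replug the cables (assuming your interface is named bond0):
cat /proc/net/bonding/bond0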