Aggregating network interfaces

Posted by opk on Fri 10 Feb 2006 at 09:42

Using more than one hard drive to achieve better performance and fault tolerance is very common. Less well known is that it's also possible to aggregate more than one network interface into a single logical interface. In Linux, this is handled by the bonding driver. The benefits are much the same as those of aggregating discs using RAID: if one device dies, your server carries on working, and by using two devices in parallel, performance can be improved.

The first thing you need is two network interfaces. It's not entirely uncommon for a server to come with two: one gigabit card on the motherboard and a separate 100 Mb PCI card. You will need to ensure that the Linux kernel has recognised both interfaces. Running /sbin/ifconfig -a lists network interfaces. Typically, you should see both eth0 and eth1 interfaces. If not, make sure that the modules for both interfaces have been compiled for your kernel and loaded. You may need to do something special if both devices use the same driver.
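As a quick sanity check, something along the following lines should show both cards. (The driver names here are only examples for a typical gigabit/100 Mb pairing; substitute whatever modules your hardware actually uses.)

# list every interface the kernel knows about, including ones that are down
/sbin/ifconfig -a

# if an interface is missing, load its driver by hand
modprobe e1000
modprobe 8139too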

The one Debian package you will need to install is ifenslave, specifically ifenslave-2.6.
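On Debian that is a single command:

apt-get install ifenslave-2.6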

The next step is to ensure that the bonding driver is compiled for your kernel. If you're lucky, running modprobe bonding will load the driver. Otherwise, you can find it in the Linux kernel configuration under "Device Drivers", then "Network device support"; the option is "Bonding driver support". It needs to be compiled as a module, mainly so that options can be passed to it when it is loaded.
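A quick way to check whether the module is available for your running kernel:

modprobe bonding
lsmod | grep bonding

If modprobe complains, look for the module file under the current kernel's module tree:

find /lib/modules/$(uname -r) -name 'bonding*'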

There is a wide range of options; for full details, I suggest you read the file Documentation/networking/bonding.txt that comes with the Linux kernel sources. The options are passed as parameters to the bonding driver when it is loaded. The first option specifies a name for the combined interface, typically bond0. It is also necessary to specify a method by which the bonding driver can monitor the interface for failures. MII-based monitoring has worked well for me.

There is also an option for specifying a mode, which controls how the bonding driver decides which interface to transmit packets on; there are further options for handling received packets. Each of your two network cards has a different physical (MAC) address, and on an Ethernet network, the machine transmitting a packet has to specify the physical address corresponding to the packet's destination. To use both network interfaces for received packets, you therefore either need a network switch clever enough to understand link aggregation, or you need to subvert ARP: the system by which machines convert IP addresses into MAC addresses. The mode I use is balance-alb, which takes the ARP route. If you're unsure, the documentation referred to earlier explains all the modes in detail.
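If you just want to experiment before making anything permanent, the options can also be passed directly on the modprobe command line; the values below match the configuration used later in this article:

modprobe bonding mode=balance-alb miimon=100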

Once you have the bonding driver available, you need to ensure that it is loaded when your computer boots up. For this, add an alias in the module-init-tools setup. This alias associates the kernel module with the network interface name (typically bond0). It is also necessary to specify the options. I added the following lines to /etc/modprobe.d/arch/i386:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100

That uses bond0 as the name for the network interface, balance-alb as the mode, and MII-based monitoring every 100 milliseconds.

The bond0 interface is then configured, using ifconfig, in the same way as you would previously have configured your eth0 interface. For example:

ifconfig bond0 123.123.123.4 netmask 255.255.255.0 up

Once you have configured the IP address, you just need to add the slave devices:

ifenslave bond0 eth0 eth1

That should leave you with a working bond0 interface. If it works, you'll want to ensure that ifenslave is run at the next reboot. From /etc/network/interfaces you can run ifenslave by using the "up" keyword. The following is what I have, except that I've replaced real IP addresses with 'x's.

auto bond0
iface bond0 inet static
        address xxx.xxx.xxx.xxx
        netmask 255.255.255.0
        network xxx.xxx.xxx.0
        broadcast xxx.xxx.xxx.255
        gateway xxx.xxx.xxx.1
        up /sbin/ifenslave bond0 eth0 eth1

With everything up, you'll want to test it out. Running /sbin/ifconfig -a will give you some idea. I also use bwm-ng to monitor bandwidth usage. And to really test it, try running a ping and then pull the network cables out in turn.
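A rough test sequence might look like this (the ping target is just an example address on your local network; /proc/net/bonding/bond0 is where the bonding driver reports the status of each slave):

/sbin/ifconfig -a
cat /proc/net/bonding/bond0

# keep a ping running while you unplug each cable in turn;
# at most a packet or two should be lost on failover
ping 123.123.123.1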

 

 


Posted by tsykoduk (63.230.xx.xx) on Fri 10 Feb 2006 at 16:25
Thanks for this timely (for me) and useful article!


Posted by Anonymous (213.164.xx.xx) on Fri 10 Feb 2006 at 16:29
Yes, very nice.


Posted by TRx (80.32.xx.xx) on Mon 13 Feb 2006 at 19:46
Great article!!


Posted by Anonymous (71.96.xx.xx) on Sat 11 Feb 2006 at 15:39
Excellent! Do you need to modify the routing table?


Posted by Anonymous (195.228.xx.xx) on Sat 11 Feb 2006 at 21:19
Of course not.
This operates at OSI layer 2, so there's no need to mess around with routing.


Posted by Anonymous (82.146.xx.xx) on Sun 12 Feb 2006 at 11:45
Just wondering, can you get higher speeds with this when both interfaces are on the same 100 mbit/s switch? I guess it won't make that much difference?


Posted by Anonymous (82.146.xx.xx) on Sun 12 Feb 2006 at 16:33
As long as your switch understands trunking (so managed switches only), you can use this technique to achieve double, triple or higher network throughput.

Switches have an internal capacity that is far higher than the port speed. Good switches should be able to push nearly full port speed on all ports full duplex.

Of course, you can only push as much as the 'other side' can swallow.


This also has a downside:

We connect our Fat Fileserver (TM) with two gigE ports on two physically different but interconnected switches.
The clients have a single 1000 or 100 Mbit connection to this L2 fabric.

It's nice to know that we can push close to 2Gbit out of the fileserver (memory cache mostly, obviously), but we only have 1Gbit to count on. It's very tempting to forget about this when doing timing tests etc.


Of course, if you go for speed only, there is no need to worry about that. Just get a switch that supports trunking.


Posted by Anonymous (202.146.xx.xx) on Tue 14 Feb 2006 at 05:34
Trunking.. Hmm.. That sounds like an expensive feature which my switch does not support. Mine is an el-cheapo Level 1 8 port Switch.

Based on this, I believe that w/o trunking, there's not much point in doing bonding, is there?


Posted by Anonymous (64.213.xx.xx) on Tue 14 Feb 2006 at 06:15
I believe he meant port aggregation, not trunking. Trunking is usually associated with passing traffic from multiple VLANs across a single interface. But most switches that I've seen support either both features or neither. You'll start seeing these things (to various degrees) on managed switches in the $150-and-up range.


Posted by Anonymous (67.88.xx.xx) on Mon 13 Feb 2006 at 19:29
I have been wondering about this for a long time but never put the time in to try and figure it out. Thanks for the great article, it looks much easier than I thought.


Posted by Anonymous (63.228.xx.xx) on Mon 13 Feb 2006 at 19:36
If you have two internet connections through two different ISPs, how would this work? Do you move your default GW to the bonded device or does each of your internet connections have a default GW that the bonding driver just round-robin's on? What exactly goes on behind the curtains with load-balancing the two connections?


Posted by opk (62.52.xx.xx) on Mon 13 Feb 2006 at 20:05
I don't think you could usefully use the bonding driver with two internet connections through separate ISPs. You can only configure one default gateway for the bond0 interface.


Posted by Anonymous (64.213.xx.xx) on Tue 14 Feb 2006 at 06:44
Yes, bonding just makes two interfaces act as one. To load balance between two ISPs, all you need to do is tweak your routing tables. See the following link for more information:

http://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.rpdb.multiple-links.html

Hmmmm, maybe I'll see if I can't write up a modern debianized article for us. But don't hold your breath.
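For the curious, the approach in that HOWTO boils down to a multipath default route, roughly like the following (the gateway addresses and interface names are only placeholders):

ip route add default scope global \
        nexthop via 10.0.1.1 dev eth0 weight 1 \
        nexthop via 10.0.2.1 dev eth1 weight 1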


Posted by Anonymous (72.139.xx.xx) on Mon 13 Feb 2006 at 23:28
I would think this could be useful for laptops with wireless capabilities. You could set your wired network card as the default and your wireless card as a sort of backup. It could make switching from wired to wireless seamless.

I don't have a system to test this on. Has anyone tried this?


Posted by Anonymous (82.146.xx.xx) on Tue 14 Feb 2006 at 07:00
Interesting thought. I'll try it later.

But afaik this is something targeted at High (Availability|Performance) environments. It may not react well to typical wifi behaviour.


And errrr, having your wifi L2-bridged to your wired connection is, network-wise, a debatable solution at best, even if it is the default in most "access points" ;)


Posted by niol (143.196.xx.xx) on Tue 14 Feb 2006 at 13:19
There is a comment on this site about bonding with wireless, but I cannot manage to get it working with wpa_supplicant.


Posted by Anonymous (62.77.xx.xx) on Tue 14 Feb 2006 at 19:41
I remember once creating a bonded interface on one of my file servers, because if you have a RAID 0 array with plenty of I/O capacity, 100 Mbit isn't enough.

But I can advise any of you that a bonding-capable switch can sometimes save you some trouble.
I used a simple switch and the main problem was that the DHCP server got confused and didn't assign addresses to the clients.

There was also some problem with iptraf, I don't remember what it was :)

But network switches aren't expensive these days, and if you plan to create a bond between two computers it will work excellently.

And if you want to build a huge warez FTP server :) then maybe simple round-robin DNS and two NICs will be enough.

Remember: don't use a cheap, simple switch!

GreetZ

RaYnoR


Posted by JRue (63.110.xx.xx) on Tue 11 Nov 2008 at 17:37
I'm wondering how an additional IP on bond0 would work. Would I have to declare an additional alias like so? Or are additional IPs just for ifconfig?

alias bond0 bonding
alias bond0:1 bonding
options bond0 mode=0 miimon=100
options bond0:1 mode=0 miimon=100

and this?

ifenslave bond0 bond0:1 eth0 eth1



Thanks


Posted by Anonymous (64.183.xx.xx) on Thu 12 Feb 2009 at 00:11
Thanx a bunch for this "How To" - very useful and easy to do!!
I'm running Deb. Etch (4.0r6) on our new file server with a Netgear GS748T. Pumping up to the server works great but on the way down to the workstations the connection is incredibly slow! (the few gigs that took a couple of minutes up take many hours to get back down). Any idea what is going wrong?

