NIC Teaming / BONDING in CentOS 6.X


Bonding and NIC teaming are two terms for the very same thing. The concept is simple: on servers with multiple NICs, two or more cards can function as one, which increases the throughput, efficiency, and redundancy of your server. There are many reasons to set up bonding/NIC teaming on a server; in this guide we will focus on a configuration that allows your server to push 2 Gb/s of traffic using two NICs.

For this guide you can use your favorite text editor in Linux; we will use nano.


The IP allocation for this test server will be:

The network setup for this server is complete.

The server is live and responding to ICMP (ping) packets.

The server can be accessed via SSH.

You MUST have local access to the server, in case you lose SSH access to it.


1. SSH to the server and create the logical interface

Issue the following command to create the bond0 logical interface:

[root@SRV1C318 ~]# nano /etc/sysconfig/network-scripts/ifcfg-bond0

In this file, add the configuration lines for the master interface that will control both NICs.
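The configuration listing is missing from this copy of the guide. A minimal sketch of ifcfg-bond0, assuming a static address of 192.168.1.10/24 with gateway 192.168.1.1 (placeholders, not from the original guide; substitute your own IP allocation):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- example only;
# the IP addressing below is a placeholder, not from the original guide.
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
NM_CONTROLLED=no
```

On CentOS 6 the bonding mode options can also be set here via a BONDING_OPTS line, but this guide sets them in /etc/modprobe.d/bonding.conf in a later step.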




2. Configure BOTH NICs as "slaves" of the master interface we just created

The configuration for both NICs is the same, so you can copy/paste the same contents into both files on your server.


For Nic1: [root@SRV1C318 ~]# nano /etc/sysconfig/network-scripts/ifcfg-eth0

For Nic2: [root@SRV1C318 ~]# nano /etc/sysconfig/network-scripts/ifcfg-eth1

3. Delete everything in these two configuration files and add ONLY the following lines to BOTH files. NOTE: Set the device name to eth0 and eth1 respectively.
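The slave configuration listing is also missing from this copy of the guide. A typical CentOS 6 slave file, shown for eth0 (change DEVICE to eth1 in the second file), looks like:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- typical slave config;
# use DEVICE=eth1 in ifcfg-eth1.
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no
```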




At this point the NIC configuration is done; only a few steps remain.

4. Configure the kernel module

On CentOS 6, you must edit the following file: /etc/modprobe.d/bonding.conf

NOTE: If the file does not exist, it must be created.

[root@SRV1C318 /]# nano /etc/modprobe.d/bonding.conf

Add the following lines (mode=balance-alb selects adaptive load balancing, which requires no special switch support; miimon=100 checks link state every 100 ms):



alias bond0 bonding
options bond0 mode=balance-alb miimon=100


Then save and exit the file.


5. Let's make sure bonding starts across reboots

# export EDITOR=nano
# crontab -e

Add the following line:

@reboot /sbin/modprobe bonding

6. Activate the "bonding" kernel module and restart the network service

Issue the following command:


modprobe bonding; service network restart

This will do two things:

1. Load the kernel module.

2. Restart the network service.


7. Verify that everything works


[root@SRV1C318 /]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:c1:2c:ee
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0

Permanent HW addr: 00:25:90:c1:2c:ef
Slave queue ID: 0
[root@SRV1C318 /]#
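For scripted monitoring, the status file above can be parsed instead of read by eye. A small sketch (the helper name count_up_slaves is my own, not from the guide) that counts the slave interfaces whose MII status is up:

```shell
# count_up_slaves FILE: print how many slave interfaces report
# "MII Status: up" in a bonding status file such as /proc/net/bonding/bond0.
# The first "MII Status" line belongs to the bond itself, so we only count
# the ones that follow a "Slave Interface:" line.
count_up_slaves() {
    awk '/^Slave Interface:/ { in_slave = 1 }
         /^MII Status: up/ && in_slave { n++; in_slave = 0 }
         END { print n + 0 }' "$1"
}
```

With the sample output above, count_up_slaves /proc/net/bonding/bond0 prints 2; a cron job could alert whenever it drops below the expected slave count.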

