Tag Archives: NIC Teaming

IPv6 and Junos – VRRPv3

With the Christmas season coming up, changes and the current run of projects slow down, so this is the perfect time to start messing around in the lab with things I wouldn't normally get the chance to do. One thing I know will be happening in the future (though not the near future) on my work network is IPv6. We run IPv6 in the Internet core; however, it's not to be seen anywhere else on the network, and I've heard they're pushing hard to get IPv6 into our hosting datacentres. With this in mind, and having time to kill, it seemed a good idea to be proactive and start looking at how IPv6 and Junos work together!

Having looked at the hosting and enterprise (well, a small bit of the enterprise) network, I had a chat with a few of the seniors, and we came up with a list of the features most likely to be used on the network, which we agreed would be the best things to test:

Routing and Switching Features

  • VRRPv3
  • BGP
  • ACL
  • Virtual Routers (VRFs)
  • IGPs (OSPFv3, Static Routes & IS-IS)
  • SLAAC (Router Advertisements)
  • DHCPv6
  • Multicasting

Firewall Features

  • NAT64 / DNS64
  • Security Policies

Of course, this list isn't the be-all and end-all; however, for now it's a good base to get me started, and from there we'll see what happens next. Where I can, I'll be using IPv6 only, but there are a few features where I'll have a dual-stacked setup, as it will be good 'real world' testing! So with all that talk and explanation out of the way… let's get cracking 😀

The first protocol on my list: Virtual Router Redundancy Protocol (VRRP). I've previously written a post on how to configure VRRP between a Cisco and a Juniper switch; if you take a look at that post, it covers what VRRP is and why you would use it within your network. When working with IPv6 (or in a dual-stacked environment), on the other hand, you will need to make sure you are using VRRPv3. VRRPv3 supports both IPv4 and IPv6, and its main advantages are best described by RFC 5798:

The VRRP router controlling the IPv4 or IPv6 address(es) associated with a virtual router is called the Master, and it forwards packets sent to these IPv4 or IPv6 addresses. VRRP Master routers are configured with virtual IPv4 or IPv6 addresses, and VRRP Backup routers infer the address family of the virtual addresses being carried based on the transport protocol. Within a VRRP router, the virtual routers in each of the IPv4 and IPv6 address families are a domain unto themselves and do not overlap. The election process provides dynamic failover in the forwarding responsibility should the Master become unavailable. For IPv4, the advantage gained from using VRRP is a higher-availability default path without requiring configuration of dynamic routing or router discovery protocols on every end-host. For IPv6, the advantage gained from using VRRP for IPv6 is a quicker switchover to Backup routers than can be obtained with standard IPv6 Neighbor Discovery mechanisms.

For this test, I'll have a similar topology to my other VRRP post, but I'll be using 2x Juniper EX4200 switches and an ESXi-hosted Ubuntu 14.04 LTS VM configured with an active-backup bond, with its two physical NICs connected one into each switch.

VRRPv3 Topology

VRRP Configuration

Firstly, we will need to enable VRRPv3. It isn't enabled by default, and VRRPv2 doesn't support inet6, so this is enabled under the protocols vrrp stanza. In addition, as IPv6 doesn't use the Address Resolution Protocol (ARP) for link-layer discovery, we need to enable its IPv6 equivalent, the Neighbor Discovery Protocol (NDP). This allows Neighbor Discovery messages (Router Advertisements) to be sent out to hosts and other network devices on that subnet, which VRRPv3 needs to work effectively.

NOTE
For more about IPv6 NDP, check out RFC 4861.

ND is enabled under the protocols router-advertisement stanza, against the logical interface in question.

{master:0}[edit protocols]
user@switch# show 
router-advertisement {
    interface vlan.100 {
        prefix 2001:192:168:1::/64;
    }
}
vrrp {
    version-3;
}
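
For anyone who prefers set commands over the stanza view, the equivalent of the above should be along these lines:

set protocols router-advertisement interface vlan.100 prefix 2001:192:168:1::/64
set protocols vrrp version-3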

Just like with VRRPv2, the entire configuration is set under the interface stanza, whether on a VLAN or a physical interface. It is very important to note that you will need to manually set the link-local address on the interface and set a virtual link-local address (both will need to be in the same subnet); without these, you will not be able to commit the configuration.

VRRP Master:

{master:0}[edit interfaces vlan unit 100]
user@switch# show 
family inet6 {
    address 2001:192:168:1::2/64 {
        vrrp-inet6-group 1 {
            virtual-inet6-address 2001:192:168:1::1;
            virtual-link-local-address fe80:192:168:1::1;
            priority 200;
            preempt;
            accept-data;
        }
    }
    address fe80:192:168:1::2/64;
}
VRRP Backup:

{master:0}[edit interfaces vlan]
user@switch# show 
unit 100 {
    family inet6 {
        address 2001:192:168:1::3/64 {
            vrrp-inet6-group 1 {
                virtual-inet6-address 2001:192:168:1::1;
                virtual-link-local-address fe80:192:168:1::1;
                priority 100;
                no-preempt;
                accept-data;
            }
        }
        address fe80:192:168:1::3/64;
    }
}
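
As before, the same thing expressed as set commands (the master shown here; the backup is identical apart from the addresses, priority and no-preempt):

set interfaces vlan unit 100 family inet6 address 2001:192:168:1::2/64 vrrp-inet6-group 1 virtual-inet6-address 2001:192:168:1::1
set interfaces vlan unit 100 family inet6 address 2001:192:168:1::2/64 vrrp-inet6-group 1 virtual-link-local-address fe80:192:168:1::1
set interfaces vlan unit 100 family inet6 address 2001:192:168:1::2/64 vrrp-inet6-group 1 priority 200
set interfaces vlan unit 100 family inet6 address 2001:192:168:1::2/64 vrrp-inet6-group 1 preempt
set interfaces vlan unit 100 family inet6 address 2001:192:168:1::2/64 vrrp-inet6-group 1 accept-data
set interfaces vlan unit 100 family inet6 address fe80:192:168:1::2/64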

VRRP Verification

Depending on the level of detail you want to go into, you can run any of show vrrp summary, show vrrp detail or show vrrp extensive. I checked both the Master and Backup with show vrrp detail, to make sure everything was as expected and to see the differences between the two.

VRRP Master:

user@switch> show vrrp detail    
Physical interface: vlan, Unit: 100, Address: 2001:192:168:1::2/64
  Index: 72, SNMP ifIndex: 709, VRRP-Traps: disabled, VRRP-Version: 3
  Interface state: up, Group: 1, State: master, VRRP Mode: Active
  Priority: 200, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Computed send rate: 0
  Preempt: yes, Accept-data mode: yes, VIP count: 2, VIP: fe80:192:168:1::1, 2001:192:168:1::1
  Advertisement Timer: 0.530s, Master router: fe80:192:168:1::2
  Virtual router uptime: 00:00:20, Master router uptime: 00:00:17
  Virtual Mac: 00:00:5e:00:02:01 
  Tracking: disabled

VRRP Backup:

user@switch> show vrrp detail 
Physical interface: vlan, Unit: 100, Address: 2001:192:168:1::3/64
  Index: 72, SNMP ifIndex: 709, VRRP-Traps: disabled, VRRP-Version: 3
  Interface state: up, Group: 1, State: backup, VRRP Mode: Active
  Priority: 100, Advertisement interval: 1, Authentication type: none
  Advertisement threshold: 3, Computed send rate: 0
  Preempt: no, Accept-data mode: yes, VIP count: 2, VIP: fe80:192:168:1::1, 2001:192:168:1::1
  Dead timer: 3.244s, Master priority: 200, Master router: fe80:192:168:1::2 
  Virtual router uptime: 00:00:28
  Tracking: disabled 

In addition, from the VRRP Master, we can confirm that we are receiving NDs from the ESXi host, as we can see an entry when we run the command show ipv6 neighbors:

user@switch> show ipv6 neighbors 
IPv6 Address                 Linklayer Address  State       Exp Rtr Secure Interface
2001:192:168:1::3            cc:e1:7f:2b:82:81  stale       776 yes no      vlan.100    
2001:192:168:1::4            00:0c:29:d3:ac:77  stale       1070 no no      vlan.100   
fe80::20c:29ff:fed3:ac77     00:0c:29:d3:ac:77  stale       673 no  no      vlan.100    
fe80::20c:29ff:fed3:ac81     00:0c:29:d3:ac:81  stale       588 no  no      vlan.100    
fe80:192:168:1::3            cc:e1:7f:2b:82:81  stale       776 yes no      vlan.100
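
If you want the same view from the other end, the host's neighbour cache can be checked with iproute2 on the Ubuntu box:

ip -6 neigh show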

Failover Testing

Before testing the VRRP failover, I enabled VRRP traceoptions on the master and backup, so that we'd be able to see what's happening under the bonnet. I found the logs from the backup much simpler to understand than the master's; however, on the master you can see the process the VRRP daemon goes through to gain mastership.

{master:0}[edit protocols vrrp]
user@switch# show 
traceoptions {
    file vrrp.backup.log;
    flag all;
}
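
Once committed, the trace file is written to /var/log on the switch. To read it back, or to tail it live during the failover (monitor stop ends the live tail):

show log vrrp.backup.log | last 20
monitor start vrrp.backup.log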

For the failover, the link down to the host and the trunk link on the master were deactivated. From the logs on the VRRP Backup, we can see that the VRRP daemon received a vrrpd_process_ppmd_packet notification that the VRRP master adjacency had gone down, then received another update, ppmd_vrrp_delete_adj, removing the link-local address of the VRRP master, and transitioned to become the VRRP Master.

Apr  2 14:43:03 vrrpd_process_ppmd_packet : PPMP_PACKET_ADJ_DOWN received
Apr  2 14:43:03 vrrpd_update_state_machine, vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 state: backup
Apr  2 14:43:03 vrrp_fsm_update IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: transition
Apr  2 14:43:03 vrrp_fsm_transition: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 state from: backup
Apr  2 14:43:03 vrrp_fsm_update_for_inherit IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: transition
Apr  2 14:43:03 ppmd_vrrp_delete_adj : VRRP neighbour fe80:192:168:1::2 on interface <72 1 1> deleted
Apr  2 14:43:03 vrrp_fsm_update IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: master
Apr  2 14:43:03 vrrp_fsm_active: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 state from: transition
Apr  2 14:43:03 VRRPD_NEW_MASTER: Interface vlan.100 (local address 2001:192:168:1::3) became VRRP master for group 1 with master reason masterNoResponse
Apr  2 14:43:03 vrrpd_construct_pdu if: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001, checksum flag 0, checksum 17650
Apr  2 14:43:03 vrrpd_ppmd_program_send : Creating XMIT on IFL 72, Group 1, Distributed 0, enabled 1
Apr  2 14:43:03 vrrp_fsm_update_for_inherit IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: master

As preempt is configured on the Master, when the two interfaces were reactivated it automatically took over as VRRP Master again. As we can see from the logs on the original backup switch, another vrrpd_process_ppmd_packet notification was received and the switch automatically transitioned back to VRRP Backup.

Apr  2 14:44:00 vrrpd_process_ppmd_packet : PPMP_PACKET_RECEIVE received
Apr  2 14:44:00 vrrp_fsm_update IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: backup
Apr  2 14:44:00 vrrp_fsm_backup: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 state from: master
Apr  2 14:44:00 VRRPD_NEW_BACKUP: Interface vlan.100 (local address 2001:192:168:1::3) became VRRP backup for group 1
Apr  2 14:44:00 vrrpd_construct_pdu if: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001, checksum flag 0, checksum 17650
Apr  2 14:44:00 vrrpd_ppmd_program_send : Creating XMIT on IFL 72, Group 1, Distributed 0, enabled 0
Apr  2 14:44:00 vrrp_fsm_update_for_inherit IFD: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 event: backup
Apr  2 14:44:00 Signalled dcd (PID 1225) to reconfig
Apr  2 14:44:00 ppmd_vrrp_set_adj : Created adjacency for neighbor fe80:192:168:1::2 on interface <72 1 1> with hold-time <3 609000000>, Distributed 0
Apr  2 14:44:00 ppmd_vrrp_program_send : Programmed periodic send on interface <72 1 1> with enabled = 0, Distribute = 0, MASTER RE = 1
Apr  2 14:44:00 vrrpd_rts_async_ifa_msg, Received Async message for: (null) index: 72, family 0x1c op: 0x3 address : 2001:192:168:1::1
Apr  2 14:44:00 vrrpd_rts_async_ifa_msg, Received Async message for: (null) index: 72, family 0x1c op: 0x3 address : fe80:192:168:1::1
Apr  2 14:44:00 vrrpd_rts_async_ifl_msg, Received Async message for: vlan index: 72, flags 0xc000 op: 0x2
Apr  2 14:44:00 vrrpd_if_find_by_ifname_internal, Found vlan.000.100: vlan.000.100.001.2001:0192:0168:0001:0000:0000:0000:0003.001 in run 1
Apr  2 14:44:00 vrrpd_find_track_if_entry_array_by_name: vlan.100

In addition, you can check how many state transitions have taken place by using the show vrrp extensive command:

user@switch> show vrrp extensive | match Backup 
    Idle to backup transitions               :4         
    Backup to master transitions             :4         
    Master to backup transitions             :0

From the host's point of view, I had a rolling ping going to the gateway during the failover testing, and the results were as expected.
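
The ping itself was nothing fancy; from the host it was simply a case of running ping6 against the virtual IP and leaving it going:

ping6 2001:192:168:1::1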

--- 2001:192:168:1::1 ping statistics ---
123 packets transmitted, 95 received, 22% packet loss, time 122275ms
rtt min/avg/max/mdev = 1.058/96.186/3257.648/494.372 ms, pipe 4

Although you see packet loss, this is expected given the bond type (active-backup) and the fact that one of the two NICs was unavailable. More importantly, connectivity never completely dropped out, and a host running at 50% capacity is better than a host with no accessibility at all.

The full logs and ping6 outputs are available here: vrrp.master.log, vrrp.backup.log, ping6.

VRRPv3 has a very similar configuration to VRRPv2, but it took me a while to work out the small differences, i.e. enabling router-advertisement and version-3; without them, you could be left staring at the screen scratching your head! And with that, we've got another post in the books. Keep an eye out for future posts on IPv6 and Junos 🙂


Manually changing an Active Slave in a Bond-Type 1 Configuration

While testing for my previous post, I ran into an issue where I had configured a bond-type 1 (active-backup) interface, yet the active slave never failed over when I disconnected the interface. For the life of me, I didn't have a clue why! I subsequently found out that the configuration on the ESXi host's vSwitch was wrong, which is why the failover never happened.

Before I was told about the ESXi vSwitch, I was looking at a number of different ways to fix the issue. In my searching, I found a great article written by Ivan Erben on how to manually fail over the active slave in a bond-type 1 configuration.

It was quite straightforward, as I like it :p

Firstly, check which slave is currently active by using the command cat /proc/net/bonding/bond0:

marquk01@km-vm1:~$ cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:c5
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Slave queue ID: 0

Having seen that eth1 is the active slave, we can remove the interface from the bond by running echo -eth1 > /sys/class/net/bond0/bonding/slaves.

Note
You will need root privileges (sudo -s) to make this change.
marquk01@km-vm1:~$ sudo -s
[sudo] password for marquk01: 
root@km-vm1:~# echo -eth1 > /sys/class/net/bond0/bonding/slaves

We can see that eth1 has been removed from the bond configuration and eth2 has taken over as the active slave:

root@km-vm1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Slave queue ID: 0

The bond will still pass traffic and work as expected. To add the interface back into the bond, we need to run echo +eth1 > /sys/class/net/bond0/bonding/slaves.

As we can see below, eth1 has been added back into the bond and eth2 remains the active slave.

root@km-vm1:~# echo +eth1 > /sys/class/net/bond0/bonding/slaves
root@km-vm1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:c5
Slave queue ID: 0
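
As an aside, the bonding driver also exposes an active_slave attribute in sysfs, so on kernels that have it (check the file exists first) you can switch the active slave directly without removing anything from the bond:

root@km-vm1:~# cat /sys/class/net/bond0/bonding/active_slave
root@km-vm1:~# echo eth1 > /sys/class/net/bond0/bonding/active_slave

Note this only works in active-backup (and the balance-tlb/alb) modes, and the interface you name must already be a slave of the bond.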

This is very useful if you have planned maintenance coming up, or need a quick failover of interfaces and don't have link detection enabled. Definitely a great find and post by Ivan! You can check out his blog here.


Configuring a 802.3ad Bonded Interface Ubuntu (NIC Teaming)

I was messing about in the lab configuring 802.3ad LACP bundled interfaces between switches, and I wanted to see how easy (or hard) it would be to create a bonded interface on a server. I've got an Ubuntu 14.04 LTS VM with 3 NICs available, so eth1 and eth2 were told they would become one 😀

NOTE
Please make sure you are either doing this via iLO/KVM or have a separate management interface like I have, as you are making network changes and could lock yourself out of your server if it goes horribly wrong!

Let’s get cracking!

Firstly, I configured an 802.3ad LACP aggregated interface on the switch and set the member interfaces to be part of it:

{master:0}[edit interfaces]
user@switch# show  
ge-0/0/2 {
    description "km-vm1 1GB";
    enable;
    ether-options {
        802.3ad ae1;
    }
}
ge-0/0/3 {
    description "km-vm1 eth2 1GB";
    enable;
    ether-options {
        802.3ad ae1;
    }
}
ae1 {
    aggregated-ether-options {
        lacp {
            active;                     
            periodic fast;
        }
    }
    unit 0 {
        family ethernet-switching {
            port-mode access;
            vlan {
                members v10;
            }
        }
    }
}
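
One gotcha worth mentioning: on EX-series switches, ae interfaces only exist once the chassis has been told how many to create, so if ae1 refuses to appear you will likely also need something like this (with the count sized to however many aggregated interfaces you want):

set chassis aggregated-devices ethernet device-count 2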

Server-side, check that the NICs can be used for an 802.3ad bond; as I'm using the LACP method of bonding, I need to ensure that the NICs support ethtool.

Run ethtool {interface}; if a link is detected, then you're good to go:

marquk01@km-vm1:~$ ethtool eth1
Settings for eth1:
	Supported ports: [ TP ]
	Supported link modes:   1000baseT/Full 
	                        10000baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 10000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	MDI-X: Unknown
Cannot get wake-on-lan settings: Operation not permitted
	Link detected: yes

marquk01@km-vm1:~$ ethtool eth2
Settings for eth2:
	Supported ports: [ TP ]
	Supported link modes:   1000baseT/Full 
	                        10000baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: No
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Speed: 10000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: off
	MDI-X: Unknown
Cannot get wake-on-lan settings: Operation not permitted
	Link detected: yes

I needed to install the ifenslave package, which is used to attach and detach NICs to and from a bonding interface:

sudo apt-get install ifenslave

Once that has been installed, the kernel module file needs to be edited to include bonding before creating a bonded interface:

sudo nano /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

lp
rtc
bonding

Once that is saved, manually load the module:

sudo modprobe bonding
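
You can confirm the module has actually loaded with:

lsmod | grep bonding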

Next, edit the interfaces file to define the bond: sudo nano /etc/network/interfaces

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    # For jumbo frames, change mtu to 9000
    mtu 1500
    address 192.31.1.2
    netmask 255.255.255.0
    network 192.31.1.0
    broadcast 192.31.1.255
    gateway 192.31.1.1
    bond-miimon 100
    bond-downdelay 200 
    bond-updelay 200 
    bond-mode 4
    bond-slaves none
Bond Configuration Details

bond-miimon: Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures.

bond-downdelay: Specifies the time, in milliseconds, to wait before disabling a slave after a link failure has been detected.

bond-updelay: Specifies the time, in milliseconds, to wait before enabling a slave after a link recovery has been detected.

bond-mode: Specifies which mode of NIC bonding is configured. There are 7 modes:

  • Mode 0 – balance-rr
  • Mode 1 – active-backup
  • Mode 2 – balance-xor
  • Mode 3 – broadcast
  • Mode 4 – 802.3ad
  • Mode 5 – balance-tlb
  • Mode 6 – balance-alb

For more in-depth detail on bonding modes and Linux Ethernet bonding, visit the kernel.org documentation.

bond-slaves: Defines all the interfaces that will be in the bond. My example has none because I defined them with bond-master on each slave interface.
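
One thing to note from my own output further down: the bond comes up with LACP rate: slow, while the switch side is configured with periodic fast. If you want the Linux side to request the fast rate as well, the bonding driver accepts a bond-lacp-rate option in the same iface bond0 stanza:

    bond-lacp-rate fast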

Save and exit, then restart networking or reboot the server for the change to take effect.
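
A reboot is the no-surprises option on 14.04; if you'd rather avoid one, something along these lines from the console (never over the links you're changing) should do it, though ifupdown can get confused by pre-existing interface state:

sudo ifdown eth1 eth2 2>/dev/null
sudo ifup eth1 eth2 bond0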

Once the reboot/restart has completed, you should be sorted. You can check by running ifconfig:

marquk01@km-vm1:~$ ifconfig 
bond0     Link encap:Ethernet  HWaddr 00:0c:29:4f:26:c5  
          inet addr:192.31.1.2  Bcast:192.31.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe4f:26c5/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:150 errors:0 dropped:5 overruns:0 frame:0
          TX packets:446 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:14381 (14.3 KB)  TX bytes:53888 (53.8 KB)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:4f:26:bb  
          inet addr:10.1.0.137  Bcast:10.1.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe4f:26bb/64 Scope:Link
          inet6 addr: 2001:41c1:4:8040:20c:29ff:fe4f:26bb/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:304 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:26921 (26.9 KB)  TX bytes:24900 (24.9 KB)

eth1      Link encap:Ethernet  HWaddr 00:0c:29:4f:26:c5  
          inet6 addr: fe80::20c:29ff:fe4f:26c5/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:24 errors:0 dropped:1 overruns:0 frame:0
          TX packets:216 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4155 (4.1 KB)  TX bytes:26653 (26.6 KB)

eth2      Link encap:Ethernet  HWaddr 00:0c:29:4f:26:c5  
          inet6 addr: fe80::20c:29ff:fe4f:26c5/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:126 errors:0 dropped:4 overruns:0 frame:0
          TX packets:230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:10226 (10.2 KB)  TX bytes:27235 (27.2 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:64 errors:0 dropped:0 overruns:0 frame:0
          TX packets:64 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5696 (5.6 KB)  TX bytes:5696 (5.6 KB)

or cat /proc/net/bonding/bond0:

marquk01@km-vm1:~$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 2
	Actor Key: 33
	Partner Key: 2
	Partner Mac Address: cc:e1:7f:2b:82:80

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:c5
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:4f:26:cf
Aggregator ID: 1
Slave queue ID: 0

By using cat /proc/net/bonding/bond0, you can also check whether a link in the bond has failed, as the Link Failure Count will increase.
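
You can also sanity-check the aggregation from the switch side, which will show whether both links are collecting and distributing:

user@switch> show lacp interfaces ae1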

And that's how you configure an 802.3ad bonded interface 🙂
