November 26, 2009

EtherChannel between a Cisco switch and a Dell PowerEdge server

Problem: My production Dell PowerEdge file server has a single Broadcom Gigabit connection to a Cisco Catalyst 3750 switch on the internal network. I’m seeing average throughput of around 100 MB/sec (~800 Mbit/s) and am concerned about link saturation and performance bottlenecks. How can I increase the bandwidth between my file server and the internal network without complicated layer 3 load balancing or DNS dual homing?

Solution: Using the Broadcom Advanced Control Suite included with Dell’s PowerEdge servers and Cisco’s native EtherChannel capability, I can trunk up to eight (8) LAN connections between my Dell server and Cisco switch. This gives me a single logical LAN connection of up to 8 Gbit (or 80 Gbit if using 10 Gigabit cards) between my server and the network core. All of the links (anywhere from two to eight) operate as a single pseudo interface with a single MAC address. When the EtherChannel is spread across a Cisco switch stack (rather than a single switch), I also gain link redundancy: if one switch fails, the link continues to operate.

How To: This article is an outline of the configuration requirements for an EtherChannel between a Cisco Catalyst switch and a Dell PowerEdge server. Whilst this configuration can apply to other server platforms (e.g. HP, IBM), this article focuses on the Broadcom Advanced Control Suite, which ships with most Dell servers using Broadcom Gigabit network interfaces, and on Cisco Catalyst switches. First of all, an EtherChannel is a port trunking (link aggregation being the general term) technology used primarily on Cisco switches. It allows several physical Ethernet links to be grouped into one logical Ethernet link for the purpose of providing fault tolerance and high-speed links between switches, routers and servers. An EtherChannel can be created from between two and eight active Fast Ethernet, Gigabit Ethernet or 10 Gigabit Ethernet ports, with an additional one to eight inactive (failover) ports which become active as the active ports fail. EtherChannel is primarily used in the backbone network, but can also be used to connect end user machines, which is exactly what we are doing here.
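
An EtherChannel can be negotiated with Cisco’s proprietary PAgP, with the standards-based LACP (IEEE 802.3ad), or simply forced on statically. This article uses LACP, which is what the Broadcom “Link Aggregation 802.3ad” team type speaks. On the switch side the choice is made with the channel-group mode keyword; the help text below is abbreviated and the exact wording varies slightly between IOS versions:

switch(config-if)#channel-group 11 mode ?
active     Enable LACP unconditionally
auto       Enable PAgP only if a PAgP device is detected
desirable  Enable PAgP unconditionally
on         Enable Etherchannel only
passive    Enable LACP only if a LACP device is detected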

Configuration of an EtherChannel should begin at the switch. These examples are based on configuring an EtherChannel between a Dell server with two (2) Broadcom Gigabit LAN cards and a single Cisco Catalyst 3750 switch. If you are using a switch stack or a blade-based chassis, the same approach applies across multiple switches.

1. You need to identify two available switch ports on your 3750 switch, then confirm that they support channeling:

switch#show interfaces Gi2/0/23 capabilities
GigabitEthernet2/0/23
Model: WS-C3750G-24T
Type: 10/100/1000BaseTX
Speed: 10,100,1000,auto
Duplex: half,full,auto
Trunk encap. type: 802.1Q,ISL
Trunk mode: on,off,desirable,nonegotiate
Channel: yes
Broadcast suppression: percentage(0-100)
Flowcontrol: rx-(off,on,desired),tx-(none)
Fast Start: yes
QoS scheduling: rx-(not configurable on per port basis),tx-(4q2t)
CoS rewrite: yes
ToS rewrite: yes
UDLD: yes
Inline power: no
SPAN: source/destination
PortSecure: yes
Dot1x: yes

In the above output, “Channel: yes” confirms that port Gi2/0/23 supports channeling. Repeat this step for the second port you will use.
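
If you are not sure which ports are free, a quick way to find candidates is to list the ports that are not currently connected; anything showing notconnect with no description is usually fair game (do check your own patching records before re-using a port, of course):

switch#show interfaces status | include notconnect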

2. Next we need to configure each switch port into a channel-group.

Warning: I strongly recommend configuring this on two new switch ports, confirming the configuration is correct, and only then moving your server over to the port channel. Re-configuring the server’s existing switch port may take it offline, and bonding the network cards on the server (a later step) will definitely take the server offline for up to 15 minutes, so complete this configuration outside production hours or during a scheduled maintenance window.

We will use ports 2/0/23 and 2/0/24 in this configuration example:

switch#conf t
switch(config)#int Gi2/0/23
switch(config-if)#switchport mode access
switch(config-if)#switchport access vlan 100 **Note: Be sure to enter your server VLAN.
switch(config-if)#spanning-tree portfast
switch(config-if)#channel-group 11 mode active
switch(config-if)#int Gi2/0/24
switch(config-if)#switchport mode access
switch(config-if)#switchport access vlan 100 **Note: Be sure to enter your server VLAN.
switch(config-if)#spanning-tree portfast
switch(config-if)#channel-group 11 mode active
switch(config-if)#exit
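
As a small time saver, the same commands can be applied to both ports in one pass using an interface range. This is an equivalent sketch of the configuration above; adjust the port numbers, VLAN and channel-group number to suit your environment:

switch#conf t
switch(config)#interface range Gi2/0/23 - 24
switch(config-if-range)#switchport mode access
switch(config-if-range)#switchport access vlan 100
switch(config-if-range)#spanning-tree portfast
switch(config-if-range)#channel-group 11 mode active
switch(config-if-range)#end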

Once the configuration is complete, each port’s configuration should look like:

switch#sh run int Gi2/0/23
Building configuration...

Current configuration : 216 bytes
!
interface GigabitEthernet2/0/23
description Uplink to Server (Team 1)
switchport access vlan 100
switchport mode access
no snmp trap link-status
channel-group 11 mode active
spanning-tree portfast
end
switch#sh run int Gi2/0/24
Building configuration...

Current configuration : 216 bytes
!
interface GigabitEthernet2/0/24
description Uplink to Server (Team 1)
switchport access vlan 100
switchport mode access
no snmp trap link-status
channel-group 11 mode active
spanning-tree portfast
end
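
IOS also creates the logical Port-channel interface automatically when the first channel-group command is entered. If you want to sanity-check it, its running configuration should mirror the member ports (access mode, VLAN 100):

switch#sh run int Po11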

3. Next we need to configure the EtherChannel load-balancing method. EtherChannel load balancing can hash on source or destination MAC addresses, source or destination IP addresses, or (on some higher-end platforms) Layer 4 port numbers, using source, destination, or combined source-and-destination mode. The method you select applies to all EtherChannels configured on the switch, so use the option that provides the greatest variety for your traffic. For example, if the traffic on a channel only ever goes to a single destination MAC address, hashing on the destination MAC address will pick the same link in the channel every time; hashing on source addresses or on IP addresses can give a better spread. My recommended configuration is:

switch(config)#port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-ip Src IP Addr
src-mac Src Mac Addr
switch(config)#port-channel load-balance src-mac
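
To confirm the load-balancing method took effect, you can check it afterwards; the output should report src-mac as the configured method (the exact output format varies between IOS versions):

switch#show etherchannel load-balance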

4. Next we need to configure “teaming” on the Dell PowerEdge server. Configuration details for the Broadcom Advanced Control Suite 3 can be found in Broadcom’s documentation for the suite. Note that you will need to connect your server to the two newly configured switch ports before enabling the team configuration in the Broadcom software.

Interface Note: When you create a new team, a new virtual interface will be created under Windows. You will need to re-configure this interface with your server’s IP address, subnet mask, default gateway and DNS servers before the server will be accessible on the network.
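
If you prefer to script the re-addressing rather than click through the Windows network control panel, something along these lines works from a command prompt on the server. Note that the adapter name “Team 1”, the subnet mask, the default gateway (10.9.8.1) and the DNS server (10.9.8.53) below are placeholders for this example; substitute your own values, and be aware that the exact netsh syntax differs slightly between Windows versions:

C:\>netsh interface ip set address name="Team 1" static 10.9.8.10 255.255.255.0 10.9.8.1
C:\>netsh interface ip set dns name="Team 1" static 10.9.8.53
C:\>ipconfig /all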

Team Type Note: Broadcom Advanced Control Suite 3 will, by default, set the team type as “Smart Load Balancing(TM) and Failover”. This is not natively compatible with Cisco’s EtherChannel. Once you’ve created the team on the Dell server you need to change the Team Type to “Link Aggregation 802.3ad”, which matches the LACP (IEEE 802.3ad) mode we enabled on the switch ports with “channel-group 11 mode active”.

5. Once teaming is set up, we need to confirm that the EtherChannel is active on the switch and do some quick testing to confirm redundancy.

a. Check the status of the EtherChannel:
switch#show etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 1
Number of aggregators: 1

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
11 Po11(SU) LACP Gi2/0/23(P) Gi2/0/24(P)

switch#

Note that the flags on the port-channel (SU) show that the channel is running as a Layer 2 (data link) channel and is in use.
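
If you want to dig a little deeper into the LACP negotiation with the server’s Broadcom team, the following lists each member port’s LACP partner; both ports should report the same partner device (the team’s MAC address), which confirms the server is actually bundling rather than running two independent links:

switch#show lacp neighbor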

b. Start a ping to your server’s IP address:

C:\Users\bill>ping 10.9.8.10 -t

Pinging 10.9.8.10 with 32 bytes of data:
Reply from 10.9.8.10: bytes=32 time=19ms TTL=127
Reply from 10.9.8.10: bytes=32 time<1ms TTL=127
Reply from 10.9.8.10: bytes=32 time<1ms TTL=127

Leave this running in the background, then log in to your switch and disable one of the two switch ports that is part of the EtherChannel:

switch#conf t
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)#int Gi2/0/23
switch(config-if)#shut

You should see little or no interruption to access on the server (at most a dropped ping or two while the team fails over), but the EtherChannel status will now show:

switch#show etherchannel 11 summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 1
Number of aggregators: 1

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
11 Po11(SU) LACP Gi2/0/23(D) Gi2/0/24(P)

Note the “D” flag on port 2/0/23, which has been shut down and is therefore marked down.

Re-enable the 2/0/23 interface:

switch#conf t
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)#int Gi2/0/23
switch(config-if)#no shut
switch(config-if)#exit
switch(config)#

And confirm the EtherChannel is back online:

switch#show etherchannel 11 summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator

M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port

Number of channel-groups in use: 1
Number of aggregators: 1

Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
11 Po11(SU) LACP Gi2/0/23(P) Gi2/0/24(P)

6. Your team configuration is now complete. You now have two redundant Gigabit interfaces connected to your file server, giving up to 2 Gbit/s of throughput in each direction (4 Gbit/s aggregate, full duplex), with automatic failover if one link, NIC or switch port fails.
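
One last housekeeping item on the switch side: the changes above live only in the running configuration until saved, so write them to startup before you walk away:

switch#copy running-config startup-config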

For more technical information, please see Cisco’s EtherChannel implementation guide (Document ID 98469).

Let me know if you have questions or problems regarding this configuration.

EOF Notes: Dell server dual homing, dual NIC, server redundant NIC config, teaming NICs, increase server LAN link, network teaming, link aggregation, high performance network link