Cisco Nexus vPC LACP bonding issue with Linux server

I am configuring vPC end-to-end, all the way down to my servers, to get more bandwidth. This is my scenario:

[topology diagram: Linux server dual-homed via an LACP bond to a pair of Nexus 3K switches running vPC]

I am seeing one strange thing. I configured vPC on the N3K switches, then configured my Linux server for 802.3ad link-aggregation bonding and rebooted the server. So far so good: I can see the correct bonding configuration in /proc/net/bonding/bond0, and the server starts pinging, but with packet loss. Later I found that the switch shows the vPC as down, so I am wondering how I am getting ping replies at all.

N3k# show vpc 1


vPC status
----------------------------------------------------------------------
id   Port   Status Consistency Reason                     Active vlans
--   ----   ------ ----------- ------                     ------------
134  Po1  down*  success     success                    -

Later I did a shut/no shut on port-channel 1, and that immediately brought the vPC up.
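
For reference, this is the exact sequence I run each time (standard NX-OS interface commands):

N3k# configure terminal
N3k(config)# interface port-channel 1
N3k(config-if)# shutdown
N3k(config-if)# no shutdown
N3k(config-if)# end

After that, show vpc reports the channel as up: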

N3k# show vpc 1


vPC status
----------------------------------------------------------------------
id   Port   Status Consistency Reason                     Active vlans
--   ----   ------ ----------- ------                     ------------
131  Po1  up     success     success                    10,20,30

My vPC domain config:

vpc domain 204
  peer-switch
  role priority 10
  peer-keepalive destination 10.29.0.51 source 10.29.0.50
  auto-recovery
  ip arp synchronize
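
For what it's worth, these are the standard NX-OS checks I use to confirm the domain itself is healthy (outputs omitted here):

N3k# show vpc peer-keepalive
N3k# show vpc consistency-parameters global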

This is my vPC member port and port-channel config:

interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  speed 10000
  channel-group 1 mode active

interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  speed 10000
  vpc 1
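
When the problem occurs, show port-channel summary is how I check whether the member port is actually bundled (flag P) or stuck as individual (flag I):

N3k# show port-channel summary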

This is my Linux server config (RHEL-style ifcfg files):

ifcfg-bond0

NAME=bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=500 downdelay=1000 lacp_rate=1"
NM_CONTROLLED=no

ifcfg-bond0.10

NAME=bond0.10
DEVICE=bond0.10
BOOTPROTO=dhcp
VLAN=yes
ONPARENT=yes
NM_CONTROLLED=no
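
For completeness, each physical NIC is enslaved with the standard MASTER/SLAVE stanza; a sketch of one slave config follows (eth0 is a placeholder for the real NIC name):

ifcfg-eth0

NAME=eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no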

Questions:

  1. How is the server able to ping at all if the vPC is down on the switch?
  2. Why do I need to shut/no shut the port-channel to bring the vPC up? Is this normal?
  3. I installed 30 servers on the same vPC pair and every one of them had the same issue; each time I have to log in to the switch and do a port-channel shut/no shut.
  4. Am I missing something here?

Update - 1

For testing I rebooted a server and found that the server comes up but the switch vPC stays down. On the switch I am seeing the following logs, which is strange:

sw1# show logging | grep "Ethernet1/37"
    2018 Jul  9 14:28:13 sw1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/37 is down (Initializing)
    2018 Jul  9 14:28:13 sw1 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/37 is down
    2018 Jul  9 14:28:15 sw1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/37 is down (Initializing)
    2018 Jul  9 14:28:18 sw1 %ETHPORT-5-SPEED: Interface Ethernet1/37, operational speed changed to 10 Gbps
    2018 Jul  9 14:28:18 sw1 %ETHPORT-5-IF_DUPLEX: Interface Ethernet1/37, operational duplex mode changed to Full
    2018 Jul  9 14:28:18 sw1 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet1/37, operational Receive Flow Control state changed to off
    2018 Jul  9 14:28:18 sw1 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet1/37, operational Transmit Flow Control state changed to off
    2018 Jul  9 14:28:28 sw1 %ETH_PORT_CHANNEL-4-PORT_INDIVIDUAL: port Ethernet1/37 is operationally individual
    2018 Jul  9 14:28:28 sw1 %ETHPORT-5-IF_UP: Interface Ethernet1/37 is up in mode trunk
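
The "operationally individual" message suggests the switch is not receiving LACPDUs from the server at that moment; an individual port still forwards traffic as a plain trunk port, which may be why the server can ping even though it never joined the bundle. To inspect the LACP state of the member port directly I use (outputs omitted):

sw1# show lacp interface ethernet 1/37
sw1# show lacp neighbor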

On the server side I am seeing the following errors:

[root@server ~]# tail -f /var/log/messages
Jul  9 10:45:47 server kernel: [  321.299960] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jul  9 10:46:11 server kernel: [  345.300288] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
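
To confirm whether LACPDUs are actually arriving on the server during this window, I can capture slow-protocol frames on a slave NIC (eth0 again as a placeholder; 0x8809 is the IEEE slow-protocols EtherType that LACP uses):

[root@server ~]# tcpdump -i eth0 -e ether proto 0x8809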

On the Linux server side I am seeing the following. One thing I notice is that the partner MAC 00:23:04:ee:be:cc is the auto-generated vPC system MAC, which embeds the domain ID (204 = 0xCC in hex), so the bond did negotiate with the vPC pair as a single LACP partner at some point:

[root@server ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 1000

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 6c:3b:e5:b0:7a:40
Active Aggregator Info:
    Aggregator ID: 2
    Number of ports: 2
    Actor Key: 13
    Partner Key: 32883
    Partner Mac Address: 00:23:04:ee:be:cc
