Few events are as disruptive as a broadcast storm. It can quickly saturate network links, pin switch CPUs, and bring user connectivity to a grinding halt. When this happens in a large campus environment, especially with a high-density switch stack like the Catalyst 9300 series, finding the source can feel like searching for a needle in a haystack. The challenge is to diagnose and resolve the issue quickly without taking the entire stack—and all its connected users—offline.
Recently, a community member raised this exact issue, sparking a helpful discussion on how to tackle the problem. Let’s break down that conversation and build a comprehensive guide to hunt down and prevent broadcast storms.
A user with a network of stacked Cisco Catalyst 9300s asked for a way to trace the source of frequent broadcast storms without a full system reboot. The community's key suggestion was to use the show interfaces command to look for interfaces with a rapidly increasing input packet count, focusing on the broadcast counter; this is the primary reactive method for identifying the port where a storm is entering the network.
That advice is an excellent starting point. Now, let's expand on it to create a systematic workflow for troubleshooting and prevention.
This guide is structured to help you move from immediate reaction to long-term prevention.
When the network is slow or users are reporting outages, your first priority is to find which port is receiving the flood of broadcast traffic. The command line is your best friend here.
The show interfaces command provides a wealth of information, but we can filter it to find what we need quickly.
Connect to the primary switch in the stack via SSH or console.
Run the following command:
show interfaces | include is up|Broadcast
The is up match filters the output to show only active interfaces, and Broadcast matches the line containing the broadcast packet counters.
Analyze the output. You will see a list of interfaces and their broadcast counters. Run the command two or three times, a few seconds apart.
Example Output (first run):
GigabitEthernet1/0/24 is up, line protocol is up (connected)
5 minute input rate 3000 bits/sec, 5 packets/sec
Received 251346 broadcasts (250100 multicasts)
GigabitEthernet2/0/15 is up, line protocol is up (connected)
5 minute input rate 23500000 bits/sec, 45000 packets/sec
Received 89473210 broadcasts (1024 multicasts)
Example Output (5 seconds later):
GigabitEthernet1/0/24 is up, line protocol is up (connected)
5 minute input rate 3000 bits/sec, 5 packets/sec
Received 251371 broadcasts (250125 multicasts)
GigabitEthernet2/0/15 is up, line protocol is up (connected)
5 minute input rate 24100000 bits/sec, 46200 packets/sec
Received 89699210 broadcasts (1024 multicasts)
In this example, the broadcast counter for GigabitEthernet1/0/24 barely changed, which is normal. However, the counter for GigabitEthernet2/0/15 jumped by 226,000 in just 5 seconds. This is our problem port.
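To make the delta easier to read, you can also zero the counters and watch them climb from zero. A quick sketch reusing the port from our example (clear, wait roughly ten seconds, then re-check):
clear counters GigabitEthernet2/0/15
show interfaces GigabitEthernet2/0/15 | include broadcasts|packets/sec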
Now that you've identified the port (GigabitEthernet2/0/15 in our example), you need to find out what's connected to it. But first, stop the bleeding.
configure terminal
interface GigabitEthernet2/0/15
shutdown
end
With the port disabled, identify what is (or was) connected to it. Note that dynamically learned MAC entries are flushed when a port goes down, so capture the MAC address table first if you can.
show cdp neighbors GigabitEthernet2/0/15 detail
show lldp neighbors GigabitEthernet2/0/15 detail
show mac address-table interface GigabitEthernet2/0/15
CDP or LLDP will reveal a neighboring switch, IP phone, or access point, while the MAC address table shows the hardware addresses learned on the port.
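If the MAC address table shows many addresses on that port (a common sign of an unmanaged switch or a loop behind it), you can chase one specific address through the network. A sketch, where 0050.56ab.1234 is just a placeholder MAC taken from the table:
show mac address-table address 0050.56ab.1234
show ip arp | include 0050.56ab.1234
If the stack has an SVI in that VLAN, the ARP lookup can map the MAC to an IP address; otherwise, run it on the nearest Layer 3 device.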
Once you’ve resolved the immediate issue, you must configure your switches to prevent it from happening again. This is where proactive features are essential.
Storm Control: This feature monitors the rate of broadcast, multicast, and unknown-unicast traffic on a port. If the traffic exceeds a configured threshold, it can shut down the port or send an SNMP trap. It’s highly recommended on all user-facing (access) ports.
Configuration Example:
configure terminal
interface range GigabitEthernet1/0/1-48
storm-control broadcast level pps 500
storm-control multicast level pps 1000
storm-control action shutdown
exit
This configuration will shut down a port if it receives more than 500 broadcast packets per second (pps) or 1000 multicast pps.
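To confirm the thresholds are applied and see whether a port is currently suppressing traffic, you can use show storm-control; the interface below is simply one of the access ports from the range above:
show storm-control GigabitEthernet1/0/24 broadcast
show storm-control GigabitEthernet1/0/24 multicast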
Spanning Tree Protocol (STP) Hardening: Most broadcast storms are caused by Layer 2 loops. Hardening STP is your best defense.
BPDU Guard: Enable this on every PortFast (edge) port. If a BPDU arrives on such a port, which should never happen on a port facing an end host, BPDU Guard immediately puts the port into the err-disable state, breaking the loop.
! Globally enable on all PortFast ports
spanning-tree portfast bpduguard default
! Or apply per-interface
interface GigabitEthernet1/0/1
spanning-tree bpduguard enable
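Keep in mind that the global bpduguard default form only protects ports where PortFast is enabled, so PortFast itself still needs to be configured on your access ports. A minimal access-port baseline, assuming GigabitEthernet1/0/1 is a user-facing port (newer IOS XE releases may prefer the spanning-tree portfast edge form):
interface GigabitEthernet1/0/1
switchport mode access
spanning-tree portfast
spanning-tree bpduguard enable
You can verify the global state with show spanning-tree summary.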
Errdisable Recovery: If automatic recovery is enabled, an err-disabled port will try to come back up after a timeout (typically 300 seconds). For a critical issue like a loop, you may want to disable this so an administrator must investigate before the port returns to service.
! To disable automatic recovery for a BPDU Guard violation
no errdisable recovery cause bpduguard
To see the current recovery settings, use show errdisable recovery.
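Conversely, if you want certain causes to recover on their own (for example, storm-control shutdowns on a remote closet switch) while keeping BPDU Guard manual, recovery can be enabled per cause. A sketch; the ten-minute interval is an arbitrary choice:
errdisable recovery cause storm-control
errdisable recovery interval 600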
Root Guard: Apply this on ports that face downstream switches and must never become the path to the root bridge. If a superior BPDU arrives on such a port, Root Guard places it in a root-inconsistent (blocked) state until those BPDUs stop.
interface GigabitEthernet1/0/48
description Trunk_to_Access_Switch
spanning-tree guard root
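If Root Guard does trigger, the port stays blocked until the superior BPDUs stop. To check for ports currently held in that state, this standard verification command helps:
show spanning-tree inconsistentports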
To summarize, here is a complete workflow for handling broadcast storms:
1. Identify: Run show interfaces | include is up|Broadcast to find the port with rapidly increasing broadcast counters.
2. Contain: shutdown the offending interface to restore network stability.
3. Trace: Use show cdp neighbors, show lldp neighbors, show mac address-table, and physical tracing to identify the source device and the root cause (e.g., a loop or a faulty NIC).
4. Prevent: Deploy storm control and STP hardening (PortFast with BPDU Guard, Root Guard) so the next loop or misbehaving NIC is contained automatically.
By adopting this methodical approach, you can move from reactive fire-fighting to proactive network stability, confidently tracking down and eliminating broadcast storms without resorting to a full stack outage.