QoS – Why More Bandwidth Doesn’t Fix Bad Network Design

The Common Misconception

“We don’t need QoS — we’ll just upgrade our links.”

That’s a common but dangerous mindset. QoS doesn’t exist to fix slow links — it exists to manage congestion intelligently when multiple high-speed flows compete for the same egress queue.
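
To make that concrete, here is a toy Python model of a single egress port serving a latency-sensitive voice flow and a bulk burst. Everything in it (packet counts, tick-based timing, the two scheduling orders) is an illustrative assumption, not a description of any Cisco scheduler:

from collections import deque

def drain(queues, order):
    """Serve one packet per tick, picking the first queue in `order` that
    has a packet which has already arrived."""
    time = 0
    worst = {name: 0 for name in queues}
    while any(queues.values()):
        for name in order:
            q = queues[name]
            if q and q[0] <= time:
                arrival = q.popleft()
                worst[name] = max(worst[name], time - arrival)
                break
        time += 1
    return worst  # worst queueing delay seen per flow, in ticks

def flows():
    # A 20-packet bulk burst lands at t=0; voice sends one packet every 2 ticks.
    return {"bulk": deque([0] * 20), "voice": deque(range(0, 20, 2))}

print("Burst served first (no priority):", drain(flows(), ["bulk", "voice"]))
print("Voice given strict priority:     ", drain(flows(), ["voice", "bulk"]))

The link runs at the same speed in both runs; only the scheduling decision changes which traffic absorbs the queueing delay. That decision is what QoS configures.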

The Reality of Link Ratios

Upgrading bandwidth helps, but ratios matter more.

If your servers go from 1 Gbps → 10 Gbps and your uplinks go from 10 Gbps → 100 Gbps, every link has scaled by the same factor: the server-to-uplink ratio (1:10) is unchanged, and so is the oversubscription on the uplink. When multiple sources transmit toward the same uplink at once, packets can still build up and drop.
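
A quick back-of-the-envelope check shows why. The port counts below (48 access ports feeding 2 uplinks) are assumptions for illustration only:

def oversubscription(ports, port_gbps, uplinks, uplink_gbps):
    """Aggregate downlink capacity divided by aggregate uplink capacity."""
    return (ports * port_gbps) / (uplinks * uplink_gbps)

before = oversubscription(ports=48, port_gbps=1, uplinks=2, uplink_gbps=10)
after = oversubscription(ports=48, port_gbps=10, uplinks=2, uplink_gbps=100)

print(f"Before upgrade: {before:.1f}:1")  # 2.4:1
print(f"After upgrade:  {after:.1f}:1")   # 2.4:1, unchanged

Ten times the bandwidth buys the same 2.4:1 contention; the bursts simply arrive ten times faster.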

The real challenge isn’t speed — it’s flow balance, buffering, and scheduling.

Understanding Cisco Platform Differences

Platform | ASIC | Buffering | QoS Type | Use Case
Catalyst 9000 | UADP | Shared or per-port (varies by model; some use VoQ) | MQC (Modular QoS CLI) | Campus & Access
Nexus 9000 / ACI | Cisco Cloud Scale (Broadcom Trident / Jericho on some models) | Shared smart buffers; deep buffers with VoQ on R-series (varies significantly by model) | DCB, PFC, ECN | Data Center
Cisco 8000 Series (Silicon One) | Q200 / Q100 / P100 | On-chip deep buffers + HBM | Hierarchical QoS (HQoS; depth varies by model) | WAN / Core / Edge

Important Note: Buffer architectures and QoS capabilities vary significantly within each platform family. For example, a Catalyst 9300 differs from a 9500, and a Nexus 93180YC-FX has vastly different buffering than a 9736C-FX. Always consult platform-specific datasheets and the Cisco QoS design guide for your exact model and software version.

The Root of the Problem

Traffic bursts exceed buffer capacity → queues fill → packets drop → retransmissions begin → jitter and delay increase.

This happens even at 400 Gbps if buffers or QoS policies aren’t tuned. Different Cisco platforms handle this differently — so your QoS design must align with hardware architecture.
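
How quickly does that happen? The incast example below uses assumed numbers (a 40 MB shared packet buffer, eight 100 Gbps senders converging on one 100 Gbps egress port):

buffer_bytes = 40e6          # 40 MB shared packet buffer (assumption)
ingress_gbps = 8 * 100       # eight synchronized 100 Gbps senders (assumed incast)
egress_gbps = 100            # one 100 Gbps egress port draining the queue

fill_rate = (ingress_gbps - egress_gbps) * 1e9 / 8   # queue growth in bytes per second
overflow_ms = buffer_bytes / fill_rate * 1e3

print(f"Queue grows at {fill_rate / 1e9:.1f} GB/s")
print(f"Buffer overflows after ~{overflow_ms:.2f} ms of sustained burst")   # ~0.46 ms

With overflow arriving in well under a millisecond, drop, marking, and scheduling behavior has to be decided in the QoS policy ahead of time; nobody reacts that fast by hand.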

Practical Cisco Remediation Examples

A. Monitor Queues & Drops

Catalyst 9000

show interfaces counters errors
show interfaces TenGigabitEthernet1/0/1 | include drops
show platform hardware fed switch active qos queue stats interface TenGigabitEthernet1/0/1

Nexus 9000

show interface Ethernet1/1 counters errors
show queuing interface Ethernet1/1
show policy-map interface Ethernet1/1

Cisco 8000 Series (IOS XR)

show qos interface HundredGigE0/0/0/0 output
show policy-map interface HundredGigE0/0/0/0 output

B. Enable QoS Globally (Where Required)

None of these platforms needs a global QoS switch. Unlike older Catalyst switches, where mls qos had to be enabled first, the Catalyst 9000, Nexus 9000, and Cisco 8000 process QoS by default and act on your policies as soon as you attach them. What is worth doing up front is reviewing the default class maps and policies before you override them.

Catalyst 9000

show class-map
show policy-map

Nexus 9000

show class-map type queuing
show policy-map system

Cisco 8000 (IOS XR)

show policy-map targets

Some data-plane features (for example PFC or specific queuing modes on the Cisco 8000) may still require platform-specific hw-module profile settings; check the QoS configuration guide for your hardware and release.

C. Configure QoS Classes & Policies

Catalyst 9000 (Access Layer)

class-map match-any VOICE
 match dscp ef
class-map match-any VIDEO
 match dscp af41
!
policy-map CAMPUS-QOS
 class VOICE
  priority percent 20
 class VIDEO
  bandwidth percent 30
 class class-default
!
interface TenGigabitEthernet1/0/1
 service-policy output CAMPUS-QOS

Nexus 9000 (Data Center)

class-map type qos match-any AI-TRAFFIC
 match dscp 26
!
policy-map type qos DC-QOS
 class type qos AI-TRAFFIC
  set qos-group 5
 class type qos class-default
  set qos-group 0
!
system qos
 service-policy type qos input DC-QOS

Cisco 8000 (Core WAN)

policy-map CORE-QOS
 class VOICE
  priority level 1
 class VIDEO
  bandwidth percent 30
 class BULK
  bandwidth percent 10
 class class-default
!
interface HundredGigE0/0/0/0
 service-policy output CORE-QOS

D. Tune Buffers & Queues

Catalyst 9000

qos queue-softmax-multiplier 1200

Nexus 9000

policy-map type queuing QPOLICY-10G
 class type queuing c-out-8q-q3
  bandwidth percent 50
 class type queuing c-out-8q-q1
  bandwidth percent 30
 class type queuing c-out-8q-q-default
  bandwidth percent 20
!
interface Ethernet1/1
 service-policy type queuing output QPOLICY-10G

Cisco 8000 (IOS XR)

policy-map QUEUE-TUNING
 class VOICE
  priority level 1
  queue-limit 300 ms
 class VIDEO
  bandwidth percent 30
  queue-limit 500 ms
 class BULK
  bandwidth percent 10
  queue-limit 800 ms
!
interface Bundle-Ether10
 service-policy output QUEUE-TUNING

E. Implement PFC & ECN (For Lossless AI / HPC Traffic)

Nexus 9000

policy-map type network-qos LOSSLESS-NQ
 class type network-qos c-8q-nq3
  pause pfc-cos 3
  mtu 9216
!
system qos
 service-policy type network-qos LOSSLESS-NQ
!
interface Ethernet1/1
 priority-flow-control mode on

Cisco 8000 (IOS XR)

On the 8000, ECN marking is configured with WRED inside the egress queuing policy; PFC support depends on the Silicon One variant and IOS XR release, so verify it in the platform QoS guide before enabling it.

policy-map CORE-QOS
 class BULK
  bandwidth percent 10
  random-detect 10 ms 50 ms
  random-detect ecn

F. Validate QoS Behavior

show policy-map interface TenGigabitEthernet1/0/1     (Catalyst 9000)
show queuing interface Ethernet1/1                    (Nexus 9000)
show policy-map interface Ethernet1/1                 (Nexus 9000)
show policy-map interface HundredGigE0/0/0/0 output   (Cisco 8000, IOS XR)

Cisco Platform Design Summary

Platform | Design Focus | Key QoS Features | Typical Use
Catalyst 9000 | Campus / Access | MQC QoS, per-port hardware queues, AutoQoS | Edge / LAN
Nexus 9000 / ACI | Data Center Fabric | PFC, ECN, shared smart buffers (deep buffers / VoQ on R-series) | East–West traffic, AI fabrics
Cisco 8000 Series | WAN / Core / Cloud Edge | Hierarchical QoS, Silicon One ASIC, HBM-backed deep buffers | High-throughput backbones, WAN aggregation

Hierarchical QoS (HQoS) on Cisco 8000 Series

Hierarchical QoS allows multiple levels of control:

  • Level 1 (Parent Policy): Shape the aggregate bandwidth at the interface level.
  • Level 2 (Child Policy): Assign bandwidth and priority to individual traffic classes.
  • Level 3 (Grandchild Policy): Apply per-subscriber or per-service controls.
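
One detail worth internalizing before the configuration: a child class’s bandwidth percent is carved out of the parent shape rate, not out of the physical line rate. The sketch below uses the numbers from the policies that follow (10 Gbps parent shape on a 100 Gbps interface; VOICE is omitted because it uses strict priority rather than a percentage):

interface_gbps = 100
parent_shape_gbps = 10                      # PARENT-POLICY: shape average 10 gbps

child_percent = {"VIDEO": 25, "CRITICAL": 20, "BULK": 10}

for cls, pct in child_percent.items():
    under_shaper = parent_shape_gbps * pct / 100
    naive = interface_gbps * pct / 100
    print(f"{cls:8s} -> ~{under_shaper:.1f} Gbps under the shaper, not {naive:.0f} Gbps")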

A. Define Class Maps

class-map match-any VOICE
 match dscp ef
class-map match-any VIDEO
 match dscp af41
class-map match-any CRITICAL
 match dscp cs5
class-map match-any BULK
 match dscp cs1

B. Create Child Policy

policy-map CHILD-POLICY
 class VOICE
  priority level 1
 class VIDEO
  bandwidth percent 25
 class CRITICAL
  bandwidth percent 20
 class BULK
  bandwidth percent 10
 class class-default

C. Create Parent Policy

policy-map PARENT-POLICY
 class class-default
  shape average 10 gbps
  service-policy CHILD-POLICY

D. Apply to Interface

interface HundredGigE0/0/0/0
 service-policy output PARENT-POLICY

E. Optional Grandchild Policy

policy-map GRANDCHILD-POLICY
 class SUBSCRIBER1
  shape average 100 mbps
 class SUBSCRIBER2
  shape average 200 mbps
!
policy-map CHILD-POLICY
 class BULK
  service-policy GRANDCHILD-POLICY

F. Validate HQoS

show policy-map interface HundredGigE0/0/0/0 output
show qos interface HundredGigE0/0/0/0 output
show policy-map targets

The Bottom Line

Upgrading links doesn’t eliminate congestion — it just raises the ceiling. Cisco’s QoS toolset — from Catalyst’s MQC to Nexus VoQ to Cisco 8000 HQoS — gives you precise control of how traffic behaves under pressure. Hierarchical QoS is the crown jewel for WAN and core networks, ensuring fairness, priority, and determinism at scale.

Final Takeaway

“In network engineering, hope isn’t a strategy. Data, architecture, and QoS are.”