if you’re peering with anyone else over the vast number of available IXPs, knowing where your traffic comes from is crucial for proper traffic engineering, predicting changes in traffic flows and optimizing paid services.
one of the more popular and easy-to-use tools able to visualize traffic exchanged between ASes is AS-Stats. to do its work properly, AS-Stats needs correct link definitions in the knownlinks file, as the NetFlow records exported to the collector contain only the interface id. for my example installation, the file itself is simple:
# Router IP    SNMP ifindex   tag       description   color
192.168.0.1    1              ATMAN     ATMAN         5EA631
192.168.0.1    2              CROWLEY   Crowley       FFFF00
192.168.0.1    3              TPSA      TP            E45605
to get the proper interface index, you can check the indexes on your router using the command:
show snmp mib ifmib ifindex
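on my router the output looks along these lines (the interface names and index values are of course taken from my example setup):

```
c3845#show snmp mib ifmib ifindex
GigabitEthernet0/0.100: Ifindex = 1
GigabitEthernet0/0.101: Ifindex = 2
GigabitEthernet0/0.102: Ifindex = 3
```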
and it’s all good until the router is rebooted. that happened to me on one of my older 3845s, and suddenly the graphs became interesting, if not unreal. of course, the problem itself has an easy solution - you need to make the interface identifiers persistent in NVRAM, which will stop the problem from bugging you in the future. it’s done using the command:
snmp-server ifindex persist
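the command above enables persistence globally; if for some reason you only want to pin down selected interfaces, the same can be done at the interface level - a sketch, with the subinterface name taken from my example:

```
interface GigabitEthernet 0/0.100
 snmp ifindex persist
```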
having said that - it was not the only problem I was trying to solve. I needed something more interactive and powerful on the router itself, to nail down other heavy abusers of my services. thanks to recent innovations in Cisco IOS, we are now able to see what’s inside our NetFlow cache - and this flexibility to mix, match and collect is now called Flexible NetFlow. with NetFlow v9 we got the option to define a number of caches plus general modularity, and Flexible NetFlow builds on that capability.
first of all, I defined a flow record, which tells the router which fields together define a flow (the match commands), and on top of that, what data to collect with each and every new flow (the collect commands):
flow record NF-DATA
 match routing source as peer
 match interface input
 collect routing source as
 collect counter bytes
 collect counter packets
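a quick sanity check - the record definition can be verified straight from the CLI:

```
c3845#show flow record NF-DATA
```

the output lists the configured match and collect fields, so a missing collect statement is easy to spot before you start wondering why a column is empty.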
as I’m using three upstream services, I also created three separate flow monitors, to let me gather statistics for each link separately:
flow monitor FM-ATMAN
 record NF-DATA
!
flow monitor FM-Crowley
 record NF-DATA
!
flow monitor FM-TP
 record NF-DATA
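the monitors above only populate the local cache; to feed the AS-Stats collector mentioned earlier, a flow exporter has to be defined and attached to each monitor. a sketch, assuming the collector listens at 192.168.0.10 on UDP port 9996 (both values are just examples - adjust to your setup):

```
flow exporter EXP-AS-STATS
 destination 192.168.0.10
 transport udp 9996
 export-protocol netflow-v9
!
flow monitor FM-ATMAN
 exporter EXP-AS-STATS
```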
the flow monitors now need to be assigned to interfaces - for the real work of gathering the flow data to begin:
interface GigabitEthernet 0/0.100
 description Uplink-ATMAN
 ip flow monitor FM-ATMAN input
 ip flow ingress
!
interface GigabitEthernet 0/0.101
 description Uplink-Crowley
 ip flow monitor FM-Crowley input
 ip flow ingress
!
interface GigabitEthernet 0/0.102
 description Uplink-TP
 ip flow monitor FM-TP input
 ip flow ingress
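whether a monitor actually got attached can be double-checked per interface:

```
c3845#show flow interface GigabitEthernet 0/0.100
```

on that subinterface the output should list FM-ATMAN as the input monitor.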
once applied, cache information should start being collected according to the NF-DATA definition. we can now play with the different sorting, filtering and aggregation functions, or even export the cache externally as CSV. as we’re interested only in per-AS sourced traffic, the command will be simple:
c3845#show flow monitor FM-ATMAN cache aggregate routing source as counter bytes sort highest counter bytes
  Processed 6 flows
  Aggregated to 6 flows
  Showing the top 6 flows

IP SRC AS        BYTES       flows
=========   ==========  ==========
24724       1669629642           1
2529         375072464           1
16265        172599860           1
8615              2173           1
2852                64           1
so on the ATMAN link, most of the traffic originates from AS 24724. and on the TP (Orange) link:
c3845#show flow monitor FM-TP cache aggregate routing source as counter bytes sort highest counter bytes
  Processed 8 flows
  Aggregated to 8 flows
  Showing the top 8 flows

IP SRC AS        BYTES       flows
=========   ==========  ==========
5617         166892112           1
34805          1435138           1
41023             1613           1
48224             1400           1
39006             1063           1
283                437           1
20829              152           1
…most of the traffic comes from AS5617. of course it doesn’t always have to be this way, but it’s a quick and dirty check that the traffic patterns are sane. does gathering and processing all this information take a lot of RP CPU? on the 3845 platform, 300 Mbit/s of aggregated traffic costs 2-3% of CPU load. if you’ve selected your platform correctly, that shouldn’t be a problem.
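and the CSV export mentioned earlier is just one keyword away - handy when you want to post-process the cache outside of the router (the exact set of supported format keywords may differ between IOS releases):

```
c3845#show flow monitor FM-ATMAN cache format csv
```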