if you’re peering with somebody else at one of the available IXPs, predicting traffic flow changes and optimizing the use of paid services is crucial for proper traffic engineering.
one of the more popular and simpler tools able to visualize traffic exchanged between ASes is AS-Stats. to do its job properly, AS-Stats needs the links defined in the knownlinks
file. the NetFlow records exported to the collector contain only the interface index, and AS-Stats needs to match it against a known link. the example file for my installation is simple:
# Router IP  SNMP ifindex  tag      description  color
192.168.0.1  1             ATMAN    ATMAN        5EA631
192.168.0.1  2             CROWLEY  Crowley      FFFF00
192.168.0.1  3             TPSA     TP           E45605
to get the proper interface indexes, you can check them on your router using:
show snmp mib ifmib ifindex
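on a setup like mine the output would look more or less like this (a made-up sample - interface names and index values will obviously differ per box):
GigabitEthernet0/0.100: Ifindex = 1
GigabitEthernet0/0.101: Ifindex = 2
GigabitEthernet0/0.102: Ifindex = 3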
…and it’s all good until the router is rebooted. that happened to me on one of the older 3845s, and suddenly the graphs became interesting, if not unreal. of course, the problem has an easy solution - you need to nail down the interface identifiers in NVRAM, which will stop the problem from bugging you in the future. it’s done using:
snmp-server ifindex persist
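if you don’t want to persist every index on the box globally, the same can also be done per interface - a minimal sketch of the interface-level variant, on the IOS versions i’m aware of:
interface GigabitEthernet 0/0.100
snmp ifindex persist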
having said that - that was not the only problem i was trying to solve. i needed something more interactive and powerful on the router itself to nail other heavy abusers of my services. thanks to recent innovations in Cisco IOS, we are now able to see what’s inside the NetFlow cache, with the additional flexibility of selecting entries based on rich match and collect selectors. the feature is called Flexible NetFlow. NetFlow v9 brought the option to define a number of separate caches, and Flexible NetFlow actually makes use of it.
first of all, i defined a flow record, which decides for the router which fields together create a flow (the match commands). the next step is to define what data to collect with each and every new flow (the collect commands):
flow record NF-DATA
match routing source as peer
match interface input
collect routing source as
collect counter bytes
collect counter packets
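as a quick sanity check, the router should be able to print the record definition back to you (assuming your IOS supports this form of the command):
c3845#show flow record NF-DATA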
as i’m using three upstream links, i also created three separate flow monitors, letting me gather statistics per link:
flow monitor FM-ATMAN
record NF-DATA
!
flow monitor FM-Crowley
record NF-DATA
!
flow monitor FM-TP
record NF-DATA
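a side note: these monitors only populate the local cache. if you also wanted Flexible NetFlow itself to export to an external collector (instead of the classic ip flow ingress export used below), a flow exporter could be attached to each monitor - a minimal sketch, with the collector address 192.168.0.10 and UDP port 9996 being purely hypothetical:
flow exporter FE-COLLECTOR
destination 192.168.0.10
transport udp 9996
template data timeout 60
!
flow monitor FM-ATMAN
exporter FE-COLLECTOR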
the flow monitors now need to be assigned to interfaces - that’s where the real work of gathering the flow data happens:
interface GigabitEthernet 0/0.100
description Uplink-ATMAN
ip flow monitor FM-ATMAN input
ip flow ingress
!
interface GigabitEthernet 0/0.101
description Uplink-Crowley
ip flow monitor FM-Crowley input
ip flow ingress
!
interface GigabitEthernet 0/0.102
description Uplink-TP
ip flow monitor FM-TP input
ip flow ingress
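whether the monitors actually got attached can be quickly verified per interface:
c3845#show flow interface GigabitEthernet 0/0.100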
once applied, cache information should begin to be collected according to the NF-DATA definition. we can now play with different sorting, filtering and aggregation functions, or even export the cache externally as CSV. as we’re interested only in per-source-AS traffic for now, the command is simple:
c3845#show flow monitor FM-ATMAN cache aggregate routing source as counter bytes sort highest counter bytes
Processed 6 flows
Aggregated to 6 flows
Showing the top 6 flows
IP SRC AS       BYTES       flows
=========  ==========  ==========
    24724  1669629642           1
     2529   375072464           1
    16265   172599860           1
     8615        2173           1
     2852          64           1
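the CSV export mentioned earlier should be just a matter of the format keyword - if memory serves, something like:
c3845#show flow monitor FM-ATMAN cache format csv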
we can see that on the ATMAN link most of the traffic originates from AS 24724. and on the TP (Orange) link:
c3845#show flow monitor FM-TP cache aggregate routing source as counter bytes sort highest counter bytes
Processed 8 flows
Aggregated to 8 flows
Showing the top 8 flows
IP SRC AS       BYTES       flows
=========  ==========  ==========
     5617   166892112           1
    34805     1435138           1
    41023        1613           1
    48224        1400           1
    39006        1063           1
      283         437           1
    20829         152           1
…most of the traffic comes from AS5617. of course it doesn’t always have to be this way, but it’s a quick and dirty verification that the traffic patterns are sane. does gathering and processing all this information take a lot of RP CPU? on the 3845 platform, 300 Mbit/s of aggregated traffic costs an additional 2-3% of CPU load. if you selected your platform correctly - that shouldn’t be a problem.
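if you want to measure the impact on your own box, the usual CPU view is enough:
c3845#show processes cpu sorted | exclude 0.00%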