as you can see, the 1GE share of the overall switching market started to rise only recently (mainly thanks to cheap NICs and onboard integrations from Realtek, Marvell, Broadcom and Intel). on the other hand, the hunger for bandwidth keeps growing - streaming full HD movies from a NAS eats a lot of it, and if you plan to do anything else sourced from the same NAS at the same time, it gets even worse (it seems everyone nowadays streams video to various mobile devices around their home over WLAN).
10GE at home is still quite expensive - for me the critical point was working with many virtual machines over a 1Gbps connection. thanks to our own Cisco UCS and two Intel X520 NICs, I managed to connect my workstation (Intel Xeon X5670, 6 cores, 12GB RAM, a Momentus XT 320GB as the boot drive and two 2TB HDDs in RAID0 as data drives) with the server acting as NAS and VMware server (Intel Quad Core, 8GB RAM, 4x2TB in RAID5 and 2x2TB in RAID0). for now everything is connected through a Catalyst 2960S in the 48x10/100/1000 PoE + 2x10GE version, with the NICs attached to the switch over SFP+ twinax (10GBase-CX1) cables:
wescore#sh interfaces status | e notconnected
Port      Name               Status       Vlan       Duplex  Speed Type
Gi1/0/11                     connected    10         a-full a-1000 10/100/1000BaseTX
Te1/0/1                      connected    10           full    10G SFP-10GBase-CX1
Te1/0/2                      connected    10           full    10G SFP-10GBase-CX1
wescore#sh interfaces tenGigabitEthernet 1/0/1
TenGigabitEthernet1/0/1 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet, address is 40f4.ec8f.efb3 (bia 40f4.ec8f.efb3)
Full-duplex, 10Gb/s, link type is auto, media type is SFP-10GBase-CX1
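for reference, a minimal sketch of how those two 10GE ports could be configured on the 2960S side - just plain access ports in VLAN 10 to match the output above, not a dump of my actual config:

wescore#conf t
wescore(config)#interface range tenGigabitEthernet 1/0/1 - 2
wescore(config-if-range)#description 10GE twinax to Intel X520
wescore(config-if-range)#switchport mode access
wescore(config-if-range)#switchport access vlan 10
wescore(config-if-range)#spanning-tree portfast
wescore(config-if-range)#end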
it’s really enjoyable - file transfers over SMB2 can hit as much as 440MBps (bytes, not bits!), well past the roughly 110-120MBps ceiling of a single 1GE link, and virtual machines feel much snappier. once I hit the point where I need to host more UCS servers, the switching setup will have to be extended.
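a side note: SMB numbers depend on the disks and protocol overhead as much as on the network itself, so if you want to sanity-check the raw link separately, a quick iperf run between the two boxes does the trick (hostnames are made up here, and iperf has to be installed on both ends):

nas$ iperf -s
workstation$ iperf -c nas -P 4 -t 30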
why all that fuss? because sometimes it’s actually fun to throw everything away and tinker in your home lab ;)