i recently had to migrate off my old Synology 1815+, thanks to the well-known Intel SNAFU with Atom CPUs. interestingly enough, even Synology's own service department declined to RMA the NAS, without even discussing the situation.
so i quickly set up a 12x 3.5” bay server. i had five 3.5” 8TB HDDs from the Synology that i wanted to rescue data from. the server itself is kind of old, but solid: a dual-socket Intel L5100-series chassis (with, sadly, only one CPU), 64GB of RAM, an LSI/Avago RAID card and a twin-port Intel 10GE NIC. for a ‘fast & dirty’ hack it was more than enough.
while copying data from the failed array (thankfully, Synology RAID is Linux mdadm in disguise) i noticed, however, that it's veeeeeery slow. while i was connected to a 10GE network, using 10GE interfaces, the throughput i was getting was around 5-8MBps, not the 40-50MBps i was looking for.
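as a side note, rescuing data from a Synology array on a plain Linux box usually boils down to letting mdadm re-assemble it. a minimal sketch of what that looks like - the device names and the md2 volume are assumptions for my five-disk setup, check yours with lsblk first:

```shell
# DSM keeps the data array on the 3rd partition of each disk
# (partitions 1 and 2 are the DSM system and swap arrays)
mdadm --examine /dev/sd[bcdef]3

# let mdadm auto-assemble whatever arrays it finds, read-only for safety
mdadm --assemble --scan --readonly

# check what came up, then mount the data volume read-only
cat /proc/mdstat
mount -o ro /dev/md2 /mnt/rescue
```

mounting read-only is deliberate - until the data is safely copied off, nothing should write to a degraded array.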
to my dismay, quick troubleshooting showed that the problem happens only with my Macs - my older son's Windows machine, connected just for testing, got much faster transfers. i also ran an iozone test on the FreeNAS ZFS pool itself, just to check that everything was all right with the server. with 1.8GBps (yeah!) it clearly was. so… it was a problem between the Macs and the new gear.
so after digging in, it became apparent that Apple, around OS X 10.11 (and for sure starting with macOS 10.12), enabled cryptographic signing of SMB file transfers by default. to disable it if you hit performance or compatibility problems, you need to create (on the affected client station) /etc/nsmb.conf with the following content:
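for reference, the snippet widely documented for disabling SMB signing on the Mac client side is:

```ini
[default]
signing_required=no
```

after creating the file, unmount and re-mount the share so a fresh SMB session is negotiated; `smbutil statshares -a` should then show the signing state of the active sessions.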
transfers immediately jumped up!
by the way - if you're building your own NAS, you can benchmark it using iozone - that should give you at least a baseline performance number.
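a minimal iozone run i'd start with - the paths, file size and thread count here are just example values, and the file size should be well above your RAM to defeat caching:

```shell
# throughput test: 4 parallel workers, 128k records, 8GB file each
# -i 0 = write/rewrite, -i 1 = read/reread, -e = include fsync in timing
# -F needs one file path per worker thread
iozone -e -i 0 -i 1 -r 128k -s 8g -t 4 \
  -F /mnt/pool/f1 /mnt/pool/f2 /mnt/pool/f3 /mnt/pool/f4
```

run it directly on the pool first (like i did above) to separate disk performance from network and SMB problems.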
here are my two test runs on different hardware:
NAS: Supermicro X7DWN+ with 1x Intel Xeon L5410, 64GB RAM, and 6TB SAS-NL disks
NAS: Cisco UCS C220 M3 with 2x Xeon E5-2620 CPUs, 128GB RAM, and 1TB SATA HDDs