831XU - 60 MB/s, WHY?! Tested everything!

I have an 831XU QNAP with 8 Toshiba N300 CMR drives.
These used to be fine: Read 600 MB/s / Write 100 MB/s.

This is connected with direct-attached copper cables, using SMB multipathing, to a 10/25 Gb/s switch.
As previously mentioned, this used to be fine and capable enough for what we needed.

For some reason, in the last 14 days the performance has tanked to 60 MB/s read and 2 MB/s write. I have performed 3 updates and installed the latest SMB version.

I have disabled unneeded services, and the CPU is sitting at 20% usage.

As of today I have also tested NFS and FTP; both hit the same 60 MB/s.

I have performed the qcli storage test, with 552-668 MB/s throughput:
1 /dev/mapper/cachedev6 558.15 MB/s /share/CACHEDEV6_DATA 458.78 MB/s
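As a cross-check on the qcli numbers, local sequential throughput can also be measured with dd directly on the NAS, bypassing the network and Samba entirely. This is a sketch; the path and sizes are assumptions:

```shell
# Write 1 GiB with direct I/O so the page cache doesn't inflate the number
dd if=/dev/zero of=/share/CACHEDEV6_DATA/ddtest.bin bs=1M count=1024 oflag=direct

# Read it back, again with direct I/O
dd if=/share/CACHEDEV6_DATA/ddtest.bin of=/dev/null bs=1M iflag=direct

# Clean up the test file
rm /share/CACHEDEV6_DATA/ddtest.bin
```

If these numbers line up with the ~550 MB/s qcli result, the array itself is fine and the bottleneck is somewhere upstream of the filesystem.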

I know the Annapurna models are bottom of the barrel; this is for archive, and I have 21 TB of data to move off the array to a new TrueNAS-based array.
I have an identical 831XU which is performing as normal. I have compared the /etc/smb.conf between them and they look fine;
cache is disabled.
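Since one unit is healthy, a unified diff of the two Samba configs makes any drift obvious at a glance. A sketch, assuming both units are reachable over SSH (the hostnames nas-good / nas-slow are placeholders, and the config path follows the post):

```shell
# Copy the Samba config off each unit (hostnames are placeholders)
scp admin@nas-good:/etc/smb.conf /tmp/smb.good.conf
scp admin@nas-slow:/etc/smb.conf /tmp/smb.slow.conf

# Unified diff: lines prefixed with -/+ differ between the two units
diff -u /tmp/smb.good.conf /tmp/smb.slow.conf
```

"They look fine" by eye can miss a single changed socket option; diff reports every differing line mechanically.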

I have been fighting this for days.
It's not the disks.
I have tested the network, SFP to PC directly: same performance.

Using SMB3, and I have tuned Windows as much as possible, including SMB signing, but even after today's tests of NFS and FTP there is no change.

If I use 4 NICs they will do 150 Mbs each; if I use 2, they will do 300 Mbs each — the total is pinned at the same ceiling regardless of NIC count.

I have changed the SFP modules to DAC cables.

I don't think there's any more testing I can do. The disks and RAID are OK,
the network is OK, but somewhere in between something is crippling it.

What kind of RAID config are you running? CACHEDEV6 sounds like it's all single disks (as the NAS would start with 1).

I have double-checked:

md1 : active raid6 sdc3[0] sdg3[7] sde3[6] sda3[5] sdb3[4] sdh3[3] sdf3[2] sdd3[1]
23382383616 blocks super 1.0 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
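The [8/8] [UUUUUUUU] shows all members present, but a background resync or a scheduled RAID scrub would also appear in /proc/mdstat and can cut client throughput drastically while it runs. A hypothetical little helper (mdstat-format text on stdin) to flag that:

```shell
# Hypothetical helper: succeeds if mdstat-format text on stdin shows a
# background resync/recovery/check in progress
md_busy() { grep -qE 'resync|recovery|check'; }

# Usage on the live system:
if md_busy < /proc/mdstat; then
    echo "background RAID operation in progress - expect degraded throughput"
else
    echo "array idle"
fi
```

A monthly RAID scrub quietly kicking in is a classic cause of a sudden, sustained throughput drop on an otherwise healthy array.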

Very strange that yours is 6…oh well

Is the array almost full?

47% Full
52.7% unallocated

Enclosure  Port  Sys_Name          Throughput    RAID        RAID_Type    RAID_Throughput   Pool
NAS_HOST   1     /dev/sdc          186.53 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   2     /dev/sdd          190.82 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   3     /dev/sdf          197.82 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   4     /dev/sdh          197.03 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   5     /dev/sdb          192.00 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   6     /dev/sda          197.21 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   7     /dev/sde          184.71 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
NAS_HOST   8     /dev/sdg          186.41 MB/s   /dev/md1    RAID 6       817.66 MB/s       1
[~] #

Regarding your situation, please open a support ticket with us, and our Support Team will assist you with further inspection and diagnostics.

Thank you for your cooperation!

Support Portal: https://service.qnap.com/

Hi,

This has been raised.
Please can I get some help on this as soon as possible?

I had a similar/same problem; I eventually discovered it was "caching" that had to be disabled, then it was off to the races, with an amazing speed difference.

Regarding the issue you reported, we will continue to provide assistance with the analysis and investigation through your support ticket. Our technical team will reach out to you as soon as there are any updates. Thank you again for your patience and understanding!