TS-433: boosting performance

Hi,

I’m trying to get high sequential throughput on my NAS (TS-433) for video editing / fun. I do everything with Windows file copy. The feature page says 280 MB/s read and 202 MB/s write (with a 4-disk SSD RAID 5).

This is a kind of brain dump and feedback is welcome!
If possible, can people share what performance they get, and with which disk configuration?

I bought the NAS and disks before really looking into it all. I don’t want to throw anything away or sell it; maybe I’ll add some hardware or tweak settings to get more performance, or at least understand what is limiting it. That’s my goal.

My setup

TS-433, QTS 5.2.3.3006 (8 Jan 2025)
2x Seagate 4TB IronWolf in RAID 1 (mirror).
2.5 Gbit network on both client and NAS.
Client is high-end with NVMe storage.

In the first days after I got the NAS, writing data to it felt a bit faster… But now that it holds about 2 TB of data, it feels a bit slower (which makes sense).

Current measurements

Copying files from the NAS averages 155 MB/s with spikes of 175 MB/s.
Copying files to the NAS averages 80 MB/s with no spikes (it starts high for about 1 second).

I’m trying to understand why we get this speed and what is the limiting factor.
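
For a more repeatable number than the Explorer copy dialog, a single large test file copied with robocopy reports an average speed in its summary. A sketch (share name, folder and file name are just placeholders; /J uses unbuffered I/O, which suits big files):

C:\> robocopy D:\testdata \\qnap\Public testfile_20GB.bin /J
C:\> robocopy \\qnap\Public D:\testdata testfile_20GB.bin /J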

Disk performance
Disk throughput from the QNAP software:

Via the CLI with some basic old-school commands:

$ hdparm -t /dev/sda -t /dev/sdb -t /dev/md1
/dev/sda:
Timing buffered disk reads: 608 MB in 3.01 seconds = 202.32 MB/sec
/dev/sdb:
Timing buffered disk reads: 592 MB in 3.01 seconds = 196.79 MB/sec
/dev/md1:
Timing buffered disk reads: 802 MB in 3.02 seconds = 265.61 MB/sec

$ echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/sda of=/dev/null bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (3.9GB) copied, 20.796743 seconds, 192.3MB/s

$ echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/sdb of=/dev/null bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (3.9GB) copied, 21.137790 seconds, 189.2MB/s

$ echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/md1 of=/dev/null bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (3.9GB) copied, 18.758039 seconds, 213.2MB/s

The disks are 5400 RPM, so I expected lower values.
But this matches what UserBenchmark reports for the Seagate IronWolf 4TB (2016) ST4000VN008.
So the disks are fine.

But the RAID device (md1) is slower than expected. Why not ~350 MB/s (roughly two disks’ worth)?
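
One thing to keep in mind: md RAID 1 does not stripe reads like RAID 0, so a single sequential stream is not guaranteed to reach twice a single disk; two independent readers, however, can each be served by a different disk. A rough check (offsets are arbitrary, just far apart):

$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/dev/md1 of=/dev/null bs=1M count=2000 &
$ dd if=/dev/md1 of=/dev/null bs=1M count=2000 skip=1000K &
$ wait
# if each job finishes near single-disk speed, the mirror does scale for concurrent readers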

While running dd I checked the disks with iostat:

                               extended device statistics
 device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b
 sda        1     1  224.9    4.0 113348.4     7.0  495.2   5.6   23.3   4.2  97
 sdb        1     1  220.9    4.0 111802.8     7.0  497.1   5.1   21.8   4.1  92
 md1        0     0  445.8    3.0 224636.0     6.0  500.5  10.1   22.6   2.2 100
                              extended device statistics
 device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b
 sda        0     0  241.4    0.5 121807.5     0.0  503.6   4.2   17.6   3.7  90
 sdb        0     0  240.4    0.5 121807.5     0.0  505.6   4.5   18.6   3.8  92
 md1        0     0  477.3    0.0 241061.3     0.0  505.0   8.8   18.5   2.1 100
                              extended device statistics
 device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b
 sda        0   529  224.5    9.0 113152.0  2129.8  493.7   5.2   21.4   3.9  91
 sdb        1   529  222.5    9.5 111362.0  2139.8  489.2   5.6   23.1   4.2  97
 md1        0     0  449.0    0.0 225280.0     0.0  501.7   9.8   21.8   2.2 100

I see the mirror is using both disks equally.
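
For reference, the samples above were taken while dd was running; with a sysstat-style iostat (the exact build on QTS differs a bit, and sysstat can also be installed through Entware) the call would look something like:

$ iostat -x sda sdb md1 2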

Network Throughput
I installed iperf3. See “How do I install iPerf3 in QTS and QuTS hero?” on the QNAP website.

C:\iperf>iperf3.exe -c qnap
Connecting to host qnap, port 5201
[  5] local 192.168.178.174 port 59051 connected to 192.168.178.190 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec   282 MBytes  2.34 Gbits/sec
[  5]   1.01-2.01   sec   282 MBytes  2.37 Gbits/sec
[  5]   2.01-3.01   sec   284 MBytes  2.37 Gbits/sec
[  5]   3.01-4.01   sec   282 MBytes  2.37 Gbits/sec
[  5]   4.01-5.01   sec   282 MBytes  2.36 Gbits/sec
[  5]   5.01-6.01   sec   282 MBytes  2.37 Gbits/sec
[  5]   6.01-7.01   sec   279 MBytes  2.34 Gbits/sec
[  5]   7.01-8.01   sec   283 MBytes  2.37 Gbits/sec
[  5]   8.01-9.01   sec   283 MBytes  2.36 Gbits/sec
[  5]   9.01-10.01  sec   279 MBytes  2.35 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  2.75 GBytes  2.36 Gbits/sec                  sender
[  5]   0.00-10.04  sec  2.75 GBytes  2.36 Gbits/sec                  receiver
iperf Done.

That looks good! The network seems okay. With the default iperf3 options the client is the sender, so this run is the client sending to the NAS.
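
For completeness, the opposite direction (NAS sending to the client) can be measured with iperf3’s reverse mode:

C:\iperf>iperf3.exe -c qnap -R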

NAS Configuration

Two disks in RAID 1 (mirror), with no write-intent bitmap on the mirror.
The disk setup is a single thick logical volume with no snapshots.
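
A quick way to double-check the RAID level and the bitmap setting from the shell (mdadm is normally present on QTS; md1 is the data array used above):

$ cat /proc/mdstat            # a "bitmap:" line under md1 would mean a write-intent bitmap is active
$ mdadm --detail /dev/md1     # shows RAID level, state and member disks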

So where do we lose speed?

Well, the network performance seems fine. Disk throughput is good on a single disk, but combined the mirror doesn’t gain as much as I would expect/hope, and it’s unclear to me why. Maybe the ARM CPU is not ready when the data arrives? Maybe the CPU is busy feeding the NIC while also reading the disks? Interrupts, etcetera.
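
One way to check this while a copy is running (a rough sketch; the busybox versions of these tools on QTS may format things a bit differently):

$ top                    # watch %sys / %idle and whether one core is pegged during the transfer
$ cat /proc/interrupts   # run it twice and compare the NIC and SATA counters per core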

Which settings could bring improvements?

  • SMB multichannel
  • Jumbo MTU
  • SMB v3
  • SMB Async

I guess jumbo frames make the most sense… I will test this.
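
For the jumbo-frame test the MTU also has to be raised on the NAS side (via the QTS network GUI) and on the switch. It can be verified from the NAS shell; eth0 here is an assumption, the interface name may differ:

$ ip link show eth0 | grep mtu     # or: ifconfig eth0 | grep -i mtu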

Regards,
Harry

I forgot to say I needed to update the NIC driver to get stable performance.
See the old forum thread “ts-433 acting strange” (page 2) on the QNAP NAS Community Forum.

I was thinking of adding an NVMe or SSD as a write cache.
(Reading will not improve for video editing with this…)

Expand the NAS

The NAS has a 5 Gbit USB port.
On my Windows client I get 800 MB/s out of my USB NVMe enclosure (on a 10 Gbit USB port).
I connected the USB NVMe to the NAS.
Testing the NAS sharing that NVMe over SMB, I get about 150 MB/s.
(On a 5 Gbit USB port on the client I get about 400 MB/s from this NVMe.)

$ hdparm -t /dev/sdc
/dev/sdc:
Timing buffered disk reads: 880 MB in 3.00 seconds = 293.06 MB/sec

I assume that handling disk/USB/NIC all at once on this ARM CPU cannot go much above 150 MB/s.
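
To separate the USB path from the SMB/CPU path, one could read a big file from the USB NVMe locally on the NAS with the cache dropped (the mount path below is only an example of where QTS puts external USB volumes, and bigfile.bin is a placeholder):

$ echo 3 > /proc/sys/vm/drop_caches
$ dd if=/share/external/DEV3301_1/bigfile.bin of=/dev/null bs=1M

If this local read runs well above 150 MB/s, the limit is in the SMB/CPU handling rather than the USB link itself.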

What about the two empty bays on the NAS?

SSD Cache

Not possible for this NAS. See:

3. Due to hardware limitations, TS-216G and TS-x33 NAS don’t support SSD cache.

Qtier
Well, this is possible but carries more risk, I think. With Qtier the SSDs become part of the storage pool, so I would have to add a mirror of SSDs/NVMe.
That type of storage is more expensive per TB, so it doesn’t feel like the way to go. Of course, that type of storage is always fast.

Well, maybe add more disks; that would also improve read performance…

My two disks in a mirror “should” give double the read performance and somewhat lower write performance.

A RAID 5 of 3 disks should give triple read performance but will lower writes even more. (A small write costs a read plus a data write and a parity write, unless it is a full-stripe write that hits all disks at once.)
So RAID 5 with 3 or 4 disks will not add write performance, so it’s not an option.

Let’s assume that when we add the same disk type, each disk gives us only 80 MB/s:

Disks  RAID        SMB Read / Write (MB/s)
2      1 (mirror)  155 / 80   (current)
2      0           155 / 155  (I guess)
3      0           235 / 200  (I guess)
3      5           235 / 60   (I guess)
4      5           235 / 80   (I guess)
4      6           235 / 60   (I guess)
4      0           280 / 200  (I guess)
4      10          280 / 160  (I guess)

RAID 0 is not an option for me, and RAID 5/6 will not help write performance.
So the only path I see is adding 2 disks and creating a RAID 10 volume.

Assuming that the CPU is able to handle the data coming from the disks…

But first my test plans:

  • Jumbo frame testing
  • Single-SSD throughput test in a separate volume on the NAS

Regards,
Harry

The NAS is just too slow… no need to try caching (broken anyway).

If you need more speed, switch to an x86/x64 NAS.

I understand this is not a very fast NAS. I’m trying to get the advertised speed.
TS-433 | Product Performance | QNAP

Those tests were done with a RAID 5 of SSDs.

I last installed a 431XeU and the speeds were below 200 MB/s for reads and lower for writes.

That’s just the way it is with these ARM NAS; they are OK for 1 GbE or slightly above. Don’t bank on them for solid 2.5/5/10 GbE use.

I have set up jumbo frames with an MTU of 9000.

C:\Users\Beheerder>ping -f -l 8972 192.168.178.200

Pinging 192.168.178.200 with 8972 bytes of data:
Reply from 192.168.178.200: bytes=8972 time<1ms TTL=64
Reply from 192.168.178.200: bytes=8972 time<1ms TTL=64
Reply from 192.168.178.200: bytes=8972 time<1ms TTL=64
Reply from 192.168.178.200: bytes=8972 time<1ms TTL=64

Ping statistics for 192.168.178.200:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

Copying from the NVMe on USB with jumbo frames went from 130 MB/s to 162 MB/s.
Copying to the NVMe tops out at 213 MB/s and stays around 200 MB/s.

Strange that reading is slower than writing…
But writes now hit the maximum QNAP quotes. Great!

While testing on the mirror I see high spikes, but I can hear the disks seeking a lot, and that costs a lot of throughput.

So I created an extra thick volume on the pool and tested again.

Writing files to the NAS I get spikes of 180 MB/s and an average of 160 MB/s.
That is about the maximum this disk array can do!

Reading files from the NAS I average about 220 MB/s with spikes of 250 MB/s.

I can conclude the following:

  • The new RTL8125 driver makes a difference!
  • An empty volume seems to prevent the disks from seeking a lot.
  • Jumbo frames really help.

The performance I now have is good enough.
For writes there is little to gain (I’m at 80% of the maximum).
For reads there is also little to gain (I’m at 78% of the maximum).

I will test the SSD later on.

Let’s update the table with my guesses.
The table assumes 5400 RPM IronWolf disks, jumbo frames at 9000,
and fresh, empty volumes.

Disks  RAID               SMB Read / Write (MB/s)
2      1 (mirror)         220 / 160  (current)
2      0                  240 / 200  (I guess)
3      0                  280 / 200  (I guess)
3      5                  280 / 120  (I guess)
4      5                  280 / 160  (I guess)
4      6                  280 / 105  (I guess)
4      0                  280 / 200  (I guess)
4      10                 280 / 200  (I guess)
1      single disk (SSD)  290 / 290  (current!)

It got me thinking about the empty volume.
My disks are only 5400 RPM; random I/O performance is poor and should be avoided.

So the problem with my filled volume would be fragmentation.
I used Entware to install filefrag, which lets me check the fragmentation of a file.
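
For reference, with Entware set up, the tools used below can be installed with opkg (the package names are listed at the bottom of this post):

$ opkg install filefrag e2freefrag shake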

So, the fast copy of a 10 GB file to the empty volume:

$ /opt/sbin/filefrag -v TEST_FILE
TEST_FILE: 11 extents found

The badly performing copy to the normal volume (2.4 TB used / 300 GB free):

$ /opt/sbin/filefrag -v TEST_FILE
TEST_FILE: 124 extents found

124 vs 11 extents. This confirms the fragmentation problem.
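
To get an overview of which existing files are badly fragmented, filefrag can be run over a whole share and sorted by extent count. A rough sketch (the path is an example, and file names containing “:” would confuse the sort):

$ find /share/Video -type f -size +1G -exec /opt/sbin/filefrag {} \; | sort -t: -k2 -n | tail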

I found the “e2freefrag” tool, which gives some insight into free-space fragmentation.

The new volume

$ e2freefrag /dev/mapper/cachedev2
Device: /dev/mapper/cachedev2
Blocksize: 4096 bytes
Total blocks: 78643200
Free blocks: 77744836 (98.9%)

Min. free extent: 4 KB
Max. free extent: 2080640 KB
Avg. free extent: 1851064 KB
Num. free extent: 168

HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range :  Free extents   Free Blocks  Percent
    4K...    8K-  :             1             1    0.00%
   64M...  128M-  :             6        170768    0.22%
  128M...  256M-  :             5        322362    0.41%
  256M...  512M-  :             2        191417    0.25%
  512M... 1024M-  :             6       1236804    1.59%
    1G...    2G-  :           148      75823484   97.53%

My production volume

$ e2freefrag /dev/mapper/cachedev1
Device: /dev/mapper/cachedev1
Blocksize: 4096 bytes
Total blocks: 655360000
Free blocks: 74780720 (11.4%)

Min. free extent: 4 KB
Max. free extent: 2080640 KB
Avg. free extent: 16856 KB
Num. free extent: 17743

HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range :  Free extents   Free Blocks  Percent
    4K...    8K-  :           751           751    0.00%
    8K...   16K-  :           689          1639    0.00%
   16K...   32K-  :           892          4733    0.01%
   32K...   64K-  :          1221         13626    0.02%
   64K...  128K-  :           683         14679    0.02%
  128K...  256K-  :           550         25336    0.03%
  256K...  512K-  :           716         66956    0.09%
  512K... 1024K-  :          1414        260150    0.35%
    1M...    2M-  :          1720        648493    0.87%
    2M...    4M-  :          2645       1989195    2.66%
    4M...    8M-  :          3854       5748362    7.69%
    8M...   16M-  :          1349       3778962    5.05%
   16M...   32M-  :           466       2614941    3.50%
   32M...   64M-  :           270       2748786    3.68%
   64M...  128M-  :           396      11086409   14.83%
  128M...  256M-  :            17        834022    1.12%
  256M...  512M-  :             7        666937    0.89%
  512M... 1024M-  :            20       3767726    5.04%
    1G...    2G-  :            83      40509017   54.17%

Not sure exactly what I’m looking at… but on the new volume almost all free space (97.5%) sits in 1-2 GB extents, while on the production volume only about half does and the rest is chopped into thousands of small extents, so new files end up scattered.

I have been reading a lot about ext4 and fragmentation.
It seems you should always keep at least 20% free space.
(That’s not what I was doing…)

I searched a bit for ways to defragment the disk array.
I cannot find e4defrag on QNAP, but I did find “shake”.

$ filefrag ccie.rtf
ccie.rtf: 2 extents found
$ shake --old 0 --bigsize 0 ccie.rtf
$ filefrag ccie.rtf
ccie.rtf: 1 extents found

So now I have a gun :slight_smile:
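
A rough sketch for letting it loose on a whole folder with the same options (shake rewrites files in place, so try it on non-critical data first; the path is just an example):

$ find /share/Video -type f -exec shake --old 0 --bigsize 0 {} \;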

Can we remove some unneeded I/O on the disks?

I tried some stuff

I disabled the filesystem journal, but QNAP did not like that at all…
I played around with ext4 options like largefile4 and bigalloc.
Well, the QNAP GUI doesn’t like that either…
I was able to reformat the volume with the QNAP CLI to make the GUI happy again.
qcli_volume -f volumeID=2 action=start inode=65536

From the CLI you can at least disable access-time updates (each read otherwise causes a small metadata write), but this will not help much for video editing…

$ mount /dev/mapper/cachedev1 -o remount,noatime

The only step forward might be the 65k option (inode=65536) when formatting via QNAP.
But I’m not sure…

Well, I think I should make two thick volumes:
One volume dedicated to video (big files, sequential access).
One volume for all the rest.

I still need to test the SSD; I’m pretty sure I will get high values.

Cheers!

Entware tools:

shake - 1.0-20170702-1 - Shake is a defragmenter that runs in userspace.
filefrag - 1.47.0-2 - Ext2 Filesystem file fragmentation report utility
e2freefrag - 1.47.0-2 - Ext2 Filesystem free space fragmentation information utility

Installed the SSD and got 290 MB/s / 290 MB/s throughput.
That’s more than I expected for the writing part.

Tests in the QNAP software show disk throughput at 530 MB/s

I guess that with RAID 10 on HDD I will also get this kind of performance if I manage the fragmentation issue.

For now I will stick to the two disks in a mirror.
I will make one volume for big files and another for the rest, both with enough free space. That should keep fragmentation low.

Any input is still appreciated! Please let me know what performance you get with which disks.

I have the problem that after loading the fixed network driver, the GUI no longer shows the NIC. Does anybody know a solution? I’m waiting for QNAP to release an update with this driver on board.

Cheers!

I did some rethinking and some retesting.
And I did NOT get the same performance during the tests.

Retested network

C:\iperf>iperf3.exe -c 192.168.178.200
Connecting to host 192.168.178.200, port 5201
[  5] local 192.168.178.174 port 50682 connected to 192.168.178.200 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec   270 MBytes  2.23 Gbits/sec
[  5]   1.01-2.01   sec   290 MBytes  2.43 Gbits/sec
[  5]   2.01-3.01   sec   295 MBytes  2.48 Gbits/sec
[  5]   3.01-4.01   sec   295 MBytes  2.48 Gbits/sec
[  5]   4.01-5.01   sec   295 MBytes  2.48 Gbits/sec
[  5]   5.01-6.01   sec   284 MBytes  2.39 Gbits/sec
[  5]   6.01-7.00   sec   266 MBytes  2.24 Gbits/sec
[  5]   7.00-8.00   sec   296 MBytes  2.48 Gbits/sec
[  5]   8.00-9.00   sec   296 MBytes  2.48 Gbits/sec
[  5]   9.00-10.00  sec   295 MBytes  2.48 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  2.81 GBytes  2.42 Gbits/sec                  sender
[  5]   0.00-10.03  sec  2.81 GBytes  2.41 Gbits/sec                  receiver

iperf Done.

I deleted and recreated the volume again and again with different sizes, and suddenly the results were good again.

That did not make sense to me. It would indicate that a certain sweet spot on the disk array is important.

I used simple dd with an offset to test performance, and it turns out that the speed at the end of the disk really is a lot lower!
Of course this is expected, since the outer part of a drive is faster than the inner part, but I did not expect a factor of 2.

I tested each disk and then the mirror:

[admin@QNAP /]# for i in `seq 0 500 3500`; do  echo  "offset $i Gigabytes";echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/sda of=/dev/null bs=1M count=500 skip=${i}K; done
offset 0 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.454363 seconds, 203.7MB/s
offset 500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.506304 seconds, 199.5MB/s
offset 1000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.983244 seconds, 167.6MB/s
offset 1500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.825006 seconds, 177.0MB/s
offset 2000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.615630 seconds, 138.3MB/s
offset 2500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.578937 seconds, 139.7MB/s
offset 3000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 4.422567 seconds, 113.1MB/s
offset 3500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 4.839563 seconds, 103.3MB/s
[admin@QNAP /]# for i in `seq 0 500 3500`; do  echo  "offset $i Gigabytes";echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/sdb of=/dev/null bs=1M count=500 skip=${i}K; done
offset 0 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.588500 seconds, 193.2MB/s
offset 500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.617839 seconds, 191.0MB/s
offset 1000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.797189 seconds, 178.8MB/s
offset 1500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.917865 seconds, 171.4MB/s
offset 2000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.919876 seconds, 171.2MB/s
offset 2500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.717384 seconds, 134.5MB/s
offset 3000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 4.240667 seconds, 117.9MB/s
offset 3500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 5.708912 seconds, 87.6MB/s
[admin@QNAP /]# for i in `seq 0 500 3500`; do  echo  "offset $i Gigabytes";echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/md1 of=/dev/null bs=1M count=500 skip=${i}K; done
offset 0 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.179269 seconds, 229.4MB/s
offset 500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.088836 seconds, 239.4MB/s
offset 1000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.301137 seconds, 217.3MB/s
offset 1500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.947950 seconds, 126.6MB/s
offset 2000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.064201 seconds, 163.2MB/s
offset 2500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 2.718814 seconds, 183.9MB/s
offset 3000 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.425079 seconds, 146.0MB/s
offset 3500 Gigabytes
500+0 records in
500+0 records out
524288000 bytes (500.0MB) copied, 3.747751 seconds, 133.4MB/s

So I have learned that the FAST volume must be the first one.
The last 1 TB is too slow, even with the mirror, to give high throughput.

So a volume with lots of free space has less fragmentation, but we also need to avoid the last part of the disks for speed.

I learned a lot!

This is called short stroking, it seems.

For this disk a “good” trade-off seems to be dividing at about 2300 GB. That’s about 70% of the disk, and it is probably roughly the same for every disk.
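
To sanity-check where to put the cut, the same dd trick as above can be aimed at the intended boundary (with bs=1M, skip=2300K means an offset of 2300 GB):

$ echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/md1 of=/dev/null bs=1M count=500 skip=2300K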

First volume: BIG files.
Second volume: “space for snapshots”, 100 GB?
Third volume: the rest, for small files.

Unfortunately I believe it is not possible within QNAP to reserve dedicated space for snapshots, so the snapshots are likely to end up on the slowest part of the disks.

I will wipe the NAS and build it up again with 3 volumes;
later on I will delete the middle volume so it can become the snapshot area.

It’s a nice puzzle and I enjoy it. Of course, if you take your time seriously, buy more fast disks and a fast NAS, and while you’re at it go for 10 Gbit!

I’m seeing performance issues here with my TS-435XeU too.

I’m using 10GE but only seeing a couple of gbps. I’ve gone for SMB v3, jumbos, async etc. and have modified settings on the Mac I’m using as a client. Mac to Mac I can get 10gbps fine, so the issue is clearly with the NAS.
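
One thing worth checking (assuming iperf3 is installed on the NAS, like earlier in this thread) is whether a single TCP stream is the limit; parallel streams and the reverse direction can be compared like this (replace the address with the NAS IP):

$ iperf3 -c 192.168.x.x -P 4
$ iperf3 -c 192.168.x.x -P 4 -R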

QNAP claims I should see 10GbE performance with this model.

anyone got any thoughts (other than “it’s an ARM CPU so it can only do 1 gig”)?

So you have the exact same setup as them?

Client Computer:

  • OS: Microsoft Windows Server 2019
  • Spec: Intel® Core™ i7-7700 (4C/8T) , 32GB RAM, QNAP 10GbE/25GbE/100GbE Network Expansion Card

NAS Configuration:

  • OS: QTS 4.5.x & 5.0.0
  • RAID Volume: RAID 50 (8 bay and above), RAID 5 (4 bay to 6-bay), RAID 1 (2 bay), Single (1 bay)
  • SSD / HDD : Fully populated, Samsung 860 EVO 1TB SATA SSD / Seagate ST1000NM0033 1TB HDD / Samsung PM9A1 960GB M.2 NVMe PCIe Gen4 / Samsung PM9A3 (MZQL2960HCJR-00A07) 960GB U.2 NVMe PCIe Gen4
  • 2.5GbE & GbE: built-in Ethernet ports
  • Network Interface Card: 10GbE: QNAP Dual-port 10 Gigabit Network Expansion Card (LAN-10G2T-U or LAN-10G2SF-MLX)
  • Network Interface Card: 25GbE: QNAP Dual-port 25 Gigabit Network Expansion Card(QXG-25G2SF-CX6)
  • Network Interface Card: 100GbE: QNAP Dual-port 100 Gigabit Network Expansion Card(QXG-100G2SF-E810 or QXG-100G2SF-CX6)

QNAP used this as their demo system

No way those numbers are attainable; I could barely get 250 MB/s out of a 431XeU (1x SFP+).

Of course I don’t have the exact same setup. Especially given that the setup the site quotes there isn’t the one they tested (actually impossible, I think, given that the TS-435XeU has built-in 10GbE but no support AFAIK for expansion NICs).

The correct one (I think) is here:

At any rate, I’m running a newer software release than they tested (as you’d expect).

The client computer is clearly not an issue (I can get 10 Gbit/s using a second Mac as the SMB server).

The key difference is probably that I don’t have a 4 x SSD RAID setup (I have good old-fashioned HDDs, albeit 7200 RPM ones). So yeah - could be the disks, though I’d have hoped it’d share the read load across the disks…

From what I’ve been reading, the M.2 SSDs get pretty hot in the TS-435XeU, so I’m loath to put those in. And equally, I don’t really want to stump up to replace the HDDs with SSDs…