iSCSI vs SMB Performance

I don’t seem to get the same performance on my three QNAP NAS using iSCSI volumes as I do with SMB, even with 4x 1TB SSD RAID-10 caching arrays. I have not properly tested and charted it, but experientially iSCSI feels a lot slower. Is there a comparison chart somewhere that would show whether iSCSI is actually slower and I should switch everything over to SMB?

What is the exact NAS and setup you are testing with (including the, for some reason, RAID 10 cache)?

Sometimes it is case by case.

But when I tested the NAS for Milestone XProtect verification, both iSCSI and SMB performed well with the NAS used as storage for third-party NVR solutions. (The test case involved handling 200 IP camera streams at the same time.)

There was an impression that iSCSI might perform better in terms of speed because it is dedicated to one device, but the SMB protocol has improved a lot, so the difference has become smaller if we are talking about file transfer only.

In my view, if the storage machine works as dedicated external storage over the network for another device, iSCSI will work well, especially if you want to use it as storage for a database service or a virtual machine storage pool, because the latency should be lower.

Otherwise, for common storage, SMB works well most of the time. And because the contents of an SMB share can be recognized by QTS as well, you can back those files up to another place for an additional layer of data-loss prevention.

Overall, when the speed is already sufficient, I’ll choose SMB by default unless other factors or specific features are needed.

I have three QNAP NAS that I am using; two TS-932PX-16G and one TS-1679U-RP with 32G. Each one has 4-Drive SSD Caching enabled; the two TS-932PX units via their internal storage bays and the TS-1679U via a QNAP QM2 4-Port NVMe card. They are all connected to the same 10G fiber switch.

When you have a 4-Drive SSD Cache, the OS sets it up as RAID-10.

The use case for which these three NAS are being utilized as iSCSI targets is under VMware, where multiple hosts in a vSphere cluster are aware of the iSCSI targets and can access them as needed over the network. Migration between the NAS via iSCSI feels (experientially) somewhat slow, and so I was wondering whether there are any stats online from anyone who has done speed tests in the past that might confirm SMB would be faster. I can switch the units over to SMB.
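Rather than going by feel, you could get hard numbers by running the same fio job against a folder on each mount (first an iSCSI-backed datastore, then an SMB mount) and comparing. fio is a generic open-source benchmark tool, not a QNAP utility; the path and sizes below are placeholders you would adjust to your environment. A minimal job file sketch:

```ini
; save as iscsi-vs-smb.fio and run: fio iscsi-vs-smb.fio
; "directory" is a placeholder -- point it at the mount under test
[global]
directory=/mnt/target
ioengine=libaio
direct=1
size=4G
runtime=60
time_based

; sequential mixed read/write with large blocks (big-file transfers)
[seq-1M]
rw=rw
bs=1M

; random mixed read/write with small blocks (VM-like I/O)
[rand-4k]
rw=randrw
bs=4k
```

Run it once per protocol with everything else unchanged, and the bandwidth/IOPS/latency lines in fio's output give you the comparison you are looking for.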

Not sure if you’re aware, but QNAP does have pages listing lab test results:

The hardware configurations are listed at the bottom of those reports, and some of them might be similar to your scenario.

SSD caching does not help improve speed except in rare instances when you are reading a large number of very small files. Other than that, it can actually slow things down.

The way QNAP has it implemented leaves a lot to be desired. Once the cache fills up, you have to rely on the speed of the bus between the SSD cache and the drives you are trying to read from or write to. I would bet your caching is actually slowing you down.

You would be better off using your SSDs as the system volume where all your apps are stored, your VMs run, and so on.

Based on our internal tests, transfer speeds are related to file types.

When transferring a single large file of the same size, SMB is faster than iSCSI. For many small files, iSCSI is faster than SMB.
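You can get a feel for that file-type effect yourself by timing the same total payload written as one large file versus many small files on each mount. A minimal sketch (this uses a temporary directory so it runs anywhere; in real use you would point `target` at your mounted SMB share or iSCSI volume, and the sizes here are arbitrary):

```python
import os
import tempfile
import time

def time_write(target_dir, num_files, file_size):
    """Write num_files files of file_size bytes into target_dir and
    return elapsed seconds. Total payload is the same either way."""
    payload = os.urandom(file_size)
    start = time.perf_counter()
    for i in range(num_files):
        path = os.path.join(target_dir, f"bench_{i}.bin")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force data past the OS page cache
    return time.perf_counter() - start

# Placeholder target: swap the temp dir for e.g. "/mnt/smb_share"
# or a directory on your iSCSI-backed volume to compare protocols.
with tempfile.TemporaryDirectory() as target:
    total = 16 * 1024 * 1024  # 16 MiB total payload
    one_big = time_write(target, 1, total)
    many_small = time_write(target, 256, total // 256)
    print(f"1 x 16 MiB: {one_big:.3f}s  vs  256 x 64 KiB: {many_small:.3f}s")
```

The per-file open/fsync overhead is what makes the many-small-files case diverge between protocols, so keep the fsync in if you adapt this.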

It’s also important to note that the risk of data corruption is relatively higher with iSCSI if the NAS loses power, as the file system is managed by the client. Frequent power outages can lead to data inconsistency or a file system that fails to mount.