I ordered a TS-h1277AFX through a sales rep and had it configured with all drives installed and the storage pools pre-built. Since it won’t ship for three weeks, I’m wondering what the build and configuration process looks like at QNAP. What happens on their end to put a customized NAS like this together?
How do they test fully assembled units, or do they do something like a burn-in? Just curious, this being my first NAS and all. I still need to get a second one for cold storage since this one is all performance.
There are lots of different build configurations. I always use two internal M.2 drives on a QM2 card (QM2-2P-384A) as storage pool 1 for QuTS and apps, then leave the 12 SSDs for my media as storage pool 2 (RAID 6). Some people just use a single storage pool for everything. Since the beginning of QuTS in 2019, and after seeing “official builds” from people like Craig at QNAP in the UK, I always create the separate storage pools now.
From the TS-h1277AFX website -
M.2 Slot Expansion: QM2 Expansion Card
Add M.2 SSDs for high-speed storage pools or cache acceleration, reducing latency and improving overall system responsiveness.
So I always follow this when doing my build-outs of QuTS systems. I am curious which SSDs they have supplied for you, since this is “directly from QNAP”. I am told that they will be selling Phison Pascari SSDs, both M.2 and U.2/U.3 drives, directly now, so I am curious as to what you were sold.
The build I requested was two pools: four bays in a RAID 10 and eight bays in a RAID 10. Once it starts needing RAM I’ll add RAM, and when it starts feeling like it needs a boost, I’ll add a card and M.2 cache. They’re putting 2 TB WD Reds in all the bays for now. I wasn’t offered anything else, and at the time I didn’t even think that was weird, but now I’m wondering, lol. For how I’ll be using it, the Reds will more than suffice… for a brief moment I even thought about buying 870 EVOs and putting them in myself, but went with the Reds.
I’m just curious about their behind-the-scenes process for building and configuring a NAS that they ship ready to deploy.
I do not know your application, but I do not see why you need two RAID 10 pools. You are going to sacrifice so much storage by doing this. You could have done one RAID 6 pool, still gotten fantastic performance, and only sacrificed two drives’ worth of capacity. You should have looked at the compatibility list on the QNAP website and asked, “hey, you guys offer 4 TB and 8 TB drives - how much more would that cost me?” But like I said, I don’t know your application, so it might be very different from what I usually do.
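To put rough numbers on the capacity trade-off, here is a quick back-of-the-envelope sketch, assuming 2 TB drives throughout and ignoring filesystem overhead:

```python
# Rough usable-capacity comparison for twelve 2 TB drives,
# ignoring ZFS/filesystem overhead.
DRIVE_TB = 2

def raid10_usable(n_drives, drive_tb=DRIVE_TB):
    # RAID 10: half the drives hold mirror copies.
    return n_drives // 2 * drive_tb

def raid6_usable(n_drives, drive_tb=DRIVE_TB):
    # RAID 6: two drives' worth of capacity goes to parity.
    return (n_drives - 2) * drive_tb

two_pools = raid10_usable(4) + raid10_usable(8)   # 4 + 8 = 12 TB
one_pool = raid6_usable(12)                       # 10 * 2 = 20 TB
print(f"Two RAID 10 pools (4 + 8 drives): {two_pools} TB usable")
print(f"One RAID 6 pool (12 drives): {one_pool} TB usable")
```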
I get that, but I don’t need a ton of storage on this NAS, just speed. Once it’s up and running, the handful of users on it will be dealing with mostly hot data and maybe some warm data. Most of our warm data, as well as all the cold data, will be offloaded to a second NAS configured for maximum storage in a RAID 6, which I haven’t picked up yet - I shouldn’t need it for a few more months.
I only started getting into QNAP a few months ago, reading everything I could get my hands on. A few weeks ago I picked up one of the QSW-M3224 switches and built a segmented 10GbE network just for this NAS, so when the TS-h1277AFX shows up I’ll LAG its two 10GbE ports and really put it to the test. It should stay snappy for all of us without any additional network upgrades beyond the 10GbE we all have now. I’m expecting to add more RAM and an M.2 cache when the NAS starts needing it, but other than that I’ve got my fingers crossed this unit screams for what we need it to do.
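When it arrives, one crude way I’m thinking of sanity-checking throughput is just timing a big sequential write to a mounted share, keeping in mind that a single client stream usually only rides one of the two 10GbE links under LACP. The mount path and file size below are just placeholders for whatever my setup ends up being:

```python
# Crude sequential-write throughput check against a mounted NAS share.
# The mount point and size are placeholders for my own setup.
import os, time

TEST_FILE = "/mnt/nas/speedtest.bin"     # hypothetical SMB/NFS mount point
SIZE_GIB = 8
CHUNK = os.urandom(4 * 1024 * 1024)      # 4 MiB of random (incompressible) data

start = time.monotonic()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_GIB * 256):      # 256 four-MiB chunks per GiB
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                 # make sure the data actually hit the NAS
elapsed = time.monotonic() - start

print(f"~{SIZE_GIB * 1024 / elapsed:.0f} MiB/s sequential write")
os.remove(TEST_FILE)                     # clean up the test file
```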
My hope is there’s some sort of burn-in testing when they build this unit before shipping it. I guess it wouldn’t be a big deal if they’re just cloning drives and installing them; that would work. But so far the level of service I’ve received has been a surprise in this day and age. We’re just a small business, yet the treatment has been like what I used to experience when I worked in a NOC. It’s been nice, so I’d be surprised if they simply cloned all the drives and shipped the unit, but I don’t know - just thought I’d ask.
If you want to saturate those 10G ports, then I would suggest two RAID 10 pools with six drives each, as a RAID 10 of four of the 2 TB WD SA500s will be slower than a 10G link calls for.
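Rough math behind that, assuming roughly 530 MB/s sequential per SATA SSD (SA500-class numbers) and about 1250 MB/s for a saturated 10GbE link; treat it as a ballpark, not a benchmark:

```python
# Back-of-the-envelope RAID 10 sequential-write estimate vs a 10GbE link.
# Per-drive and link figures are rough assumptions, not measurements.
SSD_MBPS = 530        # approx. sequential write of a SATA SSD like the SA500
TEN_GBE_MBPS = 1250   # approx. usable throughput of one 10GbE link

def raid10_write_mbps(n_drives, per_drive=SSD_MBPS):
    # Writes stripe across the mirror pairs, so only n/2 drives add speed.
    return (n_drives // 2) * per_drive

for n in (4, 6, 8):
    est = raid10_write_mbps(n)
    verdict = "can saturate 10GbE" if est >= TEN_GBE_MBPS else "falls short of 10GbE"
    print(f"RAID 10 with {n} drives: ~{est} MB/s writes ({verdict})")
```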
Like Bob, I prefer to install the system on two internal NVMe SSDs in RAID 1, which in this case would mean on an add-on card. The nice thing about having the system on those NVMe SSDs is that you can swap out all the drives in the front while everything you have installed on your NAS over time is still available.
In any case this is a nice and compact solution that should be well suited for a quiet office environment.
Same here. RAIDZ with four drives is faster for sequential reads than a 10G connection, but I assumed that the goal is to work in RAID 10.
If this was supposed to be two RAIDs optimized for sequential speed with comparable protection against failure, then I would go for two RAIDZ2 pools with six drives each, but there may be other factors at play here that we do not know.
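For two six-drive pools, the trade-off looks roughly like this (again just a sketch with 2 TB drives and no overhead counted):

```python
# Usable capacity and failure tolerance per six-drive pool,
# assuming 2 TB drives and ignoring overhead.
DRIVE_TB = 2

raid10_pool = 6 // 2 * DRIVE_TB     # three mirror pairs: 3 * 2 TB = 6 TB
raidz2_pool = (6 - 2) * DRIVE_TB    # dual parity: 4 * 2 TB = 8 TB

print(f"RAID 10, 6 drives: {raid10_pool} TB usable; loses data if both drives in one mirror fail")
print(f"RAIDZ2, 6 drives: {raidz2_pool} TB usable; survives any two drive failures")
```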
I appreciate all the suggestions, and I’ll be keeping an eye on things while we bring services online; if I see the need for an alternative configuration, we’ll adjust and reconfigure where it makes sense. For now, the plan is to run normal operations and the front end on the four-bay RAID for the handful of users who will be working the eight-bay pool. All of them are power users who will be sending constant small random writes and pulling reads all day and some into the night. Really, the only bottlenecks I’m expecting here, after adding more RAM and an M.2 cache, are the internet connection for a couple of remote workers (and that’s with redundant gigabit links), and possibly the CPU if we continue to scale. When that happens I’ll have to start offloading services to the cloud and dial things back to where this NAS is just for a small team again. And because all the data is critical, I get my best rebuild times from a RAID 10; hopefully it will never be needed, but it’s a genuine consideration for us. Anyway, thanks for all the suggestions, I’ll be sure to keep an eye on it.
You’re exactly right, which is why we’re using VPN endpoints for remote access, and why we chose RAID 10 to prioritize faster rebuild times and minimize the performance impact during recovery. A quick recovery is critical for us.
SSD rebuilds have very little impact on array performance, since SSDs don’t carry the seek-time penalty that HDDs do. My RAIDZ array rebuilt in under an hour, and performance was still great during that time.