Problem with a broken disk

Hi,
I have a QNAP 8 bay TS-853A.
The disks are configured like this
1-4 RAID5
5-6 RAID1
7-8 JBOD

Initially the RAID1 was on disks 7-8; then I bought bigger disks, created the RAID1 on disks 5-6, and converted the previous disks (7-8) into a JBOD to park bulky, non-vital, recoverable stuff.

Up until 2 days ago all the disks were in “good” status. Today, while I was doing some updates, it went into error and I discovered that it had ejected disk 7. Only now did I notice (or realize) that the System label is attached to the storage pool used for the JBOD. I don’t think I noticed, when I added the two disks and made the moves, that the system data was there.
Now it tells me to insert a new disk to rebuild the RAID.
I tried to connect the disk directly to the PC to read it with LinuxReader, but it makes strange noises and is not detected. I will investigate further, but I think it is a mechanical and/or controller problem.
My question is:

  1. Even in JBOD volumes, is the system data replicated safely between the two disks (as if they were RAID1 for the system and JBOD for the data)?
  2. If I turn the NAS off and on again, will the system restart?
  3. If I insert a new disk, will it rebuild the system partition? And at the data level, what should I expect: will I only see the data present on the good disk?

Always make sure you switch your UI to English when posting screenshots in an English forum.

  1. The OS itself is replicated across all internal drives. The system volume is on the first volume that was set up (marked with “System”). So if that spanning (?) JBOD, or the JBOD single disks (unclear), was the System volume, you will have to reinstall all apps when you elevate another volume to System.
  2. Sure, why not?
  3. What was the exact setup? Spanning JBOD or single disks? (If spanning, you need to remove that whole pool/volume and start from scratch.)

Hi @dolbyman, thanks for your answer!

  1. Spanning JBOD: the 2 disks are spanned into one volume. So I suppose that disk 8 has a replica of the system info (not the data) located on disk 7.

So what is the best choice now?

  1. Insert a new disk into slot 7 and restore the 7-8 span that contains the system?
  2. Will the data that was on disk 8 be visible?
  3. How does JBOD spanning work? Does it fill the disks sequentially (fill one disk and then move on to the other), or does it scatter the info like RAID0?

Yes, the actual OS/system is on all disks (you can log in via SSH and run a cat /proc/mdstat, and you will see all the members of md9 and md13).
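
A minimal check over SSH, assuming the usual QTS layout where md9 and md13 are the small RAID1 system mirrors spanning every installed disk:

  # on the NAS, over SSH:
  cat /proc/mdstat
  # md9 and md13 should each list a partition from every installed disk,
  # which is how the OS stays available even when a data volume dies
  mdadm --detail /dev/md9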

  1. You cannot restore the JBOD like that; it’s busted. Remove the volume, clear the remaining drive, and start the data on both from scratch.
  2. No
  3. JBOD (md designation “linear”) fills disks sequentially, but over time data gets deleted, moved, etc., so your block mapping could just be Swiss cheese (see the sketch after this list).
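
If you want to confirm the layout, mdadm will show the md level of the span; the device number below is only a guess, so read the real one from /proc/mdstat first:

  # find the md device backing the JBOD volume in /proc/mdstat, then e.g.:
  mdadm --detail /dev/md2
  # a spanning JBOD reports "Raid Level : linear"; a linear array cannot start
  # with a member missing, which is why the span cannot simply be rebuilt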

But if I log in via SSH, can I see the content of disk 8, or at least the directory structure?

Well … if you never touched the files you might be able to read the content of the first disk, but only an analysis of the whole array would give you that info.

Normally any non-redundant array (RAID0, spanning JBOD, pooled single disks) is a one-way street where you have to be ready to lose all of it on a whim.
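
If you do try the surviving disk in a Linux PC, a read-only inspection is the safest route. This is only a sketch: the device name /dev/sdX and partition number are placeholders, and a linear array will usually refuse to assemble with its other member missing:

  # identify the disk and its partitions (replace sdX with the real device)
  lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdX
  # inspect the md superblock on the data partition (often partition 3 on QNAP disks)
  mdadm --examine /dev/sdX3
  # attempt a read-only assembly; expect this to fail for a degraded linear array
  mdadm --assemble --readonly --run /dev/md127 /dev/sdX3
  # if it does come up and contains LVM, scan for volumes and mount read-only
  vgscan && lvscan
  mount -o ro /dev/<vg>/<lv> /mnt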

Yes, I knew that with JBOD I had no guarantee of recovering the lost disk, but I believed that, unlike RAID0, which splits the information across the disks for performance reasons, JBOD filled them sequentially, so 80% of the good disk would be recoverable…
I will try to mount the good disk on a PC.

After I try to recover the old data from the good disk, if I do NOT want to create the JBOD span again, can I add these 2 disks to the already working RAID5 on disks 1-4 and end up with a 6-disk RAID5?
Can I ADD them one by one to the existing RAID5?

Yes, adding disks to a RAID5 can be done.
Static Volumes:
https://docs.qnap.com/operating-system/qts/5.0.x/en-us/expanding-a-static-volume-by-adding-disks-to-a-raid-group-8B8B18B0.html
Pools:
https://docs.qnap.com/operating-system/qts/5.0.x/en-us/expanding-a-storage-pool-by-adding-disks-to-a-raid-group-FD20DB.html
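
The expansion itself is driven from the QTS GUI as in those docs; if you just want to watch the reshape progress over SSH, the standard md status is enough (the data array number, e.g. md1, varies per box, so read it from mdstat first):

  # lists every md array plus a progress bar while the RAID5 is reshaping
  cat /proc/mdstat
  # more detail on the data array once you know its number
  mdadm --detail /dev/md1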

Hi, I’m trying to add a new disk to the working RAID5. However, the old JBOD RAID that is no longer working is also still present (with an error). When I select it to remove it, the interface gets stuck on the RAID data recovery, does not go forward, and does not let me delete the RAID so that I can insert a clean disk and add it to the other, working RAID.
How can I remove the volume?
The disks have been removed, and the interface is stuck at 12% while retrying the volume data.

I would like to remove the volume “APPOGGIO”, which is BROKEN and which I can’t recover, and then add 2 new disks to create a new RAID or, for now, ADD them to the RAID5, which is near its limit.
How can I remove the volume and tell the NAS that the SYSTEM volume must be DataVol2? And then add a new disk and expand DataVol2?

The spinner next to APPOGGIO is always running, and if I select the volume to remove it, the interface gets stuck at 12% and the action is not enabled.

I suppose it is the same problem as this:
https://forum.qnap.com/viewtopic.php?t=175754

Check the bottom of the linked thread and issue the given command via SSH.
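
The exact recovery command is in that linked thread, so take it from there. Before running it, it can help to look at the current LVM and md state over SSH; these are read-only listings, assuming the LVM tools are available on QTS (they normally are when storage pools exist):

  # list LVM physical volumes, volume groups, and logical volumes
  pvs
  vgs
  lvs
  # and the md arrays; the broken JBOD should show up here as degraded or missing
  cat /proc/mdstat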

If the SSH command does not work, you can downgrade to QTS 5.2.0 or 5.1.9.
With QTS 5.2.0/5.1.9 you can remove the volume.

I used the info from the thread to init the LVM.
Now I have added a new 4TB disk to the RAID5, so now I have:
RAID1 (2x6TB)
RAID5 (5x4TB)
1 empty slot.

After the LVM init, I added the disk to the RAID, and after 22 hours the migration completed successfully.

Now the problem is that I don’t have a SYSTEM-type volume, and I can’t add new apps or upgrade existing apps because I have no space.
How can I tell the NAS to promote the RAID5 to the SYSTEM volume?

You can go to

Control Panel > Shared Folders > Others > Restore Default Folders

That should create the default folders on your volume of choice.
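
To double-check where the default folders ended up, you can look at the shared-folder symlinks over SSH (the CACHEDEVx_DATA names below are the typical QTS mount points; yours may differ):

  # each shared folder is a symlink pointing into one of the CACHEDEVx_DATA mounts
  ls -l /share
  # confirm the target volume has free space for the default folders and apps
  df -h /share/CACHEDEV*_DATA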

Thanks, it worked.
Don’t ask me where it created them, but now the RAID5 has the (SYSTEM) label.
Thanks again.
Also installed a firmware update.