RAID 5 failure on old TS-809U

Hi All.

We had hard drive 2 in a warning state.
This old QNAP does not have the drive numbers listed on the front, and the bays actually go as follows:
8-7-6-5
4-3-2-1

We mistakenly unplugged drive 7, which was working, instead of drive 2.
When we checked Storage Manager before adding a new drive, we noticed that drive 7 was unplugged, so we plugged it back in and, shortly after, unplugged the actual drive 2.

The RAID 5 is no longer active.
In Volume Management I can see all drives as good.

When I check via SSH using md_checker, I get the following:

RAID metadata found!
Level:          raid5
Devices:        6
Name:           md0
Chunk Size:     64K
md Version:     1.0
Creation Time:  Aug 8 10:58:27 2016
Status:         OFFLINE

===============================================================================
 Disk | Device    | # | Status  |  Last Update Time     | Events | Array State
===============================================================================
  1     /dev/sda3   0   Active    Feb 6 17:51:53 2026     362103   U_uuu_
 ----------------   1   Missing   -----------------------------------------
  3     /dev/sdc3   2   Active    Feb 6 17:51:53 2026     362103   u_Uuu_
  4     /dev/sdd3   3   Active    Feb 6 17:51:53 2026     362103   u_uUu_
  5     /dev/sde3   4   Active    Feb 6 17:51:53 2026     362103   u_uuU_
 ----------------   5   Missing   -----------------------------------------
===============================================================================

I tried the following command, given that the working drive should be /dev/sdf3:

mdadm -AfR /dev/md0 /dev/sda3 /dev/sda3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3

Got the following error.

mdadm: device /dev/md0 already active - cannot assemble it
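From what I can tell, that error just means the kernel still has md0 assembled (even though it is not started), so it would presumably have to be stopped before another assemble attempt. A minimal check, assuming standard mdadm behaviour on this box:

# Confirm md0 is still known to the kernel, then stop it before re-assembling
cat /proc/mdstat
mdadm --stop /dev/md0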

When I run

mdadm --detail /dev/md0

I get the following:

/dev/md0:
        Version : 01.00.03
  Creation Time : Mon Aug 8 10:58:27 2016
     Raid Level : raid5
  Used Dev Size : 3517490240 (3354.54 GiB 3601.91 GB)
   Raid Devices : 6
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Feb 6 17:51:53 2026
          State : active, degraded, Not Started
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 0
           UUID : a090f234:f014e294:a326c8c3:a3c0de03
         Events : 362103

    Number   Major   Minor   RaidDevice   State
       0       8       3        0         active sync   /dev/sda3
       1       0       0        1         removed
       8       8      35        2         active sync   /dev/sdc3
       6       8      51        3         active sync   /dev/sdd3
       4       8      67        4         active sync   /dev/sde3
       5       0       0        5         removed

       5       8      83        -         spare   /dev/sdf3

Number 5 in the array was the good drive, which has been reinserted; it is now listed as a spare.
Is there a way to bring it back into the array as an active member, given that it was working before, or is it too late?

Thanks

STRONGLY suggest you open a ticket with QNAP support BEFORE playing around with any commands.

An 8-drive RAID 5 is sketchy anyway... hope you have backups.

It's a TS-809U running QTS 4.2.6.
It went end of life on 31/12/2020 and no longer receives tech support.

I believe QNAP should still provide support for storage issues

So it’s a RAID 5 with a failed disk. And you removed another disk. So you got 2 disks failure in the RAID 5, it is normal to go offline.

I suggest shutting down the NAS and reinserting all of the original healthy drives (except disk 2).
If the RAID comes back online and is only degraded, insert a new drive to replace the failed one.
If it is read-only, back up your data first, then contact support.
If it is offline, I suggest contacting QNAP support directly. (A few commands for checking where the array stands after the reboot are sketched below.)
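For reference, a minimal way to see where the array stands over SSH after the reboot, using standard Linux/mdadm commands plus QNAP's md_checker. The /dev/sdX3 names are taken from your output above, so treat them as assumptions for your box:

# Shows whether md0 is active, degraded, or inactive
cat /proc/mdstat

# QNAP's own RAID metadata summary (the tool you already used)
md_checker

# Superblock details per member; matching "Events" counts across members
# is what makes any later re-assembly plausible
mdadm --examine /dev/sda3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3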

Disk 6 was reinserted; it is the working disk.
Disk 2 is the broken disk; it was replaced with a new drive.
The QNAP has been rebooted.

There is a total of 6 disks in the array.

When I check via SSH using md_checker, I get the following:

RAID metadata found!
Level:          raid5
Devices:        6
Name:           md0
Chunk Size:     64K
md Version:     1.0
Creation Time:  Aug 8 10:58:27 2016
Status:         OFFLINE

===============================================================================
 Disk | Device    | # | Status  |  Last Update Time     | Events | Array State
===============================================================================
  1     /dev/sda3   0   Active    Feb 6 17:51:53 2026     362103   U_uuu_
 ----------------   1   Missing   -----------------------------------------
  3     /dev/sdc3   2   Active    Feb 6 17:51:53 2026     362103   u_Uuu_
  4     /dev/sdd3   3   Active    Feb 6 17:51:53 2026     362103   u_uUu_
  5     /dev/sde3   4   Active    Feb 6 17:51:53 2026     362103   u_uuU_
 ----------------   5   Missing   -----------------------------------------
===============================================================================

I have opened a support ticket, so hopefully they can help.
I had the majority of the data replicated to another NAS, so it's not the biggest issue if it is lost.

OK, I am not sure you have read my reply, though.
Anyway, let's wait for support to help.

So here is the situation…

Whenever you remove a disk from a RAID, the entire RAID needs to be rebuilt, even if you removed it for just a split second.

Since you had one drive fail and then removed a second drive, the RAID has now failed entirely. Even though that second drive is "good", it doesn't matter: the array already needed a rebuild because of the bad drive, and then a second device was removed. A RAID 5 cannot rebuild with two devices missing.

Drive 2, drive 7, latest info was drive 6, …?

Anyhow, two drives down in a RAID 5 is normally the death of the data.
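That said, for completeness: because the briefly removed disk still shows the same Events count as the other members in the md_checker output, the usual mdadm last resort is to stop the half-assembled array and force-assemble it from the surviving members. This is only a sketch; the device names are taken from the earlier output and may differ on the actual box, the reinserted disk is now recorded as a spare so this may not be enough on its own, and it can make things worse, so the advice to go through QNAP support first still applies.

# Stop the inactive/partially assembled array so it can be re-assembled
mdadm --stop /dev/md0

# Force-assemble from the five members sharing the same Events count;
# /dev/sdb3 (the replaced disk 2) is deliberately left out
mdadm --assemble --force --run /dev/md0 /dev/sda3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3

# If it comes up degraded, verify before adding the replacement disk back
cat /proc/mdstat
mdadm --detail /dev/md0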

Regards