Drives dropping from TS-659 Pro+

I have an older QNAP device on firmware 4.2.6 that has been super reliable. Drive #5 had been reporting some anomalies for the past two years, so I purchased a spare drive in case it simply died; there is also a hot spare in the unit.
Drive #2 simply crashed ‘dead’ about a week ago and the hot spare was activated.
I replaced Drive #2, and everything seemed to rebuild after 24+ hours.
Drive #5 is now showing the red status light above the drive, Drive #1 is now reporting as ejected, and Drive #2 is no longer part of the RAID.
I was wondering if anyone here has more of an idea than I do. Thanks, SG


** QNAP NAS Report **


NAS Model: TS-659 Pro+
Firmware: 4.2.6 Build 20240618
System Name: QNAPFileserver
Workgroup: NAS
Base Directory: /share/MD0_DATA
NAS IP address: 192.168.1.50

Default Gateway Device: bond0

      inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: 2001:8003:e08d:4201:208:9bff:fec5:5f25/64 Scope:Global
      inet6 addr: fe80::208:9bff:fec5:5f25/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
      RX packets:375957 errors:0 dropped:37449 overruns:0 frame:0
      TX packets:1015261 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:204724301 (195.2 MiB)  TX bytes:94129439 (89.7 MiB)

DNS Nameserver(s):61.9.211.33
61.9.211.1

HDD Information:

/dev/sda  /dev/sda: No such device or address
/dev/sdb Model=Hitachi HUA723030ALA640, FwRev=MKAOAA10, SerialNo=MK0371YVH1890A
/dev/sdc Model=Hitachi HUA723030ALA640, FwRev=MKAOAA10, SerialNo=MK0371YVH2PE5A
/dev/sdd Model=Hitachi HUA723030ALA640, FwRev=MKAOAA10, SerialNo=MK0371YVH2PDKA
/dev/sde Model=Hitachi HUA723030ALA640, FwRev=MKAOAA10, SerialNo=MK0371YVH2PBYA
/dev/sdf Model=Hitachi HUA723030ALA640, FwRev=MKAOAA10, SerialNo=MK0371YVGXLLAK

Disk Space:

Filesystem Size Used Available Use% Mounted on
/dev/ramdisk 151.1M 136.6M 14.5M 90% /
tmpfs 64.0M 748.0k 63.3M 1% /tmp
tmpfs 998.7M 24.0k 998.7M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/sda4 371.0M 283.9M 87.1M 77% /mnt/ext
/dev/md9 509.5M 141.7M 367.8M 28% /mnt/HDA_ROOT
/dev/md0 10.8T 8.6T 2.2T 80% /share/MD0_DATA
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd

Mount Status:

/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/md0 on /share/MD0_DATA type ext4 (ro,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)

RAID Status:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active (read-only) raid5 sdc3[0] sdf3[3] sde3[2] sdd3[1]
      11714790144 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]

md6 : active raid1 sdf2[5] sdd2[4] sdc2[3] sdb2[2]
530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb4[4] sdc4[0] sdf4[3] sde4[2] sdd4[1]
458880 blocks [6/5] [UUUUU_]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdb1[4] sdc1[0] sdf1[3] sde1[2] sdd1[1]
530048 blocks [6/5] [UUUUU_]
bitmap: 23/65 pages [92KB], 4KB chunk

unused devices:

Memory Information:

MemTotal: 2045300 kB
MemFree: 72908 kB

NASReport completed on 2025-04-23 09:24:05 (-sh)
[~] # echo "Done."
Done.

So you had a drive with issues for two years and never replaced it, then another drive failed, the spare kicked in, but then #5 kicked the bucket… the data is toast then. I hope you have backups.

@dolbyman, I'm not concerned about the data, as I use it for archiving stuff I don't really need. Drive #5 had not ‘failed’ (red light) until after the RAID was rebuilt following Drive #2's failure, and it was also working fine two days ago. I am wondering how/if I can resurrect it. I did a disk check of drives 1 and 2 last night with no errors, but it won't add them back into the RAID array. Any thoughts down that path? The format options etc. are all greyed out. I could unmount the RAID and see if I get more options, but I thought to ask here first. Cheers, SG
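
If it helps, this is roughly what I was going to run over SSH before touching anything else in the GUI. It's only a sketch of read-only checks: the device names are taken from the report above and may not map one-to-one to bay numbers, and I'm assuming smartctl is present on this old firmware alongside mdadm.

# read-only checks before attempting anything destructive
cat /proc/mdstat                   # which members md0 still lists
mdadm --detail /dev/md0            # overall array state and each slot's role
mdadm --examine /dev/sdb3          # superblock on the replaced drive: array UUID, event count
mdadm --examine /dev/sde3          # same for the drive now showing the red light
smartctl -a /dev/sdb               # SMART attributes of the replaced drive, if smartctl exists here
# last resort, only if --examine shows a matching array UUID and the data is expendable:
# mdadm /dev/md0 --add /dev/sdb3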

I don’t think there is a way back, just a way forward: reuse the working disks to rebuild a new storage pool/volume. (md0 also points to an older legacy storage layout, so starting from scratch should also give you storage pools, or a static volume.)
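
Before reusing them, it's worth a quick SMART check on each surviving disk so the new volume isn't built on another marginal drive. A rough sketch, assuming smartctl is available over SSH on this firmware (device names taken from the report above):

# quick health check of each remaining member before reusing it
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo "== $d =="
    smartctl -H "$d"                                        # overall PASSED/FAILED verdict
    smartctl -A "$d" | grep -i -E "Realloc|Pending|Uncorrect"   # the counters that matter most
done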

OK, that's fine, thanks for the response. I am guessing that means starting up with all the drives out and then inserting them all, or doing a complete reset? Or unmounting the volume and starting there? I have two additional drives now, so I can dump #5 and swap in the new one, and still have one spare.

All sorted. Thanks for your time. Cheers