I have an older QNAP NAS (TS-659 Pro+ on firmware 4.2.6) that has been very reliable. Drive #5 had been reporting anomalies for the past two years, so I purchased a spare drive in case it died outright; there is also a hot spare in the unit.
Drive #2 died without warning about a week ago, and the hot spare was activated.
I replaced Drive #2, and everything appeared to rebuild after 24+ hours.
Drive #5 is now showing a red status light above the drive, Drive #1 is reporting as ejected, and Drive #2 is no longer part of the RAID.
I was wondering if anyone here has a better idea of what is going on than I do. Thanks, SG
** QNAP NAS Report **
NAS Model: TS-659 Pro+
Firmware: 4.2.6 Build 20240618
System Name: QNAPFileserver
Workgroup: NAS
Base Directory: /share/MD0_DATA
NAS IP address: 192.168.1.50
Default Gateway Device: bond0
inet addr:192.168.1.50 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: 2001:8003:e08d:4201:208:9bff:fec5:5f25/64 Scope:Global
inet6 addr: fe80::208:9bff:fec5:5f25/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:9000 Metric:1
RX packets:375957 errors:0 dropped:37449 overruns:0 frame:0
TX packets:1015261 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:204724301 (195.2 MiB) TX bytes:94129439 (89.7 MiB)
DNS Nameserver(s):61.9.211.33
61.9.211.1
HDD Information:
/dev/sda /dev/sda: No such device or address
/dev/sdb Model=Hitachi HUA723030ALA640 , FwRev=MKAOAA10, SerialNo=MK0371YVH1890A
/dev/sdc Model=Hitachi HUA723030ALA640 , FwRev=MKAOAA10, SerialNo=MK0371YVH2PE5A
/dev/sdd Model=Hitachi HUA723030ALA640 , FwRev=MKAOAA10, SerialNo=MK0371YVH2PDKA
/dev/sde Model=Hitachi HUA723030ALA640 , FwRev=MKAOAA10, SerialNo=MK0371YVH2PBYA
/dev/sdf Model=Hitachi HUA723030ALA640 , FwRev=MKAOAA10, SerialNo=MK0371YVGXLLAK
Disk Space:
Filesystem Size Used Available Use% Mounted on
/dev/ramdisk 151.1M 136.6M 14.5M 90% /
tmpfs 64.0M 748.0k 63.3M 1% /tmp
tmpfs 998.7M 24.0k 998.7M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/sda4 371.0M 283.9M 87.1M 77% /mnt/ext
/dev/md9 509.5M 141.7M 367.8M 28% /mnt/HDA_ROOT
/dev/md0 10.8T 8.6T 2.2T 80% /share/MD0_DATA
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd
Mount Status:
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/md0 on /share/MD0_DATA type ext4 (ro,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
RAID Status:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active (read-only) raid5 sdc3[0] sdf3[3] sde3[2] sdd3[1]
11714790144 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
md6 : active raid1 sdf2[5] sdd2[4] sdc2[3] sdb2[2]
530128 blocks super 1.0 [2/2] [UU]
md13 : active raid1 sdb4[4] sdc4[0] sdf4[3] sde4[2] sdd4[1]
458880 blocks [6/5] [UUUUU_]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdb1[4] sdc1[0] sdf1[3] sde1[2] sdd1[1]
530048 blocks [6/5] [UUUUU_]
bitmap: 23/65 pages [92KB], 4KB chunk
unused devices: &lt;none&gt;
Memory Information:
MemTotal: 2045300 kB
MemFree: 72908 kB
NASReport completed on 2025-04-23 09:24:05 (-sh)
[~] # echo "Done."
Done.
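If more detail would help, I can SSH back in and post the output of the commands below. I'm assuming mdadm and smartctl are available on this firmware; the device names are taken from the report above (sda no longer responds).

# Detailed state of the data array (which slots are active/failed/removed)
mdadm --detail /dev/md0

# Kernel's current view of all md arrays
cat /proc/mdstat

# SMART health for the drives that are still detected
smartctl -a /dev/sdb
smartctl -a /dev/sdc
smartctl -a /dev/sdd
smartctl -a /dev/sde
smartctl -a /dev/sdf

# Recent kernel messages about drive resets or ejections
dmesg | tail -n 100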