Hello! Two of my four drives (4x 3TB in RAID 10) developed bad sectors and needed replacement, so I bought two 6TB drives, with the plan to buy two more next month, replace the other two good drives, and eventually extend the storage capacity.
I hadn't read the article posted at Online RAID Capacity Upgrade | QNAP beforehand, my bad.
Oddly, the drives that needed to be replaced were Disk 1 and Disk 2, which are in the same mirror pair.
First I replaced Disk 2 (it had the fewest bad sectors) via cold swap. After I powered the server back on, it immediately added the new disk to the RAID and started rebuilding the array onto Disk 2, which was the expected behavior; after 11 hours the rebuild finished successfully and everything was back to normal.
Meanwhile the NAS was in normal use, so files were being accessed and uploaded.
I then replaced Disk 1, also via cold swap. The server added the disk to the array and started the rebuild, which went smoothly until it reached 29.9%, when I noticed a significant drop in rebuild speed. Using the Resource Monitor, though, I found that the rebuild had stopped completely; the speed shown in Storage Manager was just a running average, not the actual rate.
I shut down and rebooted the server several times, but even though the rebuild restarts from zero, it stops at the same 29.9% every time, and no error or warning is issued about it. The transfer just stops, yet Storage Manager still thinks the rebuild is in progress!
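For what it's worth, here is how I double-check from the shell whether the rebuild is really stalled (md1 is my data array; these are standard Linux md interfaces, nothing QNAP-specific):
[~] # cat /proc/mdstat
[~] # cat /sys/block/md1/md/sync_completed
[~] # cat /sys/block/md1/md/sync_speed
/proc/mdstat prints a progress bar and the current speed for each array, sync_completed reports sectors done out of sectors total, and sync_speed gives the instantaneous rate in K/sec, so if that last one reads zero the rebuild really is stalled and not just slow.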
In the output of
[~] # cat /var/log/storage_lib.log | more
I found some entries, but I can't figure out whether they are related to my problem:
2025-10-13 15:21:14 [ 6156 disk_manage.cgi] Perform cmd "/sbin/lvs vg1/tp1 2>>/dev/null -o segtype --noheadings | /bin/grep 'tier-thin-pool' &>/dev/null 2>>/dev/null" failed, cmd_rsp=256, reason code:1.
2025-10-13 15:21:14 [ 6156 disk_manage.cgi] Perform cmd "/sbin/lvs vg1/tp1 -o lv_attr --noheadings 2>>/dev/null | /bin/sed s/^[[:space:]]*// 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
2025-10-13 15:21:14 [ 6156 disk_manage.cgi] Perform cmd "/sbin/lvs vg1/tp1 -o lv_attr --noheadings 2>>/dev/null | /bin/sed s/^[[:space:]]*// 2>>/dev/null" OK, cmd_rsp=0, reason code:0.
… and more.
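If I read these right, they are just the management GUI probing whether vg1/tp1 is a Qtier tiered pool (the grep for 'tier-thin-pool' finding nothing is what produces cmd_rsp=256), so they may be harmless. I assume the probe can be run by hand like this to see the pool's actual segment type and attributes:
[~] # /sbin/lvs vg1/tp1 -o segtype,lv_attr --noheadings
On a plain (non-Qtier) thin pool I'd expect this to report thin-pool rather than tier-thin-pool, which would explain those log lines without explaining the stall.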
NAS info:
Model: TS-469U
CPU: Intel(R) Atom™ CPU D2701 @ 2.13GHz
Memory: 2.93 GB
Firmware: 4.3.4.2814 Build 20240618
Disks:
Disk 1: TOSHIBA HDWT860 (SATA) 5589.03 GB, FW: KQ0C1L
Disk 2: TOSHIBA HDWT860 (SATA) 5589.03 GB, FW: KQ0C1L
Disk 3: WD30EZRX-00SPEB0 (SATA) 2794.52 GB, FW: 80.00A80
Disk 4: WD30EFZX-68AWUN0 (SATA) 2794.52 GB, FW: 81.00B81
System info:
[~] # pvs
PV VG Fmt Attr PSize PFree
/dev/md1 vg1 lvm2 a-- 5.44t 0
[~] # mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Wed Aug 12 13:54:11 2015
Raid Level : raid10
Array Size : 5840623232 (5570.05 GiB 5980.80 GB)
Used Dev Size : 2920311616 (2785.03 GiB 2990.40 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Oct 13 14:24:06 2025
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : near=2
Chunk Size : 64K
Rebuild Status : 29% complete
Name : 1
UUID : 49fe7ca6:8cabaaa6:b7d566cb:98e22f1a
Events : 144869
Number Major Minor RaidDevice State
6 8 3 0 spare rebuilding /dev/sda3
5 8 19 1 active sync set-B /dev/sdb3
2 8 35 2 active sync set-A /dev/sdc3
4 8 51 3 active sync set-B /dev/sdd3
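One thing I want to rule out is the kernel's resync throttling, so I am also looking at the standard md speed limits (again generic Linux md tunables, not QNAP-specific):
[~] # cat /proc/sys/dev/raid/speed_limit_min
[~] # cat /proc/sys/dev/raid/speed_limit_max
[~] # echo 50000 > /proc/sys/dev/raid/speed_limit_min
Raising speed_limit_min should force md to keep rebuilding even while the NAS is in use, though I suspect it won't help if the rebuild is genuinely stuck rather than throttled.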
[~] # md_checker
Welcome to MD superblock checker (v1.4) - have a nice day~
Scanning system...
HAL firmware detected!
Scanning Enclosure 0...
RAID metadata found!
UUID: 49fe7ca6:8cabaaa6:b7d566cb:98e22f1a
Level: raid10
Devices: 4
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Aug 12 13:54:11 2015
Status: ONLINE (md1) [_UUU]
=========================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
=========================================================================
1 /dev/sda3 0 Rebuild Oct 13 14:28:55 2025 144877 AAAA
2 /dev/sdb3 1 Active Oct 13 14:28:55 2025 144877 AAAA
3 /dev/sdc3 2 Active Oct 13 14:28:55 2025 144877 AAAA
4 /dev/sdd3 3 Active Oct 13 14:28:55 2025 144877 AAAA
=========================================================================
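Since the rebuild reads from the three surviving members, my next idea is to look for a latent read error around the 29.9% mark on one of them, with something along these lines (assuming smartctl is present in this firmware):
[~] # dmesg | tail -n 50
[~] # smartctl -a /dev/sdc | grep -i -E 'reallocated|pending|uncorrect'
and the same smartctl check repeated for /dev/sdb and /dev/sdd.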
Basically I'm stuck; I don't know what to do to push the rebuild past the 29.9% barrier. Please help!