Scrubbing process wanted...

Hi QNAP team members,

When a RAID scrubbing is running, what is the related process?

Running ps -ef at the CLI I am still unable to identify the scrub process.

My goal is to determine from a script whether a scrubbing is currently running or not.

Regards

Hi mate. :slight_smile:

Scrubbing is managed by mdraid. Check /proc/mdstat for the current operation being performed.

It looks like this for me (a bit slow, as auto-tiering is happening as well):

md1 : active raid6 sdf3[10] sdc3[6] sdg3[8] sdh3[9] sde3[7] sdd3[11]
      46835709952 blocks super 1.0 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.0% (2455548/11708927488) finish=2145.3min speed=90946K/sec
      bitmap: 1/88 pages [4KB], 65536KB chunk
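For a script, a grep over /proc/mdstat should be enough. A minimal sketch, assuming a POSIX shell; the `scrub_running` helper name and the file argument (handy for testing) are mine, and mdraid reports the pass as resync, check or recovery depending on how it was started:

```shell
#!/bin/sh
# scrub_running [FILE] -- succeed (exit 0) if the mdstat text in FILE
# shows an active resync/check/recovery pass. FILE defaults to /proc/mdstat.
scrub_running() {
    grep -qE 'resync|check|recovery' "${1:-/proc/mdstat}" 2>/dev/null
}

# Example:
#   scrub_running && echo "scrub in progress" || echo "idle"
```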

@FSC830, if it helps, it should be easier to extract the state of the userdata array using:

mdadm --detail <array name>

There’s a line in the resulting report called “State”:

@dolbyman, can you please check this on your current array operation?

mdadm --detail /dev/md1

Output is

mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Wed Oct  9 15:48:10 2019
     Raid Level : raid6
     Array Size : 46835709952 (44666.01 GiB 47959.77 GB)
  Used Dev Size : 11708927488 (11166.50 GiB 11989.94 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Feb 10 13:48:05 2026
          State : active, resyncing
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Resync Status : 5% complete

           Name : 1
           UUID : 2287586a:dd935b28:8f1d0240:2c5627bf
         Events : 3290468

    Number   Major   Minor   RaidDevice State
      10       8       83        0      active sync   /dev/sdf3
      11       8       51        1      active sync   /dev/sdd3
       7       8       67        2      active sync   /dev/sde3
       9       8      115        3      active sync   /dev/sdh3
       8       8       99        4      active sync   /dev/sdg3
       6       8       35        5      active sync   /dev/sdc3

Cheers mate. :+1:

Yup, the line following “Update Time” should be useful. :nerd_face:
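Pulling that line out in a script could look like this (a hedged sketch; the `parse_md_state` helper name is mine, and it simply splits on the “: ” separator that mdadm uses):

```shell
#!/bin/sh
# parse_md_state -- reads `mdadm --detail` output on stdin and prints the
# value of the "State :" line, e.g. "active, resyncing" or "clean".
parse_md_state() {
    awk -F': ' '/State :/ {print $2; exit}'
}

# Usage (hypothetical device name):
#   mdadm --detail /dev/md1 | parse_md_state
```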

If you are running QuTS Hero, you can also use the command zpool status -v:

[jono@NA9D-NAS ~]$ zpool status -v
  pool: zpool1
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:41:45 with 0 errors on Sun Feb  1 00:42:06 2026
 prune: last pruned 309 entries, 1091 entries are pruned ever
        total pruning count #12, avg. pruning rate = 1308547 (entry/sec)
expand: none requested
 renew: none requested
config:

	NAME                                    STATE     READ WRITE CKSUM
	zpool1                                  ONLINE       0     0     0
	  mirror-0                              ONLINE       0     0     0
	    qzfs/enc_0/disk_0x1_24074767F6C0_3  ONLINE       0     0     0
	    qzfs/enc_0/disk_0x2_24534D52401A_3  ONLINE       0     0     0

errors: No known data errors

  pool: zpool2
 state: ONLINE
  scan: scrub repaired 0 in 8 days 03:08:22 with 0 errors on Mon Feb  9 03:09:39 2026
 prune: last pruned 3813445 entries, 65250996 entries are pruned ever
        total pruning count #11, avg. pruning rate = 3184734 (entry/sec)
expand: none requested
 renew: none requested
config:

	NAME                                        STATE     READ WRITE CKSUM
	zpool2                                      ONLINE       0     0     0
	  raidz1-0                                  ONLINE       0     0     0
	    qzfs/enc_0/disk_0x3_5000CCA27EC5F5A5_3  ONLINE       0     0     0
	    qzfs/enc_0/disk_0x4_5000CCA267CD00FE_3  ONLINE       0     0     0
	    qzfs/enc_0/disk_0x5_5000CCA273F0B2D9_3  ONLINE       0     0     0
	    qzfs/enc_0/disk_0x6_5000CCA27EC5A850_3  ONLINE       0     0     0

errors: No known data errors

  pool: zpool3
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:22 with 0 errors on Sun Feb  1 00:01:00 2026
 prune: last pruned 11170 entries, 225866 entries are pruned ever
        total pruning count #12, avg. pruning rate = 3569426 (entry/sec)
expand: none requested
 renew: none requested
config:

	NAME                                      STATE     READ WRITE CKSUM
	zpool3                                    ONLINE       0     0     0
	  qzfs/enc_0/disk_0xa_50014EE262CA01AF_3  ONLINE       0     0     0
	  qzfs/enc_0/disk_0x7_50014EE20BE5CA36_3  ONLINE       0     0     0
	  qzfs/enc_0/disk_0x8_50014EE2613A68E3_3  ONLINE       0     0     0
	  qzfs/enc_0/disk_0x9_50014EE20BE5E07E_3  ONLINE       0     0     0

errors: No known data errors
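For scripting on Hero, the scan line is the part to test: while a scrub is active it reads “scrub in progress since …” instead of “scrub repaired …”. A minimal sketch (the `zpool_scrubbing` helper name is mine):

```shell
#!/bin/sh
# zpool_scrubbing -- reads `zpool status` output on stdin and succeeds
# (exit 0) if any pool reports a scrub currently in progress.
zpool_scrubbing() {
    grep -q 'scrub in progress'
}

# Usage (QuTS Hero only):
#   zpool status -v | zpool_scrubbing && echo "scrub in progress"
```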

Hi,

@OneCD @dolbyman Thanks for the kick in the a…. :upside_down_face:

My guess is I have been dealing with QNAP for too long. Reading your answers brought up a hidden memory: I did check out this mdadm/mdstat output myself years ago :grin: .

A typical example of how FIFO works. :laughing:

@NA9D Thanks too, but no hero in use (yet).

Regards


Hi,

I found the process [mdX_resync] running when I perform a scrubbing on my QTS NAS.
[admin@abt882br nasadmin]# ps aux | grep resync
23541 admin DWN [md1_resync]


Yes, this process exists on my NAS too; I was using the wrong strings for grep: “rebuild” and “scrub”, but not “resync”.

Anyhow, /proc/mdstat provides exactly what I need.

The goal was to create a script that postpones the scheduled poweroff while a scrub is active.
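A minimal sketch of that idea, assuming the script simply polls /proc/mdstat before handing off to the shutdown step (the `wait_for_scrub` name, the file argument and the 10-minute interval are my assumptions, not QNAP specifics):

```shell
#!/bin/sh
# wait_for_scrub [FILE] -- block until the mdstat text in FILE no longer
# shows an active resync/check/recovery pass. FILE defaults to /proc/mdstat.
wait_for_scrub() {
    while grep -qE 'resync|check|recovery' "${1:-/proc/mdstat}"; do
        echo "scrub still running, postponing poweroff..."
        sleep 600    # re-check every 10 minutes
    done
}

# In the actual poweroff script one would then run (untested assumption):
#   wait_for_scrub && poweroff
```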

Regards