Abnormal SMART status, but IHM says the drive is healthy

Weirdly enough, THAT stat is 0:
retired_block_count: Value: 100, Worst: 100, Threshold: 10, Raw value: 0
In the FULL SMART stats that I got from using smartctl:
Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
0x03  0x028  4              20  ---  Read Recovery Attempts
0x03  0x030  4               0  ---  Number of Mechanical Start Failures
0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
0x03  0x040  4               3  ---  Number of High Priority Unload Events
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4              18  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x04  0x018  4               0  -D-  Physical Element Status Changed
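
For anyone who wants to pull the same data: those device statistics come out of smartctl's GP log reporting. Something along these lines should work (the device name is just an example, adjust for your system):

	smartctl -x /dev/sda           # full dump: attributes, device statistics, error log, self-test log
	smartctl -l devstat /dev/sda   # just the Device Statistics (GP Log 0x04) pages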

In the FARM logs:
FARM Log Page 3: Error Statistics
Unrecoverable Read Errors: 0
Unrecoverable Write Errors: 0
Number of Reallocated Sectors: 0
Number of Read Recovery Attempts: 20
Number of Mechanical Start Failures: 0
Number of Reallocated Candidate Sectors: 0
Number of ASR Events: 24

	Uncorrectable errors: 0
	Cumulative Lifetime Unrecoverable Read errors due to ERC: 0

	Cum Lifetime Unrecoverable by head 7:
	  Cumulative Lifetime Unrecoverable Read Repeating: 18
	  Cumulative Lifetime Unrecoverable Read Unique: 0

FARM Log Page 5: Reliability Statistics
	Error Rate (SMART Attribute 1 Raw): 0x000000000bf750ae
	Error Rate (SMART Attribute 1 Normalized): 83
	Error Rate (SMART Attribute 1 Worst): 64
	Seek Error Rate (SMART Attr 7 Raw): 0x0000000022dc4e9e
	Seek Error Rate (SMART Attr 7 Normalized): 88
	Seek Error Rate (SMART Attr 7 Worst): 60
	High Priority Unload Events: 3
	Helium Pressure Threshold Tripped: 0
	LBAs Corrected By Parity Sector: 1
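
For reference, the FARM (Field Accessible Reliability Metrics) pages are Seagate-specific, and recent smartmontools builds (7.4 or newer, if I remember right) can dump them directly; this is roughly what I used (example device name again):

	smartctl -l farm /dev/sda      # Seagate FARM log: error, workload, and reliability statistics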

Based on this, it looks like the drive experienced a transient fault, the QNAP freaked out, and now everything seems to be OK. Of course, those 18 uncorrectable errors are still sitting in the error log:
Error 18 occurred at disk power-on lifetime: 11814 hours (492 days + 6 hours)
Error: UNC at LBA = 0x0fffffff = 268435455

Error 17:
Error: WP at LBA = 0x0fffffff = 268435455

And of course, the self test history shows:
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 3  Extended offline    Completed: read failure       90%     11830         1580896880

Subsequent testing says everything is OK. Once the rebuild is done, I intend to initiate ANOTHER full SMART test, and then a bad block scan. This is REALLY weird, and that's coming from someone who has had to troubleshoot transient failures before. It doesn't exactly boost my confidence in Seagate, either.

I suppose if it happens again, I'll initiate a RAID scrub (for those more familiar with mdadm, "echo repair > /sys/block/md1/md/sync_action"), which basically performs a parity validation and writes back any stripes that don't match. On RAID 5, bad blocks can fatally corrupt your data, but with RAID 6 the second parity block can usually resolve them. Removing the drive, wiping it, then putting it back in and letting the system rebuild it works too, but takes longer. The upside is that a wiped drive CAN'T screw up the parity calculations.
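
In case it helps anyone else, here's a rough sketch of the sequence I have in mind; the device name is an example, and md1 is just what my box uses (QNAP's md numbering can differ depending on the volume layout):

	# Re-run the extended self-test, then check the result
	smartctl -t long /dev/sda
	smartctl -l selftest /dev/sda

	# Non-destructive (read-only) bad block scan
	badblocks -sv -b 4096 /dev/sda

	# md scrub: "check" only counts mismatches, "repair" also rewrites them
	echo repair > /sys/block/md1/md/sync_action
	cat /proc/mdstat                       # watch progress
	cat /sys/block/md1/md/mismatch_cnt     # mismatches found by the last check/repair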