I'm trying to do a few things inside a Linux container on my NAS, but I've hit some snags and found some surprising behaviour.

I want to identify which device under /dev is mounted as the system disk. But when I run a command like lsblk, I can see every disk on the NAS!

First, I had assumed a container installation would be "isolated" from the rest of the NAS. That turns out not to be the case. It seems you can't reach into the container from the NAS side, yet from inside the container you can see the rest of the NAS. That is genuinely unsettling.

Second, no mount points are ever listed:
root@45217fe699b0:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 64M 1 loop
sda 8:0 0 1.8T 0 disk
|-sda1 8:1 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sda2 8:2 0 517.7M 0 part
|-sda3 8:3 0 1.8T 0 part
|-sda4 8:4 0 526.6M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sda5 8:5 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdb 8:16 0 1.8T 0 disk
|-sdb1 8:17 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdb2 8:18 0 517.7M 0 part
|-sdb3 8:19 0 1.8T 0 part
|-sdb4 8:20 0 526.6M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdb5 8:21 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdc 8:32 0 9.1T 0 disk
|-sdc1 8:33 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdc2 8:34 0 517.7M 0 part
|-sdc3 8:35 0 9.1T 0 part
|-sdc4 8:36 0 526.8M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdc5 8:37 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdd 8:48 0 9.1T 0 disk
|-sdd1 8:49 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdd2 8:50 0 517.7M 0 part
|-sdd3 8:51 0 9.1T 0 part
|-sdd4 8:52 0 526.8M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdd5 8:53 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sde 8:64 0 9.1T 0 disk
|-sde1 8:65 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sde2 8:66 0 517.7M 0 part
|-sde3 8:67 0 9.1T 0 part
|-sde4 8:68 0 526.8M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sde5 8:69 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdf 8:80 0 9.1T 0 disk
|-sdf1 8:81 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdf2 8:82 0 517.7M 0 part
|-sdf3 8:83 0 9.1T 0 part
|-sdf4 8:84 0 526.8M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdf5 8:85 0 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdg 8:96 1 1.8T 0 disk
|-sdg1 8:97 1 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdg2 8:98 1 517.7M 0 part
|-sdg3 8:99 1 1.8T 0 part
|-sdg4 8:100 1 526.6M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdg5 8:101 1 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdh 8:112 1 931.5G 0 disk
|-sdh1 8:113 1 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-sdh2 8:114 1 517.7M 0 part
|-sdh3 8:115 1 898G 0 part
|-sdh4 8:116 1 526.2M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-sdh5 8:117 1 31.5G 0 part
`-md322 9:322 0 30.4G 0 raid1 [SWAP]
sdi 8:128 0 372.6G 0 disk
|-sdi1 8:129 0 200M 0 part
`-sdi2 8:130 0 372.3G 0 part
sdj 8:144 1 4.6G 0 disk
|-sdj1 8:145 1 5.1M 0 part
|-sdj2 8:146 1 488.4M 0 part
|-sdj3 8:147 1 488.4M 0 part
|-sdj4 8:148 1 1K 0 part
|-sdj5 8:149 1 8.1M 0 part
|-sdj6 8:150 1 8.5M 0 part
`-sdj7 8:151 1 2.7G 0 part
sdk 8:160 0 10.9T 0 disk
`-sdk1 8:161 0 10.9T 0 part
sdl 8:176 0 1.8T 0 disk
`-sdl1 8:177 0 1.8T 0 part
nbd0 43:0 0 0B 0 disk
nbd1 43:32 0 0B 0 disk
nbd2 43:64 0 0B 0 disk
nbd3 43:96 0 0B 0 disk
nbd4 43:128 0 0B 0 disk
nbd5 43:160 0 0B 0 disk
nbd6 43:192 0 0B 0 disk
nbd7 43:224 0 0B 0 disk
fbsnap0 250:0 0 0B 0 disk
fbsnap1 250:1 0 0B 0 disk
fbsnap2 250:2 0 0B 0 disk
fbsnap3 250:3 0 0B 0 disk
fbsnap4 250:4 0 0B 0 disk
fbsnap5 250:5 0 0B 0 disk
fbsnap6 250:6 0 0B 0 disk
fbsnap7 250:7 0 0B 0 disk
fbdisk0 251:0 0 0B 0 disk
fbdisk1 251:1 0 0B 0 disk
fbdisk2 251:2 0 0B 0 disk
fbdisk3 251:3 0 0B 0 disk
fbdisk4 251:4 0 0B 0 disk
fbdisk5 251:5 0 0B 0 disk
fbdisk6 251:6 0 0B 0 disk
fbdisk7 251:7 0 0B 0 disk
nvme1n1 259:0 0 1.8T 0 disk
|-nvme1n1p1 259:1 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-nvme1n1p2 259:2 0 517.7M 0 part
|-nvme1n1p3 259:3 0 1.8T 0 part
|-nvme1n1p4 259:4 0 526.6M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-nvme1n1p5 259:5 0 31.5G 0 part
`-md321 9:321 0 30.4G 0 raid1 [SWAP]
nvme0n1 259:6 0 1.8T 0 disk
|-nvme0n1p1 259:7 0 517.7M 0 part
| `-md9 9:9 0 517.6M 0 raid1
|-nvme0n1p2 259:8 0 517.7M 0 part
|-nvme0n1p3 259:9 0 1.8T 0 part
|-nvme0n1p4 259:10 0 526.6M 0 part
| `-md13 9:13 0 448.1M 0 raid1
`-nvme0n1p5 259:11 0 31.5G 0 part
`-md321 9:321 0 30.4G 0 raid1 [SWAP]
nbd8 43:256 0 0B 0 disk
nbd9 43:288 0 0B 0 disk
nbd10 43:320 0 0B 0 disk
nbd11 43:352 0 0B 0 disk
nbd12 43:384 0 0B 0 disk
nbd13 43:416 0 0B 0 disk
nbd14 43:448 0 0B 0 disk
nbd15 43:480 0 0B 0 disk
root@45217fe699b0:/#
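If I understand correctly, lsblk enumerates devices from /sys/block, which is shared with the host kernel and not namespaced, while the MOUNTPOINTS column is filled from the mount namespace lsblk runs in; the host's mounts don't exist in the container's namespace, which would explain the blanks. To see what actually is mounted from the container's own point of view, reading /proc/self/mountinfo seems more reliable (a sketch; the exact output will differ per system):

```shell
# Sketch: print the source device and filesystem type backing / as seen
# from inside this mount namespace. In /proc/self/mountinfo, field 5 is
# the mount point, and the fstype and source follow the "-" separator
# (the number of optional fields before "-" varies, hence the loop).
awk '$5 == "/" {
    for (i = 7; i <= NF; i++)
        if ($i == "-") { print "root of this namespace:", $(i+2), "(" $(i+1) ")"; break }
}' /proc/self/mountinfo
```

Inside a container this typically reports an overlay or the host path backing the container root, rather than one of the raw disks lsblk lists.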
Third, and more disturbing still: yesterday I was experimenting in an Alpine container. I wanted to reboot the Alpine instance, so I typed this command:
echo "b" > /proc/sysrq-trigger
The result: the entire NAS rebooted!

That's frightening. It means that if a malicious actor gained access to the Linux container, they could wreck the whole system. This is not at all how a container is supposed to behave!
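As far as I can tell, this works because /proc inside the container is the same kernel's procfs, and in a privileged container /proc/sysrq-trigger is neither masked nor read-only, so the write goes straight to the shared kernel. A rough way to probe how exposed a given container is (a sketch; confinement details vary by runtime, and a runtime that masks the file by bind-mounting /dev/null over it would also appear writable):

```shell
# Sketch: probe whether this container could plausibly trigger a host
# SysRq. A confined runtime usually masks /proc/sysrq-trigger or mounts
# it read-only; a privileged container sees the real, writable file.
if [ -w /proc/sysrq-trigger ]; then
    echo "WARNING: /proc/sysrq-trigger is writable here (may reach the host kernel)"
else
    echo "/proc/sysrq-trigger is masked or read-only in this container"
fi

# The effective capability mask is another privilege hint: a fully
# privileged container typically shows an all-ones CapEff value.
grep CapEff /proc/self/status
```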
Am I completely misunderstanding something here?