NFS Files Visible to Root & Nobody, Not to PHP User (company-user) – AWS & QNAP NAS

Dear Community!

Following a recent system outage caused by a cyclone, we encountered a persistent issue where FileRun (a web-based file management system) is unable to display files stored in an NFS-mounted directory. Our setup consists of a QNAP NAS, which serves as the primary storage, and an AWS-hosted server running FileRun, which connects to the NAS via a VPN tunnel.

The NFS mount is active, and files are accessible via SSH, but PHP scripts running under company-user cannot see the contents of the mounted directory, while root and nobody can. Even more strangely, files created by company-user inside the NFS mount are visible only to company-user, not to root or nobody. This suggests an NFS visibility issue tied to user ID mapping or permissions. It has persisted through exhaustive debugging: remounting with different NFS versions, modifying idmapd.conf, tweaking /etc/exports on the NAS, and manually aligning UID/GID mappings.

I have outlined all relevant debugging steps, logs, and configurations below. What could be causing this visibility discrepancy for company-user? Any insights into potential NFS-specific quirks, QNAP configuration issues, or Apache/PHP-related factors would be greatly appreciated.

System Architecture, Intentions, Expectations, and Identified Issue

Architecture Overview

The current setup consists of two primary components:

  1. Local QNAP NAS
    • Hosted within the company’s local infrastructure.
    • Functions as a centralized storage solution for company data.
    • Runs an NFS (Network File System) server, enabling file sharing over a network.
  2. AWS Server (Private Cloud)
    • Hosts a private cloud infrastructure using FileRun, a web-based file management system.
    • Acts as the access point for company employees, particularly the marketing team, to retrieve and manage files remotely.
    • Connects to the QNAP NAS via a VPN tunnel to allow seamless integration of NAS storage within the FileRun environment.

The Issue

Following a system outage caused by a cyclone over the past weekend, FileRun is unable to display the files stored in the mounted NAS directory (NAS03).

Observations:

  • The NFS mount is active and correctly configured on AWS.
  • Files are accessible via SSH when listed with ls under certain users, specifically root and nobody.
  • FileRun operates through Apache (nobody) and executes PHP scripts under company-user. Thus, while Apache (nobody) can see the files, PHP (company-user) cannot, preventing FileRun from displaying them.
  • When root or nobody lists the directory, all expected files are visible, confirming that the data exists and that the mount itself is functioning correctly.
  • However, when company-user lists the same directory, it appears empty, suggesting a user-specific access or visibility issue.
  • If company-user creates a new file or directory inside the NAS mount, it is only visible to company-user—both in the CLI and in the FileRun interface—but, very strangely, is not visible to root or nobody.
  • These newly created files are indexed by FileRun, indicating that FileRun is at least partially aware of changes in the directory.

This suggests a user-specific NFS visibility issue, likely caused by an underlying access control mechanism on the NAS that isolates files created by different users.
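For anyone trying to reproduce this, a quick way to test whether the three users are even looking at the same directory object is to compare device and inode numbers (a sketch with my paths; if the dev/inode pairs differ, the users are listing different directories):

$ for u in root nobody company-user; do sudo -u "$u" stat -c "$u: dev=%d inode=%i" /home2/company-user/cloud.company-user.software/cloud/drive/NAS03; done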

Steps Taken

Initial Checks: Verifying FileRun’s Access to NAS

1 - Checking Which User PHP-FPM Runs As

$ ps aux | grep php-fpm | grep -v root
  • Outcome: php-fpm: pool company_software was running under company-user.

2 - Checking Apache’s Running User

$ ps aux | grep -E 'php|httpd|apache' | grep -v root
  • Outcome: Apache (httpd) is running as nobody.
  • Key Finding:
    • PHP runs as company-user, but Apache runs as nobody.
    • PHP scripts executed via Apache are likely running as company-user.
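To double-check what web-executed PHP (as opposed to sudo from a shell) actually runs as and sees, a throwaway probe script requested through the browser can help. This is a hypothetical helper (whoami.php is not part of FileRun) and assumes the posix extension is enabled:

$ cat /home2/company-user/cloud.company-user.software/whoami.php

<?php
// whoami.php - hypothetical probe, not part of FileRun.
// Reports the effective user PHP runs as and what it sees in the NAS mount.
$dir = '/home2/company-user/cloud.company-user.software/cloud/drive/NAS03';
$pw  = posix_getpwuid(posix_geteuid());   // requires ext-posix
echo 'effective user: ' . $pw['name'] . ' (uid=' . $pw['uid'] . ")\n";
var_dump(scandir($dir));                  // false, or the directory listing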

3 - Checking PHP’s Visibility to the NAS Mount

$ sudo -u company-user ls -lah /home2/company-user/cloud.company-user.software/cloud/drive/NAS03
  • Outcome: Only . and .. appeared, meaning PHP (running as company-user) cannot see the files inside the NAS mount.

4 - Checking Apache’s Visibility to the NAS Mount

$ sudo -u nobody ls -lah /home2/company-user/cloud.company-user.software/cloud/drive/NAS03
  • Outcome: The files inside the NAS are visible under nobody.
    • Note: The files are also visible under root.

5 - Checking FileRun’s Indexing

$ sudo -u company-user touch test.txt
  • Outcome 1: The file test.txt is visible when listing the directory as company-user (sudo -u company-user ls .).
  • Outcome 2: FileRun’s web interface, the private web-cloud our employees use, also displays the new test.txt file.
  • BUT:
    • root cannot see the new test.txt file (sudo -u root ls -al .), although it continues to see the hard drive’s pre-existing data.
    • The same applies to the nobody user.
  • Key Finding:
    • FileRun’s indexing system successfully detects files newly created by company-user, but pre-existing files on the NAS remain inaccessible to it.
    • This confirms a visibility discrepancy between company-user and the users nobody and, strangely, root.

6 - Restarting Services:

$ sudo systemctl restart httpd
$ sudo systemctl restart php-fpm
$ rm -f /home2/company-user/cloud.company-user.software/system/data/temp/*
  • Outcome: Restarting had no effect.

7 - Investigating the NAS Mount and File Permissions

$ mount | grep NAS03
  • Outcome: The mount is active.
    10.10.X.X:/Cloud on /home2/company-user/cloud.company-user.software/cloud/drive/NAS03 type nfs4
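For completeness, findmnt resolves whichever filesystem actually backs a given path and prints the options the kernel applied, which is a bit more direct than grepping mount output:

$ findmnt -T /home2/company-user/cloud.company-user.software/cloud/drive/NAS03   # -T walks up to the nearest mountpoint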

8 - Investigating NFS Server Configuration on the NAS

On the QNAP NAS:

$ sudo cat /etc/exports
  • Outcome:
"/share/CACHEDEV1\_DATA/Cloud" \*(sec=sys,rw,async,wdelay,insecure,no\_subtree\_check,all\_squash,anonuid=65534,anongid=65534,fsid=fbf4aade825ed2f296a81ae665239487)
"/share/NFSv=4" \*(no\_subtree\_check,no\_root\_squash,insecure,fsid=0)
"/share/NFSv=4/Cloud" \*(sec=sys,rw,async,wdelay,insecure,nohide,no\_subtree\_check,all\_squash,anonuid=65534,anongid=65534,fsid=087edbcbb7f6190346cf24b4ebaec8eb)
  • Note: all_squash maps every client user (including root) to the anonymous identity set by anonuid/anongid, here 65534 (i.e. nobody). A sketch of the export line I was aiming for instead is shown after this list.
  • Tried changing the QNAP NAS NFS Server’s configuration for:
    • Squash root user only
    • Squash no users
      • Outcome: had no effect.
  • Tried editing /etc/exports on the NAS to tweak the options, such as:
    • anonuid and anongid (to match other users on the AWS client)
    • uid and gid (to match other users on the AWS client, notably company-user (uid=1007, gid=1009))
    • the squash options
    • leaving only rw,no_root_squash,insecure,no_subtree_check
  • I also tried actimeo=0 when mounting, but nothing worked.
  • Note: I did remember to sudo exportfs -r on the QNAP NAS before remounting.
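For reference, this is the sort of export line I was aiming for when mapping everyone to company-user (a sketch, not my exact line; anonuid/anongid match company-user's IDs on the AWS client):

"/share/CACHEDEV1_DATA/Cloud" *(sec=sys,rw,insecure,no_subtree_check,all_squash,anonuid=1007,anongid=1009)

$ sudo exportfs -ra   # re-export everything after editing /etc/exports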

9 - Restarting NFS Server

$ sudo /etc/init.d/nfs restart
  • Outcome: Restarting did not resolve the issue.

10 - Checking QNAP NAS Logs

$ dmesg | grep nfs
  • Outcome: No critical errors detected.

11 - NFS Identity Mapping, Permissions, and Access Synchronisation

11.1 - Checking UID and GID on AWS

$ id company-user

Output:

uid=1007(company-user) gid=1009(company-user) groups=1009(company-user)

11.2 - Created Matching User and Group on NAS

I edited /etc/group:

$ cat /etc/group

Output:

(...)
company-user:x:1009:

Then I edited /etc/passwd:

$ cat /etc/passwd

Output:

(...)
company-user:x:1007:1009::/share/homes/company-user:/bin/bash

11.3 - Updating File Ownership on NAS

$ sudo chown -R company-user:company-user /share/CACHEDEV1_DATA/Cloud
$ sudo chmod -R 777 /share/CACHEDEV1_DATA/Cloud
$ ls -al

Output:

total 60
drwxrwxrwx 11 company-user company-user        4096 2025-03-13 14:55 ./
drwxrwxrwx 34 admin   administrators           4096 2025-03-13 14:55 ../
drwxrwxrwx 21 company-user company-user        4096 2025-03-13 09:42 Marketing/
drwxrwxrwx  7 company-user company-user        4096 2025-03-13 09:45 Marketing2/
(...)
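A related probe worth sketching here (probe.txt is just an arbitrary test name): create a file through the mount from AWS, then list it numerically on the NAS to see which identity the server actually recorded. That shows directly whether squashing is in effect, and whether the write even reached the NAS.

On AWS:
$ sudo -u company-user touch /home2/company-user/cloud.company-user.software/cloud/drive/NAS03/probe.txt

On the NAS:
$ ls -n /share/CACHEDEV1_DATA/Cloud/probe.txt   # numeric uid/gid, no name mapping involved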

11.4 - Updating ID Mapping on AWS

I edited /etc/idmapd.conf:

$ cat /etc/idmapd.conf
  • Output:
[General]
Verbosity = 5
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
company-user@localdomain = company-user

[Translation]
Method = static

[Static]
company-user@localdomain = company-user

11.5 - Updating ID Mapping on NAS

I edited /etc/idmapd.conf:

$ cat /etc/idmapd.conf
  • Output:
[General]
Verbosity = 9
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = guest
Nobody-Group = guest
company-user@localdomain = company-user

[Translation]
Method = static

[Static]
company-user@localdomain = company-user

11.6 - Restarted NFS Services

  • On NAS:
$ sudo /etc/init.d/nfs restart

Output:

Shutting down NFS services: OK
Use Random Port Number...
Starting NFS services...
(with manage-gids)
Start NFS successfully!
  • On AWS:
$ sudo systemctl restart rpcbind
$ sudo systemctl restart nfs-server
$ sudo systemctl restart nfs-mountd
$ sudo systemctl restart nfs-idmapd
$ sudo systemctl restart nfsdcld
$ sudo systemctl restart nfs-client.target
  • Outcome: No effect on the visibility issue.

12 - Testing with NFSv3

$ sudo mount -t nfs -o nfsvers=3,tcp,noatime,nolock,intr 10.10.X.X:/Cloud /home2/company-user/cloud.company-user.software/cloud/drive/NAS03
  • Outcome: No effect on the visibility issue. Just to be sure it was actually mounted with NFSv3, I ran:
$ mount | grep Cloud

Output:

10.10.X.X:/Cloud on /home2/company-user/cloud.company-user.software/cloud/drive/NAS03 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.X.X,mountvers=3,mountport=51913,mountproto=udp,local_lock=none,addr=10.10.X.X)
  • Note: Yeah, the mount is using NFSv3, but:
    • Switching to NFSv3 did not change the behavior.
      • This eliminates NFSv4-specific ID mapping issues (nfsidmap, request-key, idmapd.conf), since NFSv3 sends raw numeric UIDs/GIDs and never consults the ID mapper (a numeric-listing cross-check is sketched just below).
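Since NFSv3 puts raw numeric IDs on the wire, comparing numeric listings per user is a clean cross-check (paths as in my setup; -n prints uid/gid numbers instead of names, so no name mapping is involved):

$ sudo -u nobody ls -n /home2/company-user/cloud.company-user.software/cloud/drive/NAS03
$ sudo -u company-user ls -n /home2/company-user/cloud.company-user.software/cloud/drive/NAS03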

13 - Possible ACL restriction

Then I thought there could be some ACL restriction on the NAS, so I ran:

$ getfacl /share/CACHEDEV1_DATA/Cloud

getfacl: Removing leading '/' from absolute path
# file: share/CACHEDEV1_DATA/Cloud
# owner: 1007
# group: 1009
user::rwx
group::rwx
other::rwx

Note: uid=1007 and gid=1009 are the same as company-user's on AWS.

  • This confirms no additional ACL restrictions should be blocking access, at least at the top level (a recursive check is sketched after this list).
  • Just because, why not, I tried cleaning the AWS cache:
$ sudo umount -l /home2/company-user/cloud.company-user.software/cloud/drive/NAS03
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ sudo mount -a
    • It did not restore company-user’s ability to see the files.
    • This suggests the problem is not related to stale metadata caching on the AWS client.
  • Finally, dmesg logs show no NFS errors.
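One caveat on the getfacl result above: checking the export root only proves the top directory is clean. A recursive pass that prints only files carrying entries beyond the base mode bits would rule out stray ACLs deeper in the tree (assuming the getfacl binary QNAP ships supports the usual switches):

$ getfacl -R -s /share/CACHEDEV1_DATA/Cloud   # -s skips files that have only the base ACL entries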

At this point, I am out of ideas.

Extra info:

  • “Enable Advanced Folder Permissions” and “Enable Windows ACL Support” on the QNAP NAS are disabled (but I did try with them enabled too; nothing changed).

It is just amazing that nobody and root can see everything, except for whatever company-user creates, whereas company-user — the actual owner — cannot see anything except for whatever it creates.

  • Checked NFSv4 ID mapping parameter in the Linux kernel
    • On AWS:
$ cat /sys/module/nfs/parameters/nfs4_disable_idmapping
N
  • On QNAP NAS:
$ cat /sys/module/nfs/parameters/nfs4_disable_idmapping
N

Note: I did try mounting with that parameter set to Y on both sides; no change.

LOGS

  • On the AWS server:

$ sudo /usr/sbin/rpc.idmapd -f -vvv
rpc.idmapd: Setting log level to 8

rpc.idmapd: libnfsidmap: using domain: localdomain
rpc.idmapd: libnfsidmap: Realms list: 'LOCALDOMAIN'
rpc.idmapd: libnfsidmap: processing 'Method' list
rpc.idmapd: static_getpwnam: name 'company-user@localdomain' mapped to 'company-user'
rpc.idmapd: static_getgrnam: group 'company-user@localdomain' mapped to 'company-user'
rpc.idmapd: libnfsidmap: loaded plugin /usr/lib64/libnfsidmap/static.so for method static
rpc.idmapd: Expiration time is 600 seconds.
rpc.idmapd: Opened /proc/net/rpc/nfs4.nametoid/channel
rpc.idmapd: Opened /proc/net/rpc/nfs4.idtoname/channel
rpc.idmapd: Opened /var/lib/nfs/rpc_pipefs//nfs/clnt19a/idmap
rpc.idmapd: New client: 19a
rpc.idmapd: Path /var/lib/nfs/rpc_pipefs//nfs/clnt19d/idmap not available. waiting...
^Crpc.idmapd: exiting on signal 2
$ sudo tail -f /var/log/messages
Mar 17 18:29:01 server1 systemd[1]: Started User Manager for UID 1007.
Mar 17 18:29:01 server1 systemd[1]: Started Session 47986 of User company-user.
(irrelevant stuff...)
Mar 17 18:29:17 server1 systemd[1]: Stopping User Manager for UID 1007...
Mar 17 18:29:17 server1 systemd[1]: Stopped User Manager for UID 1007.
Mar 17 18:29:17 server1 systemd[1]: Stopped User Runtime Directory /run/user/1007.
Mar 17 18:29:17 server1 systemd[1]: Removed slice User Slice of UID 1007.
(irrelevant stuff...)
Mar 17 18:29:02 server1 systemd[1]: Starting AibolitResident...
Mar 17 18:29:02 server1 systemd[1]: Started AibolitResident.
(irrelevant stuff...)
Mar 17 18:29:43 server1 systemd[1]: Starting AibolitResident...
Mar 17 18:29:43 server1 systemd[1]: Started AibolitResident.
(irrelevant stuff...)
Mar 17 18:30:01 server1 systemd[1]: Created slice User Slice of UID 1007.
Mar 17 18:30:01 server1 systemd[1]: Starting User Runtime Directory /run/user/1007...
Mar 17 18:30:01 server1 systemd[1]: Finished User Runtime Directory /run/user/1007.
Mar 17 18:30:01 server1 systemd[1]: Starting User Manager for UID 1007...
Mar 17 18:30:02 server1 systemd[1]: Started User Manager for UID 1007.
Mar 17 18:30:02 server1 systemd[1]: Started Session 47990 of User company-user.
^C (interrupted here; 1 min of logs should be enough)
$ ps aux | grep rpc.idmapd
root     2112743  0.0  0.0  20412  8960 ?        S    15:10   0:00 sudo /usr/sbin/rpc.idmapd -f -vvv
root     2112744  0.0  0.0   3476  2304 ?        S    15:10   0:00 /usr/sbin/rpc.idmapd -f -vvv
root     2333806  0.0  0.0   3476  2304 ?        Ss   18:37   0:00 /usr/sbin/rpc.idmapd
root     2334399  0.0  0.0   6416  2432 pts/5    S+   18:37   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox --exclude-dir=.venv --exclude-dir=venv rpc.idmapd
$ sudo dmesg | tail -n 50
(irrelevant stuff...)
[925378.219878] nfs4: Deprecated parameter 'intr'
[925378.219885] nfs4: Unknown parameter 'uid'
[925389.462898] nfs4: Deprecated parameter 'intr'
(irrelevant stuff...)
[926861.644063] nfs4: Deprecated parameter 'intr'
(irrelevant stuff...)
$ sudo journalctl -u rpcbind --no-pager | tail -n 50
Mar 17 15:06:03 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 15:06:03 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 15:06:03 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 15:06:03 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 15:06:03 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:00:17 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:00:17 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:00:17 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:00:17 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:00:17 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:04:43 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:04:43 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:04:43 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:05:08 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:05:08 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:06:57 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:06:57 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:06:57 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:07:16 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:07:16 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:07:56 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:07:56 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:07:56 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:08:11 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:08:11 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:11:30 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:11:30 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:11:30 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:12:32 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:12:32 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:14:30 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:14:30 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:14:30 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:15:12 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:15:12 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:19:17 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:19:17 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:19:17 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:22:57 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:22:57 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 17:33:44 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 17:33:44 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 17:33:44 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 17:33:44 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 17:33:44 server1.example.com systemd[1]: Started RPC Bind.
Mar 17 18:37:08 server1.example.com systemd[1]: Stopping RPC Bind...
Mar 17 18:37:08 server1.example.com systemd[1]: rpcbind.service: Deactivated successfully.
Mar 17 18:37:08 server1.example.com systemd[1]: Stopped RPC Bind.
Mar 17 18:37:08 server1.example.com systemd[1]: Starting RPC Bind...
Mar 17 18:37:08 server1.example.com systemd[1]: Started RPC Bind.
$ sudo journalctl -u nfs-utils --no-pager | tail -n 50

Mar 17 18:11:34 server1.example.com systemd[1]: nfs-utils.service: Deactivated successfully.
Mar 17 18:11:34 server1.example.com systemd[1]: Stopped NFS server and client services.
Mar 17 18:11:34 server1.example.com systemd[1]: Stopping NFS server and client services...
Mar 17 18:11:34 server1.example.com systemd[1]: Starting NFS server and client services...
Mar 17 18:11:34 server1.example.com systemd[1]: Finished NFS server and client services.
Mar 17 18:36:58 server1.example.com systemd[1]: nfs-utils.service: Deactivated successfully.
Mar 17 18:36:58 server1.example.com systemd[1]: Stopped NFS server and client services.
Mar 17 18:36:58 server1.example.com systemd[1]: Stopping NFS server and client services...
Mar 17 18:36:58 server1.example.com systemd[1]: Starting NFS server and client services...
Mar 17 18:36:58 server1.example.com systemd[1]: Finished NFS server and client services.
Mar 17 18:37:04 server1.example.com systemd[1]: nfs-utils.service: Deactivated successfully.
Mar 17 18:37:04 server1.example.com systemd[1]: Stopped NFS server and client services.
Mar 17 18:37:04 server1.example.com systemd[1]: Stopping NFS server and client services...
Mar 17 18:37:04 server1.example.com systemd[1]: Starting NFS server and client services...
Mar 17 18:37:04 server1.example.com systemd[1]: Finished NFS server and client services.
$ nfsidmap -c  # Clear ID mapping cache
$ nfsidmap -l  # List active ID mappings
No .id_resolver keys found.

Hmm… Expected company-user@localdomain → company-user
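That said, the .id_resolver keyring is only populated on demand, when an NFSv4 operation actually forces a name/ID translation, so an empty list is not necessarily an error by itself. The effective NFSv4 domain can also be checked directly; given the Domain setting above it should print localdomain:

$ sudo nfsidmap -d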

  • On the QNAP NAS:
$ tail -n 50 /var/log/nfs.log
tail: cannot open '/var/log/nfs.log' for reading: No such file or directory

I checked, and this is because rsyslog is not installed on the Linux that QNAP ships, which means NFS logs are likely not being stored in a separate file.

So I did:

$ sudo tail -n 50 /var/log/messages | grep -i nfs

Nothing was output. This probably means QNAP is not logging NFS activity at all.

I also tried, on the QNAP NAS:

$ log_tool -q | grep -i nfs
$ log_tool -q -o 50 | grep -i nfs
$ log_tool -q -e 1 | grep -i nfs
$ log_tool -q -e 2 | grep -i nfs
$ log_tool -q -s 1 | grep -i nfs
$ log_tool -q -d "2025-03-16" -g "2025-03-17" | grep -i nfs
$ log_tool -q -p 10.10.X.X | grep -i nfs

All of the above returned nothing.

So I did:

$ sudo mount -t nfs -o vers=3 10.10.X.X:/Cloud /home2/company-user/cloud.example.com/cloud/drive/NAS03

Then, let’s check that it really was mounted with vers=3:

$ mount | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
10.10.X.X:/Cloud on /home2/company-user/cloud.example.com/cloud/drive/NAS03 type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.X.X,mountvers=3,mountport=52738,mountproto=udp,local_lock=none,addr=10.10.X.X)

Yeah, all good here…

But:

As root:

$ sudo ls -lah /home2/company-user/cloud.example.com/cloud/drive/NAS03
total 76K
drwxrwxrwx+ 11 company-user  company-user 4.0K Mar 17 19:29 .
drwxr-xr-x  10 company-user  company-user 4.0K Mar 13 22:40 ..
-rwxrwxrwx   1 company-user  company-user 8.1K Mar 11 15:39 .DS_Store
drwxrwxrwx  21 company-user  company-user 4.0K Mar 14 11:39 Marketing
drwxrwxrwx   7 company-user  company-user 4.0K Mar 13 09:45 Marketing2
drwxrwxrwx   3 company-user  company-user 4.0K Feb 27 18:13 Private
drwxrwxrwx   2 company-user  company-user 4.0K Mar 17 01:00 @Recently-Snapshot
drwxrwxrwx   7 company-user  company-user 4.0K Mar 12 15:10 @Recycle
drwxrwxrwx  34 company-user  company-user 4.0K Feb 28 13:25 .streams
drwxrwxrwx   2 company-user  company-user 4.0K Feb 25 16:21 .@__thumb
drwxrwxrwx   2 ec2-user users   4.0K Mar 13 18:14 .@upload_cache
drwxrwxrwx  17 ec2-user users   4.0K Mar 14 14:28 Uzair_UPLOADS

As nobody:

$ sudo -u nobody ls -lah /home2/company-user/cloud.example.com/cloud/drive/NAS03
total 76K
drwxrwxrwx+ 11 company-user  company-user 4.0K Mar 17 19:29 .
drwxr-xr-x  10 company-user  company-user 4.0K Mar 13 22:40 ..
-rwxrwxrwx   1 company-user  company-user 8.1K Mar 11 15:39 .DS_Store
drwxrwxrwx  21 company-user  company-user 4.0K Mar 14 11:39 Marketing
drwxrwxrwx   7 company-user  company-user 4.0K Mar 13 09:45 Marketing2
drwxrwxrwx   3 company-user  company-user 4.0K Feb 27 18:13 Private
drwxrwxrwx   2 company-user  company-user 4.0K Mar 17 01:00 @Recently-Snapshot
drwxrwxrwx   7 company-user  company-user 4.0K Mar 12 15:10 @Recycle
drwxrwxrwx  34 company-user  company-user 4.0K Feb 28 13:25 .streams
drwxrwxrwx   2 company-user  company-user 4.0K Feb 25 16:21 .@__thumb
drwxrwxrwx   2 ec2-user users   4.0K Mar 13 18:14 .@upload_cache
drwxrwxrwx  17 ec2-user users   4.0K Mar 14 14:28 Uzair_UPLOADS

As company-user:

$ sudo -u company-user ls -lah /home2/company-user/cloud.example.com/cloud/drive/NAS03
total 8.0K
drwxr-xr-x  2 root    root    4.0K Mar 17 14:54 .
drwxr-xr-x 10 company-user company-user 4.0K Mar 13 22:40 ..
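Given that company-user sees a root-owned, almost empty directory with a different timestamp than the one root and nobody see, one more angle I want to rule out (sketched here) is whether company-user is being placed in a separate mount namespace and is therefore looking at a different mount table entirely:

$ grep NAS03 /proc/self/mounts                        # as root: the NFS mount should be listed
$ sudo -u company-user grep NAS03 /proc/self/mounts   # empty output = the mount is absent for this user
$ readlink /proc/self/ns/mnt
$ sudo -u company-user readlink /proc/self/ns/mnt     # differing mnt:[...] values = different mount namespaces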
  • Logs after mounting with NFSv3:
$ sudo dmesg | tail -n 50
Mostly irrelevant stuff, except, perhaps:
[926861.644063] nfs4: Deprecated parameter 'intr'
[928554.238802] nfs4: Deprecated parameter 'intr'

But the above relate to NFSv4, so they shouldn’t affect our NFSv3 mount.
Continuing the logs:

$ sudo journalctl -xe | grep nfs
Mar 17 19:31:36 server1.example.com sudo[2374498]:     root : TTY=pts/9 ; PWD=/home2/company-user/cloud.example.com/cloud/drive ; USER=root ; COMMAND=/bin/mount -t nfs -o vers=3 10.10.X.X:/Cloud /home2/company-user/cloud.example.com/cloud/drive/NAS03
Mar 17 19:31:36 server1.example.com rpc.idmapd[2112744]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b1/idmap not available. waiting...
Mar 17 19:31:36 server1.example.com rpc.idmapd[2333806]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b1/idmap not available. waiting...
Mar 17 19:31:36 server1.example.com rpc.idmapd[2333806]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b1/idmap not available. waiting...
Mar 17 19:31:36 server1.example.com rpc.idmapd[2112744]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b1/idmap not available. waiting...
Mar 17 19:31:43 server1.example.com rpc.idmapd[2112744]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b2/idmap not available. waiting...
Mar 17 19:31:43 server1.example.com rpc.idmapd[2333806]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b2/idmap not available. waiting...
Mar 17 19:31:43 server1.example.com rpc.idmapd[2333806]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b2/idmap not available. waiting...
Mar 17 19:31:43 server1.example.com rpc.idmapd[2112744]: Path /var/lib/nfs/rpc_pipefs//nfs/clnt1b2/idmap not available. waiting...
$ sudo tail -n 50 /var/log/messages
(irrelevant stuff...)
Mar 17 19:32:18 server1 systemd[1]: Stopped User Manager for UID 1007.
Mar 17 19:32:18 server1 systemd[1]: Stopping User Runtime Directory /run/user/1007...
Mar 17 19:32:18 server1 systemd[1]: Stopped User Runtime Directory /run/user/1007.
Mar 17 19:32:18 server1 systemd[1]: Removed slice User Slice of UID 1007.
(irrelevant stuff...)
Mar 17 19:33:02 server1 systemd[1]: Started Session 48184 of User company-user.
(irrelevant stuff...)
Mar 17 19:33:02 server1 systemd[2376193]: Startup finished in 249ms.
(irrelevant stuff...)