Linux Containers (LXC) are an awesome way to increase density in your virtual environment, but mounting a remote share inside an LXC container isn't intuitive. Here's a simple way to get that set up.
Mounting a remote share in LXC
Note: lxc.aa_profile is deprecated and was renamed to lxc.apparmor.profile.
Power off the LXC container, then SSH into the Proxmox server where that container is hosted.
Navigate to the directory where your LXC config files are stored (on Proxmox, this is /etc/pve/lxc/).
Edit the config file for your container (for example, 123.conf), as shown below.
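On a Proxmox host, those steps look something like this sketch (the container ID 123 is a placeholder for your own):

# Stop the container (replace 123 with your container ID)
pct stop 123

# Proxmox keeps LXC configs under /etc/pve/lxc/
cd /etc/pve/lxc

# Edit the config file for the container
nano 123.conf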
CIFS/SAMBA
If you want to mount a CIFS/Samba share, add this line to the bottom of the file:
lxc.apparmor.profile: lxc-container-default-with-cifs
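With that profile applied, the container itself is allowed to perform CIFS mounts. Here is a minimal sketch of mounting a share from inside the container; the server address, share name, and credentials are placeholders, and cifs-utils must be installed in the container:

# Inside the container: install the CIFS userspace tools (Debian/Ubuntu)
apt install cifs-utils

# Mount the share (placeholder server, share, and credentials)
mkdir -p /mnt/share
mount -t cifs //192.168.1.50/share /mnt/share -o username=myuser,password=mypass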
NFS
If you want to mount an NFS share, add this line to the bottom of the file:
lxc.apparmor.profile: lxc-container-default-with-nfs
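Likewise, this profile lets the container perform NFS mounts itself. A minimal sketch from inside the container, with a placeholder server and export path:

# Inside the container: install the NFS client tools (Debian/Ubuntu)
apt install nfs-common

# Mount the export (placeholder server and export path)
mkdir -p /mnt/nfs
mount -t nfs 192.168.1.50:/export/data /mnt/nfs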
AppArmor Profiles
AppArmor profiles are located in: /etc/apparmor.d/lxc/
You may need to create these files if they don't exist. I have added the relevant files below:
CIFS/SAMBA AppArmor Profile
/etc/apparmor.d/lxc/lxc-default-with-cifs
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc
profile lxc-container-default-with-cifs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts. If it does, it
  # will remount the host's devpts. We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,

  mount fstype=cifs,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}
NFS AppArmor Profile
/etc/apparmor.d/lxc/lxc-default-with-nfs
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow NFS (nfs/nfs4) mounts.
  mount fstype=nfs*,
}
Finish
Save your files, reload AppArmor so any newly created profiles are picked up, and start the LXC container. Mounting should now work as expected.
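If the mount is still denied after starting the container, the new profiles likely haven't been loaded yet. A quick sketch of reloading them on the host (container ID 123 is again a placeholder):

# Re-parse the profiles sourced from /etc/apparmor.d/lxc
apparmor_parser -r /etc/apparmor.d/lxc-containers

# Or reload the whole AppArmor service
systemctl reload apparmor

# Start the container again
pct start 123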