A handy NFS server image based on Alpine Linux, serving NFS v4 only, over TCP on port 2049.
When run, this container will make whatever directory is specified by the environment variable SHARED_DIRECTORY available to NFS v4 clients.
docker run -d --name nfs --privileged -v /some/where/fileshare:/nfsshare -e SHARED_DIRECTORY=/nfsshare itsthenetwork/nfs-server-alpine:latest
Add --net=host or -p 2049:2049 to make the shares externally accessible via the host networking stack. This isn't necessary if using Rancher or linking containers in some other way.
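For example, to publish the NFS port directly on the host, the run command above becomes (paths and container name unchanged from the earlier example):
docker run -d --name nfs --privileged -p 2049:2049 -v /some/where/fileshare:/nfsshare -e SHARED_DIRECTORY=/nfsshare itsthenetwork/nfs-server-alpine:latest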
Adding -e READ_ONLY will cause the exports file to contain ro instead of rw, allowing only read access by clients.
Adding -e SYNC=true will cause the exports file to contain sync instead of async, enabling synchronous mode. Check the exports man page for more information: https://linux.die.net/man/5/exports.
Adding -e PERMITTED="10.11.99.*" will permit only hosts with an IP address starting 10.11.99 to mount the file share.
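Putting those environment variables together, a run command might look like the sketch below. I've given READ_ONLY an explicit value here so the variable is definitely non-empty; check nfsd.sh if you want to confirm exactly how it's tested:
docker run -d --name nfs --privileged \
  -v /some/where/fileshare:/nfsshare \
  -e SHARED_DIRECTORY=/nfsshare \
  -e READ_ONLY=true \
  -e SYNC=true \
  -e PERMITTED="10.11.99.*" \
  itsthenetwork/nfs-server-alpine:latest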
Due to the fsid=0 parameter set in the /etc/exports file, there's no need to specify the folder name when mounting from a client. For example, this works fine even though the folder being mounted and shared is /nfsshare:
sudo mount -v 10.11.12.101:/ /some/where/here
To be a little more explicit:
sudo mount -v -o vers=4,loud 10.11.12.101:/ /some/where/here
To unmount:
sudo umount /some/where/here
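If you'd like the mount to survive client reboots, an /etc/fstab entry along these lines should do it, provided the NFS client utilities are installed on the host (the IP and mount point are just examples):
10.11.12.101:/   /some/where/here   nfs4   defaults   0   0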
The /etc/exports file contains these parameters unless modified by the environment variables listed above:
*(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
Note that the showmount command won't work against the server as rpcbind isn't running.
You'll note in the docker run command above that privileged mode is required. Yes, this is a security risk, but it seems to be an unavoidable one. You could try these instead: --cap-add SYS_ADMIN --cap-add SETPCAP --security-opt=no-new-privileges, but I've not had any luck with them myself. You may fare better with your own combination of Docker and OS. The SYS_ADMIN capability is very broad in any case and almost as risky as privileged mode.
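If you do want to experiment with those capabilities instead of privileged mode, the run command would look something like this (again, no promises it'll work on your particular Docker/OS combination):
docker run -d --name nfs --cap-add SYS_ADMIN --cap-add SETPCAP --security-opt=no-new-privileges -v /some/where/fileshare:/nfsshare -e SHARED_DIRECTORY=/nfsshare itsthenetwork/nfs-server-alpine:latest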
See the following sub-sections for information on doing the same in non-interactive environments.
Kubernetes requires the privileged: true option to be set:
spec:
  containers:
  - name: ...
    image: ...
    securityContext:
      privileged: true
To use capabilities instead:
spec:
  containers:
  - name: ...
    image: ...
    securityContext:
      capabilities:
        add: ["SYS_ADMIN", "SETPCAP"]
Note that AllowPrivilegeEscalation is automatically set to true when privileged mode is set to true or the SYS_ADMIN capability is added.
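For context, here's a minimal Pod manifest wrapping those snippets into something complete; the pod name, hostPath and mount path are placeholders I've chosen, so adapt them (and swap in the capabilities block above if you prefer):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
spec:
  containers:
  - name: nfs-server
    image: itsthenetwork/nfs-server-alpine:latest
    env:
    - name: SHARED_DIRECTORY
      value: /nfsshare
    ports:
    - containerPort: 2049
    securityContext:
      privileged: true
    volumeMounts:
    - name: nfs-data
      mountPath: /nfsshare
  volumes:
  - name: nfs-data
    hostPath:
      path: /some/where/fileshare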
When using Docker Compose you can specify privileged mode like so:
privileged: true
To use capabilities instead:
cap_add:
  - SYS_ADMIN
  - SETPCAP
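Putting the compose options together, a minimal service definition might look like the sketch below; the compose version, service name and host path are my assumptions:
version: "2"
services:
  nfs:
    image: itsthenetwork/nfs-server-alpine:latest
    privileged: true
    environment:
      SHARED_DIRECTORY: /nfsshare
    ports:
      - "2049:2049"
    volumes:
      - /some/where/fileshare:/nfsshare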
On RancherOS you may need to do this at the CLI to get things working:
sudo ros service enable kernel-headers
sudo ros service up kernel-headers
Alternatively you can add this to the host's cloud-config.yml (or user data on the cloud):
#cloud-config
rancher:
  services_include:
    kernel-headers: true
RancherOS also uses overlayfs for Docker so please read the next section.
OverlayFS does not support NFS export so please volume mount into your NFS container from an alternative (hopefully one is available).
On RancherOS the /home, /media and /mnt file systems are good choices as these are ext4.
You may need to ensure the nfs and nfsd kernel modules are loaded by running modprobe nfs nfsd.
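If you're not sure whether they're already loaded, a quick check from the host looks like this (output varies by distribution; an empty result means the modules aren't loaded):
sudo modprobe nfs nfsd
lsmod | grep nfs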
You'll need to use this label if you are using host network mode and want other services to resolve the NFS service's name via Rancher DNS:
labels:
  io.rancher.container.dns: 'true'
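For placement, the label sits alongside the rest of the service definition in your compose file. Here's a sketch using compose v2-style syntax; the service name, paths and use of network_mode are illustrative only:
services:
  nfs:
    image: itsthenetwork/nfs-server-alpine:latest
    privileged: true
    network_mode: host
    environment:
      SHARED_DIRECTORY: /nfsshare
    volumes:
      - /some/where/fileshare:/nfsshare
    labels:
      io.rancher.container.dns: 'true'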
The container requires the SYS_ADMIN capability, or, less securely, to be run in privileged mode.
This image can be used to export and share multiple directories with a little modification. Be aware that NFSv4 dictates that the additional shared directories are subdirectories of the root share specified by SHARED_DIRECTORY.
Note it's far easier to volume mount multiple directories as subdirectories of the root/first share and then share just the root.
To share multiple directories you'll need to mount additional volumes and specify additional environment variables in your docker run command. Here's an example:
docker run -d --name nfs --privileged -v /some/where/fileshare:/nfsshare -v /some/where/else:/nfsshare/another -e SHARED_DIRECTORY=/nfsshare -e SHARED_DIRECTORY_2=/nfsshare/another itsthenetwork/nfs-server-alpine:latest
You should then modify the nfsd.sh file to process the extra environment variables and add entries to the exports file. I've already included a working example to get you started:
if [ ! -z "${SHARED_DIRECTORY_2}" ]; then
  echo "Writing SHARED_DIRECTORY_2 to /etc/exports file"
  echo "{{SHARED_DIRECTORY_2}} {{PERMITTED}}({{READ_ONLY}},{{SYNC}},no_subtree_check,no_auth_nlm,insecure,no_root_squash)" >> /etc/exports
  /bin/sed -i "s@{{SHARED_DIRECTORY_2}}@${SHARED_DIRECTORY_2}@g" /etc/exports
fi
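With SHARED_DIRECTORY and SHARED_DIRECTORY_2 set as in the run command above and everything else left at its defaults, the resulting /etc/exports should contain two entries along these lines (note the second entry has no fsid=0):
/nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
/nfsshare/another *(rw,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)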
You'll find you can now mount the root share as normal and the second shared directory will be available as a subdirectory. However, you should now be able to mount the second share directly too. In both cases you don't need to specify the root directory name with the mount commands. Using the docker run command above to start a container using this image, the two mount commands would be:
sudo mount -v 10.11.12.101:/ /mnt/one
sudo mount -v 10.11.12.101:/another /mnt/two
You might want to make the root share read only, or even make it inaccessible, to encourage users to only mount the correct, more specific shares directly. To do so you'll need to modify the exports file so the root share doesn't get configured based on the values assigned to the PERMITTED or SYNC environment variables.
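As a rough sketch, you could replace the templated root entry in nfsd.sh with a hardcoded read-only one like the snippet below; this assumes the root share is written with the same {{placeholder}} style as the SHARED_DIRECTORY_2 example above, so check your copy of the script before relying on it:
# Root share hardcoded to read-only and open to all clients; the PERMITTED, READ_ONLY and SYNC placeholders are deliberately not used here
echo "{{SHARED_DIRECTORY}} *(ro,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)" >> /etc/exports
/bin/sed -i "s@{{SHARED_DIRECTORY}}@${SHARED_DIRECTORY}@g" /etc/exports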
A successful server start should produce log output like this:
Writing SHARED_DIRECTORY to /etc/exports file
The PERMITTED environment variable is unset or null, defaulting to '*'.
This means any client can mount.
The READ_ONLY environment variable is unset or null, defaulting to 'rw'.
Clients have read/write access.
The SYNC environment variable is unset or null, defaulting to 'async' mode.
Writes will not be immediately written to disk.
Displaying /etc/exports contents:
/nfsshare *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
Starting rpcbind...
Displaying rpcbind status...
program version netid address service owner
100000 4 tcp6 ::.0.111 - superuser
100000 3 tcp6 ::.0.111 - superuser
100000 4 udp6 ::.0.111 - superuser
100000 3 udp6 ::.0.111 - superuser
100000 4 tcp 0.0.0.0.0.111 - superuser
100000 3 tcp 0.0.0.0.0.111 - superuser
100000 2 tcp 0.0.0.0.0.111 - superuser
100000 4 udp 0.0.0.0.0.111 - superuser
100000 3 udp 0.0.0.0.0.111 - superuser
100000 2 udp 0.0.0.0.0.111 - superuser
100000 4 local /var/run/rpcbind.sock - superuser
100000 3 local /var/run/rpcbind.sock - superuser
Starting NFS in the background...
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 -3 +4
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
Exporting File System...
exporting *:/nfsshare
/nfsshare <world>
Starting Mountd in the background...
Startup successful.
Thanks to @sjiveson for providing such a good idea.