What happened:
I added a StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-nsfserver
provisioner: nfs.csi.k8s.io
parameters:
  server: nsfserver
  share: /
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
mountOptions:
  - nfsvers=4.2
  - fsc
  - noexec
  - nosuid
  - nodev

Then I added the associated PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: nfs.csi.k8s.io
  name: pv-nfs-myshare-service1
spec:
  capacity:
    storage: 900Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-csi-nsfserver
  mountOptions:
    - fsc
    - noexec
    - nosuid
    - nodev
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs/nfs-csi-nsfserver/myshare/service1
    volumeAttributes:
      server: nfs-csi-nsfserver
      share: /myshare/

Then I created the related PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-myshare-pv-claim
  namespace: service1
spec:
  volumeName: pv-nfs-myshare-service1
  storageClassName: nfs-csi-nsfserver
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 800Gi

Then I mounted the PV claim in my pod:
volumes:
  - name: somefiles
    persistentVolumeClaim:
      claimName: nfs-myshare-pv-claim

When I run:
:~$ kubectl exec -it csi-nfs-node-rvkvp -n nfs -c nfs -- mount | grep nfs
nsfserver:/myshare on /var/lib/kubelet/pods/bea13e5e-01bd-4d64-87e5-2329b6ce5c65/volumes/kubernetes.io~csi/pv-nfs-myshare-service1/mount type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.1,fsc,local_lock=none,addr=192.168.1.2)
nsfserver:/myshare on /var/lib/kubelet/pods/8ede7914-d858-43d4-96a0-3b41625299a3/volumes/kubernetes.io~csi/pv-nfs-myshare-service2/mount type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.1,fsc,local_lock=none,addr=192.168.1.2)

(Yes, I'm using this NFS server twice on this node, for two different services.)
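For side-by-side comparison, the option string of each node-side mount line can be isolated with a small awk filter (a sketch over the output above; the option list is abbreviated here):

```shell
# Print only the mount target and its option string from one `mount` output line
# (line copied from the node-side output above, with the option list abbreviated).
echo 'nsfserver:/myshare on /var/lib/kubelet/pods/bea13e5e-01bd-4d64-87e5-2329b6ce5c65/volumes/kubernetes.io~csi/pv-nfs-myshare-service1/mount type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,fsc)' |
  awk '{gsub(/[()]/, "", $6); print $3, "->", $6}'
```

This prints the mount target followed by its comma-separated options, which makes the two mounts easier to eyeball.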
We can see that the added mountOptions (fsc, noexec, nosuid, nodev, nfsvers=4.2) are all present there.
But from inside the pod I see:
:~$ kubectl -n namespace exec -ti pods/service1-57bc4d687-djjjr -- sh
~ $ mount | grep nsfserver
nsfserver:/myshare on /somefiles type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.1,fsc,local_lock=none,addr=192.168.1.2)

I can't see the mountOptions noexec, nosuid, and nodev.
What you expected to happen:
Having the defined mountOptions (noexec, nosuid, nodev) applied to the mount as seen from inside the pod.
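In other words, every option listed in the PV's mountOptions should show up in the option string of the pod's mount. A POSIX-sh sketch of that check, using the (abbreviated) in-pod mount line from above:

```shell
# In-pod mount line, abbreviated from the output above.
line='nsfserver:/myshare on /somefiles type nfs4 (rw,relatime,vers=4.2,fsc)'

# Isolate the option string between the parentheses.
opts=${line##*\(}
opts=${opts%\)}

# Each option from the PV's mountOptions should be present; here three are not.
for opt in fsc noexec nosuid nodev; do
  case ",$opts," in
    *",$opt,"*) echo "$opt: present" ;;
    *)          echo "$opt: MISSING" ;;
  esac
done
```

Run against the in-pod line, this reports fsc as present and noexec, nosuid, and nodev as missing.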
Environment:
- CSI Driver version: v4.12.1
- Kubernetes version (use kubectl version): v1.34.2
- OS (e.g. from /etc/os-release): Fedora CoreOS
- Kernel (e.g. uname -a): 6.17.7-300.fc43.x86_64
- Install tools: kubeadm and Helm chart 4.12.1
- Others: /