
High number of threads on csi-nfs-node. #569

@GImmekerAFP

What happened:
A high number of threads on the csi-nfs-node pods.

What you expected to happen:
The thread count on the nodes remains normal.

How to reproduce it:

Anything else we need to know?:
The thread count on the csi-nfs-node pods keeps getting higher as time passes, and the system load rises with it, because the number of threads only ever grows.
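For reference, the growth is easy to watch without atop: the kernel exposes the live thread count in /proc/<pid>/status. Here is a minimal Go sketch (a hypothetical diagnostic helper, not part of the driver; you have to look up the nfsplugin PID on the node yourself, e.g. with ps):

```go
// threadcount.go: print the Threads line from /proc/<pid>/status.
// Hypothetical diagnostic helper; run it on the affected node with
// the PID of the nfsplugin process as its only argument.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: threadcount <pid>")
		os.Exit(1)
	}
	f, err := os.Open("/proc/" + os.Args[1] + "/status")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// The kernel reports the current thread count as "Threads:\tN".
		if strings.HasPrefix(sc.Text(), "Threads:") {
			fmt.Println(sc.Text())
		}
	}
}
```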
I don't know if the following messages are related, but they might be:

On the csi-nfs-node pods the following messages are logged:

I1219 15:22:14.747364       1 utils.go:107] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I1219 15:22:14.747393       1 utils.go:108] GRPC request: {"target_path":"/var/lib/kubelet/pods/4f08f426-6185-4e2a-9ced-83ea62e4d739/volumes/kubernetes.io~csi/nfs-pv/mount","volume_id":"nfs-server-nfs-server-provisioner.nfs-server.svc.cluste
I1219 15:22:14.747476       1 nodeserver.go:157] NodeUnpublishVolume: unmounting volume nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/4f08f426-6185-4e2
I1219 15:22:14.747484       1 nodeserver.go:162] force unmount nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/4f08f426-6185-4e2a-9ced-83ea62e4d739/volum
I1219 15:22:14.849002       1 utils.go:107] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I1219 15:22:14.849029       1 utils.go:108] GRPC request: {"target_path":"/var/lib/kubelet/pods/bf38702d-92b6-40b4-bfc0-1c7ab46f1633/volumes/kubernetes.io~csi/nfs-pv/mount","volume_id":"nfs-server-nfs-server-provisioner.nfs-server.svc.cluste
I1219 15:22:14.849074       1 nodeserver.go:157] NodeUnpublishVolume: unmounting volume nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/bf38702d-92b6-40b
I1219 15:22:14.849084       1 nodeserver.go:162] force unmount nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/bf38702d-92b6-40b4-bfc0-1c7ab46f1633/volum
I1219 15:26:16.796186       1 utils.go:107] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I1219 15:26:16.796327       1 utils.go:108] GRPC request: {"target_path":"/var/lib/kubelet/pods/4f08f426-6185-4e2a-9ced-83ea62e4d739/volumes/kubernetes.io~csi/nfs-pv/mount","volume_id":"nfs-server-nfs-server-provisioner.nfs-server.svc.cluste
I1219 15:26:16.796509       1 nodeserver.go:157] NodeUnpublishVolume: unmounting volume nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/4f08f426-6185-4e2
I1219 15:26:16.796518       1 nodeserver.go:162] force unmount nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/4f08f426-6185-4e2a-9ced-83ea62e4d739/volum
I1219 15:26:16.896831       1 utils.go:107] GRPC call: /csi.v1.Node/NodeUnpublishVolume
I1219 15:26:16.896864       1 utils.go:108] GRPC request: {"target_path":"/var/lib/kubelet/pods/bf38702d-92b6-40b4-bfc0-1c7ab46f1633/volumes/kubernetes.io~csi/nfs-pv/mount","volume_id":"nfs-server-nfs-server-provisioner.nfs-server.svc.cluste
I1219 15:26:16.896926       1 nodeserver.go:157] NodeUnpublishVolume: unmounting volume nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/bf38702d-92b6-40b
I1219 15:26:16.896938       1 nodeserver.go:162] force unmount nfs-server-nfs-server-provisioner.nfs-server.svc.cluster.local/export/pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347 on /var/lib/kubelet/pods/bf38702d-92b6-40b4-bfc0-1c7ab46f1633/volum

If I run atop on the affected node, it shows that the nfsplugin process keeps creating threads over time. Note that the log excerpt above shows the same volume (pvc-084cfb0d-cb0a-4089-a6ce-5b29049f4347) being force-unmounted again and again, at 15:22 and then at 15:26, so the unmounts may be hanging and getting retried.
If I delete and recreate the csi-nfs-node pods, the thread count goes back to normal.
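My suspicion, and this is an assumption rather than anything I traced in the driver code: when the NFS server is unreachable, each force unmount blocks in the umount syscall, and a goroutine blocked in a raw syscall pins an OS thread that the Go runtime replaces but never destroys. A minimal sketch of that mechanism, using a blocking pipe read as a stand-in for a hung unmount:

```go
// threadleak.go: show how goroutines blocked in raw syscalls pin OS
// threads. The blocking pipe read stands in for a umount(2) call that
// hangs on an unreachable NFS server (an assumed failure mode, not a
// copy of the driver's code).
package main

import (
	"fmt"
	"runtime/pprof"
	"syscall"
	"time"
)

func main() {
	threads := pprof.Lookup("threadcreate")
	fmt.Println("threads created at start:", threads.Count())

	for i := 0; i < 50; i++ {
		go func() {
			fds := make([]int, 2)
			if err := syscall.Pipe(fds); err != nil {
				return
			}
			buf := make([]byte, 1)
			// Blocks forever: the goroutine never leaves the syscall,
			// so the OS thread serving it can never be reused.
			syscall.Read(fds[0], buf)
		}()
	}

	time.Sleep(2 * time.Second)
	// The runtime spawned a replacement thread for each pinned one,
	// so the count grows with every hung "unmount".
	fmt.Println("threads created after:", threads.Count())
}
```

If that is what is happening, it would also explain the workaround: deleting the pod kills the process, and every pinned thread dies with it.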

Environment:

  • CSI Driver version: csi-driver-nfs v4.5.0 (image: registry.k8s.io/sig-storage/nfsplugin:v4.5.0)
  • Kubernetes version (use kubectl version): Server Version: v1.28.2
  • OS (e.g. from /etc/os-release): PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
  • Kernel (e.g. uname -a): Linux tech-xrn0e 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux
  • Install tools: flux/helm
  • Others:
