Installation Guide

To enable quick integration in cloud-native environments, Curvine provides a CSI driver. Pod containers can access Curvine through a PV (Persistent Volume) without any application changes, making Curvine's caching capabilities available transparently.

The Curvine CSI driver follows the standard CSI specification and includes:

  • CSI Controller, deployed as a Deployment or StatefulSet
  • CSI Node Plugin, deployed as a DaemonSet

Architecture Overview

The Curvine CSI driver adopts the standard CSI architecture, with two main components plus a FUSE mounting mechanism:

Core Components

  1. CSI Controller

    • Runs in the Control Plane
    • Responsible for Volume creation, deletion, Attach/Detach operations
    • Includes csi-provisioner and csi-attacher sidecars
  2. CSI Node Plugin

    • Runs as DaemonSet on each Worker Node
    • Responsible for mounting Curvine storage to Pods
    • Uses FUSE technology for filesystem mounting
  3. FUSE Mounting Mechanism

    • Directly mounts Curvine filesystem paths
    • Mounts of the same path share a single FUSE process, saving resources
    • Supports concurrent access by multiple Pods

Prerequisites

Environment Requirements

  • Kubernetes 1.19+
  • Helm 3.0+
  • Accessible Curvine cluster (Master node address and port)
  • Cluster administrator privileges

Environment Check

# Check Kubernetes version (note: --short is removed in newer kubectl; use plain kubectl version there)
kubectl version --short

# Check Helm version
helm version --short

# Check node status
kubectl get nodes
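
The checks above cover only the Kubernetes side. As a rough sketch, you can also verify that the Curvine Master is reachable before installing (master1:8995 is a placeholder; substitute your own Master address, and note nc must be available where you run this):

# Check TCP connectivity to a Curvine Master node
nc -vz master1 8995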

I. Installing Curvine CSI

1.1 Get Helm Chart

helm repo add curvine https://curvineio.github.io/helm-charts
helm repo update
helm search repo curvine --devel
helm install curvine-csi curvine/curvine-csi \
  --namespace curvine \
  --create-namespace --devel \
  --version 0.0.1-dev+7ffc6a2
tip

The current Curvine Helm repository provides pre-release versions:

  • Use --devel to list them, and replace the --version value in the command above with the version you want
  • Helm installs curvine-csi into the curvine namespace by default
  • Official release versions will be provided progressively

1.2 Configure Custom Parameters (Optional)

curvine-csi supports a rich set of customization parameters. If your network environment is restricted, you can, for example, pull from a custom image repository.

For example, create a custom-values.yaml file:

# Image configuration
image:
  repository: ghcr.io/curvineio/curvine-csi
  tag: latest
  pullPolicy: IfNotPresent

# Controller configuration
controller:
  replicas: 1
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

# Node configuration - Standalone mode (default)
node:
  mountMode: standalone
  standalone:
    image: ""
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 2
        memory: 2Gi

Or use Embedded mode:

# Node configuration - Embedded mode
node:
  mountMode: embedded
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi

Install with custom parameters using Helm:

helm install curvine-csi curvine/curvine-csi \
  --namespace curvine \
  --create-namespace --devel \
  --values custom-values.yaml

# Check installation status
helm status curvine-csi -n curvine
tip

By default, curvine-csi uses standalone mode, where FUSE mounts run in independent Pods and are unaffected by curvine-csi restarts (recommended); embedded mode is also supported. For architecture details and Helm configuration parameters, refer to Curvine CSI Architecture.

1.3 Upgrade and Uninstall

# Upgrade
helm upgrade curvine-csi curvine/curvine-csi -n curvine --devel --version xxxxx

# Uninstall
helm uninstall curvine-csi -n curvine

# Complete cleanup (including namespace)
kubectl delete namespace curvine
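
Before uninstalling, it is worth confirming that no workloads still depend on Curvine volumes; a quick sketch:

# List PVs and PVCs backed by the curvine driver; remove dependent Pods/PVCs first
kubectl get pv -o wide | grep curvine
kubectl get pvc --all-namespaces | grep curvine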

II. Verification and Status Check

2.1 Check CSI Driver Registration

# Check if CSI Driver is registered successfully
kubectl get csidriver curvine

# Example output:
# NAME      ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY
# curvine   false            false            false

Parameter Explanation:

  • ATTACHREQUIRED: false - No Attach operation needed (direct FUSE mount)
  • PODINFOONMOUNT: false - No Pod information needed during mount
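
These flags correspond to the CSIDriver object the chart registers; based on the defaults listed in the Appendix (csiDriver.name, csiDriver.attachRequired, csiDriver.podInfoOnMount), the object looks roughly like this sketch:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: curvine
spec:
  attachRequired: false
  podInfoOnMount: false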

2.2 Check Controller Status

# Check Controller Deployment
kubectl get deployment -n curvine curvine-csi-controller

# Check Controller Pod
kubectl get pods -n curvine -l app=curvine-csi-controller

# Check Controller logs
kubectl logs -n curvine \
  -l app=curvine-csi-controller \
  -c csi-plugin \
  --tail=50

# Check Provisioner Sidecar logs
kubectl logs -n curvine \
  -l app=curvine-csi-controller \
  -c csi-provisioner \
  --tail=50

2.3 Check Node Plugin Status

# Check Node DaemonSet
kubectl get daemonset -n curvine curvine-csi-node

# Check all Node Plugin Pods
kubectl get pods -n curvine -l app=curvine-csi-node -o wide

# Check specific Node logs
kubectl logs -n curvine curvine-csi-node-xxxxx -c csi-plugin

# Check Node Registrar logs
kubectl logs -n curvine curvine-csi-node-xxxxx -c node-driver-registrar
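
To confirm the driver has registered with the kubelet on every node, you can also inspect the CSINode objects; a sketch using the driver name curvine from the Appendix:

# Print each node and the CSI drivers registered on it
kubectl get csinode -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.drivers[*].name}{"\n"}{end}'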

III. StorageClass Explained

StorageClass is the Kubernetes resource that defines a class of storage; it is used to provision dynamic PVs automatically.

3.1 StorageClass Configuration Example

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: curvine-sc
provisioner: curvine            # CSI driver name
reclaimPolicy: Delete           # Reclaim policy
volumeBindingMode: Immediate    # Binding mode
allowVolumeExpansion: true      # Allow expansion
parameters:
  # Required: Curvine cluster connection information
  master-addrs: "master1:8995,master2:8995,master3:8995"

  # Required: Filesystem path prefix
  fs-path: "/k8s-volumes"

  # Optional: Path creation strategy
  path-type: "DirectoryOrCreate"

  # Optional: FUSE parameters
  io-threads: "4"
  worker-threads: "8"

3.2 Parameter Details

Core Parameters

| Parameter | Required | Description | Example |
|---|---|---|---|
| master-addrs | ✅ | Curvine Master node address list, comma-separated | "10.0.0.1:8995,10.0.0.2:8995" |
| fs-path | ✅ | Path prefix for dynamic PVs; the actual path is fs-path + pv-name | "/k8s-volumes" |
| path-type | ❌ | Path creation strategy, defaults to Directory | "DirectoryOrCreate" (creates the path if it doesn't exist); "Directory" (path must already exist) |
| reclaimPolicy | ❌ | PV reclaim policy, defaults to Delete | "Delete" (deletes the PV and its storage data when the PVC is deleted); "Retain" (PV is retained after PVC deletion) |
| volumeBindingMode | ❌ | PV binding mode, defaults to Immediate | "Immediate" (PV is bound as soon as the PVC is created); "WaitForFirstConsumer" (waits for Pod scheduling before binding) |
| io-threads | ❌ | FUSE IO thread count | "4" |
| worker-threads | ❌ | FUSE worker thread count | "8" |

3.3 Dynamic PV Path Generation Rules

Actual mount path = fs-path + "/" + pv-name

Example:

# StorageClass configuration
fs-path: "/k8s-volumes"

# Auto-generated PV name
pv-name: "pvc-1234-5678-abcd"

# Final Curvine path
Actual path: "/k8s-volumes/pvc-1234-5678-abcd"

3.4 Create StorageClass

Create the storageclass.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: curvine-sc
provisioner: curvine
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  master-addrs: "m0:8995,m1:8995,m2:8995"
  fs-path: "/k8s-volumes"
  path-type: "DirectoryOrCreate"

Apply the configuration:

# Create StorageClass
kubectl apply -f storageclass.yaml

# View StorageClass
kubectl get storageclass curvine-sc

# Set as default StorageClass (optional)
kubectl patch storageclass curvine-sc \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
tip

Replace the master-addrs in the example above with your actual master addresses.

3.5 Multiple StorageClass Scenarios

You can create multiple StorageClasses for different scenarios:

# Production environment - strict mode
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: curvine-prod
provisioner: curvine
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  master-addrs: "prod-master1:8995,prod-master2:8995"
  fs-path: "/production"
  path-type: "Directory"

# Development environment - relaxed mode
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: curvine-dev
provisioner: curvine
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  master-addrs: "dev-master:8995"
  fs-path: "/development"
  path-type: "DirectoryOrCreate"
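
Assuming both manifests above are saved in one file, e.g. storageclasses.yaml (an illustrative name), apply and verify them together:

kubectl apply -f storageclasses.yaml
kubectl get storageclass curvine-prod curvine-dev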

IV. Static PV Usage

A static PV mounts an existing data directory in Curvine and suits the following scenarios:

  • Multiple clusters sharing the same data
  • Need for precise control over data paths

4.1 Working Principle

With a static PV, the administrator creates the PV manually and points it at an existing Curvine path via the curvine-path attribute; the PVC binds to that specific PV by name (volumeName), and the Node Plugin FUSE-mounts the path into the Pod.

4.2 Create Static PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: curvine-pv-existing-data
  labels:
    type: curvine-static
spec:
  storageClassName: curvine-sc
  capacity:
    storage: 100Gi                        # Declared capacity
  accessModes:
    - ReadWriteMany                       # Supports multi-Pod read-write
  persistentVolumeReclaimPolicy: Retain   # Retain data
  csi:
    driver: curvine
    volumeHandle: "existing-data-volume-001"   # Unique identifier
    volumeAttributes:
      # Required: Curvine Master address
      master-addrs: "m0:8995,m1:8995,m2:8995"

      # Required: Complete path in Curvine
      curvine-path: "/production/user-data"

      # Recommended: Use Directory to ensure the path exists
      path-type: "Directory"

      # Optional: FUSE parameters
      io-threads: "4"
      worker-threads: "8"

Parameter Explanation:

  • volumeHandle: Any unique string, used to identify the PV
  • curvine-path: Complete path in the Curvine filesystem; must already exist
  • path-type: "Directory" requires the path to exist (recommended); you can verify it first, as sketched below
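
Since the path must pre-exist, you can check it from the Curvine side with the cv command (also used in Section 5.3); /production/user-data matches the example PV above:

# Confirm the target directory exists in Curvine before creating the PV
./bin/cv fs ls /production/user-data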

4.3 Create Static PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: curvine-pvc-existing-data
  namespace: default
spec:
  storageClassName: curvine-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  # Key: Specify the PV name to bind
  volumeName: curvine-pv-existing-data

4.4 Verify Binding

# Check PV status
kubectl get pv curvine-pv-existing-data
# STATUS should be Bound

# Check PVC status
kubectl get pvc curvine-pvc-existing-data
# STATUS should be Bound

# Check detailed information
kubectl describe pvc curvine-pvc-existing-data

4.5 Using Static PV in Pod

apiVersion: v1
kind: Pod
metadata:
  name: static-pv-test
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: curvine-pvc-existing-data
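
Assuming the manifest above is saved as static-pv-test.yaml (an illustrative file name), apply it and check that the existing data is visible:

kubectl apply -f static-pv-test.yaml
kubectl exec static-pv-test -- ls /data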

V. Dynamic PV Usage

Dynamic PVs are the most commonly used approach; they are created and managed automatically by the CSI Controller.

5.1 Working Principle

When a PVC references a Curvine StorageClass, the CSI Controller (csi-provisioner) automatically creates a PV and the backing Curvine path (fs-path + "/" + pv-name); the Node Plugin then FUSE-mounts that path into any Pod using the PVC.

5.2 Create Dynamic PVC

A dynamic PVC specifies a storageClassName and does not specify volumeName:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-pvc
  namespace: default
spec:
  storageClassName: curvine-sc    # Specify StorageClass
  accessModes:
    - ReadWriteOnce               # or ReadWriteMany
  resources:
    requests:
      storage: 10Gi               # Request capacity

5.3 Automatic Path Generation Example

# StorageClass configuration
fs-path: "/k8s-volumes"

# PVC name
name: my-dynamic-pvc

# Auto-generated PV
# volumeHandle: pvc-1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6

# Actual Curvine path
# /k8s-volumes/pvc-1a2b3c4d-5e6f-7g8h-9i0j-k1l2m3n4o5p6

# You can use Curvine's cv command to check if the volume was created correctly in the Curvine cluster
./bin/cv fs ls /

5.4 Dynamic PV Complete Example

Create the dynamic-pvc.yaml file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pvc
  namespace: default
spec:
  storageClassName: curvine-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

Create the dynamic-pv-pod.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: dynamic-pv-test
spec:
  containers:
    - name: app
      image: nginx:alpine
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data-pvc

Apply the configuration:

# 1. Create dynamic PVC
kubectl apply -f dynamic-pvc.yaml

# 2. Check PVC status (should become Bound automatically)
kubectl get pvc app-data-pvc
kubectl describe pvc app-data-pvc

# 3. Check auto-created PV
kubectl get pv

# 4. Create Pod using PVC
kubectl apply -f dynamic-pv-pod.yaml

# 5. Test write and read
kubectl exec dynamic-pv-test -- sh -c 'echo "Hello Curvine" > /usr/share/nginx/html/index.html'
kubectl exec dynamic-pv-test -- cat /usr/share/nginx/html/index.html
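
When you are done testing, clean up; note that with reclaimPolicy: Delete (the curvine-sc default), deleting the PVC also removes the auto-created PV and its data in Curvine:

# Cleanup
kubectl delete pod dynamic-pv-test
kubectl delete pvc app-data-pvc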

Appendix

Helm Custom Parameters

Global Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| global.namespace | string | curvine | Namespace where the CSI driver is deployed |

Image Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| image.repository | string | ghcr.io/curvineio/curvine-csi | CSI driver image repository address |
| image.tag | string | latest | CSI driver image tag version |
| image.pullPolicy | string | Always | Image pull policy (Always/IfNotPresent/Never) |
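
For example, to pin a specific image tag and pull policy at install time (a sketch; reuses the install command from Section 1.1):

helm upgrade --install curvine-csi curvine/curvine-csi \
  --namespace curvine --create-namespace --devel \
  --set image.tag=latest \
  --set image.pullPolicy=IfNotPresent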

CSI Driver Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| csiDriver.name | string | curvine | CSI driver name identifier |
| csiDriver.attachRequired | boolean | false | Whether an attach operation (volume attachment to the node) is required |
| csiDriver.podInfoOnMount | boolean | false | Whether Pod information is required during mount |

Controller Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| controller.name | string | curvine-csi-controller | Controller Deployment name |
| controller.replicas | int | 1 | Number of Controller replicas |
| controller.priorityClassName | string | system-cluster-critical | Controller priority class name (high priority) |
| controller.container.name | string | csi-plugin | Main container name |
| controller.container.command | array | ["/opt/curvine/csi"] | Container start command |
| controller.container.args | array | See values.yaml | Container start arguments |
| controller.container.env.CSI_ENDPOINT | string | unix:///csi/csi.sock | CSI socket endpoint address |
| controller.container.livenessProbe.failureThreshold | int | 5 | Liveness probe failure threshold |
| controller.container.livenessProbe.initialDelaySeconds | int | 10 | Liveness probe initial delay (seconds) |
| controller.container.livenessProbe.periodSeconds | int | 10 | Liveness probe check period (seconds) |
| controller.container.livenessProbe.timeoutSeconds | int | 3 | Liveness probe timeout (seconds) |
| controller.container.ports.healthz | int | 9909 | Health check port |
| controller.container.securityContext.privileged | boolean | true | Whether to run in privileged mode |
| controller.container.securityContext.capabilities.add | array | [SYS_ADMIN] | Added Linux capabilities |
| controller.tolerations | array | See values.yaml | Pod toleration configuration (allows CriticalAddons scheduling) |

Controller Sidecar Container Configuration

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| controller.sidecars.provisioner.image | string | quay.io/k8scsi/csi-provisioner:v1.6.0 | Provisioner sidecar image |
| controller.sidecars.provisioner.args | array | See values.yaml | Provisioner arguments (timeout 60s, log level v5) |
| controller.sidecars.attacher.image | string | registry.k8s.io/sig-storage/csi-attacher:v4.5.0 | Attacher sidecar image |
| controller.sidecars.attacher.args | array | See values.yaml | Attacher arguments (log level v5) |
| controller.sidecars.livenessProbe.image | string | registry.k8s.io/sig-storage/livenessprobe:v2.11.0 | LivenessProbe sidecar image |
| controller.sidecars.livenessProbe.args | array | See values.yaml | LivenessProbe arguments |
| controller.sidecars.livenessProbe.env.HEALTH_PORT | string | "9909" | Health check port |

Node Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| node.name | string | curvine-csi-node | Node DaemonSet name |
| node.priorityClassName | string | system-node-critical | Node priority class name (node-critical level) |
| node.dnsPolicy | string | ClusterFirstWithHostNet | DNS policy (cluster first + host network) |
| node.mountMode | string | standalone | Mount mode: standalone (independent pod) or embedded (embedded in the CSI container) |
| node.container.name | string | csi-plugin | Main container name |
| node.container.command | array | ["/opt/curvine/csi"] | Container start command |
| node.container.args | array | See values.yaml | Container start arguments |
| node.container.env.CSI_ENDPOINT | string | unix:///csi/csi.sock | CSI socket endpoint address |
| node.container.livenessProbe.failureThreshold | int | 5 | Liveness probe failure threshold |
| node.container.livenessProbe.initialDelaySeconds | int | 10 | Liveness probe initial delay (seconds) |
| node.container.livenessProbe.periodSeconds | int | 10 | Liveness probe check period (seconds) |
| node.container.livenessProbe.timeoutSeconds | int | 3 | Liveness probe timeout (seconds) |
| node.container.ports.healthz | int | 9909 | Health check port |
| node.container.securityContext.privileged | boolean | true | Whether to run in privileged mode |
| node.container.lifecycle.preStop | object | See values.yaml | Container pre-stop hook (cleans up socket files) |
| node.tolerations | array | [{operator: Exists}] | Pod toleration configuration (tolerates all taints) |

Standalone Pod Settings

When node.mountMode is set to standalone, the following settings apply:

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| node.standalone.image | string | "" | Standalone Pod image; empty uses the CSI image |
| node.standalone.resources.requests.cpu | string | "500m" | CPU request |
| node.standalone.resources.requests.memory | string | "512Mi" | Memory request |
| node.standalone.resources.limits.cpu | string | "2" | CPU limit |
| node.standalone.resources.limits.memory | string | "2Gi" | Memory limit |

Example configuration:

node:
  mountMode: standalone
  standalone:
    image: ""    # Empty uses the CSI image
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "2"
        memory: "2Gi"

Node Sidecar Container Configuration

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| node.sidecars.nodeDriverRegistrar.image | string | quay.io/k8scsi/csi-node-driver-registrar:v2.1.0 | Node driver registrar image |
| node.sidecars.nodeDriverRegistrar.args | array | See values.yaml | Registrar arguments (log level v5) |
| node.sidecars.nodeDriverRegistrar.env.ADDRESS | string | /csi/csi.sock | CSI socket address |
| node.sidecars.nodeDriverRegistrar.env.DRIVER_REG_SOCK_PATH | string | /var/lib/kubelet/csi-plugins/csi.curvine.io/csi.sock | Driver registration path in the kubelet |
| node.sidecars.livenessProbe.image | string | registry.k8s.io/sig-storage/livenessprobe:v2.11.0 | LivenessProbe sidecar image |
| node.sidecars.livenessProbe.args | array | See values.yaml | LivenessProbe arguments |
| node.sidecars.livenessProbe.env.ADDRESS | string | /csi/csi.sock | CSI socket address |
| node.sidecars.livenessProbe.env.HEALTH_PORT | string | "9909" | Health check port |

Node Host Path Configuration

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| node.hostPaths.pluginDir.path | string | /var/lib/kubelet/csi-plugins/csi.curvine.io/ | CSI plugin directory path |
| node.hostPaths.pluginDir.type | string | DirectoryOrCreate | Path type (create if it doesn't exist) |
| node.hostPaths.kubeletDir.path | string | /var/lib/kubelet | Kubelet working directory path |
| node.hostPaths.kubeletDir.type | string | DirectoryOrCreate | Path type (create if it doesn't exist) |
| node.hostPaths.registrationDir.path | string | /var/lib/kubelet/plugins_registry/ | Plugin registration directory path |
| node.hostPaths.registrationDir.type | string | Directory | Path type (must exist) |

Service Account Settings

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| serviceAccount.controller.name | string | curvine-csi-controller-sa | Controller service account name |
| serviceAccount.node.name | string | curvine-csi-node-sa | Node service account name |

RBAC Configuration

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| rbac.create | boolean | true | Whether to create RBAC resources (ClusterRole/ClusterRoleBinding) |

ConfigMap Configuration (Curvine Cluster Configuration)

| Parameter Path | Type | Default | Description |
|---|---|---|---|
| configMap.name | string | curvine-config | ConfigMap name |
| configMap.defaultMode | octal | 0755 | Default file permission mode |
| configMap.data.curvineClusterToml | string | See values.yaml | Curvine cluster configuration file (TOML format) |
| configMap.data.curvineEnvSh | string | See values.yaml | Curvine environment variables script |

Curvine Cluster Configuration Parameters in ConfigMap

| Configuration Item | Type | Default | Description |
|---|---|---|---|
| client.master_addrs | array | [{hostname: "localhost", port: 8995}] | Curvine Master node address list |
| log.level | string | "info" | Log level (debug/info/warn/error) |
| log.log_dir | string | "stdout" | Log output directory (stdout for standard output) |
| log.file_name | string | "curvine.log" | Log file name |
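
For reference, a minimal sketch of what the curvineClusterToml content might look like, assuming the TOML keys map one-to-one to the items above (the exact layout ships in values.yaml):

[client]
master_addrs = [{ hostname = "localhost", port = 8995 }]

[log]
level = "info"
log_dir = "stdout"
file_name = "curvine.log"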

StorageClass Parameters (Configured at Usage Time)

| Parameter Name | Required | Type | Default | Description |
|---|---|---|---|---|
| master-addrs | Required | string | None | Curvine Master node addresses, format: host:port,host:port |
| fs-path | Required | string | None | Filesystem path prefix; each PV is created at fs-path + pv-name |
| path-type | Optional | string | Directory | Path creation strategy: DirectoryOrCreate or Directory |
| io-threads | Optional | string | None | FUSE IO thread count |
| worker-threads | Optional | string | None | FUSE worker thread count |