The Kubernetes project suggests that you use the OpenStack Cinder third party storage driver instead. configMap A ConfigMap provides a way to inject configuration data into pods. The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod. When referencing a ConfigMap, you provide the name of the ConfigMap in the volume. You can customize the path to use for a specific entry in the ConfigMap. The following configuration shows how to mount the log-config ConfigMap onto a Pod called configmap-pod : apiVersion : v1 kind: Pod metadata : name : configmap-pod spec: containers : - name : test image : busybox:1.28 command : ['sh', '-c', 'echo "The app is running!" && tail -f /dev/null' ] volumeMounts : - name : config-vol mountPath : /etc/config volumes : - name : config-vol configMap : name : log-config items : - key: log_lev
el
          path: log_level

The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level . Note that this path is derived from the volume's mountPath and the path keyed with log_level .
Note: You must create a ConfigMap before you can use it. A ConfigMap is always mounted as readOnly . A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates. Text data is exposed as files using the UTF-8 character encoding. For other character encodings, use binaryData .

downwardAPI
A downwardAPI volume makes downward API data available to applications. Within the volume, you can find the exposed data as read-only files in plain text format.
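As an illustration, here is a minimal sketch of a downwardAPI volume; the Pod name and label are hypothetical. It exposes the Pod's labels as a read-only file at /etc/podinfo/labels:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-example      # hypothetical name
  labels:
    zone: us-east-coast
spec:
  containers:
    - name: client-container
      image: busybox:1.28
      command: ['sh', '-c', 'cat /etc/podinfo/labels && sleep 3600']
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels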
Note: A container using the downward API as a subPath volume mount does not receive updates when field values change. See Expose Pod Information to Containers Through Files to learn more. emptyDir For a Pod that defines an emptyDir volume, the volume is created when the Pod is assigned to a node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently. Note: A container crashing does not remove a Pod from a node. The data in an emptyDir volume is safe across container crashes. Some uses for an emptyDir are: scratch space, such as for a disk-based merge sort checkpointing a long computation for recovery from crashes holding files that a content-manager container fetches while a webserver container serves the data T
he emptyDir.medium field controls where emptyDir volumes are stored. By default emptyDir volumes are stored on whatever medium that backs the node such as disk, SSD, or network storage, depending on your environment. If you set the emptyDir.medium field to "Memory" , Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast be aware that, unlike disks, files you write count against the memory limit of the container that wrote them. A size limit can be specified for the default medium, which limits the capacity of the emptyDir volume. The storage is allocated from node ephemeral storage . If that is filled up from another source (for example, log files or image overlays), the emptyDir may run out of capacity before this limit. Note: If the SizeMemoryBackedVolumes feature gate is enabled, you can specify a size for memory backed volumes. If no size is specified, memory backed volumes are sized to node allocatable memory. emptyDir configuration exa
mple apiVersion : v1 kind: Pod metadata : name : test-pd spec: containers : - image : registry.k8s.io/test-webserver name : test-container volumeMounts : - mountPath : /cache name : cache-volume volumes :• •
- name : cache-volume emptyDir : sizeLimit : 500Mi fc (fibre channel) An fc volume type allows an existing fibre channel block storage volume to mount in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your Volume configuration. If multiple WWNs are specified, targetWWNs expect that those WWNs are from multi-path connections. Note: You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them. See the fibre channel example for more details. gcePersistentDisk (removed) Kubernetes 1.29 does not include a gcePersistentDisk volume type. The gcePersistentDisk in-tree storage driver was deprecated in the Kubernetes v1.17 release and then removed entirely in the v1.28 release. The Kubernetes project suggests that you use the Google Compute Engine Persistent Disk CSI third party storage driver instead. gitRepo (deprecated) Warning: The
gitRepo volume type is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. A gitRepo volume is an example of a volume plugin. This plugin mounts an empty directory and clones a git repository into this directory for your Pod to use. Here is an example of a gitRepo volume: apiVersion : v1 kind: Pod metadata : name : server spec: containers : - image : nginx name : nginx volumeMounts : - mountPath : /mypath name : git-volume volumes : - name : git-volume gitRepo : repository : "git@somewhere:me/my-git-repository.git" revision : "22f1d8406d464b0c0874075539c1f2e96c253775
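The replacement recommended in the warning above can be sketched as follows. This is a minimal, hypothetical example (the alpine/git image, repository URL, and names are placeholders): an init container clones the repository into an emptyDir volume, which the main container then mounts.

apiVersion: v1
kind: Pod
metadata:
  name: git-clone-example          # hypothetical name
spec:
  initContainers:
    - name: clone-repo
      image: alpine/git              # any image that provides the git CLI
      args: ['clone', '--single-branch', '--', 'https://example.com/me/my-git-repository.git', '/repo']
      volumeMounts:
        - name: repo-volume
          mountPath: /repo
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: repo-volume
          mountPath: /mypath
  volumes:
    - name: repo-volume
      emptyDir: {}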
glusterfs (removed) Kubernetes 1.29 does not include a glusterfs volume type. The GlusterFS in-tree storage driver was deprecated in the Kubernetes v1.25 release and then removed entirely in the v1.26 release. hostPath A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications. Warning: Using the hostPath volume type presents many security risks. If you can avoid using a hostPath volume, you should. For example, define a local PersistentVolume , and use that instead. If you are restricting access to specific directories on the node using admission-time validation, that restriction is only effective when you additionally require that any mounts of that hostPath volume are read only . If you allow a read-write mount of any host path by an untrusted Pod, the containers in that Pod may be able to subvert the read-write host mount. Take care when u
sing hostPath volumes, whether these are mounted as read-only or as read- write, because: Access to the host filesystem can expose privileged system credentials (such as for the kubelet) or privileged APIs (such as the container runtime socket), that can be used for container escape or to attack other parts of the cluster. Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes. Some uses for a hostPath are: running a container that needs access to node-level system components (such as a container that transfers system logs to a central location, accessing those logs using a read-only mount of /var/log ) making a configuration file stored on the host system available read-only to a static pod ; unlike normal Pods, static Pods cannot access ConfigMaps hostPath volume types In addition to the required path property, you can optionally specify a type for a hostPath volume. The available values
for type are:

Value               Behavior
""                  Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume.
DirectoryOrCreate   If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet.
Directory           A directory must exist at the given path.
FileOrCreate        If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet.
File                A file must exist at the given path.
Socket              A UNIX socket must exist at the given path.
CharDevice          (Linux nodes only) A character device must exist at the given path.
BlockDevice         (Linux nodes only) A block device must exist at the given path.

Caution: The FileOrCreate mode does not create the parent directory of the file. If the parent directory of the mounted file does not exist, the pod fails to start. To ensure that this mode works, you can try to mount directories and files separately, as shown in the FileOrCreate example for hostPath .

Some files or directories created on the
underlying hosts might only be accessible by root. You then either need to run your process as root in a privileged container or modify the file permissions on the host to be able to read from (or write to) a hostPath volume. hostPath configuration example Linux node Windows node --- # This manifest mounts /data/foo on the host as /foo inside the # single container that runs within the hostpath-example-linux Pod. # # The mount into the container is read-only. apiVersion : v1 kind: Pod metadata : name : hostpath-example-linux spec: os: { name : linux } nodeSelector : kubernetes.io/os : linux containers : - name : example-container image : registry.k8s.io/test-webserver volumeMounts : - mountPath : /foo name : example-volume readOnly : true volumes : - name : example-volume # mount /data/foo, but only if that directory already exists hostPath :•
path: /data/foo # directory location on host type: Directory # this field is optional --- # This manifest mounts C:\Data\foo on the host as C:\foo, inside the # single container that runs within the hostpath-example-windows Pod. # # The mount into the container is read-only. apiVersion : v1 kind: Pod metadata : name : hostpath-example-windows spec: os: { name : windows } nodeSelector : kubernetes.io/os : windows containers : - name : example-container image : microsoft/windowsservercore:1709 volumeMounts : - name : example-volume mountPath : "C:\\foo" readOnly : true volumes : # mount C:\Data\foo from the host, but only if that directory already exists - name : example-volume hostPath : path: "C:\\Data\\foo" # directory location on host type: Directory # this field is optional hostPath FileOrCreate configuration example The following manifest defines a Pod that mounts /var/local/aaa inside the single containe
r in the Pod. If the node does not already have a path /var/local/aaa , the kubelet creates it as a directory and then mounts it into the Pod. If /var/local/aaa already exists but is not a directory, the Pod fails. Additionally, the kubelet attempts to make a file named /var/local/aaa/1.txt inside that directory (as seen from the host); if something already exists at that path and isn't a regular file, the Pod fails. Here's the example manifest: apiVersion : v1 kind: Pod metadata : name : test-webserver spec: os: { name : linux } nodeSelector : kubernetes.io/os : linux containers : - name : test-webserve
image : registry.k8s.io/test-webserver:latest volumeMounts : - mountPath : /var/local/aaa name : mydir - mountPath : /var/local/aaa/1.txt name : myfile volumes : - name : mydir hostPath : # Ensure the file directory is created. path: /var/local/aaa type: DirectoryOrCreate - name : myfile hostPath : path: /var/local/aaa/1.txt type: FileOrCreate iscsi An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. Unlike emptyDir , which is erased when a Pod is removed, the contents of an iscsi volume are preserved and the volume is merely unmounted. This means that an iscsi volume can be pre- populated with data, and that data can be shared between pods. Note: You must have your own iSCSI server running with the volume created before you can use it. A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume wi
th your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed. See the iSCSI example for more details. local A local volume represents a mounted local storage device such as a disk, partition or directory. Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported. Compared to hostPath volumes, local volumes are used in a durable and portable manner without manually scheduling pods to nodes. The system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume. However, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod. The pod using this volume is unable to run. Applications using local volumes must be able
to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk. The following example shows a PersistentVolume using a local volume and nodeAffinity:
apiVersion : v1 kind: PersistentVolume metadata : name : example-pv spec: capacity : storage : 100Gi volumeMode : Filesystem accessModes : - ReadWriteOnce persistentVolumeReclaimPolicy : Delete storageClassName : local-storage local : path: /mnt/disks/ssd1 nodeAffinity : required : nodeSelectorTerms : - matchExpressions : - key: kubernetes.io/hostname operator : In values : - example-node You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node. PersistentVolume volumeMode can be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device. When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer . For more details, see the local StorageClass example. Delaying volume binding
ensures that the PersistentVolumeClaim binding decision will also be evaluated with any other node constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity. An external static provisioner can be run separately for improved management of the local volume lifecycle. Note that this provisioner does not support dynamic provisioning yet. For an example on how to run an external local provisioner, see the local volume provisioner user guide . Note: The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle. nfs An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir , which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre- populated with data, and that data can be shared between pods
. NFS can be mounted by multiple writers simultaneously. apiVersion : v1 kind: Pod
metadata : name : test-pd spec: containers : - image : registry.k8s.io/test-webserver name : test-container volumeMounts : - mountPath : /my-nfs-data name : test-volume volumes : - name : test-volume nfs: server : my-nfs-server.example.com path: /my-nfs-volume readOnly : true Note: You must have your own NFS server running with the share exported before you can use it. Also note that you can't specify NFS mount options in a Pod spec. You can either set mount options server-side or use /etc/nfsmount.conf . You can also mount NFS volumes via PersistentVolumes which do allow you to set mount options. See the NFS example for an example of mounting NFS volumes with PersistentVolumes. persistentVolumeClaim A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumeClaims are a way for users to "claim" durable storage (such as an iSCSI volume) without knowing the details of the particular cloud environmen
t. See the information about PersistentVolumes for more details. portworxVolume (deprecated) FEATURE STATE: Kubernetes v1.25 [deprecated] A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare metal Linux nodes. A portworxVolume can be dynamically created through Kubernetes or it can also be pre- provisioned and referenced inside a Pod. Here is an example Pod referencing a pre-provisioned Portworx volume: apiVersion : v1 kind: Pod metadata : name : test-portworx-volume-pod spec: containers
- image : registry.k8s.io/test-webserver name : test-container volumeMounts : - mountPath : /mnt name : pxvol volumes : - name : pxvol # This Portworx volume must already exist. portworxVolume : volumeID : "pxvol" fsType : "<fs-type>" Note: Make sure you have an existing PortworxVolume with name pxvol before using it in the Pod. For more details, see the Portworx volume examples. Portworx CSI migration FEATURE STATE: Kubernetes v1.25 [beta] The CSIMigration feature for Portworx has been added but disabled by default in Kubernetes 1.23 since it's in alpha state. It has been beta now since v1.25 but it is still turned off by default. It redirects all plugin operations from the existing in-tree plugin to the pxd.portworx.com Container Storage Interface (CSI) Driver. Portworx CSI Driver must be installed on the cluster. To enable the feature, set CSIMigrationPortworx=true in kube-controller-manager and kubelet. projected A projected volume m
aps several existing volume sources into the same directory. For more details, see projected volumes . rbd FEATURE STATE: Kubernetes v1.28 [deprecated] Note: The Kubernetes project suggests that you use the Ceph CSI third party storage driver instead, in RBD mode. An rbd volume allows a Rados Block Device (RBD) volume to mount into your Pod. Unlike emptyDir , which is erased when a pod is removed, the contents of an rbd volume are preserved and the volume is unmounted. This means that a RBD volume can be pre-populated with data, and that data can be shared between pods. Note: You must have a Ceph installation running before you can use RBD. A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.
See the RBD example for more details.
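For reference, a minimal sketch of an in-tree rbd volume in a Pod; the monitor address, pool, image name, and keyring path are placeholder values and must match your own Ceph installation:

apiVersion: v1
kind: Pod
metadata:
  name: rbd-example                # hypothetical name
spec:
  containers:
    - name: rbd-ro
      image: busybox:1.28
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - name: rbdpd
          mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
          - '10.16.154.78:6789'    # placeholder Ceph monitor address
        pool: kube
        image: foo
        user: admin
        keyring: /etc/ceph/keyring
        fsType: ext4
        readOnly: true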
RBD CSI migration FEATURE STATE: Kubernetes v1.28 [deprecated] The CSIMigration feature for RBD , when enabled, redirects all plugin operations from the existing in-tree plugin to the rbd.csi.ceph.com CSI driver. In order to use this feature, the Ceph CSI driver must be installed on the cluster and the CSIMigrationRBD feature gate must be enabled. (Note that the csiMigrationRBD flag has been removed and replaced with CSIMigrationRBD in release v1.24) Note: As a Kubernetes cluster operator that administers storage, here are the prerequisites that you must complete before you attempt migration to the RBD CSI driver: You must install the Ceph CSI driver ( rbd.csi.ceph.com ), v3.5.0 or above, into your Kubernetes cluster. considering the clusterID field is a required parameter for CSI driver for its operations, but in-tree StorageClass has monitors field as a required parameter, a Kubernetes storage admin has to create a clusterID based on the monitors hash ( ex: #echo -n '<mon
itors_string>' | md5sum ) in the CSI config map and keep the monitors under this clusterID configuration. Also, if the value of adminId in the in-tree Storageclass is different from admin , the adminSecretName mentioned in the in-tree Storageclass has to be patched with the base64 value of the adminId parameter value, otherwise this step can be skipped. secret A secret volume is used to pass sensitive information, such as passwords, to Pods. You can store secrets in the Kubernetes API and mount them as files for use by pods without coupling to Kubernetes directly. secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage. Note: You must create a Secret in the Kubernetes API before you can use it. A Secret is always mounted as readOnly . A container using a Secret as a subPath volume mount will not receive Secret updates. For more details, see Configuring Secrets . vsphereVolume (deprecated) Note: The Kubernetes project recomm
ends using the vSphere CSI out-of-tree storage driver instead. A vsphereVolume is used to mount a vSphere VMDK volume into your Pod. The contents of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastore. For more information, see the vSphere volume examples.• • • • •
vSphere CSI migration
FEATURE STATE: Kubernetes v1.26 [stable]
In Kubernetes 1.29, all operations for the in-tree vsphereVolume type are redirected to the csi.vsphere.vmware.com CSI driver. The vSphere CSI driver must be installed on the cluster. You can find additional advice on how to migrate in-tree vsphereVolume in VMware's documentation page Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in . If the vSphere CSI Driver is not installed, volume operations cannot be performed on a PV created with the in-tree vsphereVolume type. You must run vSphere 7.0u2 or later in order to migrate to the vSphere CSI driver. If you are running a version of Kubernetes other than v1.29, consult the documentation for that version of Kubernetes.
Note: The following StorageClass parameters from the built-in vsphereVolume plugin are not supported by the vSphere CSI driver: diskformat, hostfailurestotolerate, forceprovisioning, cachereservation, diskstripes, objectspacereservation, iopslimi
t Existing volumes created using these parameters will be migrated to the vSphere CSI driver, but new volumes created by the vSphere CSI driver will not be honoring these parameters. vSphere CSI migration complete FEATURE STATE: Kubernetes v1.19 [beta] To turn off the vsphereVolume plugin from being loaded by the controller manager and the kubelet, you need to set InTreePluginvSphereUnregister feature flag to true. You must install a csi.vsphere.vmware.com CSI driver on all worker nodes. Using subPath Sometimes, it is useful to share one volume for multiple uses in a single pod. The volumeMounts[*].subPath property specifies a sub-path inside the referenced volume instead of its root. The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. This sample subPath configuration is not recommended for production use. The PHP application's code and assets map to the volume's html folder and the MySQL database is sto
red in the volume's mysql folder. For example:
apiVersion : v1 kind: Pod metadata : name : my-lamp-site spec: containers : - name : mysql image : mysql env: - name : MYSQL_ROOT_PASSWORD value : "rootpasswd" volumeMounts : - mountPath : /var/lib/mysql name : site-data subPath : mysql - name : php image : php:7.0-apache volumeMounts : - mountPath : /var/www/html name : site-data subPath : html volumes : - name : site-data persistentVolumeClaim : claimName : my-lamp-site-data Using subPath with expanded environment variables FEATURE STATE: Kubernetes v1.17 [stable] Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive. In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods . The hostPath volume takes the Pod name from the downwardAPI . The hos
t directory /var/log/pods/pod1 is mounted at /logs in the container. apiVersion : v1 kind: Pod metadata : name : pod1 spec: containers : - name : container1 env: - name : POD_NAME valueFrom : fieldRef : apiVersion : v1 fieldPath : metadata.name image : busybox:1.28 command : [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ] volumeMounts
- name : workdir1 mountPath : /logs # The variable expansion uses round brackets (not curly brackets). subPathExpr : $(POD_NAME) restartPolicy : Never volumes : - name : workdir1 hostPath : path: /var/log/pods Resources The storage media (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet ). There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between containers or between pods. To learn about requesting space using a resource specification, see how to manage resources . Out-of-tree volume plugins The out-of-tree volume plugins include Container Storage Interface (CSI), and also FlexVolume (which is deprecated). These plugins enable storage vendors to create custom storage plugins without adding their plugin source code to the Kubernetes repository. Previously, all volume plugins were "in-tree". The "in-tree" pl
ugins were built, linked, compiled, and shipped with the core Kubernetes binaries. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository. Both CSI and FlexVolume allow volume plugins to be developed independent of the Kubernetes code base, and deployed (installed) on Kubernetes clusters as extensions. For storage vendors looking to create an out-of-tree volume plugin, please refer to the volume plugin FAQ . csi Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads. Please read the CSI design proposal for more information. Note: Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes v1.13 and will be removed in a future release. Note: CSI drivers may not be compatible across all Kubernetes releases. Please check the specific CSI driver's documentation for s
upported deployments steps for each Kubernetes release and a compatibility matrix. Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach or mount the volumes exposed by the CSI driver
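As an illustration, here is a minimal sketch of a pre-provisioned PersistentVolume backed by a CSI driver. The driver name csi.example.com, the volume handle, and the volume attribute are hypothetical placeholders; the individual csi fields are described below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-csi-pv             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.example.com          # placeholder; must match the name reported by the CSI driver
    volumeHandle: vol-0123456789     # placeholder; unique ID of the volume in the storage backend
    fsType: ext4
    readOnly: false
    volumeAttributes:
      tier: standard                 # placeholder static property passed to the driver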
A csi volume can be used in a Pod in three different ways: through a reference to a PersistentVolumeClaim with a generic ephemeral volume with a CSI ephemeral volume if the driver supports that The following fields are available to storage administrators to configure a CSI persistent volume: driver : A string value that specifies the name of the volume driver to use. This value must correspond to the value returned in the GetPluginInfoResponse by the CSI driver as defined in the CSI spec . It is used by Kubernetes to identify which CSI driver to call out to, and by CSI driver components to identify which PV objects belong to the CSI driver. volumeHandle : A string value that uniquely identifies the volume. This value must correspond to the value returned in the volume.id field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec . The value is passed as volume_id on all calls to the CSI volume driver when referencing the volume. readOnly : An optional boolean v
alue indicating whether the volume is to be "ControllerPublished" (attached) as read only. Default is false. This value is passed to the CSI driver via the readonly field in the ControllerPublishVolumeRequest . fsType : If the PV's VolumeMode is Filesystem then this field may be used to specify the filesystem that should be used to mount the volume. If the volume has not been formatted and formatting is supported, this value will be used to format the volume. This value is passed to the CSI driver via the VolumeCapability field of ControllerPublishVolumeRequest , NodeStageVolumeRequest , and NodePublishVolumeRequest . volumeAttributes : A map of string to string that specifies static properties of a volume. This map must correspond to the map returned in the volume.attributes field of the CreateVolumeResponse by the CSI driver as defined in the CSI spec . The map is passed to the CSI driver via the volume_context field in the ControllerPublishVolumeRequest , NodeStageVolumeR
equest , and NodePublishVolumeRequest . controllerPublishSecretRef : A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the Secret contains more than one secret, all secrets are passed. nodeExpandSecretRef : A reference to the secret containing sensitive information to pass to the CSI driver to complete the CSI NodeExpandVolume call. This field is optional, and may be empty if no secret is required. If the object contains more than one secret, all secrets are passed. When you have configured secret data for node-initiated volume expansion, the kubelet passes that data via the NodeExpandVolume() call to the CSI driver. In order to use the nodeExpandSecretRef field, your cluster should be running Kubernetes version 1.25 or later. If you are running Kubernetes Version 1.25 or 1.26, you must ena
ble the feature gate named CSINodeExpandSecret for each kube-apiserver and for the kubelet on every node. In Kubernetes version 1.27 this feature has been enabled by default and no explicit enablement of the feature gate is required. You must also be using a CSI driver that supports or requires secret data during node-initiated storage resize operations. nodePublishSecretRef : A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed. nodeStageSecretRef : A reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume call. This field is optional,• • • • • • • • • • • •
and may be empty if no secret is required. If the Secret contains more than one secret, all secrets are passed. CSI raw block volume support FEATURE STATE: Kubernetes v1.18 [stable] Vendors with external CSI drivers can implement raw block volume support in Kubernetes workloads. You can set up your PersistentVolume/PersistentVolumeClaim with raw block volume support as usual, without any CSI specific changes. CSI ephemeral volumes FEATURE STATE: Kubernetes v1.25 [stable] You can directly configure CSI volumes within the Pod specification. Volumes specified in this way are ephemeral and do not persist across pod restarts. See Ephemeral Volumes for more information. For more information on how to develop a CSI driver, refer to the kubernetes-csi documentation Windows CSI proxy FEATURE STATE: Kubernetes v1.22 [stable] CSI node plugins need to perform various privileged operations like scanning of disk devices and mounting of file systems. These operations differ for each host operatin
g system. For Linux worker nodes, containerized CSI node plugins are typically deployed as privileged containers. For Windows worker nodes, privileged operations for containerized CSI node plugins is supported using csi-proxy , a community-managed, stand-alone binary that needs to be pre- installed on each Windows node. For more details, refer to the deployment guide of the CSI plugin you wish to deploy. Migrating to CSI drivers from in-tree plugins FEATURE STATE: Kubernetes v1.25 [stable] The CSIMigration feature directs operations against existing in-tree plugins to corresponding CSI plugins (which are expected to be installed and configured). As a result, operators do not have to make any configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims (referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin. Note: Existing PVs created by a in-tree volume plugin can still be used in the future without any con
figuration changes, even after the migration to CSI is completed for that volume type, and even after you upgrade to a version of Kubernetes that doesn't have compiled-in support for that kind of storage
As part of that migration, you - or another cluster administrator - must have installed and configured the appropriate CSI driver for that storage. The core of Kubernetes does not install that software for you. After that migration, you can also define new PVCs and PVs that refer to the legacy, built-in storage integrations. Provided you have the appropriate CSI driver installed and configured, the PV creation continues to work, even for brand new volumes. The actual storage management now happens through the CSI driver. The operations and features that are supported include: provisioning/delete, attach/detach, mount/unmount and resizing of volumes. In-tree plugins that support CSIMigration and have a corresponding CSI driver implemented are listed in Types of Volumes . The following in-tree plugins support persistent storage on Windows nodes: azureFile gcePersistentDisk vsphereVolume flexVolume (deprecated) FEATURE STATE: Kubernetes v1.23 [deprecated] FlexVolume is an out-of-tree p
lugin interface that uses an exec-based model to interface with storage drivers. The FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node and in some cases the control plane nodes as well. Pods interact with FlexVolume drivers through the flexVolume in-tree volume plugin. For more details, see the FlexVolume README document. The following FlexVolume plugins , deployed as PowerShell scripts on the host, support Windows nodes:
• SMB
• iSCSI
Note: FlexVolume is deprecated. Using an out-of-tree CSI driver is the recommended way to integrate external storage with Kubernetes. Maintainers of FlexVolume drivers should implement a CSI driver and help to migrate users of FlexVolume drivers to CSI. Users of FlexVolume should move their workloads to use the equivalent CSI driver.

Mount propagation
Mount propagation allows for sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.
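For orientation, here is a minimal, hypothetical sketch showing where mount propagation is configured; the Pod name, image, and host path are placeholders, and the mountPropagation field and its values are described below.

apiVersion: v1
kind: Pod
metadata:
  name: propagation-example        # hypothetical name
spec:
  containers:
    - name: main
      image: busybox:1.28
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - name: shared-dir
          mountPath: /mnt/shared
          mountPropagation: HostToContainer   # one of the modes described below
  volumes:
    - name: shared-dir
      hostPath:
        path: /mnt/shared            # placeholder host directory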
Mount propagation of a volume is controlled by the mountPropagation field in containers[*].volumeMounts . Its values are: None - This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container will be visible on the host. This is the default mode. This mode is equal to rprivate mount propagation as described in mount(8) However, the CRI runtime may choose rslave mount propagation (i.e., HostToContainer ) instead, when rprivate propagation is not applicable. cri-dockerd (Docker) is known to choose rslave mount propagation when the mount source contains the Docker daemon's root directory ( /var/lib/docker ). HostToContainer - This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the container will see it mounted there. Similarly, if
any Pod with Bidirectional mount propagation to the same volume mounts anything there, the container with HostToContainer mount propagation will see it. This mode is equal to rslave mount propagation as described in the mount(8) Bidirectional - This volume mount behaves the same the HostToContainer mount. In addition, all volume mounts created by the container will be propagated back to the host and to all containers of all pods that use the same volume. A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation as described in the mount(8) Warning: Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by containers in pods must be destroyed (un
mounted) by the containers on termination.

What's next
Follow an example of deploying WordPress and MySQL with Persistent Volumes .

Persistent Volumes
This document describes persistent volumes in Kubernetes. Familiarity with volumes , StorageClasses and VolumeAttributesClasses is suggested.
Introduction Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim. A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes . It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resou
rces (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod, see AccessModes ). While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource. See the detailed walkthrough with working examples . Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle: Provisioning There are two ways PVs may be provisioned: statically or dynamically. Static A
cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption. Dynamic When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves
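As an illustration, a minimal sketch of a claim that asks for dynamic provisioning; the claim and class names here are hypothetical, and the named StorageClass must already have been created and configured by an administrator:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-dynamic-claim      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast             # placeholder; names a StorageClass configured for dynamic provisioning
  resources:
    requests:
      storage: 30Gi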
To enable dynamic storage provisioning based on storage class, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server. This can be done, for example, by ensuring that DefaultStorageClass is among the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component. For more information on API server command-line flags, check kube-apiserver documentation. Binding A user creates, or in the case of dynamic provisioning, has already created, a PersistentVolumeClaim with a specific amount of storage requested and with certain access modes. A control loop in the control plane watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, PersistentVol
umeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim. Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. Using Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod. Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim section in a Pod's volumes bl
ock. See Claims As Volumes for more details on this. Storage Object in Use Protection The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod and PersistentVolume (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss. Note: PVC is in active use by a Pod when a Pod object exists that is using the PVC. If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection : kubectl describe pvc hostpath Name: hostpath Namespace: defaul
StorageClass: example-hostpath Status: Terminating Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-class =example-hostpath volume.beta.kubernetes.io/storage-provisioner =example.com/hostpath Finalizers: [kubernetes.io/pvc-protection ] ... You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too: kubectl describe pv task-pv-volume Name: task-pv-volume Labels: type=local Annotations: <none> Finalizers: [kubernetes.io/pv-protection ] StorageClass: standard Status: Terminating Claim: Reclaim Policy: Delete Access Modes: RWO Capacity: 1Gi Message: Source: Type: HostPath (bare host directory volume ) Path: /tmp/data HostPathType: Events: <none> Reclaiming When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of
the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.

Retain
The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
1. Delete the PersistentVolume. The associated storage asset in external infrastructure still exists after the PV is deleted.
2. Manually clean up the data on the associated storage asset accordingly.
3. Manually delete the associated storage asset.
If you want to reuse the same storage asset, create a new PersistentVolume with the same storage asset definition.
Delete For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass , which defaults to Delete . The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created. See Change the Reclaim Policy of a PersistentVolume . Recycle Warning: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub ( rm -rf /thevolume/* ) on the volume and makes it available again for a new claim. However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described in the referen
ce . The custom recycler Pod template must contain a volumes specification, as shown in the example below: apiVersion : v1 kind: Pod metadata : name : pv-recycler namespace : default spec: restartPolicy : Never volumes : - name : vol hostPath : path: /any/path/it/will/be/replaced containers : - name : pv-recycler image : "registry.k8s.io/busybox" command : ["/bin/sh" , "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1" ] volumeMounts : - name : vol mountPath : /scrub However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled. PersistentVolume deletion protection finalizer FEATURE STATE: Kubernetes v1.23 [alpha] Finalizers can be added on a PersistentVolume to ensure that PersistentVolumes having Delete reclaim policy are deleted only after the backing storage are deleted
. The newly introduced finalizers kubernetes.io/pv-controller and external- provisioner.volume.kubernetes.io/finalizer are only added to dynamically provisioned volumes
The finalizer kubernetes.io/pv-controller is added to in-tree plugin volumes. The following is an example kubectl describe pv pvc-74a498d6-3929-47e8-8c02-078c1ece4d78 Name: pvc-74a498d6-3929-47e8-8c02-078c1ece4d78 Labels: <none> Annotations: kubernetes.io/createdby: vsphere-volume-dynamic-provisioner pv.kubernetes.io/bound-by-controller: yes pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume Finalizers: [kubernetes.io/pv-protection kubernetes.io/pv-controller ] StorageClass: vcp-sc Status: Bound Claim: default/vcp-pvc-1 Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 1Gi Node Affinity: <none> Message: Source: Type: vSphereVolume (a Persistent Disk resource in vSphere ) VolumePath: [vsanDatastore ] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes- dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk FSType:
ext4 StoragePolicyName: vSAN Default Storage Policy Events: <none> The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes. The following is an example: Name: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d Labels: <none> Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com Finalizers: [kubernetes.io/pv-protection external-provisioner.volume.kubernetes.io/finalizer ] StorageClass: fast Status: Bound Claim: demo-app/nginx-logs Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 200Mi Node Affinity: <none> Message: Source: Type: CSI (a Container Storage Interface (CSI) volume source ) Driver: csi.vsphere.vmware.com FSType: ext4 VolumeHandle: 44830fa8-79b4-406b-8b58-621ba25353fd ReadOnly: false VolumeAttributes: storage.kubernetes.io/csiProvisionerIdent
ity =1648442357185-8081- csi.vsphere.vmware.com type=vSphere CNS Block Volume Events: <none
When the CSIMigration{provider} feature flag is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external- provisioner.volume.kubernetes.io/finalizer finalizer. Reserving a PersistentVolume The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them. By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound. The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class , access modes, and requested storage size are valid. apiVersion : v1 kind: PersistentVolumeClaim metadata : name : foo-pvc namespace : foo
spec: storageClassName : "" # Empty string must be explicitly set otherwise default StorageClass will be set volumeName : foo-pv ... This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the claimRef field of the PV so that other PVCs can not bind to it. apiVersion : v1 kind: PersistentVolume metadata : name : foo-pv spec: storageClassName : "" claimRef : name : foo-pvc namespace : foo ... This is useful if you want to consume PersistentVolumes that have their persistentVolumeReclaimPolicy set to Retain , including cases where you are reusing an existing PV. Expanding Persistent Volumes Claims FEATURE STATE: Kubernetes v1.24 [stable
Support for expanding PersistentVolumeClaims (PVCs) is enabled by default. You can expand the following types of volumes: azureFile (deprecated) csi flexVolume (deprecated) rbd (deprecated) portworxVolume (deprecated) You can only expand a PVC if its storage class's allowVolumeExpansion field is set to true. apiVersion : storage.k8s.io/v1 kind: StorageClass metadata : name : example-vol-default provisioner : vendor-name.example/magicstorage parameters : resturl : "http://192.168.10.100:8080" restuser : "" secretNamespace : "" secretName : "" allowVolumeExpansion : true To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never created to satisfy the claim. Instead, an existing volume is resized. Warning: Directly editing the size of a PersistentVolume can prevent an automatic resize of that volume. If you edit the capacity of a Persiste
ntVolume, and then edit the .spec of a matching PersistentVolumeClaim to make the size of the PersistentVolumeClaim match the PersistentVolume, then no storage resize happens. The Kubernetes control plane will see that the desired state of both resources matches, conclude that the backing volume size has been manually increased and that no resize is necessary. CSI Volume expansion FEATURE STATE: Kubernetes v1.24 [stable] Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information. Resizing a volume containing a file system You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4. When a volume contains a file system, the file system is only resized when a new Pod is using the PersistentVolumeClaim in ReadWrite mode. File system expansion is either done when a Pod is starting up or when a Pod is running and the
underlying file system supports online expansion. FlexVolumes (deprecated since Kubernetes v1.23) allow resize if the driver is configured with the RequiresFSResize capability set to true. The FlexVolume can be resized on Pod restart.
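To make the expansion request described above concrete, here is a minimal sketch. It assumes an existing claim named example-dynamic-claim whose StorageClass (here the placeholder fast) has allowVolumeExpansion: true; raising spec.resources.requests.storage triggers the resize of the backing volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-dynamic-claim      # hypothetical existing claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast             # placeholder class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 60Gi                  # raised from the original request to trigger expansion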
Resizing an in-use PersistentVolumeClaim FEATURE STATE: Kubernetes v1.24 [stable] In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC. Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded. This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that uses the PVC before the expansion can complete. Similar to other volume types - FlexVolume volumes can also be expanded when in-use by a Pod. Note: FlexVolume resize is possible only when the underlying driver supports resize. Recovering from Failure when Expanding Volumes If a user specifies a new size that is too big to be satisfied by underlying storage system, expansion of PVC will be continuously retried until user or cluster administrator takes some action. This can be undesirable and hence Kubernetes provides following methods of recovering from such failures. Manually with Cluster Adminis
trator access
• By requesting expansion to a smaller size

If expanding the underlying storage fails, the cluster administrator can manually recover the PersistentVolumeClaim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention.
1. Mark the PersistentVolume (PV) that is bound to the PersistentVolumeClaim (PVC) with the Retain reclaim policy.
2. Delete the PVC. Since the PV has the Retain reclaim policy, no data is lost when the PVC is recreated.
3. Delete the claimRef entry from the PV spec so that a new PVC can bind to it. This should make the PV Available .
4. Re-create the PVC with a smaller size than the PV and set the volumeName field of the PVC to the name of the PV. This should bind the new PVC to the existing PV.
5. Don't forget to restore the reclaim policy of the PV.

FEATURE STATE: Kubernetes v1.23 [alpha]
Note: Recovery from failing PVC expansion by users is available as an alpha feature since Kubernetes 1.23. The RecoverVolumeExpansionFailure feature must be enabled for this feature to work. Refer to the feature gate documentation for more information.
If the RecoverVolumeExpansionFailure feature gate is enabled in your cluster, and expansion has failed for a PVC, you can retry expansion with a smaller size than the previously requested value. To request a new expansion attempt with a smaller proposed size, edit .spec.resources for that PVC and choose a value that is less than the value you previously tried. This is useful if expansion to a higher value did not succeed because of a capacity constraint. If that has happened, or you suspect that it might have, you can retry expansion by specifying a size that is within the capacity limits of the underlying storage provider. You can monitor the status of the resize operation by watching .status.allocatedResourceStatuses and events on the PVC.
Note that, although you can specify a lower amount of storage than what was requested previously, the new value must still be higher than .status.capacity . Kubernetes does not support shrinking a PVC to less than its current size. Types of Persistent Volumes PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins: csi - Container Storage Interface (CSI) fc - Fibre Channel (FC) storage hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi- node cluster; consider using local volume instead) iscsi - iSCSI (SCSI over IP) storage local - local storage devices mounted on nodes. nfs - Network File System (NFS) storage The following types of PersistentVolume are deprecated. This means that support is still available but will be removed in a future Kubernetes release. azureFile - Azure File ( deprecated in v1.21) flexVolume - FlexVolume ( deprecated in v1.23) portworxVolume - Portworx volume ( deprecated in v1
.25) vsphereVolume - vSphere VMDK volume ( deprecated in v1.19) cephfs - CephFS volume ( deprecated in v1.28) rbd - Rados Block Device (RBD) volume ( deprecated in v1.28) Older versions of Kubernetes also supported the following in-tree PersistentVolume types: awsElasticBlockStore - AWS Elastic Block Store (EBS) ( not available in v1.27) azureDisk - Azure Disk ( not available in v1.27) cinder - Cinder (OpenStack block storage) ( not available in v1.26) photonPersistentDisk - Photon controller persistent disk. ( not available starting v1.15) scaleIO - ScaleIO volume. ( not available starting v1.21) flocker - Flocker storage. ( not available starting v1.25) quobyte - Quobyte volume. ( not available starting v1.25) storageos - StorageOS volume. ( not available starting v1.25) Persistent Volumes Each PV contains a spec and status, which is the specification and status of the volume. The name of a PersistentVolume object must be a valid DNS subdomain name . apiVersion :
871
v1 kind: PersistentVolume metadata : name : pv0003 spec: capacity : storage : 5Gi volumeMode : Filesystem accessModes :• • • • • • • • • • • • • • • • • • •
872
- ReadWriteOnce persistentVolumeReclaimPolicy : Recycle storageClassName : slow mountOptions : - hard - nfsvers=4.1 nfs: path: /tmp server : 172.17.0.2 Note: Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems. Capacity Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute which is a Quantity value. Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc. Volume Mode FEATURE STATE: Kubernetes v1.18 [stable] Kubernetes supports two volumeModes of PersistentVolumes: Filesystem and Block . volumeMode is an optional API parameter. Filesystem is the default mode used when volumeMode parameter is omitted. A volume with volumeMode: File
A volume with volumeMode: Filesystem is mounted into Pods as a directory. If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time.

You can set the value of volumeMode to Block to use a volume as a raw block device. Such a volume is presented to a Pod as a block device, without any filesystem on it. This mode is useful to provide a Pod the fastest possible way to access a volume, without any filesystem layer between the Pod and the volume. On the other hand, the application running in the Pod must know how to handle a raw block device. See Raw Block Volume Support for an example of how to use a volume with volumeMode: Block in a Pod.

Access Modes

A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS
can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.

The access modes are:
ReadWriteOnce
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node. For single-pod access, please see ReadWriteOncePod.

ReadOnlyMany
the volume can be mounted as read-only by many nodes.

ReadWriteMany
the volume can be mounted as read-write by many nodes.

ReadWriteOncePod
FEATURE STATE: Kubernetes v1.29 [stable]
the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it.

Note: The ReadWriteOncePod access mode is only supported for CSI volumes and Kubernetes version 1.22+. To use this feature you will need to update the following CSI sidecars to these versions or greater:

• csi-provisioner:v3.0.0+
• csi-attacher:v3.3.0+
• csi-resizer:v1.3.0+

In the CLI, the access modes are abbreviated to:

• RWO - ReadWriteOnce
• ROX - ReadOnlyMany
• RWX - ReadWriteMany
• RWOP - ReadWriteOncePod
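As a minimal sketch (the claim name below is hypothetical), a claim requesting ReadWriteOncePod looks like any other PVC; only the access mode differs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteOncePod           # only one Pod in the whole cluster may use this claim
  resources:
    requests:
      storage: 1Gi

As noted above, the claim must be served by a CSI driver for this mode to take effect.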
Note: Kubernetes uses volume access modes to match PersistentVolumeClaims and PersistentVolumes. In some cases, the volume access modes also constrain where the PersistentVolume can be mounted. Volume access modes do not enforce write protection once the storage has been mounted. Even if the access modes are specified as ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, they don't set any constraints on the volume. For example, even if a PersistentVolume is created as ReadOnlyMany, it is no guarantee that it will be read-only. If the access modes are specified as ReadWriteOncePod, the volume is constrained and can be mounted on only a single Pod.

Important! A volume can only be mounted using one access mode at a time, even if it supports many.

Volume Plugin     ReadWriteOnce          ReadOnlyMany           ReadWriteMany                        ReadWriteOncePod
AzureFile         ✓                      ✓                      ✓                                    -
CephFS            ✓                      ✓                      ✓                                    -
CSI               depends on the driver  depends on the driver  depends on the driver                depends on the driver
FC                ✓                      ✓                      -                                    -
FlexVolume        ✓                      ✓                      depends on the driver                -
HostPath          ✓                      -                      -                                    -
iSCSI             ✓                      ✓                      -                                    -
NFS               ✓                      ✓                      ✓                                    -
RBD               ✓                      ✓                      -                                    -
VsphereVolume     ✓                      -                      - (works when Pods are collocated)   -
PortworxVolume    ✓                      -                      ✓                                    -

Class

A PV can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release.

Reclaim Policy

Current reclaim policies are:

• Retain -- manual reclamation
• Recycle -- basic scrub (rm -rf /thevolume/*)
• Delete -- delete the volume

For Kubernetes 1.29, only nfs and hostPath volume types support recycling.
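The reclaim policy of an existing PV can also be changed after creation. For example, assuming the pv0003 volume shown earlier, a cluster administrator could switch it to manual reclamation with kubectl patch pv pv0003 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'; this is only an illustrative sketch, and the reclaim policy can equally be set in the PV manifest as shown above.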
Mount Options

A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node.

Note: Not all Persistent Volume types support mount options.

The following volume types support mount options:

• azureFile
• cephfs (deprecated in v1.28)
• cinder (deprecated in v1.18)
• iscsi
• nfs
• rbd (deprecated in v1.28)
• vsphereVolume

Mount options are not validated. If a mount option is invalid, the mount fails.

In the past, the annotation volume.beta.kubernetes.io/mount-options was used instead of the mountOptions attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release.
Node Affinity

Note: For most volume types, you do not need to set this field. You need to explicitly set this for local volumes.

A PV can specify node affinity to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity. To specify node affinity, set nodeAffinity in the .spec of a PV. The PersistentVolume API reference has more details on this field.
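As a hedged sketch of what this looks like for a local volume (the PV name, storage class, device path, and node name below are all hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv          # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage # assumed StorageClass name
  local:
    path: /mnt/disks/ssd1         # hypothetical path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node    # hypothetical node name

With this node affinity in place, Pods that use a claim bound to this PV can only be scheduled onto example-node.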
Phase

A PersistentVolume will be in one of the following phases:

Available
a free resource that is not yet bound to a claim
Bound
the volume is bound to a claim
Released
the claim has been deleted, but the associated storage resource is not yet reclaimed by the cluster
Failed
the volume has failed its (automated) reclamation

You can see the name of the PVC bound to the PV using kubectl describe persistentvolume <name>.

Phase transition timestamp

FEATURE STATE: Kubernetes v1.29 [beta]

The .status field for a PersistentVolume can include a lastPhaseTransitionTime field. This field records the timestamp of when the volume last transitioned its phase. For newly created volumes the phase is set to Pending and lastPhaseTransitionTime is set to the current time.

Note: You need to enable the PersistentVolumeLastPhaseTransitionTime feature gate to use or see the lastPhaseTransitionTime field.

PersistentVolumeClaims

Each PVC contains a spec and status, which is the specification and status of the claim. The name of a PersistentVolumeClaim object must be a valid DNS subdomain name.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}

Access Modes

Claims use the same conventions as volumes when requesting storage with specific access modes.

Volume Modes

Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device.

Resources

Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to both volumes and claims.

Selector

Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:

• matchLabels - the volume must have a label with this value
• matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
All of the requirements, from both matchLabels and matchExpressions, are ANDed together – they must all be satisfied in order to match.

Class

A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.

If the admission plugin is turned on, the administrator may specify a default StorageClass. All PVCs that have no storageClassName can be bound only to PVs of that default.
Specifying a default StorageClass is done by setting the annotation
storageclass.kubernetes.io/is-default-class equal to true in a StorageClass object. If the administrator does not specify a default, the cluster responds to PVC creation as if the admission plugin were turned off. If more than one default StorageClass is specified, the newest default is used when the PVC is dynamically provisioned.

If the admission plugin is turned off, there is no notion of a default StorageClass. All PVCs that have storageClassName set to "" can be bound only to PVs that have storageClassName also set to "". However, PVCs with a missing storageClassName can be updated later once a default StorageClass becomes available. If the PVC gets updated it will no longer bind to PVs that have storageClassName also set to "". See retroactive default StorageClass assignment for more details.

Depending on the installation method, a default StorageClass may be deployed to a Kubernetes cluster by the addon manager during installation.
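As an illustrative sketch (the class name and provisioner below are hypothetical), a default StorageClass carries that annotation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-default-class          # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner   # hypothetical provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer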
When a PVC specifies a selector in addition to requesting a StorageClass, the requirements are ANDed together: only a PV of the requested class and with the requested labels may be bound to the PVC.

Note: Currently, a PVC with a non-empty selector can't have a PV dynamically provisioned for it.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it won't be supported in a future Kubernetes release.

Retroactive default StorageClass assignment

FEATURE STATE: Kubernetes v1.28 [stable]

You can create a PersistentVolumeClaim without specifying a storageClassName for the new PVC, and you can do so even when no default StorageClass exists in your cluster. In this case, the new PVC creates as you defined it, and the storageClassName of that PVC remains unset until a default becomes available.

When a default StorageClass becomes available, the control plane identifies any existing PVCs without storageClassName. For the PVCs that
either have an empty value for storageClassName or do not have this key, the control plane then updates those PVCs to set storageClassName to match the new default StorageClass. If you have an existing PVC where the storageClassName is "", and you configure a default StorageClass, then this PVC will not get updated.

In order to keep binding to PVs with storageClassName set to "" (while a default StorageClass is present), you need to set the storageClassName of the associated PVC to "".

This behavior helps administrators change the default StorageClass by removing the old one first and then creating or setting another one. This brief window while there is no default causes PVCs without storageClassName created at that time to not have any default, but due to the retroactive default StorageClass assignment this way of changing defaults is safe.
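A minimal sketch of such an opt-out claim (the name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: no-class-pvc              # hypothetical name
spec:
  storageClassName: ""            # explicitly request a PV with no class; the default class is not applied
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi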
Claims As Volumes

Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is then mounted to the host and into the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

A Note on Namespaces

PersistentVolume binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (ROX, RWX) is only possible within one namespace.

PersistentVolumes typed hostPath

A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage. See an example of hostPath typed volume.

Raw Block Volume Support
FEATURE STATE: Kubernetes v1.18 [stable]

The following volume plugins support raw block volumes, including dynamic provisioning where applicable:

• CSI
• FC (Fibre Channel)
• iSCSI
• Local volume
• OpenStack Cinder
• RBD (Ceph Block Device; deprecated)
• VsphereVolume
PersistentVolume using a Raw Block Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false

PersistentVolumeClaim requesting a Raw Block Volume

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi

Pod specification adding Raw Block Device path in container

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc
Note: When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.

Binding Block Volumes

If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec. Listed is a table of possible combinations the user and admin might specify for requesting a raw block device. The table indicates if the volume will be bound or not given the combinations:

Volume binding matrix for statically provisioned volumes:

PV volumeMode   PVC volumeMode   Result
unspecified     unspecified      BIND
unspecified     Block            NO BIND
unspecified     Filesystem       BIND
Block           unspecified      NO BIND
Block           Block            BIND
Block           Filesystem       NO BIND
Filesystem      Filesystem       BIND
Filesystem      Block            NO BIND
Filesystem      unspecified      BIND

Note: Only statically provisioned volumes are supported for the alpha release. Administrators should take care to
consider these values when working with raw block devices.

Volume Snapshot and Restore Volume from Snapshot Support

FEATURE STATE: Kubernetes v1.20 [stable]

Volume snapshots only support the out-of-tree CSI volume plugins. For details, see Volume Snapshots. In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the Volume Plugin FAQ.

Create a PersistentVolumeClaim from a Volume Snapshot

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Volume Cloning

Volume Cloning is only available for CSI volume plugins.

Create PersistentVolumeClaim from an existing PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Volume populators and data sources

FEATURE STATE: Kubernetes v1.24 [beta]

Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.

Volume populators take advantage of a PVC spec field called dataSourceRef. Unlike the dataSource field, which can only contain either a reference to another PersistentVolumeClaim or to a VolumeSnapshot, the dataSourceRef field can contain a reference to any object in the same namespace, except for
core objects other than PVCs. For clusters that have the feature gate enabled, use of the dataSourceRef is preferred over dataSource.

Cross namespace data sources

FEATURE STATE: Kubernetes v1.26 [alpha]

Kubernetes supports cross namespace volume data sources. To use cross namespace volume data sources, you must enable the AnyVolumeDataSource and CrossNamespaceVolumeDataSource feature gates for the kube-apiserver and kube-controller-manager. Also, you must enable the CrossNamespaceVolumeDataSource feature gate for the csi-provisioner.

Enabling the CrossNamespaceVolumeDataSource feature gate allows you to specify a namespace in the dataSourceRef field.
Note: When you specify a namespace for a volume data source, Kubernetes checks for a ReferenceGrant in the other namespace before accepting the reference. ReferenceGrant is part of the gateway.networking.k8s.io extension APIs. See ReferenceGrant in the Gateway API documentation for details. This means that you must extend your Kubernetes cluster with at least ReferenceGrant from the Gateway API before you can use this mechanism.

Data source references

The dataSourceRef field behaves almost the same as the dataSource field. If one is specified while the other is not, the API server will give both fields the same value. Neither field can be changed after creation, and attempting to specify different values for the two fields will result in a validation error. Therefore the two fields will always have the same contents.

There are two differences between the dataSourceRef field and the dataSource field that users should be aware of:
• The dataSource field ignores invalid values (as if the field was blank) while the dataSourceRef field never ignores values and will cause an error if an invalid value is used. Invalid values are any core object (objects with no apiGroup) except for PVCs.
• The dataSourceRef field may contain different types of objects, while the dataSource field only allows PVCs and VolumeSnapshots.

When the CrossNamespaceVolumeDataSource feature is enabled, there are additional differences:

• The dataSource field only allows local objects, while the dataSourceRef field allows objects in any namespaces.
• When namespace is specified, dataSource and dataSourceRef are not synced.

Users should always use dataSourceRef on clusters that have the feature gate enabled, and fall back to dataSource on clusters that do not. It is not necessary to look at both fields under any circumstance. The duplicated values with slightly different semantics exist only for backwards compatibility. In particular, a mixture of older and newer controllers are able to
interoperate because the fields are the same.

Using volume populators

Volume populators are controllers that can create non-empty volumes, where the contents of the volume are determined by a Custom Resource. Users create a populated volume by referring to a Custom Resource using the dataSourceRef field:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSourceRef:
    name: example-name
    kind: ExampleDataSource
    apiGroup: example.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the PVC to provide feedback on the status of the creation, including warnings if the PVC cannot be created due to some missing component.

You can install the alpha volume data source validator controller into your cluster. That controller generates warning Events on a PVC in the case that no populator is registered to handle that kind of data source. When a suitable populator is installed for a PVC, it's the responsibility of that populator controller to report Events that relate to volume creation and issues during the process.

Using a cross-namespace volume data source

FEATURE STATE: Kubernetes v1.26 [alpha]

Create a ReferenceGrant to allow the namespace owner to accept the reference. You define a populated volume by specifying a cross namespace volume data s