Configuring OpenShift Container Storage with local disks on workers
Configuration guide for demo / lab environments to create dynamically provisionable storage.
Author(s): Adam Bulla | Created: 11 June 2020 | Last modified: 11 June 2020
Tested on: Red Hat OpenShift Container Platform v4.3.0+
Table of contents
- Introduction
- References
- Prerequisites
- Configuring Local Storage
- Installing and configuring OpenShift Container Storage

Introduction
This note is meant to be an easy-to-follow guide for creating dynamically provisionable storage in an OpenShift deployment. Especially in lab and development environments, when a cloud provider is not available or a fully on-premises deployment is preferable, dynamically provisionable storage is hard to come by. To solve this problem, we are going to use OpenShift Container Storage with a configuration that can be set up easily, even in an on-premises deployment.
It will use local storage devices provided by the worker nodes, with a Ceph filesystem consuming all of them. Pods' PVCs can then be fulfilled using the new volumes provided by the Ceph filesystem.
References
This guide heavily relies on information, code snippets, and experience gained from these sources:
- OCS 4.2 in OCP 4.2.14 - UPI installation in RHV
- Deploying your storage backend using OpenShift Container Storage 4
- Deploying OpenShift Container Storage using Local Devices
The goal of this guide is to provide an easy-to-follow reference for the configuration, instead of having to read through three separate guides and piece together the necessary information.
Prerequisites
The following prerequisites have to be met for this guide:
- Internet access, or a locally available repository, from which the Operators can be installed and the necessary pods can be deployed.
- At least 3 worker nodes that will provide storage (it might work with fewer if the replica clause of the YAMLs is adjusted)
- Storage-providing worker nodes can still function as normal compute nodes
- These nodes must provide dedicated raw block devices.
- At least two storage devices per storage node: one for monitoring, one for CephFS
- Monitoring: 100 GB (might work with as little as 10 GB)
- Storage: 2 TB (might work with arbitrarily small sizes, but at least 300 GB is advised)
Configuring Local Storage
- Create the new workers, or add the new storage devices to the workers intended to be used for storage
- Note: With VMware, disks usually do not have a UUID, but it can be enabled. This is not necessary, but advised
- To enable VMware disk UUIDs, follow this guide: https://sort.veritas.com/public/documents/sfha/6.2/vmwareesx/productguides/html/sfhas_virtualization/ch10s05s01.htm
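If the govc CLI is available, the same setting can also be applied from the command line. This is only a sketch: VM_NAME is a placeholder, and the VM typically has to be powered off for the extra config change to take effect.
govc vm.change -vm VM_NAME -e disk.enableUUID=TRUE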
- IMPORTANT: If UUIDs are not enabled, then the disks intended to be in the same replica group (e.g. all storage disks) MUST have the same device name (e.g. /dev/sdb) on every matched node
- Create the local-storage namespace
- Install the local storage operator through the Operator Hub, and wait while the pods start up
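For example, the namespace can be created from the CLI, and the operator pods checked once the installation starts:
oc create namespace local-storage
oc get pods -n local-storage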
- Tag the storage-providing worker nodes with the label cluster.ocs.openshift.io/openshift-storage=""
- This is to easily identify the workers that are intended to provide storage.
oc label node NODE_FQDN cluster.ocs.openshift.io/openshift-storage=""
- Or add the label cluster.ocs.openshift.io/openshift-storage to the node on the console
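To double-check which nodes carry the label, a selector query can be used:
oc get nodes -l cluster.ocs.openshift.io/openshift-storage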
- Optional: Gather the UUIDs of the storage devices
- This is only needed if these IDs will be used to identify the storage devices, but this is the recommended way.
- Log in to the worker machine
- Run lsblk -> identify the device names (/dev/sdX) of both storage devices
- Run ls -hal /dev/disk/by-id/ -> gather the by-id paths for the storage devices
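If direct SSH access to the workers is not configured, a debug pod can be used instead to run the same commands (a sketch; NODE_FQDN is a placeholder):
oc debug node/NODE_FQDN
# inside the debug shell:
chroot /host
lsblk
ls -hal /dev/disk/by-id/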
- Create the local-storage-filesystem.yaml for the monitoring local filesystem
- The matchExpressions can use any kind of expression, but it is easier with the label set in the previous point
- The devicePaths clause can also use other paths for the disks (e.g. /dev/sdX); the important part is that they must identify the storage devices across all matched nodes
- It is recommended to use the by-id paths for the disks
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-disks-fs"
namespace: "local-storage"
spec:
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: cluster.ocs.openshift.io/openshift-storage
operator: In
values:
- ""
storageClassDevices:
- storageClassName: "local-sc"
volumeMode: Filesystem
devicePaths:
- /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
- /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
- /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
- Run oc create -f local-storage-filesystem.yaml
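A quick sanity check at this point (the exact output depends on the cluster): the LocalVolume should be listed, and the local-sc storage class should appear.
oc get localvolumes -n local-storage
oc get sc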
- Create the local-storage-block.yml for the OSD volumes
- As with the previous YAML, different matchExpressions and devicePaths can be used, but this is the recommended configuration
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
  labels:
    app: ocs-storagecluster
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values:
        - ""
  storageClassDevices:
  - storageClassName: localblock
    volumeMode: Block
    devicePaths:
    - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
    - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
    - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
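- Run oc create -f local-storage-block.yml to apply this manifest as well, analogous to the previous step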
- Wait until all pods and PVs are created. In the console, you should see the new PVs listed, with the Claim column still empty.
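The same can be checked from the CLI; the new PVs should show STATUS Available and an empty CLAIM column:
oc get pods -n local-storage
oc get pv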
Installing and configuring OpenShift Container Storage
- Add this label (topology.rook.io/rack=rackX) to the storage nodes, where X is different for each node
oc label node NODE_FQDN "topology.rook.io/rack=rackX" --overwrite
- Create the namespace openshift-storage, and add the label openshift.io/cluster-monitoring: "true" to it.
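For example, from the CLI:
oc create namespace openshift-storage
oc label namespace openshift-storage openshift.io/cluster-monitoring="true"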
- Install the OpenShift Container Storage operator from the Operator Hub in the openshift-storage namespace, and wait until the pods are ready
- Create the OCS cluster config YAML ocs-cluster-config.yaml
- The resources and limits in the example YAML file are not needed; they can be used, though, if the worker nodes are resource limited. The values listed should be the minimal working values (as of writing).
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi # Storage space allocated for the monitoring part
      storageClassName: 'local-sc' # Storage class for the monitoring part
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Ti # Storage space allocated for the storage part
        storageClassName: 'localblock' # Storage class for the storage part
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
- Apply the YAML config with oc create -f ocs-cluster-config.yaml and wait for all pods to start up.
- You can verify that the configuration was successful if the PVs created in the last section are now bound, and two new storage classes have been created, named ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs.
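To confirm end to end that dynamic provisioning works, a throwaway PVC can be created against one of the new storage classes. This is only a minimal sketch; the PVC name and the default namespace are arbitrary placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
After applying it with oc create -f, the PVC should reach the Bound state within a short time, with its backing PV provisioned from the Ceph pool; it can then be deleted, since it only serves as a smoke test.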