Configuring OpenShift Container Storage with local disks on workers

Configuration guide for demo / lab environments to create dynamically provisionable storage.


Author(s): Adam Bulla | Created: 11 June 2020 | Last modified: 11 June 2020
Tested on: Red Hat OpenShift Container Platform v4.3.0+

Introduction

This note is meant to be an easy-to-follow guide for creating dynamically provisionable storage in an OpenShift deployment. Especially in lab and development environments, where a cloud provider is not available or a fully on-premises deployment is preferable, dynamically provisionable storage is hard to come by. To solve this problem, we are going to use OpenShift Container Storage with a configuration that can be set up easily, even in an on-premises deployment.

The setup uses local storage devices provided by the worker nodes, with a Ceph cluster consuming all of them. Pods' PVCs can then be fulfilled using the new volumes provided by the Ceph filesystem.
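For illustration, once the configuration in this guide is complete, a claim like the following should be bound dynamically without any manual PV creation. This is a minimal sketch: the PVC name, namespace, and size are illustrative, and ocs-storagecluster-ceph-rbd is one of the storage classes created at the end of this guide.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim                  # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # illustrative size
  storageClassName: ocs-storagecluster-ceph-rbd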

References

This guide heavily relies on information, code snippets, and experience gained from these sources:

The goal of this guide is to provide a single, easy-to-follow reference for the configuration, instead of reading through three guides and piecing together the necessary information.

Prerequisites

The following prerequisites have to be met for this guide:

  • Internet access or a locally available repository, from which the Operators can be installed and the necessary pods can be deployed.
  • At least 3 worker nodes that will provide storage (might work with fewer if the replica clause of the YAMLs is adjusted)
    • Storage-providing worker nodes can still function as normal compute nodes
    • These nodes must provide dedicated raw block devices.
  • At least two storage devices: one for monitoring, one for CephFS
    • Monitoring: 100 GB (might work with as little as 10 GB)
    • Storage: 2 TB (might work with arbitrarily small sizes, but at least 300 GB is advised)

Configuring Local Storage

  1. Create the new workers, or add the new storage devices to the workers intended to be used for storage
  2. Create the local-storage namespace (a declarative manifest sketch is shown after the first LocalVolume YAML below)
  3. Install the Local Storage Operator through the Operator Hub, and wait for the pods to start up
  4. Tag the storage-providing worker nodes with the label cluster.ocs.openshift.io/openshift-storage=""
    • This is to easily identify the workers that are intended to provide storage.
    • oc label node NODE_FQDN cluster.ocs.openshift.io/openshift-storage=""
    • Or add the label cluster.ocs.openshift.io/openshift-storage to the node on the console
  5. Optional: Gather the IDs of the storage devices
    • This is only needed if the by-id paths will be used to identify the storage devices, but this is the recommended way.
    • Log in to the worker machine
      • Run lsblk to identify the device names (/dev/sdX) of both storage devices
      • Run ls -hal /dev/disk/by-id/ to gather the by-id paths for the storage devices
  6. Create the local-storage-filesystem.yaml for the monitoring local filesystem
    • The matchExpressions can use any kind of expression, but it is easiest with the label set in step 4
    • The devicePaths clause can also use other paths for the disks (e.g. /dev/sdX); the important part is that they must identify the storage devices on all matched nodes.
    • It is recommended to use the by-id paths for disks
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks-fs"
  namespace: "local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In                                                                          
            values:
              - "" 
  storageClassDevices:
    - storageClassName: "local-sc"
      volumeMode: Filesystem
      devicePaths:
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
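As noted in step 2, the local-storage namespace can be created either on the console or from a manifest applied with oc create -f; a minimal sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: local-storage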
  7. Run oc create -f local-storage-filesystem.yaml
  8. Create the local-storage-block.yaml for the OSD volumes
    • As with the previous YAML, different matchExpressions and devicePaths can be used, but this is the recommended approach
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
  labels:
    app: ocs-storagecluster
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
        - /dev/disk/by-id/<REPLACE_WITH_DISK_PATH>
  9. Run oc create -f local-storage-block.yaml
  10. Wait until all pods and PVs are created. On the console, you should see something similar to the following, except with the Claim column empty:

(Screenshot: Persistent Volumes view in the console)

Installing and configuring OpenShift Container Storage

  1. Add this label (topology.rook.io/rack=rackX) to the storage nodes, where X is different for each node
    • oc label node NODE_FQDN "topology.rook.io/rack=rackX" --overwrite
  2. Create the namespace openshift-storage, and add the label openshift.io/cluster-monitoring: "true" to it (a declarative manifest sketch is shown after the cluster config YAML below)
  3. Install the OpenShift Container Storage operator from the Operator Hub into the openshift-storage namespace, and wait until the pods are ready
  4. Create the OCS cluster config YAML ocs-cluster-config.yaml
    • The resources and limits in the example YAML file are not required; they can be used, though, if the worker nodes are resource constrained. The values listed should be the minimal working values (as of writing).
      
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi              # Storage space allocated for the monitoring part
      storageClassName: 'local-sc'    # Storage class for the monitoring part
      volumeMode: Filesystem
  storageDeviceSets:
    - count: 1
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Ti                 # Storage space allocated for the storage part
          storageClassName: 'localblock'   # Storage class for the storage part
          volumeMode: Block
      name: ocs-deviceset
      placement: {}
      portable: true
      replica: 3
      resources:
        limits:
          cpu: 1
          memory: 4Gi
        requests:
          cpu: 1
          memory: 4Gi
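As noted in step 2 above, the openshift-storage namespace and its monitoring label can also be created from a manifest applied with oc create -f before installing the operator; a minimal sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"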
  5. Run the YAML config with oc create -f ocs-cluster-config.yaml and wait for all pods to start up.
  6. You can verify that the configuration was successful: the PVs created in the previous section should now be bound, as in the picture below, and two new storage classes should exist, named ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs

(Screenshot: Persistent Volumes view in the console, with the PVs now bound)
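As an optional final check, a test PVC against one of the new storage classes should bind automatically. The sketch below is minimal and its name, namespace, and size are illustrative; it uses the CephFS class, which also supports shared ReadWriteMany access:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-claim           # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany                 # the CephFS-backed class supports shared access
  resources:
    requests:
      storage: 5Gi                  # illustrative size
  storageClassName: ocs-storagecluster-cephfs

If PVCs that do not name a storage class should also be served by this setup, the storageclass.kubernetes.io/is-default-class: "true" annotation can be added to one of the new storage classes.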