Install Red Hat OpenShift Container Platform 4.4.3 in a disconnected network environment on bare metal VMs

Installation guide for demo / lab environment purposes, not for production.


Author(s): Tamas Bures | Created: 27 May 2020 | Last modified: 27 May 2020
Tested on: Red Hat OpenShift Container Platform v4.4.3

Install and configure Red Hat OpenShift Container Platform 4.4.3 in a restricted network environment on bare metal servers

In this tutorial, I'll show you how to install the OpenShift platform in a separate network environment (no internet connection) using bare metal (virtual) servers hosted on ESXi 6.7.

Prerequisites

  • A working ESXi environment (my ESXi host): 10.109.10.101
  • Uploaded ISO files to ESXi datastore (see Resources)

Network

I will use a dedicated network for this installation. The subnet is: 10.109.200.0/24 and the domain is sechu.ibm.

Machines

In this setup, I will use 3 master and 3 worker nodes. The list below shows all the machines I will create and configure. Yes, quite a lot...

  • DNS
    • Hostname: dns.cp4s.sechu.ibm
    • IP: 10.109.200.53
  • Webserver
    • Hostname: cdn.cp4s.sechu.ibm
    • IP: 10.109.200.44
  • Terminal
    • Hostname: terminal.cp4s.sechu.ibm
    • IP: 10.109.200.222
  • Load Balancer
    • Hostnames:
      • lb.cp4s.sechu.ibm
      • api.cp4s.sechu.ibm
      • api-int.cp4s.sechu.ibm
      • *.apps.cp4s.sechu.ibm
    • IP: 10.109.200.20
  • Master 1
    • Hostnames:
      • m1.cp4s.sechu.ibm
      • etcd-0.cp4s.sechu.ibm
    • IP: 10.109.200.80
  • Master 2
    • Hostnames:
      • m2.cp4s.sechu.ibm
      • etcd-1.cp4s.sechu.ibm
    • IP: 10.109.200.90
  • Master 3
    • Hostnames:
      • m3.cp4s.sechu.ibm
      • etcd-2.cp4s.sechu.ibm
    • IP: 10.109.200.100
  • Worker 1
    • Hostname: w1.cp4s.sechu.ibm
    • IP: 10.109.200.180
  • Worker 2
    • Hostname: w2.cp4s.sechu.ibm
    • IP: 10.109.200.190
  • Worker 3
    • Hostname: w3.cp4s.sechu.ibm
    • IP: 10.109.200.200
  • Bootstrap
    • Hostname: bootstrap.cp4s.sechu.ibm
    • IP: 10.109.200.33

Resources

Download the following files and resources:

  • Red Hat Enterprise Linux x86_64 7.5 Server installer (*.iso)
  • Red Hat Enterprise Linux CoreOS x86_64 4.4.3 boot file (*.iso)
  • Red Hat Enterprise Linux CoreOS x86_64 4.4.3 Metal BIOS Raw file (*.gz)
  • OpenShift4 Client Tools (Linux binaries)
  • Pull Secret (JSON file)

Additional resources

  • Access to Red Hat additional repositories:
    • Optional
    • Extras
    • EPEL

Installation

Create DNS server

In order to make the environment work properly, you need to set up a DNS server in this network, with reverse resolution as well.

I will use a minimal RHEL 7.5 for this. I assume that a minimal RHEL 7.5 is installed, that it can access the Red Hat repositories, and that the image has booted for the first time.

  1. Disable SELinux:

     sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  2. Install required packages:

     yum -y install telnet ftp net-tools mc mlocate deltarpm bind bind-utils
  3. Cleaning up:

     rm -vf /root/install*log
     rm -vf /root/anaconda-ks.cfg
  4. Disable unnecessary services:

     for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
         mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
         winbind postfix iptables ip6tables firewalld kdump; \
         do \
             systemctl disable $i; \
     done
  5. Add the machine's IP address and hostname to /etc/hosts in case of network loss:

     cp /etc/hosts /etc/hosts.backup
     echo `ifconfig | sed -En \
     's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
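    On the DNS machine, the appended line should look something like this:

     10.109.200.53 dns.cp4s.sechu.ibm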
  6. Create the DNS forward zone:

     vi /var/named/forward.cp4s.sechu.ibm

    Add the following:

         $TTL 86400
         @   IN  SOA     dns.cp4s.sechu.ibm. root.cp4s.sechu.ibm. (
                 2020030912  ;Serial
                 3600        ;Refresh
                 1800        ;Retry
                 604800      ;Expire
                 86400       ;Minimum TTL
         )
         @           IN  NS  dns.cp4s.sechu.ibm.
    
         @           IN  A   10.109.200.53
         dns         IN  A   10.109.200.53
         m1          IN  A   10.109.200.80
         etcd-0      IN  A   10.109.200.80
         m2          IN  A   10.109.200.90
         etcd-1      IN  A   10.109.200.90
         m3          IN  A   10.109.200.100
         etcd-2      IN  A   10.109.200.100
         w1          IN  A   10.109.200.180
         w2          IN  A   10.109.200.190
         w3          IN  A   10.109.200.200
         bootstrap   IN  A   10.109.200.33
         lb          IN  A   10.109.200.20
         api         IN  A   10.109.200.20
         api-int     IN  A   10.109.200.20
         apps        IN  A   10.109.200.20
         *.apps      IN  A   10.109.200.20
         _etcd-server-ssl._tcp   IN  SRV     0   10  2380 etcd-0.cp4s.sechu.ibm.
                                 IN  SRV     0   10  2380 etcd-1.cp4s.sechu.ibm.
                                 IN  SRV     0   10  2380 etcd-2.cp4s.sechu.ibm.
         cdn         IN  A   10.109.200.44
         terminal    IN  A   10.109.200.222
  7. Create the DNS reverse zone:

     vi /var/named/reverse.cp4s.sechu.ibm

    Add the following content:

         $TTL 86400
         @   IN  SOA     dns.cp4s.sechu.ibm. root.cp4s.sechu.ibm. (
                 2020030912  ;Serial
                 3600        ;Refresh
                 1800        ;Retry
                 604800      ;Expire
                 86400       ;Minimum TTL
         )
         @           IN  NS          dns.cp4s.sechu.ibm.
         @           IN  PTR         cp4s.sechu.ibm.
    
         53          IN  PTR     dns.cp4s.sechu.ibm.
         80          IN  PTR     m1.cp4s.sechu.ibm.
         80          IN  PTR     etcd-0.cp4s.sechu.ibm.
         90          IN  PTR     m2.cp4s.sechu.ibm.
         90          IN  PTR     etcd-1.cp4s.sechu.ibm.
         100         IN  PTR     m3.cp4s.sechu.ibm.
         100         IN  PTR     etcd-2.cp4s.sechu.ibm.
         180         IN  PTR     w1.cp4s.sechu.ibm.
         190         IN  PTR     w2.cp4s.sechu.ibm.
         200         IN  PTR     w3.cp4s.sechu.ibm.
         33          IN  PTR     bootstrap.cp4s.sechu.ibm.
         20          IN  PTR     lb.cp4s.sechu.ibm.
         44          IN  PTR     cdn.cp4s.sechu.ibm.
         222         IN  PTR     terminal.cp4s.sechu.ibm.
  8. Modify DNS server main configuration (/etc/named.conf):

         options {
             listen-on port 53 { 127.0.0.1; 10.109.200.53; };
             listen-on-v6 port 53 { ::1; };
             directory   "/var/named";
             dump-file   "/var/named/data/cache_dump.db";
             statistics-file "/var/named/data/named_stats.txt";
             memstatistics-file "/var/named/data/named_mem_stats.txt";
             recursing-file  "/var/named/data/named.recursing";
             secroots-file   "/var/named/data/named.secroots";
             allow-query     { localhost; 10.109.200.0/24; };
    
             recursion yes;
    
             dnssec-enable yes;
             dnssec-validation yes;
    
             /* Path to ISC DLV key */
             bindkeys-file "/etc/named.iscdlv.key";
    
             managed-keys-directory "/var/named/dynamic";
    
             pid-file "/run/named/named.pid";
             session-keyfile "/run/named/session.key";
         };
    
         logging {
                 channel default_debug {
                         file "data/named.run";
                         severity dynamic;
                 };
         };
    
         zone "." IN {
             type hint;
             file "named.ca";
         };
    
         # sechu.ibm
         zone "cp4s.sechu.ibm" IN {
             type master;
             file "forward.cp4s.sechu.ibm";
             allow-update {
                 127.0.0.1;
                 10.109.200.53;
             };
         };
    
         zone "100.109.200.in-addr.arpa" IN {
             type master;
             file "reverse.cp4s.sechu.ibm";
             allow-update { none; };
         };
    
         include "/etc/named.rfc1912.zones";
         include "/etc/named.root.key";
  9. Enable and start the DNS server service:

     systemctl enable named
     systemctl start named
  10. Test DNS:

     for n in dns.cp4s.sechu.ibm m1.cp4s.sechu.ibm etcd-0.cp4s.sechu.ibm \
     m2.cp4s.sechu.ibm etcd-1.cp4s.sechu.ibm m3.cp4s.sechu.ibm etcd-2.cp4s.sechu.ibm \
     w1.cp4s.sechu.ibm w2.cp4s.sechu.ibm w3.cp4s.sechu.ibm bootstrap.cp4s.sechu.ibm \
     api.cp4s.sechu.ibm api-int.cp4s.sechu.ibm apps.cp4s.sechu.ibm lb.cp4s.sechu.ibm \
     cdn.cp4s.sechu.ibm terminal.cp4s.sechu.ibm; do nslookup $n; done

    Example output:

         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   dns.cp4s.sechu.ibm
         Address: 10.109.200.53
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   m1.cp4s.sechu.ibm
         Address: 10.109.200.80
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   etcd-0.cp4s.sechu.ibm
         Address: 10.109.200.80
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   m2.cp4s.sechu.ibm
         Address: 10.109.200.90
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   etcd-1.cp4s.sechu.ibm
         Address: 10.109.200.90
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   m3.cp4s.sechu.ibm
         Address: 10.109.200.100
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   etcd-2.cp4s.sechu.ibm
         Address: 10.109.200.100
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   w1.cp4s.sechu.ibm
         Address: 10.109.200.180
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
          Name:   w2.cp4s.sechu.ibm
          Address: 10.109.200.190
     
          Server:     10.109.200.53
          Address:    10.109.200.53#53
     
          Name:   w3.cp4s.sechu.ibm
          Address: 10.109.200.200
     
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   bootstrap.cp4s.sechu.ibm
         Address: 10.109.200.33
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   api.cp4s.sechu.ibm
         Address: 10.109.200.20
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   api-int.cp4s.sechu.ibm
         Address: 10.109.200.20
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   apps.cp4s.sechu.ibm
         Address: 10.109.200.20
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   lb.cp4s.sechu.ibm
         Address: 10.109.200.20
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   cdn.cp4s.sechu.ibm
         Address: 10.109.200.44
    
         Server:     10.109.200.53
         Address:    10.109.200.53#53
    
         Name:   terminal.cp4s.sechu.ibm
         Address: 10.109.200.222
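
You can also spot-check reverse resolution, for example:

     nslookup 10.109.200.80

which should return both m1.cp4s.sechu.ibm and etcd-0.cp4s.sechu.ibm.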

Create Webserver machine

In order to install the master and worker machines, we need a webserver that holds the boot image and the ignition files.

I will use a minimal RHEL 7.5 for this. I assume that a minimal RHEL 7.5 is installed, that it can access the Red Hat repositories, and that the image has booted for the first time.

  1. Disable SELinux:

     sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  2. Install required packages:

     yum -y install telnet ftp net-tools mc mlocate deltarpm httpd
  3. Cleaning up:

     rm -vf /root/install*log
     rm -vf /root/anaconda-ks.cfg
  4. Disable unnecessary services:

     for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
     mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
     winbind postfix iptables ip6tables firewalld kdump; do systemctl disable $i; done
  5. Add the machine's IP address and hostname to /etc/hosts in case of network loss:

     cp /etc/hosts /etc/hosts.backup
     echo `ifconfig | sed -En \
     's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
  6. Enable and start the webserver service:

     systemctl enable httpd
     systemctl start httpd
  7. Create a folder which will hold the ignition files (generated later).

     mkdir -p /var/www/html/ignition
  8. Copy the RHCOS boot image (*-metal-bios.raw.gz) to the webserver root /var/www/html and rename it to bios.raw.gz.
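
    For example, assuming the image was downloaded to your workstation as rhcos-4.4.3-x86_64-metal.raw.gz (the exact file name depends on the build you downloaded):

     scp rhcos-4.4.3-x86_64-metal.raw.gz root@cdn.cp4s.sechu.ibm:/var/www/html/bios.raw.gz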

  9. Change the ownership to the apache user and group; without this, the web server may not be able to read the files:

     chown -R apache:apache /var/www/html
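
With httpd running and the ownership fixed, verify that the boot image is reachable:

     curl -I http://10.109.200.44/bios.raw.gz

The response should start with HTTP/1.1 200 OK.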

Create Load Balancer machine

The load balancer separates the management and service requests between the master and worker nodes: all management requests are routed to the master nodes, and all service requests are routed to the worker nodes.

I will use a minimal RHEL 7.5 for this. I assume that a minimal RHEL 7.5 is installed, that it can access the Red Hat repositories, and that the image has booted for the first time.

  1. Disable SELinux:

     sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  2. Install required packages:

     yum -y install telnet ftp net-tools mc mlocate haproxy
  3. Cleaning up:

     rm -vf /root/install*log
     rm -vf /root/anaconda-ks.cfg
  4. Disable unnecessary services:

     for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
         mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
         winbind postfix iptables ip6tables firewalld kdump; \
         do \
             systemctl disable $i; \
     done
  5. Add the machine's IP address and hostname to /etc/hosts in case of network loss:

     cp /etc/hosts /etc/hosts.backup
     echo `ifconfig | sed -En \
     's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
  6. Create the haproxy configuration:

     mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.original
     vi /etc/haproxy/haproxy.cfg

    Add the following content:

         global
             log 127.0.0.1 local2
             chroot /var/lib/haproxy
             pidfile /var/run/haproxy.pid
             maxconn 4000
             user haproxy
             group haproxy
             daemon
             stats socket /var/lib/haproxy/stats
             ssl-default-bind-ciphers PROFILE=SYSTEM
             ssl-default-server-ciphers PROFILE=SYSTEM
    
         defaults
             mode http
             log global
             option httplog
             option dontlognull
             option http-server-close
             option forwardfor except 127.0.0.0/8
             option redispatch
             retries 3
             timeout http-request 10s
             timeout queue 1m
             timeout connect 10s
             timeout client 1m
             timeout server 1m
             timeout http-keep-alive 10s
             timeout check 10s
             maxconn 3000
    
         listen haproxy3-monitoring *:8080
             mode http
             option forwardfor
             option httpclose
             stats enable
             stats show-legends
             stats refresh 5s
             stats uri /stats
             stats realm Haproxy\ Statistics
             stats auth haproxy:password
             stats admin if TRUE
    
         frontend ocp4-kubernetes-api-server
             mode tcp
             option tcplog
             bind api.cp4s.sechu.ibm:6443
             default_backend ocp4-kubernetes-api-server
    
         frontend ocp4-machine-config-server
             mode tcp
             option tcplog
             bind api.cp4s.sechu.ibm:22623
             default_backend ocp4-machine-config-server
    
         frontend ocp4-router-http
             mode tcp
             option tcplog
             bind apps.cp4s.sechu.ibm:80
             default_backend ocp4-router-http
    
         frontend ocp4-router-https
             mode tcp
             option tcplog
             bind apps.cp4s.sechu.ibm:443
             default_backend ocp4-router-https
    
         backend ocp4-kubernetes-api-server
             mode tcp
             balance source
             server bootstrap bootstrap.cp4s.sechu.ibm:6443 check
             server m1 m1.cp4s.sechu.ibm:6443 check
             server m2 m2.cp4s.sechu.ibm:6443 check
             server m3 m3.cp4s.sechu.ibm:6443 check
    
         backend ocp4-machine-config-server
             mode tcp
             balance source
             server bootstrap bootstrap.cp4s.sechu.ibm:22623 check
             server m1 m1.cp4s.sechu.ibm:22623 check
             server m2 m2.cp4s.sechu.ibm:22623 check
             server m3 m3.cp4s.sechu.ibm:22623 check
    
         backend ocp4-router-http
             mode tcp
             server w1 w1.cp4s.sechu.ibm:80 check
             server w2 w2.cp4s.sechu.ibm:80 check
             server w3 w3.cp4s.sechu.ibm:80 check
    
         backend ocp4-router-https
             mode tcp
             server w1 w1.cp4s.sechu.ibm:443 check
             server w2 w2.cp4s.sechu.ibm:443 check
             server w3 w3.cp4s.sechu.ibm:443 check
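    You can validate the configuration syntax before starting the service:

     haproxy -c -f /etc/haproxy/haproxy.cfg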
  7. Enable rsyslog UDP reception by uncommenting the following two lines in /etc/rsyslog.conf:

     $ModLoad imudp
     $UDPServerRun 514
  8. Create the syslog definition for haproxy:

     vi /etc/rsyslog.d/haproxy.conf

    Add content:

     local2.=info     /var/log/haproxy-access.log    #For Access Log
     local2.notice    /var/log/haproxy-info.log      #For Service Info - Backend, loadbalancer
  9. Restart syslog:

     systemctl restart rsyslog
  10. Although SELinux has been disabled, we still need to issue the following command to allow haproxy to bind to restricted ports (I suspect it's a bug):

     setsebool -P haproxy_connect_any=1 
  11. Enable and start haproxy:

     systemctl enable haproxy
     systemctl start haproxy
  12. Check haproxy with a browser: navigate to http://10.109.200.20:8080/stats. The username is haproxy and the password is password.
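
    If you prefer the command line, the same check works with curl:

     curl -u haproxy:password http://10.109.200.20:8080/stats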

Create Terminal machine, initial config and local repository

This machine will help us create and initialize our OCP cluster. It must also have internet access, because we are going to mirror the required repository and provide it to the nodes in the restricted network. The best option is for this machine to have:

  • 1 NIC to the OCP subnetwork (10.109.200.0/24)
  • 1 NIC to be able to connect to the Internet

I will use a minimal RHEL 7.5 for this. I assume that a minimal RHEL 7.5 is installed, that it can access the Red Hat repositories, and that the image has booted for the first time.

  1. Disable SELinux:

     sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  2. Install required packages:

     yum -y install telnet ftp net-tools mc podman httpd-tools jq
  3. Cleaning up:

     rm -vf /root/install*log
     rm -vf /root/anaconda-ks.cfg
  4. Disable unnecessary services:

     for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
         mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
         winbind postfix iptables ip6tables firewalld kdump; \
         do \
             systemctl disable $i; \
     done
  5. Add the machine's IP address and hostname to /etc/hosts in case of network loss:

     cp /etc/hosts /etc/hosts.backup
     echo `ifconfig | sed -En \
     's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
  6. Install the OpenShift4 Client Tools by downloading the archives, extracting them, and copying the binaries to a directory on your PATH. The archive names below are typical; they may differ slightly depending on the exact build you downloaded:

     tar -xzf openshift-client-linux-4.4.3.tar.gz
     tar -xzf openshift-install-linux-4.4.3.tar.gz
     mv oc /usr/local/bin/
     mv openshift-install /usr/local/bin/
  7. Create the registry directory structure:

     mkdir -p /opt/registry/{auth,certs,data}
  8. Generate a self-signed certificate:

     cd /opt/registry/certs
     openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt

    Answers for the questions (in order of appearance):

    • HU
    • Budapest
    • Budapest
    • IBM Hungary
    • Security Business Unit
    • terminal.cp4s.sechu.ibm
    • yourname@ibm.com
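    You can verify the generated certificate's subject and validity period:

     openssl x509 -in domain.crt -noout -subject -dates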
  9. Generate a user name (repository) and a password (password) for your registry that uses the bcrypt format:

     htpasswd -bBc /opt/registry/auth/htpasswd repository password
  10. Create the mirror-registry container to host your registry:

     podman run --name mirror-registry -p 5000:5000 \
         -v /opt/registry/data:/var/lib/registry:z \
         -v /opt/registry/auth:/auth:z \
         -e "REGISTRY_AUTH=htpasswd" \
         -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
         -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
         -v /opt/registry/certs:/certs:z \
         -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
         -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
         -d docker.io/library/registry:2

    Restart the registry:

     podman stop mirror-registry
     podman start mirror-registry
  11. Add the self-signed certificate to your list of trusted certificates:

     cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
     update-ca-trust
  12. Confirm that the registry is available:

     curl -u repository:password -k https://terminal.cp4s.sechu.ibm:5000/v2/_catalog

    Response:

     {"repositories":[]}
  13. Generate the base64-encoded user name and password or token for your mirror registry:

     echo -n 'repository:password' | base64 -w0

    Note the output.
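
    With the example credentials above (repository:password), this yields:

     cmVwb3NpdG9yeTpwYXNzd29yZA==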

  14. Create a working directory for later use and to hold required resources:

     mkdir -p /root/os4
  15. Upload your pull-secret.txt file to this folder. (It can be obtained from the Red Hat OpenShift Cluster Manager site.)

  16. Make a copy of your pull secret in JSON format:

     cat /root/os4/pull-secret.txt | jq .  > /root/os4/pull-secret.json
  17. Edit the pull secret file and add a section that describes your registry to it:

     {
         "auths": {
             "terminal.cp4s.sechu.ibm:5000": {
                 "auth": "<base64 output from step 13>",
                 "email": "yourname@ibm.com"
             },
             "cloud.openshift.com": {
                 "auth": "b3Blb...==",
                 "email": "yourname@ibm.com"
             },
             "quay.io": {
                 "auth": "b3BlbnNoaW...==",
                 "email": "yourname@ibm.com"
             },
             "registry.connect.redhat.com": {
                 "auth": "NTI3MzMz...==",
                 "email": "yourname@ibm.com"
             },
             "registry.redhat.io": {
                 "auth": "NTI3MzMzNj...==",
                 "email": "yourname@ibm.com"
             }
         }
     }
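
    Instead of editing the file by hand, you can merge the new entry with jq; a small sketch, using the base64 string from step 13:

     jq '.auths["terminal.cp4s.sechu.ibm:5000"] = {"auth": "cmVwb3NpdG9yeTpwYXNzd29yZA==", "email": "yourname@ibm.com"}' \
         /root/os4/pull-secret.json > /tmp/pull-secret.json && mv /tmp/pull-secret.json /root/os4/pull-secret.json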
  18. Mirror the release image repository. The sha256 digest pins the exact release image; make sure it corresponds to the release you are mirroring (4.4.3 here):

     oc adm release mirror -a /root/os4/pull-secret.json \
         --from=quay.io/openshift-release-dev/ocp-release@sha256:039a4ef7c128a049ccf916a1d68ce93e8f5494b44d5a75df60c85e9e7191dacc \
         --to-release-image=terminal.cp4s.sechu.ibm:5000/ocp4/openshift4:4.4.3 \
         --to=terminal.cp4s.sechu.ibm:5000/ocp4/openshift4
  19. Record the imageContentSources section of the output; it looks something like this:

     imageContentSources:
       - mirrors:
         - terminal.cp4s.sechu.ibm:5000/ocp4/openshift4
         source: quay.io/openshift-release-dev/ocp-release
       - mirrors:
         - terminal.cp4s.sechu.ibm:5000/ocp4/openshift4
         source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  20. Create install-config.yaml file:

     vi /root/os4/install-config.yaml

    Add content:

     apiVersion: v1
     baseDomain: sechu.ibm
     compute:
     - hyperthreading: Enabled
       name: worker
       replicas: 0
     controlPlane:
       hyperthreading: Enabled
       name: master
       replicas: 3
     metadata:
       name: cp4s
     networking:
        clusterNetwork:
       - cidr: 10.254.0.0/16
         hostPrefix: 24
       networkType: OpenShiftSDN
       serviceNetwork:
       - 172.30.0.0/16
     platform:
       none: {}
     pullSecret: '<PULL_SECRET>'
     sshKey: '<SSH_PUBLIC_KEY>'
     additionalTrustBundle: | 
       <CERT>
     imageContentSources:
       - mirrors:
         - terminal.cp4s.sechu.ibm:5000/ocp4/openshift4
         source: quay.io/openshift-release-dev/ocp-release
       - mirrors:
         - terminal.cp4s.sechu.ibm:5000/ocp4/openshift4
         source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

    Where:

    • <PULL_SECRET> is the content of your pull-secret.json file
    • <SSH_PUBLIC_KEY> is the public part of your SSH key pair (id_rsa.pub), generated earlier
    • <CERT> is the content of the certificate created for the registry (domain.crt)

    To convert the pull-secret.json file to one line:

     jq -c . < pull-secret.json

    To view the content of your SSH public key:

     cat /root/.ssh/id_rsa.pub

    If you don't have a key yet, you can generate one with the following command:

     ssh-keygen

    This will place the file mentioned above under <user_root>/.ssh/id_rsa.pub.

    To view the content of your certificate:

     cat /opt/registry/certs/domain.crt
  21. Create the manifest files with the openshift-install command:

     openshift-install --dir=/root/os4 create manifests
  22. Make the masters unschedulable (otherwise, the masters get the combined role master,worker):

     sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' /root/os4/manifests/cluster-scheduler-02-config.yml
  23. Check the output:

     cat /root/os4/manifests/cluster-scheduler-02-config.yml

    Check that mastersSchedulable is false!

  24. Create the ignition config files:

     openshift-install --dir=/root/os4 create ignition-configs

    Sample output:

     INFO Consuming OpenShift Install (Manifests) from target directory
     INFO Consuming Common Manifests from target directory
     INFO Consuming Openshift Manifests from target directory
     INFO Consuming Master Machines from target directory
     INFO Consuming Worker Machines from target directory
  25. Copy the generated files to the webserver's ignition folder:

     scp -r /root/os4/* root@cdn.cp4s.sechu.ibm:/var/www/html/ignition
  26. Change the ownership of the newly copied files to the apache user and group so the web server can read them, then check the permissions:

     ssh root@cdn.cp4s.sechu.ibm "chown -R apache:apache /var/www/html/ && ls -lah /var/www/html/ignition"

    Example output:

     drwxr-xr-x 3 apache apache  143 May 27 11:13 .
     drwxr-xr-x 3 apache apache   41 May 21 16:05 ..
     drwxr-x--- 2 apache apache   50 May 27 11:13 auth
     -rw-r----- 1 apache apache 303K May 27 11:13 bootstrap.ign
     -rw-r----- 1 apache apache 1.8K May 27 11:13 master.ign
     -rw-r----- 1 apache apache   96 May 27 11:13 metadata.json
     -rw-r--r-- 1 apache apache 3.0K May 27 11:13 pull-secret.json
     -rw-r--r-- 1 apache apache 2.7K May 27 11:13 pull-secret.txt
     -rw-r----- 1 apache apache 1.8K May 27 11:13 worker.ign
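
As a final check, verify that the ignition files are reachable over HTTP:

     curl -I http://10.109.200.44/ignition/bootstrap.ign

A 200 OK response means the nodes will be able to fetch their configs during boot.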

Install Red Hat Enterprise Linux CoreOS images for cluster machines

It's time to create the machines that will make up the cluster. We are targeting 3 masters and 3 workers, plus 1 bootstrap machine. The process is the same for all machines; the only differences are the ignition config file (master.ign, worker.ign, bootstrap.ign), the IP address, and the hostname.

  1. Create the virtual machine base config:

    • Masters / Bootstrap:
      • 4 vCPU
      • 16 GB memory
      • 150 GB disk
    • Workers:
      • 8 vCPU
      • 32 GB memory
      • 150 GB disk
  2. Boot each machine from the RHCOS ISO file.

  3. Once the boot menu appears, press TAB (or E) to edit the boot parameters.

  4. Append the information below. It must all be on one line; the backslashes in the examples are shown only for readability.

    The ip clause follows this pattern:

    <IP_ADDRESS>::<GATEWAY>:<NETMASK>:<HOSTNAME>:ens192:none

    • ens192 is the name of the network interface
    • none indicates that no DHCP is involved
    • coreos.inst.image_url must point to the bios.raw.gz file hosted on the webserver
    • coreos.inst.ignition_url must point to the required ignition file hosted on the webserver

    Master 1

     ip=10.109.200.80::10.109.0.3:255.255.0.0:m1.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/master.ign

    Master 2

     ip=10.109.200.90::10.109.0.3:255.255.0.0:m2.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/master.ign

    Master 3

     ip=10.109.200.100::10.109.0.3:255.255.0.0:m3.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/master.ign

    Worker 1

     ip=10.109.200.180::10.109.0.3:255.255.0.0:w1.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/worker.ign

    Worker 2

     ip=10.109.200.190::10.109.0.3:255.255.0.0:w2.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/worker.ign

    Worker 3

     ip=10.109.200.200::10.109.0.3:255.255.0.0:w3.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/worker.ign

    Bootstrap

     ip=10.109.200.33::10.109.0.3:255.255.0.0:bootstrap.cp4s.sechu.ibm:ens192:none \
     nameserver=10.109.200.53 \
     coreos.inst.install_dev=sda \
     coreos.inst.image_url=http://10.109.200.44/bios.raw.gz \
     coreos.inst.ignition_url=http://10.109.200.44/ignition/bootstrap.ign
  5. Once the machines are created and the installation has finished, they will reboot, and now it's time for a coffee break. Approximately 20-40 minutes are required for the machines to form the cluster and bring Kubernetes up.

  6. While the machines are configuring themselves, issue the following command on the terminal server. If the 20-minute wait times out, simply restart the command.

     openshift-install --dir=/root/os4 wait-for bootstrap-complete --log-level info

    Sample output:

     INFO Waiting up to 20m0s for the Kubernetes API at https://api.cp4s.sechu.ibm:6443...
     INFO API v1.17.1 up
     INFO Waiting up to 40m0s for bootstrapping to complete...
  7. Once bootstrapping has finished, you must remove the bootstrap entries from the load balancer configuration!

  8. Export the KUBECONFIG and list the configured nodes:

     export KUBECONFIG=/root/os4/auth/kubeconfig
     oc get nodes

    Example output:

     NAME                STATUS   ROLES           AGE     VERSION
     m1.cp4s.sechu.ibm   Ready    master          48m     v1.17.1
     m2.cp4s.sechu.ibm   Ready    master          48m     v1.17.1
     m3.cp4s.sechu.ibm   Ready    master          46m     v1.17.1
     w1.cp4s.sechu.ibm   Ready    worker          4m34s   v1.17.1
     w2.cp4s.sechu.ibm   Ready    worker          4m39s   v1.17.1
     w3.cp4s.sechu.ibm   Ready    worker          4m28s   v1.17.1
  9. Query CSRs:

     oc get csr

    Example output:

     NAME        AGE     REQUESTOR                                                                   CONDITION
     csr-9lp9c   20m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-dbpkg   20m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-fm2hn   7m27s   system:node:w2.cp4s.sechu.ibm                                               Approved,Issued
     csr-nprs5   10m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-nxnp2   20m     system:node:m2.cp4s.sechu.ibm                                               Approved,Issued
     csr-qlxxk   10m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-qwh4h   20m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-sl7hl   10m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
     csr-t7bv9   7m37s   system:node:w3.cp4s.sechu.ibm                                               Approved,Issued
     csr-w9pqt   20m     system:node:m3.cp4s.sechu.ibm                                               Approved,Issued
     csr-wdm66   7m35s   system:node:w1.cp4s.sechu.ibm                                               Approved,Issued
     csr-x7txv   20m     system:node:m1.cp4s.sechu.ibm                                               Approved,Issued
  10. If there are any Pending requests, approve them manually using the NAME value:

     oc adm certificate approve <name>

    or approve all with a single command:

     oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve
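
    To approve only the requests that are still in Pending state:

     oc get csr | awk '/Pending/ {print $1}' | xargs oc adm certificate approve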
  11. To move forward, configure emptyDir storage for the image registry by issuing the following command:

     oc patch configs.imageregistry.operator.openshift.io cluster \
         --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
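
    You can confirm that the patch was applied by inspecting the operator's storage spec:

     oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}'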
  12. Issue the following command to finish the setup:

     openshift-install --dir=/root/os4 wait-for install-complete

    Example output:

     INFO Waiting up to 30m0s for the cluster at https://api.cp4s.sechu.ibm:6443 to initialize...
     INFO Waiting up to 10m0s for the openshift-console route to be created...
     INFO Install complete!
     INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/os4/auth/kubeconfig'
     INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cp4s.sechu.ibm
     INFO Login to the console with user: kubeadmin, password: <random_generated_password>