Install Red Hat OpenShift Container Platform 4.3.24 in a disconnected network environment on bare metal VMs
Installation guide for demo / lab environment purposes, not for production.
Author(s): Tamas Bures | Created: 07 June 2020 | Last modified: 07 June 2020
Tested on: Red Hat OpenShift Platform v 4.3.24
Table of contents
- Install and configure Red Hat OpenShift Container Platform 4.3.24 in a restricted network environment on bare metal servers
- Prerequisites
- Network
- Machines
- Resources
- Additional resources
- Installation
- Create DNS server
- Create Load Balancer machine
- Create bastion machine with webserver, initial config and local repository
- Install Red Hat Enterprise Linux CoreOS images for cluster machines
Install and configure Red Hat OpenShift Container Platform 4.3.24 in a restricted network environment on bare metal servers↑
In this tutorial, I'll show you how to install the OpenShift platform in an isolated network environment (no internet connection) using bare metal (virtual) servers hosted on ESXi 6.7.
Prerequisites↑
- A working ESXi environment (my ESXi host: 10.109.10.101)
- Uploaded ISO files to ESXi datastore (see Resources)
Network↑
I will use a dedicated network for this installation. The subnet is 10.109.200.0/24 and the domain is sechu.ibm.
Machines↑
In this setup, I will use a 3 master / 3 worker node layout. The list below shows all the machines I will create and configure. Yeah, quite a lot...
- DNS
  - Hostname: dns.cp4s.sechu.ibm
  - IP: 10.109.200.53
- Bastion
  - Hostname: bastion.cp4s.sechu.ibm
  - IP: 10.109.200.222
- Load Balancer
  - Hostnames: lb.cp4s.sechu.ibm, api.cp4s.sechu.ibm, api-int.cp4s.sechu.ibm, *.apps.cp4s.sechu.ibm
  - IP: 10.109.200.20
- Master 0
  - Hostnames: master-0.cp4s.sechu.ibm, etcd-0.cp4s.sechu.ibm
  - IP: 10.109.200.80
- Master 1
  - Hostnames: master-1.cp4s.sechu.ibm, etcd-1.cp4s.sechu.ibm
  - IP: 10.109.200.90
- Master 2
  - Hostnames: master-2.cp4s.sechu.ibm, etcd-2.cp4s.sechu.ibm
  - IP: 10.109.200.100
- Worker 0
  - Hostname: worker-0.cp4s.sechu.ibm
  - IP: 10.109.200.180
- Worker 1
  - Hostname: worker-1.cp4s.sechu.ibm
  - IP: 10.109.200.190
- Worker 2
  - Hostname: worker-2.cp4s.sechu.ibm
  - IP: 10.109.200.200
- Bootstrap
  - Hostname: bootstrap.cp4s.sechu.ibm
  - IP: 10.109.200.33
Resources↑
Download the following files and resources:
- Red Hat Enterprise Linux x86_64 7.5 Server installer (*.iso)
- Red Hat Enterprise Linux CoreOS x86_64 4.3.8 boot file (*.iso)
- Red Hat Enterprise Linux CoreOS x86_64 4.3.8 Metal BIOS Raw file (*.gz)
- OpenShift4 Client Tools 4.3.24 (Linux binaries)
- Pull Secret (JSON file)
Additional resources↑
- Access to Red Hat additional repositories:
- Optional
- Extras
- EPEL
Installation↑
Create DNS server↑
In order to make the environment work properly, you need to set up a DNS server on this network, with reverse resolution as well.
I will use a minimal RHEL 7.5 for this. I assume the minimal RHEL 7.5 is installed, booted for the first time, and able to access the Red Hat repositories.
- Disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- Install the required packages:
yum -y install telnet ftp net-tools mc mlocate deltarpm bind bind-utils
- Clean up:
rm -vf /root/install*log
rm -vf /root/anaconda-ks.cfg
- Disable unnecessary services:
for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
winbind postfix iptables ip6tables firewalld kdump; \
do \
systemctl disable $i; \
done
- Copy the hostname to the /etc/hosts file in case of network loss:
cp /etc/hosts /etc/hosts.backup
echo `ifconfig | sed -En \
's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
- Create the DNS forward zone:
vi /var/named/forward.cp4s.sechu.ibm
Add the following:
$TTL 86400
@ IN SOA dns.cp4s.sechu.ibm. root.cp4s.sechu.ibm. (
    2020030912 ;Serial
    3600 ;Refresh
    1800 ;Retry
    604800 ;Expire
    86400 ;Minimum TTL
)
@ IN NS dns.cp4s.sechu.ibm.
@ IN A 10.109.200.53
dns IN A 10.109.200.53
master-0 IN A 10.109.200.80
etcd-0 IN A 10.109.200.80
master-1 IN A 10.109.200.90
etcd-1 IN A 10.109.200.90
master-2 IN A 10.109.200.100
etcd-2 IN A 10.109.200.100
worker-0 IN A 10.109.200.180
worker-1 IN A 10.109.200.190
worker-2 IN A 10.109.200.200
bootstrap IN A 10.109.200.33
lb IN A 10.109.200.20
api IN A 10.109.200.20
api-int IN A 10.109.200.20
apps IN A 10.109.200.20
*.apps IN A 10.109.200.20
_etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-0.cp4s.sechu.ibm.
_etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-1.cp4s.sechu.ibm.
_etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-2.cp4s.sechu.ibm.
bastion IN A 10.109.200.222
- Create the DNS reverse zone:
vi /var/named/reverse.cp4s.sechu.ibm
Add the following content (PTR records only; the host part of each IP maps back to the FQDN):
$TTL 86400
@ IN SOA dns.cp4s.sechu.ibm. root.cp4s.sechu.ibm. (
    2020030912 ;Serial
    3600 ;Refresh
    1800 ;Retry
    604800 ;Expire
    86400 ;Minimum TTL
)
@ IN NS dns.cp4s.sechu.ibm.
53 IN PTR dns.cp4s.sechu.ibm.
80 IN PTR master-0.cp4s.sechu.ibm.
80 IN PTR etcd-0.cp4s.sechu.ibm.
90 IN PTR master-1.cp4s.sechu.ibm.
90 IN PTR etcd-1.cp4s.sechu.ibm.
100 IN PTR master-2.cp4s.sechu.ibm.
100 IN PTR etcd-2.cp4s.sechu.ibm.
180 IN PTR worker-0.cp4s.sechu.ibm.
190 IN PTR worker-1.cp4s.sechu.ibm.
200 IN PTR worker-2.cp4s.sechu.ibm.
33 IN PTR bootstrap.cp4s.sechu.ibm.
20 IN PTR lb.cp4s.sechu.ibm.
222 IN PTR bastion.cp4s.sechu.ibm.
- Modify the DNS server main configuration (/etc/named.conf). Note that the reverse zone for the 10.109.200.0/24 subnet is named 200.109.10.in-addr.arpa:
options {
    listen-on port 53 { 127.0.0.1; 10.109.200.53; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file "/var/named/data/named.recursing";
    secroots-file "/var/named/data/named.secroots";
    allow-query { localhost; 10.109.200.0/24; }; # IMPORTANT: allow queries from the cluster subnet
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;
    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};
logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};
zone "." IN {
    type hint;
    file "named.ca";
};
# cp4s.sechu.ibm
zone "cp4s.sechu.ibm" IN {
    type master;
    file "forward.cp4s.sechu.ibm";
    allow-update { 127.0.0.1; 10.109.200.53; };
};
zone "200.109.10.in-addr.arpa" IN {
    type master;
    file "reverse.cp4s.sechu.ibm";
    allow-update { none; };
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
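Before enabling the service, it's worth validating the configuration and both zone files; the checker tools below ship with the bind package installed earlier:
named-checkconf /etc/named.conf
named-checkzone cp4s.sechu.ibm /var/named/forward.cp4s.sechu.ibm
named-checkzone 200.109.10.in-addr.arpa /var/named/reverse.cp4s.sechu.ibm
Each command should report the zone as loaded with status OK.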
- Enable and start the DNS server service:
systemctl enable named
systemctl start named
- Test DNS:
for n in dns.cp4s.sechu.ibm master-0.cp4s.sechu.ibm etcd-0.cp4s.sechu.ibm \
master-1.cp4s.sechu.ibm etcd-1.cp4s.sechu.ibm master-2.cp4s.sechu.ibm etcd-2.cp4s.sechu.ibm \
worker-0.cp4s.sechu.ibm worker-1.cp4s.sechu.ibm worker-2.cp4s.sechu.ibm bootstrap.cp4s.sechu.ibm \
api.cp4s.sechu.ibm api-int.cp4s.sechu.ibm apps.cp4s.sechu.ibm lb.cp4s.sechu.ibm \
bastion.cp4s.sechu.ibm; do nslookup $n; done
Example output:
Server:  10.109.200.53
Address: 10.109.200.53#53
Name: dns.cp4s.sechu.ibm
Address: 10.109.200.53

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: master-0.cp4s.sechu.ibm
Address: 10.109.200.80

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: etcd-0.cp4s.sechu.ibm
Address: 10.109.200.80

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: master-1.cp4s.sechu.ibm
Address: 10.109.200.90

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: etcd-1.cp4s.sechu.ibm
Address: 10.109.200.90

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: master-2.cp4s.sechu.ibm
Address: 10.109.200.100

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: etcd-2.cp4s.sechu.ibm
Address: 10.109.200.100

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: worker-0.cp4s.sechu.ibm
Address: 10.109.200.180

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: worker-1.cp4s.sechu.ibm
Address: 10.109.200.190

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: worker-2.cp4s.sechu.ibm
Address: 10.109.200.200

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: bootstrap.cp4s.sechu.ibm
Address: 10.109.200.33

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: api.cp4s.sechu.ibm
Address: 10.109.200.20

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: api-int.cp4s.sechu.ibm
Address: 10.109.200.20

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: apps.cp4s.sechu.ibm
Address: 10.109.200.20

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: lb.cp4s.sechu.ibm
Address: 10.109.200.20

Server:  10.109.200.53
Address: 10.109.200.53#53
Name: bastion.cp4s.sechu.ibm
Address: 10.109.200.222
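Since the reverse zone is configured as well, it is worth spot-checking the PTR records too. A minimal sketch that walks the host part of every address used above (nslookup performs a PTR lookup when given an IP):
for ip in 53 80 90 100 180 190 200 33 20 222; do nslookup 10.109.200.$ip; done
Each lookup should return the hostname(s) defined in the reverse zone.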
Create Load Balancer machine↑
The Load Balancer separates the management and service requests between the master and worker nodes: all management requests are routed to the master nodes, and all service requests are routed to the worker nodes.
I will use a minimal RHEL 7.5 for this. I assume the minimal RHEL 7.5 is installed, booted for the first time, and able to access the Red Hat repositories.
- Disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- Install the required packages:
yum -y install telnet ftp net-tools mc mlocate haproxy
- Clean up:
rm -vf /root/install*log
rm -vf /root/anaconda-ks.cfg
- Disable unnecessary services:
for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
winbind postfix iptables ip6tables firewalld kdump; \
do \
systemctl disable $i; \
done
- Copy the hostname to the /etc/hosts file in case of network loss:
cp /etc/hosts /etc/hosts.backup
echo `ifconfig | sed -En \
's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
- Create the haproxy configuration:
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.original
vi /etc/haproxy/haproxy.cfg
Add the following content:
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen haproxy3-monitoring *:8080
    mode http
    option forwardfor
    option httpclose
    stats enable
    stats show-legends
    stats refresh 5s
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:password
    stats admin if TRUE

frontend ocp4-kubernetes-api-server
    mode tcp
    option tcplog
    bind api.cp4s.sechu.ibm:6443
    default_backend ocp4-kubernetes-api-server

frontend ocp4-machine-config-server
    mode tcp
    option tcplog
    bind api.cp4s.sechu.ibm:22623
    default_backend ocp4-machine-config-server

frontend ocp4-router-http
    mode tcp
    option tcplog
    bind apps.cp4s.sechu.ibm:80
    default_backend ocp4-router-http

frontend ocp4-router-https
    mode tcp
    option tcplog
    bind apps.cp4s.sechu.ibm:443
    default_backend ocp4-router-https

backend ocp4-kubernetes-api-server
    mode tcp
    balance source
    server bootstrap bootstrap.cp4s.sechu.ibm:6443 check
    server master-0 master-0.cp4s.sechu.ibm:6443 check
    server master-1 master-1.cp4s.sechu.ibm:6443 check
    server master-2 master-2.cp4s.sechu.ibm:6443 check

backend ocp4-machine-config-server
    mode tcp
    balance source
    server bootstrap bootstrap.cp4s.sechu.ibm:22623 check
    server master-0 master-0.cp4s.sechu.ibm:22623 check
    server master-1 master-1.cp4s.sechu.ibm:22623 check
    server master-2 master-2.cp4s.sechu.ibm:22623 check

backend ocp4-router-http
    mode tcp
    server worker-0 worker-0.cp4s.sechu.ibm:80 check
    server worker-1 worker-1.cp4s.sechu.ibm:80 check
    server worker-2 worker-2.cp4s.sechu.ibm:80 check

backend ocp4-router-https
    mode tcp
    server worker-0 worker-0.cp4s.sechu.ibm:443 check
    server worker-1 worker-1.cp4s.sechu.ibm:443 check
    server worker-2 worker-2.cp4s.sechu.ibm:443 check
- Enable rsyslog by uncommenting the following two lines in /etc/rsyslog.conf:
$ModLoad imudp
$UDPServerRun 514
- Create a syslog definition for haproxy:
vi /etc/rsyslog.d/haproxy.conf
Add content:
local2.=info /var/log/haproxy-access.log    #For Access Log
local2.notice /var/log/haproxy-info.log     #For Service Info - Backend, loadbalancer
- Restart syslog:
systemctl restart rsyslog
- Even though SELinux has been disabled, we still need to issue the following command to allow haproxy to bind to restricted ports (I guess it's a bug):
setsebool -P haproxy_connect_any=1
- Enable and start haproxy:
systemctl enable haproxy
systemctl start haproxy
- Check haproxy with a browser: navigate to http://10.109.200.20:8080/stats. The username is haproxy and the password is password.
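If you prefer a headless check, the same stats page can be queried with curl, using the stats auth credentials from the haproxy.cfg above:
curl -u haproxy:password http://10.109.200.20:8080/stats
A successful response returns the stats page HTML, confirming haproxy is up and listening.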
Create bastion machine with webserver, initial config and local repository↑
This machine will help us create and initialize our OCP cluster. It must also have internet access, because we are going to mirror the required container images and provide them to the nodes in the restricted network. The best option is for this machine to have:
- 1 NIC to the OCP subnetwork (10.109.200.0/24)
- 1 NIC able to connect to the Internet
I will use a minimal RHEL 7.5 for this. I assume the minimal RHEL 7.5 is installed, booted for the first time, and able to access the Red Hat repositories.
- Disable SELinux:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- Install the required packages:
yum -y install telnet ftp net-tools mc podman httpd httpd-tools jq
- Clean up:
rm -vf /root/install*log
rm -vf /root/anaconda-ks.cfg
- Disable unnecessary services:
for i in abrt-ccpp abrtd atd auditd blk-availability certmonger cpuspeed cups \
mcelogd mdmonitor netconsole numad oddjobd portreserve rhnsd rhsmcertd smartd \
winbind postfix iptables ip6tables firewalld kdump; \
do \
systemctl disable $i; \
done
- Copy the hostname to the /etc/hosts file in case of network loss:
cp /etc/hosts /etc/hosts.backup
echo `ifconfig | sed -En \
's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'` `hostname` >> /etc/hosts
- Install the OpenShift 4 Client Tools by downloading the archives, extracting them, and copying the binaries to the proper directory (see the extraction sketch below):
IMPORTANT NOTE: Make sure the OCP client tools version matches the desired OCP version! It has an impact on the mirroring process as well as on the installation!
mv oc /usr/local/bin/
mv openshift-install /usr/local/bin/
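For reference, extracting the client tool archives typically looks like this; the archive names are assumptions based on the usual 4.3.24 release artifact naming, so adjust them to the files you actually downloaded:
tar -xzf openshift-client-linux-4.3.24.tar.gz    # yields the oc and kubectl binaries
tar -xzf openshift-install-linux-4.3.24.tar.gz   # yields the openshift-install binary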
- Enable and start the webserver service and create a folder to hold the ignition files:
systemctl enable httpd
systemctl start httpd
mkdir -p /var/www/html/ignition
- Copy the RHCOS boot image (*-metal.raw.gz) to the webserver root /var/www/html with the name bios.raw.gz.
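For example (the source filename is an assumption based on the RHCOS 4.3.8 artifact naming; use the file you downloaded):
cp rhcos-4.3.8-x86_64-metal.raw.gz /var/www/html/bios.raw.gz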
- Change the ownership to user:group apache. Without this, no one will be able to read the file:
chown -R apache:apache /var/www/html
- Create the repository location:
mkdir -p /opt/registry/{auth,certs,data}
- Generate a self-signed certificate (for a non-interactive variant, see the sketch after the answer list below):
cd /opt/registry/certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt
Answers for the questions (in order of appearance):
- HU
- Budapest
- Budapest
- IBM Hungary
- Security Business Unit
- bastion.cp4s.sechu.ibm
- yourname@ibm.com
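If you prefer to skip the interactive prompts, the same answers can be passed in one go with -subj; this sketch is equivalent to typing the answers above:
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt \
-subj "/C=HU/ST=Budapest/L=Budapest/O=IBM Hungary/OU=Security Business Unit/CN=bastion.cp4s.sechu.ibm/emailAddress=yourname@ibm.com"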
- Generate a user name (repository) and a password (password) for your registry, stored in bcrypt format:
htpasswd -bBc /opt/registry/auth/htpasswd repository password
- Create the mirror-registry container to host your registry:
podman run --name mirror-registry -p 5000:5000 \
-v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v /opt/registry/certs:/certs:z \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-d docker.io/library/registry:2
Restart the repository:
podman stop mirror-registry
podman start mirror-registry
- Add the self-signed certificate to your list of trusted certificates:
cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
- Confirm that the registry is available:
curl -u repository:password -k https://bastion.cp4s.sechu.ibm:5000/v2/_catalog
Response:
{"repositories":[]}
- Generate the base64-encoded user name and password or token for your mirror registry:
echo -n 'repository:password' | base64 -w0
Note the output; you will paste it into the pull secret below.
- Create a working directory for later use and to hold the required resources:
mkdir -p /root/os4
- Upload your pull-secret.txt file to this folder. (You can obtain it from the Red Hat OpenShift Cluster Manager site.)
- Make a copy of your pull secret in JSON format:
cat /root/os4/pull-secret.txt | jq . > /root/os4/pull-secret.json
- Edit the pull-secret.json file and add a section that describes your registry to it:
{
  "auths": {
    "bastion.cp4s.sechu.ibm:5000": {
      "auth": "<base64 output from the base64 step above>",
      "email": "yourname@ibm.com"
    },
    "cloud.openshift.com": {
      "auth": "b3Blb...==",
      "email": "yourname@ibm.com"
    },
    "quay.io": {
      "auth": "b3BlbnNoaW...==",
      "email": "yourname@ibm.com"
    },
    "registry.connect.redhat.com": {
      "auth": "NTI3MzMz...==",
      "email": "yourname@ibm.com"
    },
    "registry.redhat.io": {
      "auth": "NTI3MzMzNj...==",
      "email": "yourname@ibm.com"
    }
  }
}
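If you'd rather not edit the JSON by hand, here's a minimal jq sketch that merges the registry entry in place (it re-runs the base64 step inline):
AUTH=$(echo -n 'repository:password' | base64 -w0)
# add (or overwrite) the auths entry for the local mirror registry
jq --arg a "$AUTH" '.auths["bastion.cp4s.sechu.ibm:5000"] = {"auth": $a, "email": "yourname@ibm.com"}' \
/root/os4/pull-secret.json > /tmp/pull-secret.json && mv /tmp/pull-secret.json /root/os4/pull-secret.json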
- Mirror the release images into the local registry:
oc adm release mirror -a /root/os4/pull-secret.json \
--from=quay.io/openshift-release-dev/ocp-release@sha256:039a4ef7c128a049ccf916a1d68ce93e8f5494b44d5a75df60c85e9e7191dacc \
--to-release-image=bastion.cp4s.sechu.ibm:5000/ocp4/openshift4:4.3.24 \
--to=bastion.cp4s.sechu.ibm:5000/ocp4/openshift4
- Record the imageContentSources section of the output; it looks something like this:
imageContentSources:
- mirrors:
  - bastion.cp4s.sechu.ibm:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - bastion.cp4s.sechu.ibm:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- Create the install-config.yaml file:
vi /root/os4/install-config.yaml
Add content:
apiVersion: v1
baseDomain: sechu.ibm
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: cp4s
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<PULL_SECRET>'
sshKey: '<SSH_PUBLIC_KEY>'
additionalTrustBundle: |
  <CERT>
imageContentSources:
- mirrors:
  - bastion.cp4s.sechu.ibm:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - bastion.cp4s.sechu.ibm:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
Where:
- <PULL_SECRET> is the content of your pull-secret.json file, as a single line
- <SSH_PUBLIC_KEY> is the public half of your SSH key (id_rsa.pub)
- <CERT> is the content of the certificate created for the repository (domain.crt)
To convert the pull-secret.json file to one line:
jq -c . < pull-secret.json
To view the content of your SSH public key:
cat /root/.ssh/id_rsa.pub
If you don't have a key yet, you can generate one with the following command:
ssh-keygen
This places the file mentioned before under <user_root>/.ssh/id_rsa.pub.
To view the content of your certificate:
cat /opt/registry/certs/domain.crt
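Note: the next steps (openshift-install create manifests and create ignition-configs) consume install-config.yaml, so keep a backup in case you need to re-run the installer:
cp /root/os4/install-config.yaml /root/os4/install-config.yaml.backup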
- Create the manifest files with the openshift-install command (run it from the /root/os4 working directory):
openshift-install create manifests
- Disable scheduling on the masters (otherwise the masters get the combined role master,worker):
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
- Check the output:
cat manifests/cluster-scheduler-02-config.yml
Make sure mastersSchedulable is false!
- Create the ignition config files:
openshift-install create ignition-configs
Sample output:
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming Master Machines from target directory
INFO Consuming Worker Machines from target directory
- Copy the generated files to the webserver ignition folder:
cp -R *.ign *.json auth/* /var/www/html/ignition
- Change the ownership to user:group apache on the newly copied files so the webserver can read them, and check the permissions:
chown -R apache:apache /var/www/html/ && ls -lah /var/www/html/ignition
Example output:
drwxr-xr-x 3 apache apache  143 May 27 11:13 .
drwxr-xr-x 3 apache apache   41 May 21 16:05 ..
drwxr-x--- 2 apache apache   50 May 27 11:13 auth
-rw-r----- 1 apache apache 303K May 27 11:13 bootstrap.ign
-rw-r----- 1 apache apache 1.8K May 27 11:13 master.ign
-rw-r----- 1 apache apache   96 May 27 11:13 metadata.json
-rw-r--r-- 1 apache apache 3.0K May 27 11:13 pull-secret.json
-rw-r--r-- 1 apache apache 2.7K May 27 11:13 pull-secret.txt
-rw-r----- 1 apache apache 1.8K May 27 11:13 worker.ign
Install Red Hat Enterprise Linux CoreOS images for cluster machines↑
It's time to create the machines that will build up the cluster. We are targeting 3 masters + 3 workers + 1 bootstrap machine. The process is the same for all machines; the only differences are the Ignition config file (master.ign, worker.ign, bootstrap.ign), the IP, and the hostname values.
- Before you move on, make sure you open 6 individual SSH sessions on the bastion machine and issue the following commands, one per session. Each command retries indefinitely: it tries to ssh into the target machine and set the hostname you configured in your DNS server. This might be a bug: once the image is installed, the hostname does not persist in the installed RHCOS; it falls back to localhost and messes up the installation. THIS IS REQUIRED OR THE INSTALLATION WILL FAIL.
Alternatively, you can set up DHCP with IP address / hostname reservations.
Create a file on the bastion machine at /root/.ssh/config and add the following content:
Host 10.109.200.*
StrictHostKeyChecking no
Then open the sessions and start the commands below (or use the single-session sketch after this list):
Session 1 - Master 0:
until ssh core@10.109.200.80 "sudo hostnamectl set-hostname master-0.cp4s.sechu.ibm"; do sleep 1; done
Session 2 - Master 1:
until ssh core@10.109.200.90 "sudo hostnamectl set-hostname master-1.cp4s.sechu.ibm"; do sleep 1; done
Session 3 - Master 2:
until ssh core@10.109.200.100 "sudo hostnamectl set-hostname master-2.cp4s.sechu.ibm"; do sleep 1; done
Session 4 - Worker 0:
until ssh core@10.109.200.180 "sudo hostnamectl set-hostname worker-0.cp4s.sechu.ibm"; do sleep 1; done
Session 5 - Worker 1:
until ssh core@10.109.200.190 "sudo hostnamectl set-hostname worker-1.cp4s.sechu.ibm"; do sleep 1; done
Session 6 - Worker 2:
until ssh core@10.109.200.200 "sudo hostnamectl set-hostname worker-2.cp4s.sechu.ibm"; do sleep 1; done
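If you'd rather not juggle six terminals, a minimal sketch that runs the same six retry loops as background jobs in a single session on the bastion:
for node in master-0:10.109.200.80 master-1:10.109.200.90 master-2:10.109.200.100 \
worker-0:10.109.200.180 worker-1:10.109.200.190 worker-2:10.109.200.200; do
  name=${node%%:*}; ip=${node##*:}
  # retry until the node is reachable, then pin its hostname
  ( until ssh core@$ip "sudo hostnamectl set-hostname $name.cp4s.sechu.ibm"; do sleep 1; done ) &
done
wait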
- Create the virtual machines with the base config:
  - Masters / Bootstrap: 4 vCPU, 16 GB memory, 150 GB disk
  - Workers: 8 vCPU, 32 GB memory, 150 GB disk
- Boot up the machines with the RHCOS ISO file. Set Linux / Red Hat Enterprise Linux 7 as the guest OS version on ESXi.
- Once the image has started, press TAB or E to enter the boot parameters.
Add the related information below. It must be in one line.
The IP clause follows the following pattern:
$IP ADDRESS$::$GATEWAY$:$MASK$:$HOSTNAME$:ens192:none
ens192
will be the name of network interfacenone
will instruct that there is no DHCP involved- The
coreos.inst.image_url
must point to thebios.raw.gz
file hosted on the webserver. - The
coreos.inst.ignition_url
must point to required ignition file hosted on the webserver for the given type (bootstrap, master or worker).
Master 0
ip=10.109.200.80::10.109.0.3:255.255.0.0:master-0.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/master.ign
Master 1
ip=10.109.200.90::10.109.0.3:255.255.0.0:master-1.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/master.ign
Master 2
ip=10.109.200.100::10.109.0.3:255.255.0.0:master-2.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/master.ign
Worker 0
ip=10.109.200.180::10.109.0.3:255.255.0.0:worker-0.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/worker.ign
Worker 1
ip=10.109.200.190::10.109.0.3:255.255.0.0:worker-1.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/worker.ign
Worker 2
ip=10.109.200.200::10.109.0.3:255.255.0.0:worker-2.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/worker.ign
Bootstrap
ip=10.109.200.33::10.109.0.3:255.255.0.0:bootstrap.cp4s.sechu.ibm:ens192:none \
nameserver=10.109.200.53 \
coreos.inst.install_dev=sda \
coreos.inst.image_url=http://10.109.200.222/bios.raw.gz \
coreos.inst.ignition_url=http://10.109.200.222/ignition/bootstrap.ign
- Once you have created the machines and the installation has finished, they will reboot, and now it's time to have a coffee break. The machines need approximately 20-40 minutes to form the cluster and bring Kubernetes up.
- While the machines are configuring themselves, issue the following command on the bastion server. If the desired time (20 mins) runs out, simply restart the command.
openshift-install --dir=/root/os4 wait-for bootstrap-complete --log-level info
Sample output:
INFO Waiting up to 20m0s for the Kubernetes API at https://api.cp4s.sechu.ibm:6443...
INFO API v1.16.2 up
INFO Waiting up to 40m0s for bootstrapping to complete...
- Once bootstrapping has finished, you must remove the bootstrap entries from the Load Balancer config and then restart haproxy (see the sketch below)!
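A minimal sketch, assuming the haproxy.cfg layout shown earlier (commenting the lines out by hand works just as well):
# drop the bootstrap server lines from both the 6443 and 22623 backends
sed -i '/server bootstrap /d' /etc/haproxy/haproxy.cfg
systemctl restart haproxy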
- To list the configured nodes, export the KUBECONFIG and query them:
export KUBECONFIG=/root/os4/auth/kubeconfig
oc get nodes
Example output:
NAME       STATUS   ROLES    AGE    VERSION
master-0   Ready    master   2d3h   v1.16.2+18cfcc9
master-1   Ready    master   2d3h   v1.16.2+18cfcc9
master-2   Ready    master   2d3h   v1.16.2+18cfcc9
worker-0   Ready    worker   2d3h   v1.16.2+18cfcc9
worker-1   Ready    worker   2d3h   v1.16.2+18cfcc9
worker-2   Ready    worker   2d3h   v1.16.2+18cfcc9
- Query the CSRs (Certificate Signing Requests):
oc get csr
Example output:
NAME        AGE   REQUESTOR              CONDITION
csr-9xj9x   47m   system:node:master-0   Approved,Issued
csr-hn5xg   47m   system:node:master-1   Approved,Issued
csr-zpq48   46m   system:node:master-2   Approved,Issued
csr-crrm6   12m   system:node:worker-1   Pending
csr-pr7gm   12m   system:node:worker-0   Pending
csr-xw5vz   10m   system:node:worker-2   Pending
- If there are any Pending requests, approve them manually using the NAME value:
oc adm certificate approve <name>
or approve all with a single command:
oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve
Note that new Pending requests can appear a few minutes after the first batch is approved; re-run the command until no Pending entries remain.
- To move forward, configure emptyDir storage for the image registry by issuing the following command (emptyDir is non-persistent, which is fine for a lab environment):
oc patch configs.imageregistry.operator.openshift.io cluster \
--type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
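To follow the remaining rollout, you can check the cluster operators; the installation is healthy when they all eventually report Available=True:
oc get clusteroperators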
- Issue the following command to finish the setup:
openshift-install --dir=/root/os4 wait-for install-complete
Example output:
INFO Waiting up to 30m0s for the cluster at https://api.cp4s.sechu.ibm:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/os4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cp4s.sechu.ibm
INFO Login to the console with user: kubeadmin, password: <random_generated_password>