Overview:
- OpenShift cluster, application, and user administration
- Maintenance and Troubleshooting of Kubernetes clusters
- Security management of the cluster
- User Provisioned Infrastructure (UPI)
- Cluster applications consist of multiple resources that are configured together, and each resource has a definition document and a configuration applied.
- Declarative paradigm of resource management specifies desired states that the system then configures, vs. imperative commands that manually configure the system step-by-step
Software Alphabet Soup:
- OpenShift client CLI (oc)
- Red Hat OpenShift Container Platform (RHOCP, also abbreviated ROCP), based on Kubernetes
- Single-Node OpenShift (SNO), a single-node implementation, meaning an RHOCP cluster running on a single host server, typically bare metal
Using Deployment Strategies:
- Deployment strategies change or upgrade applications/instances with or without downtime so that users barely notice a change
- Users generally access applications through a route handled by a router, so updates can focus on the DeploymentConfig object features or routing features.
- Most deployment strategies are supported through the DeploymentConfig object, and additional strategies are supported through router features.
- - Object features impact all routes that use the application
- - Router features impact targeted individual routes
- If a deployment readiness check fails, the DeploymentConfig object retries running the pod until it times out. The default time-out is 10m (600s), set in dc.spec.strategy.*params --> timeoutSeconds
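The readiness check referenced above is typically a readinessProbe on the container. A minimal sketch, assuming an HTTP health endpoint at /healthz on port 8080 (both are assumptions for illustration):
spec:
  template:
    spec:
      containers:
      - name: myapp
        readinessProbe:
          httpGet:
            path: /healthz     # assumed health endpoint
            port: 8080         # assumed container port
          initialDelaySeconds: 5
          periodSeconds: 10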
Rolling Deployment Updates:
- Default deployment strategy when none specified in the DeploymentConfig object
- Replaces instances of previous application/deployment with new versions by deploying new pods and waiting for them to become "ready" before scaling down the old version instances.
- - Waiting on the new versions to be "ready" is called a "canary test", and this method, a "canary deployment".
- Aborts if the new pods never become ready; the deployment then rolls back to its previous version.
- Should not be used if the old application version is not compatible with, and cannot run alongside, the new version. The application should be designed to handle "N-1" compatibility.
- The rollingParams defaults (see the example strategy stanza after this list):
- - updatePeriodSeconds - wait time for individual pod updates: 1
- - intervalSeconds - wait time after update for polling deployment status: 1
- - timeoutSeconds (optional) - wait time for scaling up/down event before timeout: 600
- - maxSurge (optional) - maximum percentage or number of instance rollover at one time: "25%"
- - maxUnavailable (optional) - maximum percentage or number of instances down/in-process at one time: "25%" (or 1 in OC)
- - pre and post - default to {}, are lifecycle hooks to be done before and after the rolling update
- If you want faster rollouts, use maxSurge with a high value. If you want low resource quotas and partial unavailability is okay, limit with maxUnavailable.
- If you implement complex checks (such as end-to-end workload workflows to the new instance(s)), consider a custom deployment or blue-green deployment strategy instead of a simpler rolling update.
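A sketch of a DeploymentConfig strategy stanza with the rollingParams above, shown with the default values; the pre hook command and container name are assumptions for illustration only:
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxSurge: "25%"
      maxUnavailable: "25%"
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: mysql          # assumed container name
          command: [ "/bin/true" ]      # assumed pre-rollout check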
Important:
In ROCP, maxUnavailable is 1 for all machine config pools. Red Hat recommends not changing this value (for example, to 3) for the control plane pool, but instead updating one control plane node at a time.
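To confirm the current setting on a pool, or change it if you accept the risk, something like the following should work; the worker pool and the value of 2 are illustrative only (if the grep returns nothing, the default of 1 is in effect):
[admin@rocp ~]$ oc get machineconfigpool worker -o yaml | grep maxUnavailable
[admin@rocp ~]$ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'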
Rolling Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales up the new replication controller-based instances by the maxSurge count/percentage
3. Scales down the old replication controller-based instances by the maxUnavailable count/percentage
4. Repeats #2-3 scaling until the new replication controller has reached the desired replica count and the old replication controller count has reached 0
5. Executes post lifecycle hook
Example rolling deployment demo from RH documentation:
Set up an application to roll over:
[admin@rocp ~]$ oc new-app quay.io/openshifttest/deployment-example:latest
[admin@rocp ~]$ oc expose svc/deployment-example
[admin@rocp ~]$ oc scale dc/deployment-example --replicas=3
The following tag command will cause a rollover:
[admin@rocp ~]$ oc tag deployment-example:v2 deployment-example:latest
Watch the v1 to v2 rollover with:
[admin@rocp ~]$ oc describe dc deployment-example
Perform a rolling deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> select/highlight application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Rolling, click Actions (dropdown) --> select Start Rollout
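The CLI equivalent for a DeploymentConfig-based application should be the oc rollout latest command, using the demo app name from above:
[admin@rocp ~]$ oc rollout latest dc/deployment-example
[admin@rocp ~]$ oc rollout status dc/deployment-example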
Edit Deployment configuration, image settings, and environmental variables in the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment
- In the Edit Deployment window, edit the options desired:
- - Click Pause rollouts to temporarily disable updated application rollouts
- - Click Scaling to change the number of instance replicas
- - Click Save (button)
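Roughly equivalent CLI commands, if you prefer not to use the web console (the deployment name mysql-pod is just an example):
[admin@rocp ~]$ oc rollout pause deployment/mysql-pod
[admin@rocp ~]$ oc scale deployment/mysql-pod --replicas=3
[admin@rocp ~]$ oc rollout resume deployment/mysql-pod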
Recreate Deployment Update:
- Recreate deployment strategy
- Incurs downtime because, for a brief period, no instances of your application are running.
- Old code and new code do not run at the same time.
- Basic rollout behavior
- Use when:
- - Data migration or transformation hooks must run before the new deployment starts
- - The application does not support old and new versions of code running together, as in a rolling deployment
- - The application requires an RWO (ReadWriteOnce) volume, which cannot be shared between multiple replicas
- Supports pre, mid, and post lifecycle hooks
- The recreateParams are all optional
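A sketch of a DeploymentConfig strategy stanza using Recreate with a mid lifecycle hook; the migration script, container name, and timeout are assumptions for illustration:
spec:
  strategy:
    type: Recreate
    recreateParams:
      timeoutSeconds: 600
      mid:
        failurePolicy: Abort
        execNewPod:
          containerName: mysql                                 # assumed container name
          command: [ "/bin/sh", "-c", "/opt/migrate-db.sh" ]   # assumed data migration hook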
Recreate Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales down previous deployment to 0 instances
3. Executes mid lifecycle hook
4. Scales up new deployment
5. Executes post lifecycle hook
Note:
- If number of replicas > 1, the first instance will be validated for readiness (wait for "ready") before scaling up the rest of the instance count. If the first replica fails, the deployment recreate fails and aborts.
Perform a recreate deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application node --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment Config
- - In the YAML editor, change the spec.strategy.type to Recreate
- - Click Save (button)
- web console --> Developer perspective --> Topology view --> highlight/select application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Recreate, click Actions (dropdown) --> select Start Rollout
Imperative Commands vs. Declarative Commands:
Imperative commands in Kubernetes directly manipulate the state of the system by executing specific commands, while declarative management involves defining the desired state in a configuration file that describes what the state should be.
The imperative approach lets the administrator issue step-by-step commands, where the result of each command gives the administrator the flexibility to adapt based on the previous command's response. Declarative instructions are written files, called manifests, which Kubernetes reads and applies as cluster changes to reach the state each resource manifest defines. The industry generally prefers the latter due to:
- Increased reproducibility/consistency
- Better version control
- Better GitOps methodology
Resource Manifest:
- A file in YAML or JSON format, and thus a single document that can readily be version-controlled
- Simplifies administration by encapsulating all the attributes of an application in a file, or a set of related files, which can then be applied repeatedly with consistent results, enabling the CI/CD pipelines of GitOps
Imperative command example:
[admin@rocp ~]$ kubectl create deployment mysql-pod --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --env="MYSQL_USER=dbuser" --env="MYSQL_PASSWORD=hardpassword" --env="MYSQL_DATABASE=dbname"
deployment.apps/mysql-pod created
Adding the --save-config and --dry-run=client options, plus -o yaml, allows what would have been created to be written out in that resource's configuration format and redirected to a manifest file instead of being applied to the cluster.
[admin@rocp ~]$ kubectl create deployment mysql-pod --namespace=mysql-manifest --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --replicas=1 --env="MYSQL_USER=dbuser" --env="MYSQL_PASSWORD=hardpassword" --env="MYSQL_DATABASE=dbname" --save-config --dry-run=client -o yaml > ~/mysql-deployment.yaml
[admin@rocp ~]$ cat mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mysql-manifest
  annotations:
    ...
  creationTimestamp: null
  labels:
    app: mysql-pod
  name: mysql-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql-pod
    spec:
      containers:
      - image: registry.ocp4.mindwatering.net:8443/mysql:latest
        name: mysql
        env:
        - name: MYSQL_USER
          value: dbuser
        - name: MYSQL_PASSWORD
          value: hardpassword
        - name: MYSQL_DATABASE
          value: dbname
        ports:
        - containerPort: 3306
        resources: {}
status: {}
Notes:
- The order of parameters matters. For example, if the --env options are moved earlier in the command, they are not added.
- Never include the password in plain text like this; abstract it with a Secret credential (see the sketch after these notes).
- Add the Service resource manifest to this one as a single file separated by the --- delimiter, or keep them in separate files that are applied together.
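A sketch of the Secret abstraction and the --- delimiter mentioned above, assuming a Secret named mysql-credentials (the name is an assumption for illustration):
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials        # assumed Secret name
  namespace: mysql-manifest
stringData:
  MYSQL_USER: dbuser
  MYSQL_PASSWORD: hardpassword
  MYSQL_DATABASE: dbname
---
The Deployment container spec can then reference the Secret instead of hard-coding the values, for example by replacing the env list with:
        envFrom:
        - secretRef:
            name: mysql-credentials
Note that the Secret manifest itself still holds the values, so it would normally be kept out of the Git repo or managed through a separate secrets workflow.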
The declarative command syntax:
[admin@rocp ~]$ kubectl create -f ~/mysql-deployment.yaml
IMPORTANT:
- The kubectl create command above does not take into account the current running state of the resource, in this case the mysql-pod resource. Executing kubectl create -f against a manifest for a live resource gives an error because mysql-pod is already running. When using kubectl create to create/deploy a resource, the --save-config option produces the annotations required for future kubectl apply commands to operate.
- In contrast, the kubectl apply -f command is declarative: it considers the difference between the current resource state in the cluster and the intended resource state expressed in the manifest. If the resource specified in the manifest file does not exist, then the kubectl apply command creates it. If any fields in the last-applied-configuration annotation of the live resource are not present in the manifest, then the command removes those fields from the live configuration. After applying changes to the live resource, the kubectl apply command updates the last-applied-configuration annotation of the live resource to account for the change.
- The kubectl apply command compares: the manifest file, the live configuration of the resource(s) in the cluster, and the configuration stored in the last-applied-configuration annotation.
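To see what kubectl apply compares against, the last-applied-configuration annotation on the live resource can be inspected directly; for example (note the escaped dots in the jsonpath expression):
[admin@rocp ~]$ kubectl get deployment mysql-pod -n mysql-manifest -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'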
To help verify syntax and whether an applied manifest update would succeed, use the --dry-run=server and --validate=true flags. The --dry-run=client option does not include the validation that the cluster's resource controllers perform during a server-side dry run.
[admin@rocp ~]$ kubectl apply -f ~/mysql-deployment.yaml --dry-run=server --validate=true
deployment.apps/mysql-pod created (server dry-run)
Diff Tools vs Kubectl Diff:
Kubernetes resource controllers automatically add annotations and attributes to the live resource, which makes the output of generic OS text-based diff tools report many differences that have no impact on the resource configuration, causing confusion and wasted time. Using the kubectl diff command confirms whether a live resource matches the resource configuration that a manifest provides. Because other tools cannot know all the details of how controllers might change a resource, the kubectl diff command lets the cluster itself determine whether a change is meaningful. Moreover, GitOps tools depend on the kubectl diff command to determine whether anyone changed resources outside the GitOps workflow.
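For example, to compare the earlier manifest against the live resource (kubectl diff exits 0 when there are no differences and 1 when differences are found):
[admin@rocp ~]$ kubectl diff -f ~/mysql-deployment.yaml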
OC Diff Update:
Like the kubectl diff command, oc diff compares a live resource's configuration against the manifest file specified in the command. Note that applying manifest changes to Secrets and ConfigMaps may not generate new pods, because those values are only read at deployment/pod start-up; if the configuration change requires a restart, it has to be done separately. The pods could simply be deleted, but the oc rollout restart command stops and replaces pods in a way that minimizes downtime.
[admin@rocp ~]$ oc diff -f mysql-pod.yaml
or
[admin@rocp ~]$ cat mysqlservice.yaml | oc diff -f -
[admin@rocp ~]$ oc rollout restart deployment mysql-pod
OC Patch Update:
The oc patch command allows partial YAML or JSON snippets to be applied to live resources in a repeatable, declarative way. The patch applies to a deployment/pod regardless of whether the patched configuration already exists in the manifest YAML file - existing configuration is updated, and new configuration is added.
[admin@rocp ~]$ oc patch deployment mysql-pod -p '{<insert json snippet>}'
deployment/mysql-pod patched
[admin@rocp ~]$ oc patch deployment mysql-pod --patch-file ~/mysql-deploypatch.yaml
deployment/mysql-pod patched
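As a concrete example of the inline form, a strategic merge patch that bumps the replica count (the value 2 is only illustrative):
[admin@rocp ~]$ oc patch deployment mysql-pod -p '{"spec":{"replicas":2}}'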
CLI Reference:
docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/cli_tools/index#cli-developer-commands
Creating Manifests from Git:
Maintaining application manifests in Git provides version control and the ability to deploy new versions of apps from Git. When you set up your Git access, you typically create a folder structure in a specific location from which the clone and checkout commands are run.
In this example, our git folder/project is: ~/gitlab.mindwatering.net/mwdev/mysql-deployment/
The version numbers are set as tags when you commit; this Git repo has tags v1.0 and v1.1.
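For reference, the tags would have been created and pushed at commit time with something like the following (the tag message is illustrative):
[admin@rocp ~]$ git tag -a v1.1 -m "mysql-deployment manifest v1.1"
[admin@rocp ~]$ git push origin v1.1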
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.rocp.mindwatering.net:6443
Login successful ...
b. Create new OC project:
[admin@rocp ~]$ oc new-project mysql-deployment
Now using project "mysql-deployment" on server ...
c. Clone the repo and switch to v1.1:
[admin@rocp ~]$ cd ~/gitlab.mindwatering.net/mwdev/
[admin@rocp ~]$ git clone https://gitlab.mindwatering.net/mwdev/mysql-deployment.git
Cloning into 'mysql-deployment' ...
[admin@rocp ~]$ git log --oneline
... (HEAD -> main, tag: <branchversion>, origin ...
<Note the tag version number. That's the version of the application manifest for mwsqldb>
[admin@rocp ~]$ cd mysql-deployment/
[admin@rocp ~]$ git checkout v1.1
branch 'v1.1' set up to track 'origin/v1.1' ...
d. In the app's folder, validate the v1.1 version of the mysql-deployment application can be deployed:
[admin@rocp ~]$ oc apply -f . --validate=true --dry-run=server
<confirm dry run>
e. After a successful dry-run, deploy the application:
[admin@rocp ~]$ oc apply -f .
f. Watch the deployments and pod instances and confirm the new app becomes available and its pods have a running state:
(Technically, this watch command looks at all deployments and pods, not just the one just deployed, so we will likely see much more than just the new app.)
[admin@rocp ~]$ watch oc get deployments,pods
Every 2.0s: oc get deployments,pods ...
NAME READY UP-TO-DATE AVAILABLE AGE
...
deployment.apps/mysql-pod 1/1 1 1 60s
...
NAME READY STATUS RESTARTS AGE
...
pod/mysql-pod-6fddbbf94f-2pghj 1/1 Running 0 60s
...
<ctrl+c>, to end the watch
g. Review the current deployment manifest:
[admin@rocp ~]$ oc get deployment mysql-pod -o yaml
<view output>
h. Confirm still in the git working folder, and delete the current running deployment:
[admin@rocp ~]$ pwd
.../gitlab.mindwatering.net/mwdev/mysql-deployment/
[admin@rocp ~]$ oc delete -f .
<view components deleted>