Thursday, July 31, 2025

Cloud CI-CD Cheat Sheet

In 2024, we checked out the GitLab Cheat Sheet to streamline collaborative workflows and leverage CI/CD pipelines. However, it is interesting to tell the back story of how we got from the 1990s to modern-day CI/CD.

Let's check it out!

Evolution of Software Deployment: Physical Servers to Container Orchestration

Era of Physical Servers: 1990s and Before
Back in the 1990s, software was predominantly deployed directly onto physical servers, often housed in on-premises data centers. Each server was typically dedicated to a specific application [or set of applications].

Challenges: Scalability, Isolation, Resource Utilization
  scaling involved procuring, setting up, and deploying to additional physical servers = time consuming + expensive
  multiple apps could interfere with one another, leading to system crashes or other performance issues
  some servers sat underutilized while others were overwhelmed = inefficient resource distribution

Dawn of Virtualization: 2000s
The introduction of virtualization technologies like those provided by VMware allowed multiple Virtual Machines [VMs] to run on a single physical server, each VM operating as though it were on its own dedicated hardware.

Benefits: Resource Efficiency, Isolation, Snapshot + Cloning
   multiple VMs could share the resources of a single server, leading to better resource utilization
   VMs provided a new level of isolation btwn apps = failure of one VM did not affect other VMs
   VM state could be saved + cloned, making it easier to replicate environments for scaling

Containerization: Rise of Docker
The next significant shift was containerization, with Docker at the forefront. Unlike VMs, containers share the host OS and run in isolated user space, which makes them lightweight, portable, and able to start up / shut down more rapidly.

Advantages: Speed, Portability, Density
   containers start almost instantly i.e. applications can be launched and scaled in a matter of seconds
   container images are consistent across environments = "it works on my machine" issues minimized
   lightweight nature = many containers can run on a single host machine = better resource utilization than VMs

Container Orchestration: Enter Kubernetes
Increased container adoption prompted the need for container orchestration technologies like Kubernetes to manage, scale, and monitor containerized applications, especially those hosted by managed Cloud providers.

Functions: Auto-scaling, Self-healing, Load Balancing, Service Discovery
   orchestration systems can automatically scale apps based on demand or sudden traffic spikes
   if a container or node fails then the orchestrator can restart or replace it = increased reliability!
   incoming requests are automatically distributed across containers, ensuring optimal performance
   as containers move across nodes, services can be discovered without any manual intervention
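As an illustration of the auto-scaling function, here is a hedged sketch of a HorizontalPodAutoscaler manifest; the resource names and thresholds are purely illustrative, not part of the demo:

```yaml
# Hypothetical HPA: scale a flask-api Deployment between 2 and 10 replicas
# whenever average CPU utilization across its Pods exceeds 80%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flask-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

Applied with `kubectl apply -f`, the orchestrator then handles demand spikes without manual intervention.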

Summary of Definitions
Docker
   Platform as a Service product that uses OS-level virtualization to deliver software in packages called containers
   Containers are isolated from one another and bundle their own software, libraries, and configurations
   All containers share a single OS kernel on the host, thus using fewer resources than Virtual Machines

Kubernetes
   Open-source container orchestration system automating app deployment, scaling, and management
   Runs containerized applications across a cluster of host machines, in containers typically built using Docker

Helm
   Kubernetes package manager that simplifies managing and deploying applications to clusters via "Charts"
   Helm separates configuration out into Values files so deployments can be scaled out across all environments
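As a sketch of how Charts separate configuration, here is a hypothetical per-environment Values file; all keys and values are illustrative, not taken from an actual Chart:

```yaml
# values-dev.yaml [hypothetical]: per-environment overrides for a flask-api Chart
replicaCount: 1
image:
  repository: flask-api
  tag: latest
service:
  type: ClusterIP
  port: 8080
```

The same Chart could then be installed per environment, e.g. `helm install flask-api ./chart -f values-dev.yaml` for DEV, with a different Values file for Prod.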

Summary of Technology
Docker
   Dockerfile = text file that contains all commands used to assemble a Docker image
   Image = executable package template that includes code, runtime, environment variables and config
   Container = running instance of a Docker image isolated from other processes running on host

Kubernetes
   Namespace = scope for cluster resources and a way to isolate Kubernetes objects
   Workload = containerized application running within the Kubernetes cluster
   Pod = smallest deployable unit as created and managed in Kubernetes
   Node = machine on which workloads run; workloads are placed in Containers on Pods to be run on Nodes
   ReplicaSet = maintains a stable set of replica Pods available and running at any time
   Deployment = provides a declarative way to update Pods and ReplicaSets
   Service = abstract way to expose an application running on a set of Pods

DEMO Hello World
   Execute code on localhost [IDE]
   Build Docker image locally
   Provision local Kubernetes cluster
 TEST after deployment
 curl http://localhost:8080
 Hello World

Python Flask API application:
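The Flask source itself is not reproduced in this post, so here is a minimal sketch of what the Hello World API could look like; the filename `app.py` and the root route are assumptions:

```python
# app.py -- hypothetical sketch of the demo's Flask API (actual source not shown)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Response body returned by `curl http://localhost:8080`
    return "Hello World"

# To serve inside the container, the app would bind all interfaces, e.g.:
#   app.run(host="0.0.0.0", port=8080)
```

Binding `0.0.0.0` rather than `127.0.0.1` matters in a container, since requests arrive via the published port mapping rather than the container's loopback interface.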

DEMO Docker Commands
  # Create KinD cluster
  kind create cluster --name flask-cluster
  # Create Dockerfile | Build Docker image
  docker build --pull --rm -f "Dockerfile" -t flask-api:latest "."
  # Execute Docker container
  docker run --rm -d -p 8080:8080/tcp flask-api:latest
  # Test endpoint
  curl http://localhost:8080

Dockerfile
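The Dockerfile itself is not reproduced here; a minimal sketch that would satisfy the `docker build` command in the demo above might look like this (base image and file names such as `app.py` and `requirements.txt` are assumptions):

```dockerfile
# Hypothetical Dockerfile for the Flask demo; base image and file names assumed
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 8080
CMD ["python", "app.py"]
```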
KinD [Kubernetes in Docker] is a tool for running a local Kubernetes cluster using Docker container "nodes".

DEMO Kubernetes Commands
  # Load image into KinD cluster
  kind load docker-image flask-api:latest --name flask-cluster
  # Setup KinD cluster
  kubectl create ns test-ns
  kubectl config set-context --current --namespace=test-ns
  # Rollout Kubernetes Deployment and Service resources
  kubectl apply -f Kubernetes.yaml
  # Test endpoint
  curl http://localhost:8080

Kubernetes.yaml
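The manifest itself is not reproduced here; a hedged sketch of a Kubernetes.yaml with a hardcoded Deployment exposed as a Service (names and ports assumed to match the demo) might look like:

```yaml
# Hypothetical Kubernetes.yaml: Deployment exposed as a Service, all values hardcoded
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api
          image: flask-api:latest
          imagePullPolicy: Never   # use the image loaded via `kind load docker-image`
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flask-api
spec:
  selector:
    app: flask-api
  ports:
    - port: 8080
      targetPort: 8080
```

Note that with KinD, reaching the Service at localhost:8080 typically also requires a port-forward (e.g. `kubectl port-forward service/flask-api 8080:8080`) or extraPortMappings in the KinD cluster config.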


LIMITATIONS
DEMO Hello World is sufficient to demonstrate the process on localhost but has many real-world limitations!

Limitations
   Everything is on localhost - Cloud Computing typically requires Kubernetes cluster(s)
   Manually build Docker image from the Dockerfile
   Manually push Docker image to container registry
   Manually deploy running Docker container into Kubernetes cluster [Deployment exposed as Service]
   All Kubernetes resource values are hardcoded into declarative YAML file [Deployment and a Service]
   No facility to scale deployment across multiple environments: DEV, IQA, UAT, Prod
   Environment variables can be injected, but doing so is a brittle and cumbersome process
   No immediate and secure way to inject secret information into the deployment [e.g. a secret password]

Solution
The next step is to integrate a GitLab CI/CD pipeline to solve these issues and automate the build and deployment process!
This will be the topic of the next post.