Thursday, July 31, 2025

Cloud CI-CD Cheat Sheet

In 2024, we checked out the GitLab Cheat Sheet to streamline collaborative workflows that leverage CI/CD pipelines. However, it is interesting to tell the backstory of how we got from the 1990s to modern-day CI/CD.

Let's check it out!

Evolution of Software Deployment: Physical Servers to Container Orchestration

Era of Physical Servers: 1990s and Before
Back in the 1990s, software was predominantly deployed directly onto physical servers, often housed in on-premises data centers. Each server was typically dedicated to a specific application [or set of applications].

Challenges: Scalability, Isolation, Resource Utilization
  scaling involved procuring, setting up, and deploying to additional physical servers = time consuming + expensive
  multiple apps could interfere with one another, leading to system crashes or other performance issues
  some servers sat underutilized while others were overwhelmed = inefficient resource distribution

Dawn of Virtualization: 2000s
Introduction of virtualization technologies, like those provided by VMware, allowed multiple Virtual Machines [VMs] to run on a single physical server, with each VM operating as though it were on its own dedicated hardware.

Benefits: Resource Efficiency, Isolation, Snapshot + Cloning
   multiple VMs could share the resources of a single server, leading to better resource utilization
   VMs provided a new level of isolation between apps = failure of one VM did not affect the others
   VM state could be saved + cloned, making it easier to replicate environments for scaling

Containerization: Rise of Docker
The next significant shift was containerization, with Docker at the forefront. Unlike VMs, containers share the host OS kernel and run in isolated user space, which makes them lightweight, portable, and much faster to start up and shut down.

Advantages: Speed, Portability, Density
   containers start almost instantly i.e. applications launched and scaled in a matter of seconds
   container images are consistent across environments = "it works on my machine" issues minimized
   lightweight nature = many containers run on a single host machine = better resource utilization than VMs

Container Orchestration: Enter Kubernetes
Increased container adoption prompted the need for container orchestration technologies like Kubernetes to manage, scale, and monitor containerized applications, especially those hosted by managed Cloud providers.

Functions: Auto-scaling, Self-healing, Load Balancing, Service Discovery
   orchestration systems can automatically scale apps based on demand or sudden traffic spikes [sketch below]
   if a container or node fails then the orchestrator can restart or replace it = increased reliability!
   incoming requests are automatically distributed across containers, ensuring optimal performance
   as containers move across nodes, services can be discovered without any manual intervention
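
For example, auto-scaling can be attached to a workload with a single command. A minimal sketch, assuming a hypothetical Deployment named flask-api is already running in the cluster:
  # Keep between 1 and 5 replicas, scaling when average CPU utilization exceeds 80%
  kubectl autoscale deployment flask-api --min=1 --max=5 --cpu-percent=80
  # Inspect the resulting HorizontalPodAutoscaler
  kubectl get hpa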

Summary of Definitions
Docker
   Platform as a Service product that uses OS-level virtualization to deliver software in packages called containers
   Containers are isolated from one another and bundle their own software, libraries, and configurations
   All containers share a single OS kernel on the host and thus use fewer resources than Virtual Machines

Kubernetes
   Open-source container orchestration system automating app deployment, scaling and management
   Runs containerized applications across a cluster of host machines, from containers typically built using Docker

Helm
   Kubernetes package manager that simplifies managing and deploying applications to clusters via "Charts"
   Helm separates configuration out into Values files so deployments can be scaled out across all environments
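
For instance, the same Chart can be rolled out per environment with different Values files. A minimal sketch, assuming a hypothetical chart directory ./flask-api-chart with values-dev.yaml and values-prod.yaml files:
  # Install or upgrade the same Chart with environment-specific Values files
  helm upgrade --install flask-api ./flask-api-chart --values values-dev.yaml --namespace dev
  helm upgrade --install flask-api ./flask-api-chart --values values-prod.yaml --namespace prod
  # Render the fully resolved manifests locally without deploying anything
  helm template flask-api ./flask-api-chart --values values-dev.yaml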

Summary of Technology
Docker
   Dockerfile   text file that contains all commands used to assemble a Docker image template
   Image        executable package that includes code, runtime, environment variables and config
   Container    running instance of a Docker image isolated from other processes running on host

Kubernetes
   Namespace    scope for cluster resources and a way to isolate Kubernetes objects
   Workload     containerized application running within the Kubernetes cluster
   Pod          smallest deployable unit as created and managed in Kubernetes
   Node         worker machine on which Pods [and their Containers] are scheduled to run
   ReplicaSet   maintains a stable set of replica Pods available and running at any time
   Deployment   provides a declarative way to update Pods and ReplicaSets
   Service      abstract way to expose an application running on a set of Pods

DEMO Hello World
   Execute code on localhost [IDE]
   Build Docker image locally
   Provision local Kubernetes cluster
 TEST after deployment
 curl http://localhost:8080
 Hello World

Python Flask API application:
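
A sketch of what the Flask API might look like [an assumed app.py; the root endpoint simply returns the greeting used in the tests below]:
  # app.py - minimal Flask API returning a greeting on the root endpoint
  from flask import Flask

  app = Flask(__name__)

  @app.route("/")
  def hello():
      return "Hello World (Python)!"

  if __name__ == "__main__":
      # Listen on all interfaces so the Docker port mapping 8080:8080 works
      app.run(host="0.0.0.0", port=8080)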

DEMO Docker Commands
  # Create KinD cluster
  kind create cluster --name flask-cluster
  # Create Dockerfile | Build Docker image
  docker build --pull --rm -f "Dockerfile" -t flask-api:latest "."
  # Execute Docker container
  docker run --rm -d -p 8080:8080/tcp flask-api:latest
  # Test endpoint
  curl http://localhost:8080

Dockerfile
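A minimal sketch of what the Dockerfile might contain, assuming the app.py above and a hypothetical requirements.txt that lists flask:
  # Dockerfile - package the Flask API into a lightweight image
  FROM python:3.11-slim
  WORKDIR /app
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt
  COPY app.py .
  EXPOSE 8080
  CMD ["python", "app.py"]
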
KinD [Kubernetes in Docker] is a tool for running a local Kubernetes cluster using Docker containers as "nodes".

DEMO Kubernetes Commands
  # Load image into KinD cluster
  kind load docker-image flask-api:latest --name flask-cluster
  # Setup KinD cluster
  kubectl create ns test-ns
  kubectl config set-context --current --namespace=test-ns
  # Rollout Kubernetes Deployment and Service resources
  kubectl apply -f Kubernetes.yaml
  # Test endpoint
  curl http://localhost:8080

Kubernetes.yaml
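A minimal sketch of what Kubernetes.yaml might contain: a Deployment running the flask-api image plus a Service exposing it [names and ports assumed to match the flask-api-service port-forward commands used later]:
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: flask-api
    labels:
      app: flask-api
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: flask-api
    template:
      metadata:
        labels:
          app: flask-api
      spec:
        containers:
        - name: flask-api
          image: flask-api:latest
          imagePullPolicy: IfNotPresent   # use the image loaded into the KinD cluster
          ports:
          - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: flask-api-service
  spec:
    selector:
      app: flask-api
    ports:
    - port: 80          # Service port used by kubectl port-forward 8080:80
      targetPort: 8080  # container port the Flask app listens on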


LIMITATIONS
The DEMO Hello World is sufficient to demonstrate the process on localhost but has many real-world limitations!

Limitations
   Everything is on localhost - Cloud Computing typically requires Kubernetes cluster(s)
   Docker image must be built manually from the Dockerfile
   Docker image must be pushed manually to a container registry
   Docker container must be deployed manually into the Kubernetes cluster [Deployment exposed as Service]
   All Kubernetes resource values are hardcoded into the declarative YAML file [Deployment and Service]
   No facility to scale the deployment across multiple environments: DEV, IQA, UAT, Prod
   Environment variables can be injected but this is a brittle and cumbersome process [sketch below]
   No immediate and secure way to inject secret information into the deployment [e.g. a secret password]
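
To illustrate the environment variable point above, injection is typically done imperatively per environment. A sketch, assuming the hypothetical flask-api Deployment and secret names below:
  # Each environment [DEV, IQA, UAT, Prod] needs its own manual invocation
  kubectl set env deployment/flask-api APP_ENVIRONMENT=DEV
  # Secrets can be created and wired up manually too, but the process is neither
  # immediate nor easy to keep in sync across environments
  kubectl create secret generic flask-api-secret --from-literal=DATABASE_PASSWORD=changeme
  kubectl set env deployment/flask-api --from=secret/flask-api-secret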

Solution
The next step is to integrate a GitLab CI/CD pipeline to solve these issues and automate the build and deployment process!
This will be the topic of the next post.

Monday, June 2, 2025

Cloud Setup Cheat Sheet II

In the previous post, we checked out the Cloud Setup Cheat Sheet to explain the cluster provisioning process for managed cloud providers such as Azure AKS. Now we will resume provisioning clusters: Amazon EKS and Google GKE.
Let's check it out!

Pre-Requisites
This blog post assumes Azure, AWS, and GCP accounts are set up, plus all the corresponding CLIs are configured!

AWS Login
Navigate to https://aws.amazon.com | Sign In | Sign in using root user email. Root user | Root user email address e.g. steven_boland@hotmail.com | Next | Enter password. Setup AWS Multi-Factor Authentication.

AWS Single Sign On
To access AWS clusters programmatically, it is recommended to set up and configure AWS SSO. Example config:
  sso_start_url = https://stevepro.awsapps.com/start
  sso_region = eu-west-1
  sso_account_id = 4xxxxxxxxxx8
  sso_role_name = AdministratorAccess
  region = eu-west-1
  output = json
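
These settings typically live in ~/.aws/config under a named profile header, for example a hypothetical [profile stevepro] section. Once saved, authenticate and verify that credentials resolve:
  # Log in via the browser-based SSO flow for the configured profile
  aws sso login --profile stevepro
  # Confirm the assumed role and account
  aws sts get-caller-identity --profile stevepro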

eksctl
Command-line tool that abstracts complexity involved in setting up AWS EKS clusters. Here is how to install:

Linux
 curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
 sudo mv /tmp/eksctl /usr/local/bin

Mac OS/X
 brew tap eksctl-io/eksctl
 brew install eksctl

Windows
 Launch PowerShell
 choco install eksctl


Master Key
Next, create master SSH key for secure, automated and controlled access to your Kubernetes infrastructure:
 cd ~/.ssh
 ssh-keygen -t rsa -b 4096 -N '' -f master_ssh_key
 eval $(ssh-agent -s)
 ssh-add master_ssh_key


Amazon EKS
Amazon provides Elastic Kubernetes Service as a fully managed Kubernetes container orchestration service. Follow all instructions below in order to provision a Kubernetes cluster and test its functionality end-to-end. Download code sample here.

Pre-Requisites
  aws sso login

Check Resources
  aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --output table
  aws ec2 describe-addresses --query 'Addresses[*].PublicIp' --output table
  aws ec2 describe-key-pairs --query 'KeyPairs[*].KeyName' --output table
  aws ec2 describe-volumes --query 'Volumes[*].VolumeId' --output table
  aws ec2 describe-vpcs --query 'Vpcs[*].VpcId' --output table
  aws cloudformation list-stacks --query 'StackSummaries[*].StackName' --output table
  aws cloudwatch describe-alarms --query 'MetricAlarms[*].AlarmName' --output table
  aws ecr describe-repositories --query 'repositories[*].repositoryName' --output table
  aws ecs list-clusters --query 'clusterArns' --output table
  aws eks list-clusters --query 'clusters' --output table
  aws elasticbeanstalk describe-environments --query 'Environments[*].EnvironmentName' --output table
  aws elb describe-load-balancers --query 'LoadBalancerDescriptions[*].LoadBalancerName' --output table
  aws elbv2 describe-load-balancers --query 'LoadBalancers[*].LoadBalancerName' --output table
  aws iam list-roles --query 'Roles[*].RoleName' --output table
  aws iam list-users --query 'Users[*].UserName' --output table
  aws lambda list-functions --query 'Functions[*].FunctionName' --output table
  aws rds describe-db-instances --query 'DBInstances[*].DBInstanceIdentifier' --output table
  aws route53 list-hosted-zones --query 'HostedZones[*].Name' --output table
  aws s3 ls
  aws sns list-topics --query 'Topics[*].TopicArn' --output table
  aws sqs list-queues --query 'QueueUrls' --output table
  aws ssm describe-parameters --query 'Parameters[*].Name' --output table

Cluster YAML
  kind: ClusterConfig
  apiVersion: eksctl.io/v1alpha5
  
  metadata:
    name: stevepro-aws-eks
    region: eu-west-1
    version: "1.27"
    tags:
      createdBy: stevepro
  
  kubernetesNetworkConfig:
    ipFamily: IPv4
  
  iam:
    withOIDC: true
    serviceAccounts:
    - metadata:
        name: ebs-csi-controller-sa
        namespace: kube-system
      attachPolicyARNs:
      - "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
      roleOnly: true
      roleName: stevepro-aws-eks-AmazonEKS_EBS_CSI_DriverRole
  
  addons:
  - name: aws-ebs-csi-driver
    version: v1.38.1-eksbuild.2
    serviceAccountRoleARN: arn:aws:iam::4xxxxxxxxxx8:role/stevepro-aws-eks-AmazonEKS_EBS_CSI_DriverRole
  - name: vpc-cni
    version: v1.19.2-eksbuild.1
  - name: coredns
    version: v1.10.1-eksbuild.18
  - name: kube-proxy
    version: v1.27.16-eksbuild.14
  
  nodeGroups:
    - name: stevepro-aws-eks
      instanceType: m5.large
      desiredCapacity: 0
      minSize: 0
      maxSize: 3
      ssh:
        allow: true
        publicKeyPath: ~/.ssh/master_ssh_key.pub
      preBootstrapCommands:
        - "true"

Create Cluster
  eksctl create cluster -f ~/stevepro-awseks/cluster.yaml          \
     --kubeconfig ~/stevepro-awseks/kubeconfig                     \
     --verbose 5

Scale Nodegroup
  eksctl scale nodegroup                                           \
     --cluster=stevepro-aws-eks                                    \
     --name=stevepro-aws-eks                                       \
     --nodes=3                                                     \
     --nodes-min=0                                                 \
     --nodes-max=3                                                 \
     --verbose 5

Deploy Test
  kubectl create ns test-ns
  kubectl config set-context --current --namespace=test-ns
  kubectl apply -f Kubernetes.yaml
  kubectl port-forward service/flask-api-service 8080:80
  curl http://localhost:8080

Output
  Hello World (Python)!

Shell into Node
  kubectl get po -o wide
  cd ~/.ssh
  ssh -i master_ssh_key ec2-user@node-ip-address

Cleanup
  kubectl delete -f Kubernetes.yaml
  kubectl delete ns test-ns

Delete Cluster
  eksctl delete cluster                                            \
     --name=stevepro-aws-eks                                       \
     --region eu-west-1                                            \
     --force

ERRORS
Error: getting availability zones for region operation error EC2: DescribeAvailabilityZones, StatusCode: 403
Reference: Dashboard | IAM | Users | SteveProXNA | Permissions | Add Permission | AdministratorAccess:
  {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": "*",
             "Resource": "*"
         }
     ]
  }

Error: unable to determine AMI from SSM Parameter Store: operation SSM: GetParameter, StatusCode: 400
AWS Dashboard | IAM | Users | SteveProXNA | Create new group | Permission | AdministratorAccess-Amplify
  {
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": ],
                 "ssm:GetParameter",
                 "ssm:GetParameters"
             ],
             "Resource": "arn:aws:ssm:*:*:parameter/aws/service/eks/optimized-ami/*"
         },
         {
             "Effect": "Allow",
             "Action": "ec2:DescribeImages",
             "Resource": "*"
         }
     ]
  }


Google GKE
Google provides the Google Kubernetes Engine as a fully managed Kubernetes container orchestration service. Follow all instructions below in order to provision a Kubernetes cluster and test its functionality end-to-end.
Download code sample here.

Pre-Requisites
  gcloud auth login
  gcloud auth application-default login
  gcloud auth configure-docker
  gcloud config set project SteveProProject

Check Resources
  gcloud compute instances list
  gcloud compute disks list
  gcloud compute forwarding-rules list
  gcloud compute firewall-rules list
  gcloud compute addresses list
  gcloud container clusters list

Create Cluster
   gcloud container clusters create stevepro-gcp-gke               \
     --project=steveproproject                                     \
     --zone europe-west1-b                                         \
     --machine-type=e2-standard-2                                  \
     --disk-type pd-standard                                       \
     --cluster-version=1.30.10-gke.1070000                         \
     --num-nodes 3                                                 \
     --network=default                                             \
     --create-subnetwork=name=stevepro-gcp-gke-subnet,range=/28    \
     --enable-ip-alias                                             \
     --enable-intra-node-visibility                                \
     --logging=NONE                                                \
     --monitoring=NONE                                             \
     --enable-network-policy                                       \
     --labels=prefix=stevepro-gcp-gke,created-by=${USER}           \
     --no-enable-managed-prometheus                                \
     --quiet --verbosity debug

Get Credentials
  gcloud container clusters get-credentials stevepro-gcp-gke       \
     --zone=europe-west1-b                                         \
     --quiet --verbosity debug

IMPORTANT - if you do not have the gke-gcloud-auth-plugin installed then execute the following commands:
  gcloud components install gke-gcloud-auth-plugin
  gke-gcloud-auth-plugin --version

Deploy Test
  kubectl create ns test-ns
  kubectl config set-context --current --namespace=test-ns
  kubectl apply -f Kubernetes.yaml
  kubectl port-forward service/flask-api-service 8080:80
  curl http://localhost:8080

Output
  Hello World (Python)!

Shell into Node
  mkdir -p ~/GitHub/luksa
  cd ~/GitHub/luksa
  git clone https://github.com/luksa/kubectl-plugins.git
  cd kubectl-plugins
  chmod +x kubectl-ssh
  kubectl get nodes
  ./kubectl-ssh node gke-stevepro-gcp-gke-default-pool-0b4ca8ca-sjpj

Cleanup
  kubectl delete -f Kubernetes.yaml
  kubectl delete ns test-ns

Delete Cluster
  gcloud container clusters delete stevepro-gcp-gke                \
     --zone europe-west1-b                                         \
     --quiet --verbosity debug

Summary
To summarize, we have now set up and provisioned Azure AKS, Amazon EKS and Google GKE clusters with end-to-end tests. In the future we could explore provisioning AWS and GCP kubeadm clusters using Terraform!

Monday, May 5, 2025

Cloud Setup Cheat Sheet

In 2024, we checked out the GitLab Cheat Sheet to streamline collaborative team workflows that leverage CI/CD pipelines. Now, we will explain the cluster provisioning process for managed cloud providers: Azure, AWS + GCP.
Let's check it out!

Pre-Requisites
This blog post assumes Azure, AWS, and GCP accounts are set up. The following links document paid or free tier accounts:
 Azure [Microsoft]  AZ  PAID Tier Account  FREE Tier Account
 Amazon Web Services  AWS  PAID Tier Account  FREE Tier Account
 Google Cloud Platform  GCP  PAID Tier Account  FREE Tier Account

Azure CLI
The Azure Command Line Interface is a set of commands used to create and manage Azure resources. The CLI is available across Azure services and is designed to get you working with Azure quickly, with an emphasis on automation.

Linux
Install the Azure CLI on Linux | Choose an installation method e.g. apt (Ubuntu, Debian) | Launch Terminal
 curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

Mac OS/X
Install Azure CLI on Mac OS/X | Install with Homebrew | Install Homebrew manager if you haven't already!
 brew update && brew install azure-cli

Windows
Install Azure CLI on Windows | Microsoft Install (MSI) | Download the Latest MSI of the Azure CLI (64-bit)
 Download and install https://aka.ms/installazurecliwindowsx64

After installing the Azure CLI on Linux, Mac OS/X, Windows confirm the current working version of the CLI:
 az version


AWS CLI
The AWS Command Line Interface is a unified tool used to manage your AWS services. Use the AWS CLI to download, configure, and control AWS services from the command line and automate them through scripts.

Linux
Install the AWS CLI on Linux | Linux tab | Command line installer - Linux x86 (64-bit) | Launch the Terminal
 curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
 unzip awscliv2.zip
 sudo ./aws/install

Mac OS/X
Install the AWS CLI on MacOS/X | macOS tab | GUI installer | Download the macOS pkg file AWSCLIV2.pkg
 Download and install https://awscli.amazonaws.com/AWSCLIV2.pkg

Windows
Install the AWS CLI on Windows | Windows tab | Download MSI | Download Windows (64-bit) AWSCLIV2.msi
 Download and install https://awscli.amazonaws.com/AWSCLIV2.msi

After installing the AWS CLI on Linux, Mac OS/X, Windows confirm the current working version of the CLI:
 aws --version


GCP CLI
The GCP Command Line Interface is used to create and manage Google Cloud resources + services directly from the command line and to perform common platform tasks faster by controlling cloud resources at scale.

Linux
Install the gcloud CLI | Linux tab | Platform Linux 64-bit (x86_64) | Launch Terminal + execute commands:
 curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
 tar -xf google-cloud-cli-linux-x86_64.tar.gz
 cd google-cloud-sdk
 ./install.sh

Mac OS/X
Install the gcloud CLI | macOS tab | Platform macOS 64-bit (ARM64, Apple silicon) | Launch Terminal
 curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-darwin-arm.tar.gz
 tar -xf google-cloud-cli-darwin-arm.tar.gz
 cd google-cloud-sdk
 ./install.sh

Windows
Install the gcloud CLI | Windows tab | Download the Google Cloud CLI installer GoogleCloudSDKInstaller.exe
 Download and install https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe

After installing the gcloud CLI on Linux, Mac OS/X, Windows confirm the current working version of the CLI:
 gcloud init
 gcloud version


Master Key
Next, create master SSH key for secure, automated and controlled access to your Kubernetes infrastructure:
 cd ~/.ssh
 ssh-keygen -t rsa -b 4096 -N '' -f master_ssh_key
 eval $(ssh-agent -s)
 ssh-add master_ssh_key


Azure AKS
Microsoft provides Azure Kubernetes Service as a fully managed Kubernetes container orchestration service. Follow all instructions below in order to provision a Kubernetes cluster and test its functionality end-to-end.
Download code sample here.

Pre-Requisites
  az login

Check Resources
  az account list --output table
  az group list --output table
  az resource list --output table
  az resource list --query "[?location=='northeurope']" --output table
  az vm list --output table
  az aks list --output table
  az container list --output table
  az storage account list --output table
  az network public-ip list --output table

Create Group
  az group create --name stevepro-azraks-rg --location northeurope --debug

Service Principal
  az ad sp create-for-rbac --name ${USER}-sp --skip-assignment

Output
  {
     "appId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
     "displayName": "stevepro-sp",
     "password": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
     "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  }

Export
  export AZ_SP_ID=<value_from_appId>
  export AZ_SP_PASSWORD=<value_from_password>

Create Cluster
  az aks create --name stevepro-azraks                 \
     --resource-group stevepro-azraks-rg               \
     --dns-name-prefix stevepro-azraks                 \
     --node-count 3                                    \
     --node-vm-size Standard_D2s_v3                    \
     --kubernetes-version 1.31                         \
     --ssh-key-value ~/.ssh/master_ssh_key.pub         \
     --service-principal ${AZ_SP_ID}                   \
     --client-secret ${AZ_SP_PASSWORD}                 \
     --load-balancer-sku standard                      \
     --network-plugin azure --debug

Get Credentials
  export KUBECONFIG=~/.kube/config
  az aks get-credentials --name stevepro-azraks        \
     --resource-group stevepro-azraks-rg --file ~/.kube/config

Deploy Test
  kubectl create ns test-ns
  kubectl config set-context --current --namespace=test-ns
  kubectl apply -f Kubernetes.yaml
  kubectl port-forward service/flask-api-service 8080:80
  curl http://localhost:8080

Output
  Hello World (Python)!

Shell into Node
  mkdir -p ~/GitHub/luksa
  cd ~/GitHub/luksa
  git clone https://github.com/luksa/kubectl-plugins.git
  cd kubectl-plugins
  chmod +x kubectl-ssh
  kubectl get nodes
  ./kubectl-ssh node aks-nodepool1-20972701-vmss000000

Cleanup
  kubectl delete -f Kubernetes.yaml
  kubectl delete ns test-ns

Delete Cluster
  az aks delete --name stevepro-azraks                 \
     --resource-group stevepro-azraks-rg

Delete Group
  az group delete --name stevepro-azraks-rg --yes --no-wait
  az group delete --name NetworkWatcherRG --yes --no-wait

Summary
To summarize, we have set up CLIs for Azure, Amazon and Google and provisioned an Azure AKS Kubernetes cluster with end-to-end testing. Next, we will continue by provisioning clusters for Amazon EKS and Google GKE. This will be the topic of the next post.

Wednesday, January 1, 2025

Retrospective XVI

Last year, I conducted a simple retrospective for 2023. Therefore, here is a retrospective for year 2024.

2024 Achievements
  • Transfer all Windows and Linux keyboard shortcuts and muscle memory to the new MacBook Pro
  • Transfer all important Windows and Linux applications navigations for M1-powered MacBooks
  • Build GitLab CI/CD pipelines extending DevOps skillset and streamline collaborative workflow
  • Provision Kubernetes clusters for GitLab CI/CD pipelines e.g. Azure AKS, AWS-EKS, GCP-GKE
  • Configure Doom open source port for Windows and Linux to debug step thru the source code
  • Launch fulltime Python coding experience to learn AI focusing on RL Reinforcement Learning
  • Experiment with OpenAI Gym project for RL research and build Atari available environments
  • Investigate OpenAI Retro project for RL research on classic Sega 8-bit + 16-bit video games

Note: building OpenAI projects for classic Sega 8/16-bit video games integration is a big achievement!

2025 Objectives
  • Document DevOps managed clusters provisioning experience with AWS / Azure / GCP providers
  • Channel cloud computing knowledge toward software architecture or infrastructure certification
  • Harness Python's potential power by invoking C/C++ [PyBind11] with code orders of magnitude faster
  • Extend OpenAI Gym and Retro projects for more Indie video games + Reinforcement Learning!

Artificial Intelligence
Artificial Intelligence refers to the capability of machines to imitate human intelligence. AI empowers machines to acquire knowledge, adapt, and independently make decisions, like teaching a computer to act human-like.

Machine Learning
AI involves a crucial element known as Machine Learning. ML is akin to training computers to improve at tasks without providing detailed instructions. Machines utilize data to learn and enhance their performance without explicit programming; ML concentrates on creating algorithms that let computers learn from data and improve.

Deep Learning
Deep Learning involves artificial neural networks inspired by the human brain: mimicking how human brains work. DL excels at handling complex tasks and large datasets efficiently and achieves remarkable success in areas like natural language processing and computer vision despite complexity and interpretation challenges.

Generative AI
Generative AI is the latest innovation in the AI field. Instead of just identifying patterns GenAI goes one step further by actually attempting to produce new content that closely resembles what humans might create.

Outline
 Artificial Intelligence   Artificial Intelligence is the "big brain"
 Machine Learning          Machine Learning is its learning process
 Deep Learning             Deep Learning is its intricate wiring
 Generative AI             Generative AI is the creative spark


Gen AI and LLMs are revolutionizing our personal and professional lives. From supercharged digital assistants to seemingly omniscient chatbots, these technologies are driving a new era of convenience, productivity, and connectivity.

Traditional AI uses predictive models to classify data, recognize patterns, + predict outcomes within a specific context, whereas Gen AI models generate entirely new outputs rather than simply making predictions based on prior experience.

This shift from prediction to creation opens up new realms of innovation: in healthcare, a traditional predictive model can spot a suspicious lesion in a lung tissue MRI, whereas GenAI could also determine the likelihood that the patient will develop pneumonia or other lung diseases and offer treatment recommendations based on best practices gleaned from thousands of similar cases.

Example
GenAI-powered healthcare chatbots can assist patients, healthcare providers, and medical administrators:
 01. Symptom Checker
 02. Appointment Scheduling
 03. Medication Reminders
 04. Health Tips and Preventive Care
 05. Lab Results Interpretation
 06. Chronic Disease Management
 07. Mental Health Support
 08. Insurance and Billing Assistance
 09. Virtual Consultations and Telemedicine
 10. Health Records Access
 11. Emergency Triage

By leveraging conversational AI, healthcare chatbots can improve patient engagement, provide real-time support, and optimize workflows for healthcare providers. Finally, Reinforcement Learning from Human Feedback [RLHF] can be integrated to further improve model performance over the original pre-trained version!

Future
Artificial Intelligence is changing industries across the globe from healthcare and finance to marketing and logistics. As we enter 2025, the demand for skilled AI professionals continues to soar. Start out by building strong foundations in Python and understand key concepts such as machine learning and neural networks.

Therefore, whether an AI beginner or seasoned tech professional, here are the top 10 AI skills for success:
 No.  AI Skill                               Key Tools
 01   Machine Learning (ML)                  Scikit-learn, TensorFlow, PyTorch
 02   Deep Learning                          Keras, PyTorch, Google Colab
 03   Natural Language Processing (NLP)      NLTK, SpaCy, GPT-based models (e.g., GPT-4)
 04   Data Science and Analytics             NumPy, Pandas, Jupyter Notebooks
 05   Computer Vision                        OpenCV, YOLO (You Only Look Once), TensorFlow
 06   AI Ethics and Bias Mitigation          AI Ethics Courses, Fairness Indicators (Google)
 07   AI Infrastructure and Cloud Computing  Amazon Web Services, Microsoft Azure, Google Cloud AI
 08   Reinforcement Learning                 OpenAI Gym, TensorFlow Agents, Stable Baselines3
 09   AI Operations (MLOps)                  Docker, Kubernetes, Kubeflow, MLflow
 10   Generative AI                          Generative Adversarial Networks, DALL-E, GPT models

Finally, the GenAI market is poised to explode, growing to $1.3 trillion over the next 10 years from a market size of just $40 billion in 2022. Therefore, it would be extraordinary to integrate GenAI to build content for OpenAI-based retro video games, only to be trained by Reinforcement Learning algorithms to beat them!