Pre-Requisites
This blog post assumes Azure, AWS, and GCP accounts are set up and that all the corresponding CLIs are configured!
AWS Login
Navigate to https://aws.amazon.com | Sign In | Root user | Root user email address e.g. steven_boland@hotmail.com | Next | Enter password. Set up AWS Multi-Factor Authentication.
AWS Single Sign On
To access AWS clusters programmatically, it is recommended to set up and configure AWS SSO. Example config:
sso_start_url = https://stevepro.awsapps.com/start
sso_region = eu-west-1
sso_account_id = 4xxxxxxxxxx8
sso_role_name = AdministratorAccess
region = eu-west-1
output = json
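These values live under a named profile in ~/.aws/config. A minimal sketch of logging in and verifying the session, assuming the profile is named stevepro:

# Browser-based SSO login for the named profile (profile name is an assumption)
aws sso login --profile stevepro

# Confirm the session resolves to the expected account and role
aws sts get-caller-identity --profile stevepro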
eksctl
eksctl is a command-line tool that abstracts away the complexity involved in setting up AWS EKS clusters. Here is how to install it:
Linux
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
Mac OS/X
brew tap eksctl-io/eksctl
brew install eksctl
Windows
Launch PowerShell | choco install eksctl
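Whichever route you take, it is worth confirming the binary is on the PATH (output format varies by release):

# Print the client version; eksctl info also reports the kubectl version and OS
eksctl version
eksctl info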
Master Key
Next, create a master SSH key for secure, automated, and controlled access to your Kubernetes infrastructure:
cd ~/.ssh
ssh-keygen -t rsa -b 4096 -N '' -f master_ssh_key
eval $(ssh-agent -s)
ssh-add master_ssh_key
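Optionally, confirm the key was generated and is held by the agent:

# Show the fingerprint of the new public key
ssh-keygen -lf ~/.ssh/master_ssh_key.pub
# List keys currently loaded into the SSH agent
ssh-add -l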
Amazon EKS
Amazon provides Elastic Kubernetes Service (EKS) as a fully managed Kubernetes container orchestration service. Follow the instructions below to provision a Kubernetes cluster and test its functionality end-to-end. Download code sample here.
Pre-Requisites
aws sso login
Check Resources
aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --output table
aws ec2 describe-addresses --query 'Addresses[*].PublicIp' --output table
aws ec2 describe-key-pairs --query 'KeyPairs[*].KeyName' --output table
aws ec2 describe-volumes --query 'Volumes[*].VolumeId' --output table
aws ec2 describe-vpcs --query 'Vpcs[*].VpcId' --output table
aws cloudformation list-stacks --query 'StackSummaries[*].StackName' --output table
aws cloudwatch describe-alarms --query 'MetricAlarms[*].AlarmName' --output table
aws ecr describe-repositories --query 'repositories[*].repositoryName' --output table
aws ecs list-clusters --query 'clusterArns' --output table
aws eks list-clusters --query 'clusters' --output table
aws elasticbeanstalk describe-environments --query 'Environments[*].EnvironmentName' --output table
aws elb describe-load-balancers --query 'LoadBalancerDescriptions[*].LoadBalancerName' --output table
aws elbv2 describe-load-balancers --query 'LoadBalancers[*].LoadBalancerName' --output table
aws iam list-roles --query 'Roles[*].RoleName' --output table
aws iam list-users --query 'Users[*].UserName' --output table
aws lambda list-functions --query 'Functions[*].FunctionName' --output table
aws rds describe-db-instances --query 'DBInstances[*].DBInstanceIdentifier' --output table
aws route53 list-hosted-zones --query 'HostedZones[*].Name' --output table
aws s3 ls
aws sns list-topics --query 'Topics[*].TopicArn' --output table
aws sqs list-queues --query 'QueueUrls' --output table
aws ssm describe-parameters --query 'Parameters[*].Name' --output table
Cluster YAML
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: stevepro-aws-eks
  region: eu-west-1
  version: "1.27"
  tags:
    createdBy: stevepro
kubernetesNetworkConfig:
  ipFamily: IPv4
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: ebs-csi-controller-sa
        namespace: kube-system
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
      roleOnly: true
      roleName: stevepro-aws-eks-AmazonEKS_EBS_CSI_DriverRole
addons:
  - name: aws-ebs-csi-driver
    version: v1.38.1-eksbuild.2
    serviceAccountRoleARN: arn:aws:iam::4xxxxxxxxxx8:role/stevepro-aws-eks-AmazonEKS_EBS_CSI_DriverRole
  - name: vpc-cni
    version: v1.19.2-eksbuild.1
  - name: coredns
    version: v1.10.1-eksbuild.18
  - name: kube-proxy
    version: v1.27.16-eksbuild.14
nodeGroups:
  - name: stevepro-aws-eks
    instanceType: m5.large
    desiredCapacity: 0
    minSize: 0
    maxSize: 3
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/master_ssh_key.pub
    preBootstrapCommands:
      - "true"
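As a sanity check before provisioning, recent eksctl releases support a --dry-run flag that prints the fully-expanded ClusterConfig without creating any AWS resources:

# Render the final config only; nothing is created
eksctl create cluster -f ~/stevepro-awseks/cluster.yaml --dry-run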
Create Cluster
eksctl create cluster -f ~/stevepro-awseks/cluster.yaml \
  --kubeconfig ~/stevepro-awseks/kubeconfig \
  --verbose 5
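Cluster creation typically takes 15-20 minutes. Once it completes, point kubectl at the generated kubeconfig and verify the control plane responds; note desiredCapacity is 0 in the YAML above, so no worker nodes are expected yet:

export KUBECONFIG=~/stevepro-awseks/kubeconfig
kubectl get nodes -o wide
kubectl get pods -A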
Scale Nodegroup
eksctl scale nodegroup \
  --cluster=stevepro-aws-eks \
  --name=stevepro-aws-eks \
  --nodes=3 \
  --nodes-min=0 \
  --nodes-max=3 \
  --verbose 5
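Confirm the nodegroup reports its new size and watch the workers reach Ready:

eksctl get nodegroup --cluster=stevepro-aws-eks
kubectl get nodes --watch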
Deploy Test
kubectl create ns test-ns
kubectl config set-context --current --namespace=test-ns
kubectl apply -f Kubernetes.yaml
kubectl port-forward service/flask-api-service 8080:80
curl http://localhost:8080
Output
Hello World (Python)!
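The Kubernetes.yaml manifest itself is not reproduced in this post; here is a minimal sketch of what it might contain, assuming a Flask container listening on port 5000 (the image name is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api
          image: stevepro/flask-api:latest   # hypothetical image name
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-api-service
spec:
  selector:
    app: flask-api
  ports:
    - port: 80
      targetPort: 5000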
Shell into Node
kubectl get po -o wide
cd ~/.ssh
ssh -i master_ssh_key ec2-user@node-ip-address
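The node-ip-address placeholder comes from the EXTERNAL-IP column of kubectl get nodes -o wide; it can also be pulled via the AWS CLI (the instance ID below is a placeholder):

# Public IP for a given EC2 instance (substitute the real instance ID)
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[*].Instances[*].PublicIpAddress' --output text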
Cleanup
kubectl delete -f Kubernetes.yaml
kubectl delete ns test-ns
Delete Cluster
eksctl delete cluster \
  --name=stevepro-aws-eks \
  --region eu-west-1 \
  --force
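Deletion is asynchronous and the underlying CloudFormation stacks can take several minutes to disappear; to confirm nothing was left behind:

aws eks list-clusters --query 'clusters' --output table
aws cloudformation list-stacks \
  --stack-status-filter DELETE_IN_PROGRESS DELETE_COMPLETE \
  --query 'StackSummaries[*].StackName' --output table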
ERRORS
Error: getting availability zones for region operation error EC2: DescribeAvailabilityZones, StatusCode: 403
Reference: Dashboard | IAM | Users | SteveProXNA | Permissions | Add Permission | AdministratorAccess:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] } |
Error: unable to determine AMI from SSM Parameter Store: operation SSM: GetParameter, StatusCode: 400
AWS Dashboard | IAM | Users | SteveProXNA | Create new group | Permission | AdministratorAccess-Amplify
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ], "ssm:GetParameter", "ssm:GetParameters" ], "Resource": "arn:aws:ssm:*:*:parameter/aws/service/eks/optimized-ami/*" }, { "Effect": "Allow", "Action": "ec2:DescribeImages", "Resource": "*" } ] } |
Google GKE
Google provides Google Kubernetes Engine (GKE) as a fully managed Kubernetes container orchestration service. Follow the instructions below to provision a Kubernetes cluster and test its functionality end-to-end.
Download code sample here.
Pre-Requisites
gcloud auth login
gcloud auth application-default login
gcloud auth configure-docker
gcloud config set project SteveProProject
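Verify the active account and project before provisioning anything:

gcloud auth list
gcloud config get-value project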
Check Resources
gcloud compute instances list
gcloud compute disks list
gcloud compute forwarding-rules list
gcloud compute firewall-rules list
gcloud compute addresses list
gcloud container clusters list
Create Cluster
gcloud container clusters create stevepro-gcp-gke \
  --project=steveproproject \
  --zone europe-west1-b \
  --machine-type=e2-standard-2 \
  --disk-type pd-standard \
  --cluster-version=1.30.10-gke.1070000 \
  --num-nodes 3 \
  --network=default \
  --create-subnetwork=name=stevepro-gcp-gke-subnet,range=/28 \
  --enable-ip-alias \
  --enable-intra-node-visibility \
  --logging=NONE \
  --monitoring=NONE \
  --enable-network-policy \
  --labels=prefix=stevepro-gcp-gke,created-by=${USER} \
  --no-enable-managed-prometheus \
  --quiet --verbosity debug
Get Credentials
gcloud container clusters get-credentials stevepro-gcp-gke \
  --zone=europe-west1-b \
  --quiet --verbosity debug
IMPORTANT - if you do not have the gke-gcloud-auth-plugin installed then execute the following commands:
gcloud components install gke-gcloud-auth-plugin
gke-gcloud-auth-plugin --version
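With credentials fetched and the plugin installed, confirm kubectl now points at the new cluster:

kubectl config current-context
kubectl get nodes -o wide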
Deploy Test
kubectl create ns test-ns
kubectl config set-context --current --namespace=test-ns
kubectl apply -f Kubernetes.yaml
kubectl port-forward service/flask-api-service 8080:80
curl http://localhost:8080
Output
Hello World (Python)!
Shell into Node
mkdir -p ~/GitHub/luksa
cd ~/GitHub/luksa
git clone https://github.com/luksa/kubectl-plugins.git
cd kubectl-plugins
chmod +x kubectl-ssh
kubectl get nodes
./kubectl-ssh node gke-stevepro-gcp-gke-default-pool-0b4ca8ca-sjpj
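Alternatively, GKE worker nodes are ordinary Compute Engine VMs, so gcloud can SSH into them directly (node name taken from the example above):

gcloud compute ssh gke-stevepro-gcp-gke-default-pool-0b4ca8ca-sjpj \
  --zone europe-west1-b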
Cleanup
kubectl delete -f Kubernetes.yaml
kubectl delete ns test-ns
Delete Cluster
gcloud container clusters delete stevepro-gcp-gke \
  --zone europe-west1-b \
  --quiet --verbosity debug
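Finally, confirm the cluster and its node VMs are gone:

gcloud container clusters list
gcloud compute instances list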
Summary
To summarize, we have now set up and provisioned Azure AKS, Amazon EKS, and Google GKE clusters with end-to-end tests. In the future, we could explore provisioning AWS and GCP Kubeadm clusters using Terraform!