Monday, November 15, 2021

Vintage Van Halen Code Complete

There's only one way to rock! Vintage Van Halen celebrates forty awesome riffs shredded by Edward Van Halen. Built using the Sega Genesis Development Kit, Vintage Van Halen is available for free download from itch.io.

Let's check it out!

Note: Vintage Van Halen development is based on SGDK Programming Setup + SGDK Programming Sample. Download source code here.


Inspiration
Eddie Van Halen is regarded as one of the greatest guitarists of all time. His innovations revolutionized guitar playing and influenced generations of guitarists. Eddie is responsible for some of the most memorable riffs in rock history and his band "Van Halen" remains one of the world's top-selling artists of all time.

Instructions
Simple: move the joystick Up and Down to select a multiple-choice answer: 1, 2, 3, 4. Press button A to select an answer or to progress forward through any prompts. Note: press button B at any time to go back. Joystick Left and Right are not used at all. Finally, press button C during game play to replay any riff at any time!

Tools
Here is a list of Tools, Frameworks, Utilities and Emulators that were used in the development of this project:
 Programming  SGDK
 Compiler  gcc 4.9.3
 IDE  Visual Studio 2015
 Languages  C / 68000
 Graphics  Image Resizer / BMP converter
 Music  Audacity / YouTube
 Emulators  Emulicious / Gens KMod

ROM Hacking
You can hack this ROM! Download + dump VintageVanHalen into a hex editor, e.g. HxD, and modify these bytes:
 ADDRESS  VARIABLE  DESCRIPTION
 0x004F  DelaySpeed  Used to speed through any game delay.
 0x0050  Invincible  Non-zero value always enables cheating.
 0x0051  RiffSelect  Set value to riff index 2, 3 or 4; otherwise 1.
 0x0052  DiffSelect  Set value to 1=Easy otherwise 2=Hard.
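Such a byte patch can also be scripted with dd instead of a GUI hex editor. The snippet below is a sketch that patches a scratch file standing in for the real ROM dump, so the file name and size are purely illustrative; only the 0x0052 DiffSelect offset comes from the table above:

```shell
# Scratch file standing in for the VintageVanHalen ROM dump (illustrative).
head -c 96 /dev/zero > rom.bin

# DiffSelect lives at offset 0x0052: write 0x02 to force Hard mode.
printf '\002' | dd of=rom.bin bs=1 seek=$((0x0052)) conv=notrunc 2>/dev/null

# Read the byte back to confirm the patch took effect.
od -An -tu1 -j $((0x0052)) -N 1 rom.bin    # prints 2
```

The same dd invocation pointed at the real dump would flip the difficulty without ever opening a hex editor.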

hack_manager.c
#ifndef __HACK_MANAGER_H__
#define __HACK_MANAGER_H__

#define PEEK(addr)       (*(unsigned char *)(addr))
#define POKE(addr, data) (*(unsigned char *)(addr) = (data))

#define HACKER_START  0x004F
void engine_hack_manager_load()
{
  struct_hack_object *ho = &global_hack_object;
  ho->hack_delayspeed = PEEK( HACKER_START + 0 );        // 0x004F Used to speed through any game delay.
  ho->hack_invincible = PEEK( HACKER_START + 1 );        // 0x0050 Non-zero value always enables cheating.
  ho->hack_riffselect = PEEK( HACKER_START + 2 );        // 0x0051 Set value to 2,3,4 index otherwise 1.
  ho->hack_diffselect = PEEK( HACKER_START + 3 );        // 0x0052 Set value to 1=Easy otherwise 2=Hard.
}
#endif//__HACK_MANAGER_H__

Cheats
Hack the ROM [above] to show the answers for every quiz during the entire game session, or alternatively press button C five times on the Title screen when prompted to "Press Start" to show the answers to the current quiz!

Also, on the Title screen, press and hold the joystick down while holding button B. This will show all the game statistics persisted across all game sessions. Finally, on the Splash screen, press and hold button B to reset all game stats.

storage_manager.c
#ifndef _STORAGE_MANAGER_H_
#define _STORAGE_MANAGER_H_

void engine_storage_manager_code()
{
  u32 sRamOffSet = 0x0000;
  signed char byte;

  SYS_disableInts();
  SRAM_enable();

  byte = SRAM_readByte( sRamOffSet++ );        // Read.
  SRAM_writeByte( sRamOffSet++, byte );        // Write.

  SRAM_disable();
  SYS_enableInts();
}
#endif//_STORAGE_MANAGER_H_

Credits
Extra special thanks goes to @MegadriveDev for the SGDK. Plus StevePro Studios would like to give thanks: @bigevilboss, @matteusbeus, @MoonWatcherMD, @ohsat_games, @SpritesMind for SGDK support online!



Summary
Vintage Van Halen is the first 16-bit project ever built by StevePro Studios for the Sega MegaDrive / Genesis. Fortunately, like the Sega Master System community, the MegaDrive community provides fantastic online help and support.

After years of developing for the Sega Master System, it was seamless to apply many of the programming skills acquired from 8-bit development to 16-bit; thus the project was completed in a very short time. Awesome!

Wednesday, September 15, 2021

Kubernetes Cheat Sheet

Kubernetes is an open source container orchestration system for automating application deployment, scaling and management. Maintained by the Cloud Native Computing Foundation, Kubernetes works with a range of container tools and runs applications deployed in containers in a cluster often with images built using Docker.

Let's check K8s out!

Tools
Here are some commonly used tools that can be installed to work with + help manage Kubernetes clusters:
 kubectl  Command-line tool to run commands against Kubernetes clusters
 minikube  Runs a single-node Kubernetes cluster on your personal computer
 kind  Runs Kubernetes on your local computer inside Docker containers
 kubeadm  Used to create, manage and secure larger Kubernetes clusters

Installation
Install Kubernetes with Docker Desktop on Windows and Mac OS/X. Install Kubernetes on Linux accordingly.

Minikube
Here are instructions on how to get Kubernetes running locally as a single-node cluster on Ubuntu with Minikube:
# Prerequisites
sudo apt install cpu-checker && sudo kvm-ok
sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm \
    && sudo usermod -a -G libvirt $(whoami) \
    && newgrp libvirt
    
sudo virt-host-validate
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
    && sudo install kubectl /usr/local/bin && rm kubectl

curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
    && sudo install docker-machine-driver-kvm2 /usr/local/bin/ && rm docker-machine-driver-kvm2
    
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
    && sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

minikube version
minikube start


KinD
Kubernetes in Docker [KinD] runs a full Kubernetes cluster using Docker containers to simulate multiple Kubernetes nodes operating all at once, instead of running everything in virtual machines.

Assume Go installed. Launch Terminal. Enter commands to install KinD. Export path in ~/.bashrc and reboot:
 go get sigs.k8s.io/kind

 # Add KinD to $PATH in ~/.bashrc
 export PATH="$PATH:$HOME/go/bin"
 sudo reboot
 # Create test and delete cluster
 kind create cluster --wait 5m
 export KUBECONFIG="$(kind get kubeconfig-path)"
 kubectl cluster-info
 kind delete cluster


kubeadm
kubeadm is another tool built to provide a fast path for creating Kubernetes clusters. kubeadm performs all the actions needed to get a minimum viable cluster up and running. Once installed you can then create the K8s cluster.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version
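The cluster bootstrap itself is a separate step after the install above. A hedged sketch of the typical flow follows; the pod CIDR value is purely illustrative and depends on the pod network add-on you choose:

```shell
# Initialize the control plane node (CIDR shown is an example value).
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin kubeconfig so kubectl can talk to the new cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

kubeadm init prints a kubeadm join command at the end; run that on each worker node to grow the cluster.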

Dashboard
Once the Kubernetes cluster is set up there are various options to view all resources. Minikube: simply launch the terminal | minikube dashboard. Alternatively, install the VS Code Kubernetes extension and view all resources.


The KinD dashboard is more involved. Enter these commands to configure the KinD dashboard to run in the browser.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
kubectl get pod -n kubernetes-dashboard
kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default

token=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']==\
'default')].data.token}"|base64 --decode)
echo $token
kubectl proxy

Launch the browser and enter the following URL. Once prompted enter the service token above and click OK.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy

IMPORTANT
If you receive the following error: listen tcp 127.0.0.1:8001: bind: address already in use then enter commands:
 sudo apt install net-tools
 netstat -tulnap | grep 8001
 sudo fuser -k 8001/tcp
 netstat -tulnap | grep 8001


k9s is the Kubernetes CLI to manage clusters. Download the latest binary and extract it into /usr/local/bin. You may need to manually create the ~/.k9s folder, otherwise you may receive the error Permission denied.


kubectl
Enable auto completion as per the kubectl Cheat Sheet and update ~/.bashrc to alias the kubectl command:
 # Auto complete
 source <(kubectl completion bash)
 echo "source <(kubectl completion bash)" >> ~/.bashrc
 
 # Alias kubectl
 alias k='kubectl'
 complete -F __start_kubectl k
 alias kdr='kubectl --dry-run=client -o yaml'

The Kubernetes cluster configuration file lives at ~/.kube/config. Obtain Kubernetes config + context info:
 kubectl config view  Show Kubernetes cluster configuration
 kubectl config get-contexts  Show all Kubernetes cluster contexts
 kubectl config current-context  Show current Kubernetes cluster context
 kubectl config use-context my-cluster-name  Set default context to my-cluster-name
 kubectl config set-context --current --namespace=my-ns  Set default context namespace to my-ns
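For reference, the commands above all manipulate that ~/.kube/config file. Here is a minimal sketch of its shape; every name and the server address are placeholders, and real files carry credentials under users:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster-name
  cluster:
    server: https://127.0.0.1:6443    # API server endpoint (placeholder)
contexts:
- name: my-cluster-name
  context:
    cluster: my-cluster-name
    user: my-user
    namespace: my-ns                  # default namespace for this context
current-context: my-cluster-name
users:
- name: my-user
  user: {}                            # credentials omitted from this sketch
```

kubectl config use-context simply rewrites the current-context field in this file.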


Definitions
Here is some basic terminology when working with containerized applications running in a Kubernetes cluster:
 Namespace  Scope cluster resources and a way to isolate Kubernetes objects
 Workload  Containerized application running within the Kubernetes cluster
 Container  Decouples applications from the underlying host infrastructure
 Pod  Smallest deployable unit as created and managed in Kubernetes
 Node  Worker machine on which the Pods and their Containers are run
 Deployment  Provides a declarative way to update all Pods and ReplicaSets
 Service  Abstract way to expose an application running on a set of Pods


Planes
Networking back in the day would have rules + policies about how to route network packets. These policies would make up the network control plane. The control plane is concerned with establishing network policy. Meanwhile, the data plane is everything else in the network architecture that enforces all network policies.

Control Plane
In Kubernetes, the control plane is the set of components that "make global decisions about the cluster" e.g. scheduling as well as detecting and responding to cluster events e.g. auto scaling pods due to traffic spikes.

Listed are control plane components that run on the master node to keep the cluster in the "desired state":
 Store (etcd)  Key-value backing store for all Kubernetes objects and data info
 API Server  Exposes the Kubernetes API as the front end for the control plane
 Controller-Manager  Runs controller processes e.g. Nodes, Jobs, Endpoints + Services
 Scheduler  Watches for newly created pods and assigns nodes to run them on

Data Plane
In Kubernetes, the data plane is the set of worker nodes with their pods and containers that enforce all the global decisions made about the cluster from the master node e.g. auto scaling pods due to traffic spikes.

Listed are data plane components that run on each worker node maintaining pods and runtime environment:
 Kubelet  Worker agent that makes sure containers are running inside Pods
 Container Runtime  Software that is responsible for running containers on worker nodes
 Kube Proxy  Network proxy that maintains all network rules on the worker nodes

IMPORTANT
Here are 2x simple commands to get full information about the control and data planes for a Kubernetes cluster:
 kubectl cluster-info  Kubernetes control plane is running at the designated IP address
 kubectl cluster-info dump  Full description of all Kubernetes components running in the cluster


Commands
Here is a list of useful commands. For each command you can add the --help flag to see more options:
 kubectl get all -A  Get all resources across all namespaces
 kubectl describe pod my-pod  Get the full YAML declaration for my-pod
 kubectl get pod my-pod -o yaml  Get the expanded declaration for my-pod
 kubectl logs -f my-pod  Get logs for my-pod + tail all its updates
 kubectl get nodes -o wide  List full details of all nodes in the cluster
 kubectl get deployments -A  List all deployments from all namespaces
 kubectl get services -A  List all services across all the namespaces
 kubectl get pod my-pod -o jsonpath="{.spec.containers[*].image}"  List all container images for my-pod

Tutorial
The Hello Minikube tutorial shows you how to create a sample app on Kubernetes applying the above resources.
 minikube start  Start Minikube cluster
 minikube dashboard  Launch Minikube dashboard
 minikube ip  Display Minikube IP address
 minikube ssh  Secure shell into Minikube node
 minikube stop  Stop Minikube cluster
 minikube delete  Delete Minikube cluster

Create Deployment
 minikube start
 minikube dashboard
 kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
 kubectl get deployments

Create Service
 kubectl expose deployment hello-node --type=LoadBalancer --port=8080
 kubectl get services
 minikube service hello-node

Exec into Pod
 kubectl attach hello-node-7567d9fdc9-mblh7 -i
 kubectl exec hello-node-7567d9fdc9-mblh7 -- ls /
 kubectl exec -it hello-node-7567d9fdc9-mblh7 -- /bin/sh
 kubectl exec --stdin --tty hello-node-7567d9fdc9-mblh7 -- /bin/sh

Tail Logs
 kubectl logs -f hello-node-7567d9fdc9-mblh7
 minikube stop
 minikube delete


Management
The tutorial demonstrates imperative commands using kubectl to operate directly on live Kubernetes objects in the cluster. This is useful to get started or to run one-off tasks; however, these actions provide no history.

Whereas declarative object configuration requires configuration files to be stored locally first; all CRUD operations are then detected automatically per Kubernetes object, thus the configuration can be version controlled.
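To make the contrast concrete, here is a sketch of the two styles side by side; the deployment name and file are illustrative, and these commands assume a live cluster:

```shell
# Imperative: act on live objects directly; nothing is left to version control.
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Declarative: desired state lives in a file that can be committed and re-applied.
kubectl apply -f deployment.yaml
kubectl diff -f deployment.yaml    # preview drift between the file and the cluster
```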


Example
Code an example from scratch as a full end-to-end Web API demo on local host, in Docker and on Kubernetes. Launch Terminal | go mod init testwebapi. Launch VS Code. Enter the following code. Press F5 to debug main.go.
 main.go
 package main
 import (
 	"fmt"
 	"html"
 	"log"
 	"net/http"
 )
 func main() {
 	bind := ":8081"
 	log.Println("Start web server on port", bind)
 	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
 	  fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
 	})
 	log.Fatal(http.ListenAndServe(bind, nil))
 }

Test main.go: Launch Terminal | curl http://localhost:8081/test. Next, create Dockerfile to build image:
  1. Ctrl+Shift+P. Choose Docker: Add Docker Files to Workspace...
  2. Select Application Platform. Choose GoLang
  3. What port does app listen on? Choose 8081
  4. Include optional Docker Compose files? Select No
 Dockerfile
 #build stage
 FROM golang:alpine AS builder
 RUN apk add --no-cache git
 WORKDIR /go/src/app
 COPY . .
 RUN go get -d -v ./...
 RUN go build -o /go/bin/app -v ./...
 #final stage
 FROM alpine:latest
 RUN apk --no-cache add ca-certificates
 COPY --from=builder /go/bin/app /app
 ENTRYPOINT /app
 LABEL Name=golang20 Version=0.0.1
 EXPOSE 8081

In VS Code | Right click Dockerfile | Build Image... Choose name to tag image. Once complete Image will be listed in Docker extension Images list. Expand image built | Run. Refresh Docker extension Containers list.

Test Dockerfile: Launch Terminal | curl http://localhost:8081/test. Lastly, deploy the Web API to the cluster. Enter the following YAML as an image deployment to the locally installed cluster and expose the endpoint as a service:
 Kubernetes.yaml
 --- # Deployment
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: testwebapi
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: testwebapi
   template:
     metadata:
       labels:
         app: testwebapi
     spec:
       containers:
         - name: testwebapi
           image: stevepro/testwebapi:1.0
           imagePullPolicy: Never
           resources:
             limits:
               memory: "128Mi"
               cpu: "500m"
           ports:
             - containerPort: 8081 
 --- # Service
 apiVersion: v1
 kind: Service
 metadata:
   name: testwebapi-service
 spec:
   type: NodePort
   ports:
     - name: http
       port: 8082
       targetPort: 8081
   selector:
     app: testwebapi
IMPORTANT: Kubernetes Templates is a handy VS Code extension that helps create Kubernetes YAML files!

Minikube
Ensure that Minikube is installed. Launch Terminal | minikube start. Ensure images will be deployed locally:
 minikube start
 minikube docker-env
 eval $(minikube -p minikube docker-env)

Build the image as above but this time in the local Minikube cluster: docker build -t stevepro/testwebapi:1.0 .
Apply the deployment and service YAML using kubectl. Verify the objects created from the minikube dashboard.
 docker build -t stevepro/testwebapi:1.0 .
 kubectl apply -f Kubernetes.yaml

Test Kubernetes: Launch Terminal. Execute minikube service testwebapi-service --url to obtain the cluster IP address + port. Combine them and test the API | curl http://192.168.49.2:30799/test.
 minikube service testwebapi-service --url
 curl http://192.168.49.2:30799/test

Finally, clean up: delete the deployment + service YAML after testing is complete and destroy the Minikube cluster.
 kubectl delete -f Kubernetes.yaml
 minikube stop


KinD
Code the example from above as a full end-to-end Web API demo on local host, in Docker and on Kubernetes. Ensure that KinD is installed. Launch Terminal | kind create cluster. Ensure images will be deployed locally:
 kind create cluster

Build the image as above but this time in the local KinD cluster: docker build -t stevepro/testwebapi:2.0 . Load the newly built local image into the KinD cluster. Apply the deployment and service YAML using kubectl.
 docker build -t stevepro/testwebapi:2.0 .
 kind load docker-image stevepro/testwebapi:2.0
 kubectl apply -f Kubernetes.yaml

Test Kubernetes: Launch Terminal. Execute kubectl get nodes -o wide to obtain the cluster INTERNAL-IP. Execute kubectl get services to obtain the cluster port. Combine them and test | curl http://172.18.0.2:31196/test.
 kubectl get nodes -o wide
 kubectl get services
 curl http://172.18.0.2:31196/test

Finally, clean up: delete the deployment + service YAML after testing is complete and destroy the KinD cluster.
 kubectl delete -f Kubernetes.yaml
 kind delete cluster


Source Code
Finally, navigate the Kubernetes source code to familiarize yourself with the code base and step through the source code, especially if you would like to participate in any one of the Special Interest Groups [SIGs].

git clone the Kubernetes source code. Launch the folder in VS Code. Search for main.go, e.g. in ./cmd/cloud-controller-manager. Right click main.go | Open in | Terminal. go build . then go run main.go. Press F5 to debug:
 git clone https://github.com/kubernetes/kubernetes.git
 cd kubernetes
 make
 find -L -type f -name 'main.go'
 cd ./cmd/cloud-controller-manager
 go build .
 go run main.go
 Press F5


Summary
To summarize, Kubernetes has greatly simplified cloud native infrastructure for developers and provides a scalable framework for application deployment. However, just like any new tool or technology, Kubernetes brings with it new security challenges especially due to the ephemeral nature of containerized applications.

Consequently, the Container Network Interface [CNI] initiative was created to define a standardized common interface between container execution and the Kubernetes networking layer to address these security concerns.

Thus a Kubernetes plugin like Calico has become the most popular CNI for cluster networking through the definition and enforcement of network policies. Calico prescribes which pods can send and receive traffic securely throughout the cluster, which becomes critical as Kubernetes adoption continues to grow!

Tuesday, August 31, 2021

Docker Cheat Sheet

Docker is a Platform as a Service product that uses OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software + libraries and configuration files, yet share a single operating system kernel, thus using fewer resources than virtual machines.

Let's check it out!

Installation
Docker Engine is available on a variety of Linux platforms. On Windows and Mac OS/X it is easiest to install Docker Desktop. Install Docker Engine as per the Operating System instructions.

Windows
Install Docker Desktop on Windows which will install the Docker Engine, Docker Compose and Kubernetes.

Mac OS/X
Install Docker Desktop on Mac OS/X which will install the Docker Engine, Docker Compose and Kubernetes.

Linux
Docker provides .deb + .rpm packages for various Linux distributions. For example, install Docker on Ubuntu:
sudo apt-get update
sudo apt-get remove docker docker-engine docker.io
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
docker version
docker images

The final 2x commands may throw an error while trying to connect to the Docker daemon socket.

Here is one way to fix the "Got permission denied while trying to connect to the Docker daemon socket" error:
sudo groupadd docker
sudo usermod -aG docker ${USER}
sudo reboot
docker version
docker images

IMPORTANT
You may also install docker-compose esp. if you are defining + running multi-container Docker applications.
sudo apt install docker-compose
sudo docker-compose --version

If an older version of docker-compose is installed + you would like to upgrade then complete the following:
sudo apt remove docker-compose
sudo apt update
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo docker-compose --version


Definitions
When you learn Docker, you find that Docker Images are simply templates used to build Docker Containers. Docker Containers are small isolated environments which are runnable versions of their Images. Finally, the Dockerfile contains all the commands that specify how an Image should be built and how its Container should run.

On a brand new install of Docker there will be no Docker Images built or Docker Containers running. Verify:
docker images
docker ps


Hello World
The simplest way to pull down a Docker image from the Docker Hub is to execute docker run hello-world.


Getting Started
Here is more information on the Docker Orientation and setup tutorial to build and run an image as a container:
docker run -d -p 80:80 docker/getting-started
docker images
docker ps

Launch browser. Enter http://localhost. You'll now see the same content locally as the Getting Started page! Afterwards, clean up. Enter the following commands from the Terminal to stop the container and remove the image:
docker stop $(docker ps -q)
docker rmi $(docker images -qa) --force
docker system prune -a

Example #1
Code an example from scratch: build a simple Hello application in Python and run it on localhost in a container:
 app.py
 from flask import Flask

 app = Flask(__name__)

 @app.route("/")
 def hello():
     return "Hello World!"

 if __name__ == "__main__":
     app.run(debug=True, host="0.0.0.0")

 Dockerfile
 # Inherit from the Python Docker image
 FROM python:3.7-slim
 # Install the Flask package via pip
 RUN pip install flask==1.0.2
 # Copy the source code to app folder
 COPY ./app.py /app/
 # Change the working directory
 WORKDIR /app/
 # Set "python" as the entry point
 ENTRYPOINT ["python"]
 # Set the command as the script name
 CMD ["app.py"]
Follow best practices for writing Dockerfiles, as each FROM, RUN, COPY, CMD instruction creates one layer.

Enter the following commands to build and run image as a container. Navigate browser to localhost:5000.
 Docker Image
 docker build -t flask_app:0.1 .
 docker images

 Docker Container
 docker run -d -p 5000:5000 flask_app:0.1
 docker ps
Also test with curl http://localhost:5000. To stop the running container enter the command: docker stop $(docker ps -q)

IMPORTANT
If you receive OSError: [Errno 98] Address already in use [Port 5000] then execute the following commands:
 sudo apt install net-tools
 netstat -tulnap | grep 5000
 sudo fuser -k 5000/tcp
 netstat -tulnap | grep 5000


Example #2
Code an example from scratch: build a simple Hello application in GoLang but automate the Dockerfile generation.
 app.go
 package main
 import (
 	"fmt"
 	"log"
 	"os"
 	"net/http"
 	"github.com/gorilla/mux"
 )
 func main() {
 	port := "5000"
 	r := mux.NewRouter()
 	r.HandleFunc("/", hello)
 	http.Handle("/", r)
 	fmt.Println("Starting up on " + port)
 	log.Fatal(http.ListenAndServe(":" + port, nil))
 }
 func hello(w http.ResponseWriter, req *http.Request) {
 	fmt.Fprintln(w, "Hello world!")
 }

Launch VS Code. Open folder hosting app.go. Install Docker extension. Generate Dockerfile with the actions:
  1. Ctrl+Shift+P. Choose Docker: Add Docker Files to Workspace...
  2. Select Application Platform. Choose GoLang
  3. What port does app listen on? Choose 5000
  4. Include optional Docker Compose files? Select No
 Dockerfile
 #build stage
 FROM golang:alpine AS builder
 RUN apk add --no-cache git
 WORKDIR /go/src/app
 COPY . .
 RUN go get -d -v ./...
 RUN go build -o /go/bin/app -v ./...
 #final stage
 FROM alpine:latest
 RUN apk --no-cache add ca-certificates
 COPY --from=builder /go/bin/app /app
 ENTRYPOINT /app
 LABEL Name=golang20 Version=0.0.1
 EXPOSE 5000

In VS Code | Right click Dockerfile | Build Image... Choose name to tag image. Once complete Image will be listed in Docker extension Images list. Expand image built | Run. Refresh Docker extension Containers list. Navigate browser to localhost:5000. Expand Container to see all files from the running Image built earlier:



Example #3
Code an example from scratch: build a simple Hello application in C++ and run it interactively inside a container:
 main.cpp
 #include <iostream>
 using namespace std;
 int main()
 {
     cout << "Hello World C++" << endl;
     return 0;
 }

 Dockerfile
 FROM gcc:latest

 COPY . /usr/src/cpp_test
 WORKDIR /usr/src/cpp_test

 RUN g++ -o Test main.cpp
 CMD [ "./Test" ]

Enter the following commands to build and run image as a container. Run interactive inside container also.
 Run normally
 docker build . -t cpp_test:1.0
 docker run --rm cpp_test:1.0

 Run interactively
 docker run -it cpp_test:1.0 bash
 ./Test


Example #4
Run a real world example. Pull the Envoy image from the Internet. Run it locally to prove all dependencies are contained:
docker run --rm envoyproxy/envoy-dev:716ee8abc526d51f07ed6d3c2a5aa8a3b2481d9d --version
docker run --rm envoyproxy/envoy-dev:716ee8abc526d51f07ed6d3c2a5aa8a3b2481d9d --help

Download envoy-demo.yaml configuration. Run following command + navigate to http://localhost:10000.
docker run --rm -it \
      -v $(pwd)/envoy-demo.yaml:/envoy-demo.yaml \
      -p 9901:9901 \
      -p 10000:10000 \
      envoyproxy/envoy-dev:1acf02f70c75a7723d0269b7f375b3a94cb0fbf0 \
          -c envoy-demo.yaml

 curl -v localhost:10000


Interactive
Shell into a running Container and navigate its file system using one of the following commands:
 docker exec -it $(docker ps -q) bash
 docker run -it [image_name]:[tag] bash


Commands
Here is a list of useful commands. For each command you can add the --help flag to see more options:
 Command  Description
 docker build -t [image_name]:[tag] .  Build a Docker image
 docker run --name [container_name] [image_name]:[tag]  Run a Docker container specifying a name
 docker logs -f [container_id_or_name]  Fetch the logs of a container
 docker exec -it [container_id_or_name] bash  Run a command in a running container
 docker rm $(docker ps -aq)  Remove all containers
 docker rmi $(docker images -aq)  Remove all images
 docker rmi $(docker images -f dangling=true -q)  Remove all dangling images
Note: another command to clean up dangling images is simply docker image prune.

Samples
Here is a list of commonly used sample commands that serve as a simple cheat sheet:
 Command  Description
 docker images  Show all Docker images
 docker ps  Show all running containers
 docker stop $(docker ps -q)  Stop all running containers
 docker system prune -a  Remove all images + containers
IMPORTANT: if any of the commands above throw a "docker: 'docker' is not a docker command" error then run as sudo!

Registration
Create an account on the Docker hub. In VS Code | Docker extension | Registries | Connect Registry. Enter case sensitive Docker credentials. Right click Image | Push Image to registry | Pull Image from the registry.

Debugging
Once Dockerfiles become more complex you may like to follow some of these tips for debugging containers. The following video demonstrates some of these ideas to debug Docker containers, like overriding the ENTRYPOINT:
  1. Override the container entrypoint and exec onto it
  2. Use docker cp to copy files between containers and host
  3. Run a debugger inside the container and connect to it from the host system
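For instance, the first two tips might look like this; the placeholders follow the bracket convention used in the Commands table above, and these commands assume a running Docker daemon:

```shell
# 1. Override the image ENTRYPOINT to get a shell instead of the normal process.
docker run -it --entrypoint /bin/sh [image_name]:[tag]

# 2. Copy a file out of a running container to inspect it on the host.
docker cp [container_id_or_name]:/etc/hostname ./hostname
```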

Summary
To summarize, Docker provides an open standard for packaging and distributing containerized applications; however, the challenge then becomes how to coordinate and schedule an increasing number of containers.

This is where Kubernetes can help: Kubernetes is open-source orchestration software that provides an API controlling how and where those containers will run. Kubernetes can scale out multiple deployments and more!
This will be the topic of the next post.

Thursday, April 1, 2021

Z80 Programming Sample II

In the previous post, we checked out Z80 Programming Sample for the Sega Master System: an 8-bit video game console based on the Z80 chip. There we used the WLA-DX assembler for our development environment.

There, we analyzed existing Indie game projects and disassembled classic 8-bit commercial games to better understand the Z80 development process. For completeness, we would now like to better understand the relationship between source code written in C using the devkitSMS and the underlying Z80 assembly code.

Let's check it out!

Software
Follow all instructions from the previous post: this documents how to set up all the prerequisite software. Note: ensure you have downloaded and installed the WLA-DX assembler + Visual Studio Code cross platform.

Process
Build a small demo project in C using the devkitSMS. Follow the numerous posts from the devkitSMS category to completion. Next, disassemble the output. Refactor the underlying Z80 assembly code using this process:

Step #1
Create Temp01 folder | copy output.sms. Launch Emulicious | open output.sms. Tools | Debugger | Ctrl + A select all disassembled Z80 assembly code | Save Temp01.asm. Follow the instructions from the previous post using the Binary-File-Write utility to replace all instances of ".incbin ..." to refer to binary data files in the data folder.

Step #2
Create Temp02 folder | copy output.sms + output.map. Launch Emulicious | open output.sms. In Debugger | Ctrl + A to select all disassembled Z80 assembly code that now has devkitSMS symbols | Save Temp02.asm.

Step #3
Create asm folder. Copy the .vscode folder from above with launch.json and tasks.json. Copy build.bat and build.sh. Don't forget to grant execute permission with the chmod +x build.sh command. Copy the data folder from above.

Step #4
Merge Temp01.asm with Temp02.asm! That is, keep the structure of Temp01.asm but replace all generic auto-generated labels from Temp01.asm with the specific labels and code from Temp02.asm. Save the resulting file as main.asm.

Step #5
Launch Visual Studio Code. Open the Temp03 folder. Create sub-folders that replicate the original C source code structure e.g. banks, devkit, engine, object, screen. Start from the top of main.asm and refactor as follows:

.sdsctag
Begin with the .sdsctag directive, which includes the ROM build major.minor version, author, name and description:
.sdsctag 1.0,"Van Halen","Van Halen Record Covers for the SMS Power! 2021 Competition","StevePro Studios"

memory_manager.inc
Create memory_manager.inc beneath devkit folder. Add .include to file. Move all memory map code in here.

enum_manager.inc
Create enum_manager.inc beneath devkit folder. Add .include to file. Move all .enum exports here. Rename RAM enum references to the actual variable names used throughout the codebase. Ensure the RAM addresses are correct.
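A minimal sketch of what such a RAM map looks like in WLA-DX (the variable names and base address are illustrative, not from the actual project):

.enum $C000 export
joypadState     db      ; current joypad bits, read each frame
frameCounter    db      ; incremented every VBlank
scrollOffset    dw      ; current horizontal scroll position
.ende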

define_manager.inc
Create define_manager.inc beneath devkit folder. Add .include to this file. Move all .define definitions here.
.define VDPControl $bf
.define VDPData $be
.define VRAMWrite $4000
.define CRAMWrite $c000
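These defines are then used wherever the code talks to the VDP. A minimal sketch (not from the actual project) that sets a VRAM write address held in hl:

; set VRAM write address from hl (hl < $4000)
SetVRAMAddress:
    ld   a, l
    out  (VDPControl), a    ; low byte first
    ld   a, h
    or   >VRAMWrite         ; high byte of $4000 sets the write bits
    out  (VDPControl), a
    ret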

out.inc
Create content folder. Create out.inc beneath content folder. Add .include to file after the LABEL_70_ block. Extract three OUT sections OUT128, OUT64 and OUT32 into out.inc. Set the .ORG address at each section.

psg_manager.inc
Create psg_manager.inc beneath devkit folder. Add .include to file after the main loop block. Extract all PSG functions from PSGStop to PSGSFXFrame into psg_manager.inc. Ensure all RAM references are replaced by enums.

devkit_manager.inc
Create devkit_manager.inc beneath devkit folder. Add .include to file. Extract all functions from SMS_init to SFX_CHANNELS2AND3 into devkit_manager.inc. Ensure all RAM references are replaced by enums as above.

engine
Create the following *_manager.inc files beneath engine folder: asm, audio, content, cursor, font, input, record, screen, scroll, storage, timer. Extract all code from main.asm to each corresponding engine file.

object
Create the following *_object.inc files beneath object folder: cursor, record. Extract all code from main.asm to each corresponding object file. Don't forget to add the corresponding .include statements in main.asm.

screen
Create the following *_screen.inc files beneath screen folder: none, splash, title, scroll, select and record. Extract all code from main.asm to each corresponding screen file. Add corresponding .include statements.

content
Create gfx.inc and psg.inc files beneath content folder. Extract all code from main.asm to each content file.

sms_manager.inc
Create sms_manager.inc beneath devkit folder. Add .include to file after all div functions. Extract all functions from UNSAFE_SMS_copySpritestoSAT to SMS_loadPSGaidencompressedTiles into the sms_manager.inc file.

bank_manager.inc
Create bank_manager.inc beneath engine folder. Add .include to file as the last line of main.asm. Remove the auto-generated data for SDSC and .incbin. In bank_manager.inc update the labels + set .incbin to the banks resources.

Sections
Finally, wrap logical blocks of code as .section free and hardcoded address code as .section force, for example at $0000, $0038 and $0066. Wrap banked code as .section free or superfree + ensure every BANK # uses SLOT 2.
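A minimal sketch of the pattern (the code itself is illustrative): fixed entry points such as the reset vector use .section force at a hardcoded .org, while relocatable code uses .section free:

.bank 0 slot 0
.org $0000
.section "Boot" force
    di                      ; disable interrupts
    im   1                  ; interrupt mode 1
    jp   main
.ends

.section "Main" free
main:
    ; relocatable program code, placed anywhere by the linker
    jp   main
.ends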

Opcodes
Manually disassemble code using the full Z80 opcode list or opcodes sorted by value. Any byte data can be converted using a Hex-to-ASCII text converter. Finally, here is a list of common opcodes regularly found:
 Opcode Mnemonic  Opcode Mnemonic  Opcode Mnemonic
 $00 nop  $C1 pop bc  $18 nn jr nn
 $C9 ret  $D1 pop de  $3E nn ld a, nn
 $3C inc a  $E1 pop hl  $DD $39 add ix, sp
 $3D dec a  $F1 pop af  $DD $E1 pop ix
 $A7 and a  $C5 push bc  $DD $E5 push ix
 $AF xor a  $D5 push de  $DD $F9 ld sp, ix
 $B7 or a  $E5 push hl  $C3 nnnn jp nnnn
 $BF cp a  $F5 push af  $CD nnnn call nnnn
IMPORTANT: nn represents a one-byte operand in the table above whereas nnnn represents a two-byte operand.
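For example, using the table above, the byte sequence $AF $3C $C9 disassembles by hand as:

$AF     xor a       ; a = 0
$3C     inc a       ; a = 1
$C9     ret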

Troubleshooting
Ensure that labels do not begin with an underscore otherwise you may receive a FIX_REFERENCES error when assembling Z80 code with WLA-DX. Also, ensure all disassembled labels, especially labels containing "$", are removed. Otherwise, when you debug step through Z80 source code, breakpoints may be skipped in the disassembled code.

Download code sample here.

Summary
Although Z80 programming for the Sega Master System may be a very niche development style, it still attracts interest: coding medium / large sized games using WLA-DX, with further options for Z80 IDEs using Eclipse and other information on Visual Studio Code for SMS development, including a sample SMS Framework to test.

Now that we have set up a productive Z80 assembler development environment and better understand the relationship between source code written in C using the devkitSMS and the underlying Z80 assembly code, we are finally in a great spot to build our own Z80 projects from scratch for the Sega Master System!

Wednesday, March 17, 2021

Z80 Programming Sample

In the previous post, we checked out Z80 Programming Setup for the Sega Master System: an 8-bit video game console based on the Z80 chip. Here we used WLA-DX assembler for our development environment.

Now that we are set up, we would like to analyze existing indie game projects for the Sega Master System and disassemble some classic 8-bit commercial games to better understand the Z80 development process.

Let's check it out!

Software
Follow all instructions from the previous post: this documents how to set up all the prerequisite software. Note: ensure you have downloaded and installed the WLA-DX assembler + Visual Studio Code cross platform.

Homebrew Games
In the previous post, we set up the obligatory "Hello World" program. Now we would like to analyze some larger Z80 projects. Check out the "Racing Game" article on SMS Power! which is an in-depth coding tutorial.

Car Racer (classic)
Create C:\CarRacerClassic or ~/CarRacerClassic. Copy .vscode folder from previous post with launch.json + tasks.json. Copy build.bat + build.sh. Don't forget to grant execute permission with the chmod +x build.sh command.

Download the source code. Copy the Assets folder to CarRacerClassic. Copy the main.asm file from the Racer (classic) folder Version 1.12 here also. Launch VS Code. Some things to remember when coding Z80 source cross platform:
  1.  Ensure the forward slash "/" is used at all times for cross-platform development
  2.  Ensure case-sensitive folder + file names are used at all times for include files
  3.  Ensure a carriage return is used between variables to avoid WLA-DX errors
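Points 1 and 2 matter because Windows tolerates backslashes and mixed case while Linux and Mac OS X do not. For example, an include line written like this assembles on all three platforms (the path is illustrative):

.include "Assets/tiles.inc"     ; forward slash + exact case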

Press Ctrl + Shift + B to execute the build script => compile, link and run output.sms launched from Emulicious! Assumption: Emulicious is installed at C:\SEGA\Emulicious on Windows or ~/SEGA/Emulicious on Mac OS X + Linux.

Debugging
Launch Emulicious externally. Ensure the Emulicious Debugger is installed in VS Code as per the previous post. Open main.asm and set breakpoints. Press F5. The Emulicious debugger should now break into the Z80 assembly source code! Step through + investigate register variables, stack frames, the watch window and the call stack:

Car Racer (rebooted)
Create C:\CarRacerRebooted or ~/CarRacerRebooted. Copy .vscode folder from above with launch.json and tasks.json. Copy build.bat + build.sh. Don't forget to grant execute permission with the chmod +x build.sh command.

Download source code. Copy all assets folders with *.inc files to CarRacerRebooted. Copy main.asm file from Racer (rebooted) folder here also. Launch VS Code. Tweak Z80 assembly code as above to be cross platform.

Press Ctrl + Shift + B to execute build script => compile, link and run! Repeat process for all these projects:
 Sega Master System  Astroswab
 Sega Master System  Car Racer [classic]
 Sega Master System  Car Racer [reboot]
 Sega Master System  Digger Ball
 Sega Master System  Digger Chan
 
 Sega Master System  Fairy Forest
 Sega Master System  Jetpac
 Sega Master System  KunKun & KokoKun
 Sega Master System  Mega Man 2
 Sega Master System  Minesweeper
IMPORTANT
For all examples: Launch Emulicious separately first. In VS Code press F5 to debug step through Z80 code!

Commercial Games
Now let us extend this process to disassemble some commercial games built for the Sega Master System. This way, we may gain insight into the early development process plus be able to hack the original game code!

Transbot
Create C:\Transbot or ~/Transbot folder. Copy .vscode folder from above with launch.json and tasks.json files. Copy build.bat + build.sh too. Don't forget to grant execute permission with the chmod +x build.sh command.

Download Transbot ROM. Launch Emulicious | Open ROM. Tools menu | Debugger | press Ctrl + A to select all disassembled code. Save as Transbot.asm. Update all ".incbin ..." statements using the following utility:

Utility
Download the Binary-File-Write utility. Copy both the Transbot.asm and Transbot.sms files to the input folder. Update the config file | set key="fileName" value to "Transbot". Run BinaryFileWrite.exe. Copy the output to the Transbot folder.

Launch VS Code. Open Transbot folder. All prior ".incbin..." statements should now refer to binary data files. Press Ctrl + Shift + B to execute build script => compile, link and run! Repeat process for all these projects:
 Sega Master System  After Burner
 Sega Master System  Alien Syndrome
 SG-1000 / SC-3000  Congo Bongo
 SG-1000 / SC-3000  Flicky
 Sega Master System  Golden Axe
 
 SG-1000 / SC-3000  Monaco GP
 Sega Master System  Out Run
 Sega Master System  Shinobi
 Sega Master System  Transbot
 Sega Master System  Wonder Boy
IMPORTANT
For all examples: Launch Emulicious separately first. In VS Code press F5 to debug step through Z80 code!

Flicky
Create C:\Flicky or ~/Flicky folder. Repeat the entire process for Transbot as above but with the Flicky.sg ROM. Copy the .vscode folder with launch.json and tasks.json files plus build.bat + build.sh. Retrieve main.asm + the data folder from the utility. Press Ctrl + Shift + B to execute the build script + press F5 to debug step through Z80 code!

Hack
Use this new setup to hack the original game code! For example, in Flicky you always start at level #1. Wouldn't it be cool to start at any level? Also, wouldn't it be cool to have infinite lives? Let's check it out!

Start level and lives count default values are hardcoded in ROM and loaded into RAM. After debug stepping through the Flicky source code, we find a piece of code that loads 20 bytes of ROM into RAM on game start:
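A block copy like this is typically an ldir loop. Here is a hypothetical reconstruction (the start address, count and exact code in Flicky may differ):

    ld   hl, $02C3      ; source: default values table in ROM
    ld   de, $C0E7      ; destination: game variables in RAM
    ld   bc, 20         ; copy 20 bytes
    ldir                ; repeat until bc = 0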

Bytes $02C3 and $02C4 default to $01; could either of these be start level #1? Byte $02CA defaults to $02; could this be lives left? The corresponding data is loaded in RAM at $C0E7, $C0E8 and $C0EE respectively.

Launch Emulicious. Tools menu | Memory Editor | RAM. Right click $C0E7 | Toggle Watchpoint. Repeat for $C0E8 and $C0EE. Resume code. Play game and die! The value at RAM $C0EE decreases from $02 to $01.

Therefore, RAM $C0EE stores the lives count loaded from ROM $02CA! Replace the original $02 with $FF for "infinite" lives. Repeat the process: complete a level; see that RAM $C0E8 stores the start level loaded from ROM $02C4!


Summary
Now that we have a productive Z80 assembler development environment and have analyzed some larger projects to better understand the development process, we are in a great spot to build our own projects from scratch.

For completeness, we would still like to better understand the relationship between Sega Master System source code written in C using the devkitSMS and the underlying Z80 assembly code generated from it. This will be the topic of the next post.