Tuesday, March 17, 2026

PyBind Setup Cheat Sheet

In 2024, we checked out the OpenAI Retro Cheat Sheet covering an open-source project that provides an interface to interact with various retro video games for the purposes of Reinforcement Learning research. That project uses C/C++ for high performance but leverages pybind11 to create bindings for consumption by Python client code.

Let's check it out!

pybind11
A lightweight header-only library used to integrate C++ with Python by creating bindings that expose C++ functions to Python. Client code written in Python can then invoke the underlying C++ code.

Installation
All examples here are executed on Ubuntu Linux. Therefore install pybind11 globally to begin the examples:
 sudo apt-get update
 sudo apt-get install pybind11-dev
 sudo apt install build-essential g++

Example I
Create an example that exposes C++ function to Python with pybind11. Launch terminal | Enter commands:
  mkdir -p ~/HelloPyBind
  cd ~/HelloPyBind
  python -m venv .venv
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.8.10
  pip install --upgrade pip
  pip install pybind11


Create the following files: example.cpp, setup.py, test.py. Enter the following C++ and Python source code:
  example.cpp
  #include <pybind11/pybind11.h>
  
  int add(int x, int y)
  {
      return x + y;
  }
  
  PYBIND11_MODULE(example, m)
  {
      // optional module docstring
      m.doc() = "pybind11 example plugin";
      m.def("add", &add, "A function which adds two numbers");
  }

  setup.py
  from setuptools import setup, Extension
  import pybind11
  
  ext_modules = [
      Extension(
          "example",
          ["example.cpp"],
          include_dirs=[pybind11.get_include()],
          language="c++"
      ),
  ]
  
  setup(
      name="example",
      version="0.1",
      ext_modules=ext_modules,
  )

  test.py
  import example
  
  result = example.add(1, 2)
  print(f"1 + 2 = {result}")

Build the C++ code using setup.py build_ext --inplace. Finally execute python test.py for Python to execute the C++ code!
  python setup.py build_ext --inplace		# example.cpython-38-x86_64-linux-gnu.so
  python test.py				# OUTPUT	1 + 2 = 3


Example II
Repeat previous exercise but prefer PyCharm IDE. Launch PyCharm | New Project. Enter the following info:

 Location: ~/HelloPyBind
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter; if not, enter these commands:
  uv venv --python 3.11
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.11.11

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  uv add pybind11
  uv add setuptools
  uv lock
  uv sync

Create the following files: example.cpp, setup.py, test.py. Enter C++ and Python code similar to Example I. Build the C++ code using setup.py build and install. Finally execute uv run test.py for Python to execute the C++ code!
  uv run setup.py build		
  uv run setup.py install		# example.cpython-311-x86_64-linux-gnu.so
  uv run test.py			# OUTPUT	3 + 5 = 8	9 - 5 = 4


Example III
Repeat previous exercise but prefer CMake to build C++ code via CMakeLists.txt. Create PyCharm Project:
 Location: ~/HelloPyBind
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  uv add pybind11
  uv sync

Create the following files: example.cpp, CMakeLists.txt, test.py. Enter code similar to Example II but update:
  CMakeLists.txt
  cmake_minimum_required(VERSION 3.16)
  project(example)
  
  # Find the Python 3.11-specific pybind11 CONFIG from pip
  execute_process(
          COMMAND ${Python3_EXECUTABLE} -m pybind11 --cmakedir
          OUTPUT_VARIABLE pybind11_DIR
          OUTPUT_STRIP_TRAILING_WHITESPACE
  )
  find_package(pybind11 REQUIRED CONFIG)
  pybind11_add_module(example example.cpp)

Build C++ code using cmake and make. In the PyCharm Terminal | Enter the following commands to build:
  mkdir -p build
  cd build
  cmake -DPython3_EXECUTABLE=$(which python) ..
  make -j$(grep -c ^processor /proc/cpuinfo)

Enter commands to copy the library to where it will be used. Finally execute uv run test.py for Python to execute the C++ code!
  python -c "import sysconfig, shutil, glob; \
  dst = sysconfig.get_paths()['platlib']; \
  so = glob.glob('*.so')[0]; \
  shutil.copy2(so, dst)"
  cd ..
  uv run test.py			# OUTPUT	Hello, World!

IMPORTANT
The first 3x examples worked, but they tightly coupled Python and C++ without the ability to debug each side separately!

Example IV
Repeat previous exercise but prefer to modify the project layout to separate top level Python and C++ code:
  ~/HelloPyBind/
  ├── cpp/
  │   ├── src/
  │   │   ├── api/
  │   │   │   ├── my_api.h
  │   │   │   └── my_api.cpp
  │   │   ├── bindings/
  │   │   │   └── pybind_module.cpp       # pybind11 bindings
  │   │   ├── CMakeLists.txt
  │   │   └── main.cpp                    # C++ executable entry point
  │   ├── tests/
  │   │   ├── CMakeLists.txt
  │   │   └── test_api.cpp
  │   └── CMakeLists.txt                  # top-level C++ (CLion entry point)
  │
  └── python/
      ├── .venv/
      │   └── lib/
      │       └── python3.11/
      │           └── site-packages/
      │               └── my_api_py.cpython-311-x86_64-linux-gnu.so
      ├── test.py
      ├── pyproject.toml
      └── README.md

Create PyCharm Project. Setup virtual environment as before then create CLion project to build C++ code.
 Location: ~/HelloPyBind/python
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

Launch CLion | New Project. Create C++ Executable using C++ 17. Enter the following CLion information:
 Type: C++ Executable
 Location: ~/HelloPyBind/cpp
 Language standard: C++17

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory

Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

IMPORTANT
The CMakeLists.txt files are configured to copy the shared object (.so) file into the Python .venv virtual environment.
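One way such a copy step might be wired up in CMake (a sketch; the my_api_py target name and venv path are assumptions based on the layout above, not the project's actual files):

```cmake
# After building the pybind11 module, copy the resulting .so into the
# Python virtual environment's site-packages (paths are illustrative).
add_custom_command(TARGET my_api_py POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
        $<TARGET_FILE:my_api_py>
        ${CMAKE_SOURCE_DIR}/../python/.venv/lib/python3.11/site-packages/
)
```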

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  uv run test.py				# OUTPUT	1 + 2 = 3


Example V
Repeat previous exercise but prefer more complexity to build C++ code with classes as consumed by Python

Create PyCharm Project. Setup virtual environment as before then create CLion project to build C++ code. Launch CLion | New Project. Create C++ Executable using C++ 17. Enter the same CLion information as before.

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory. Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  uv run test.py		# OUTPUT	
  # Guitar: 'Fender' [6-string] = $1500.0
  # Guitar: 'Ibanez' [7-string] = $1200.0
  # Guitar: 'Gibson' [6-string] = $2400.0


Example VI
Repeat previous exercise but prefer more complexity to build C++ code with templates consumed by Python

Create PyCharm Project. Setup virtual environment as before then create CLion project to build C++ code. Launch CLion | New Project. Create C++ Executable using C++ 17. Enter the same CLion information as before.

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory. Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  # OUTPUT	
  # Container[0] = 0.0
  # Container[1] = 1.0
  # Container[2] = 2.0
  # Container[3] = 3.0
  # Container[4] = 4.0
  # OUTPUT	
  # Container[5] = 5.0
  # Container[6] = 6.0
  # Container[7] = 7.0
  # Container[8] = 8.0
  # Container[9] = 9.0


Example VII
Repeat previous exercise but prefer Visual Studio 2022 on Windows to build an increasing C++ code base:
  ~/HelloPyBind/
  ├── cpp/
  │   ├── src/
  │   │   ├── core/			  # Core API implementation
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── math/			  # Math-related API
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── mesh/			  # Mesh-related API
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── bindings/
  │   │   │   └── pybind_module.cpp       # pybind11 bindings
  │   │   ├── CMakeLists.txt
  │   │   └── main.cpp                    # C++ executable entry point
  │   ├── tests/
  │   │   ├── CMakeLists.txt
  │   │   ├── test_matrix.cpp
  │   │   ├── test_vector.cpp
  │   │   ├── test_mesh.cpp
  │   │   ├── test_mesh_algorithms.cpp
  │   │   └── test_mesh_processor.cpp
  │   └── CMakeLists.txt                  # top-level C++ (CLion entry point)
  └── python/

Launch Visual Studio 2022 | Continue without code. File | Open | CMake... Navigate to cpp/CMakeLists.txt. Build menu | Build All. Finally, choose Test menu | Test Explorer. Run All Tests in View or choose to Debug:


Summary
To summarize, we have demonstrated various PyBind examples in which C++ library code is consumed by a single Python API exclusively. However, in future there may be instances in which the C++ library needs to be consumed by multiple languages. In that case, ctypes.cdll.LoadLibrary() may be a better fit than pybind11!

Tuesday, February 3, 2026

Python Package Cheat Sheet

In 2020, we checked out the Python Setup Cheat Sheet for Python, an interpreted high-level programming language that runs on Windows, Mac OS/X and Linux using pip as the de facto standard package-management system. However, while this worked, requirements.txt is brittle with package dependencies + versioning. Poetry was introduced for deterministic builds, but at slower dependency resolution. Enter uv for blazing speed + reliability!

Let's check it out!

History
Python Setup Cheat Sheet detailed how to install pip as the de facto standard package-management system on Windows, Mac OS/X and Linux. However, requirements.txt does not lock down transitive dependencies + versioning which becomes brittle. Poetry solved this problem using pyproject.toml configuration and lock file.

uv
Replicating Poetry with more deterministic builds using pyproject.toml configuration and a lock file, uv is built in Rust by Astral as a full rethinking of Python packaging designed for speed and simplicity. uv is pitched as "A single tool to replace pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv and more". Here is a detailed article comparing the Python Packaging Landscape: Pip vs. Poetry vs. UV from a developer's standpoint.

IMPORTANT
First check out the traditional way using python and pip to compare and illustrate benefits of now using uv:

Virtual Environment
A virtual environment isolates a Python project's interpreter and installed packages from the system and other Python projects, which means each project has its own environment, Python version and dependencies.

Create a virtual environment in the traditional way using python and pip then activate virtual environment:
 python -m venv .venv
 Linux OR Mac OS/X  source .venv/bin/activate
 Windows  .\.venv\Scripts\activate

Next, install packages using python, pip and requirements.txt OR poetry with pyproject.toml configuration:
Brittle and/or slow [transitive] dependency resolution!

Installation
Download and install uv for Linux, Mac OS/X or Windows OR Launch PyCharm and install from home page:
 Linux  curl -LsSf https://astral.sh/uv/install.sh | sh
 Mac OS/X   brew update && brew install uv
 Windows   powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

uv init
Create a new directory and execute the following commands to initialize a Python project with an interpreter version:

Launch Terminal | Execute the following commands:
  mkdir HelloUV
  cd HelloUV
  uv init --python 3.11.11

uv venv
Navigate into the Python project and execute the following commands to create and activate the virtual environment:

Launch Terminal | Execute the following commands:
  uv venv --python 3.11.11
  source .venv/bin/activate		# Windows: .\.venv\Scripts\activate

uv python
At this point you have a Python project initialized with the virtual environment activated. Confirm the correct version of the Python interpreter is installed and active. When using PyCharm ensure the IDE interpreter path is aligned!

Inside PyCharm Terminal | Execute the following commands:
  which python
  `which python` --version

uv add
Install Python packages using either uv pip install or uv add. Prefer uv add because it updates the pyproject.toml file automagically, so you can later execute uv sync to install the dependencies.

Inside PyCharm Terminal | Execute the following commands:
  uv pip list
  uv add requests
  uv sync
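After uv add requests, pyproject.toml gains an entry along these lines (a sketch; the project name and version pin are illustrative and will differ):

```toml
[project]
name = "hellouv"                # illustrative project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "requests>=2.31.0",         # written by `uv add requests`
]
```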

uv tree
After executing uv add you can verify what is installed by checking the pyproject.toml file or executing uv pip list, but another useful method is uv tree, which shows the hierarchy of all your project's dependencies and their relationships.

Inside PyCharm Terminal | Execute the following command: uv tree

uv sync
Execute uv add or update pyproject.toml to declare dependencies. Execute uv sync to update the environment to match.

uv lock
The uv.lock file records the exact versions of all project dependencies that uv resolves as compatible from the pyproject.toml file, ensuring deterministic builds with the same dependencies each time, locally and in CI/CD.

uv tool
uv tool installs command-line tools persistently into their own isolated environments, keeping them available across Python projects:

Inside PyCharm Terminal | Execute the following commands:
  uv tool list
  uv tool install ruff
  uv tool run ruff check
  uv tool upgrade --all
  uv tool uninstall ruff
  uv tool list

uvx tool
Finally, uvx is an alias for uv tool run, designed to run a tool immediately without installing it persistently:

Inside PyCharm Terminal | Execute the following command: uvx ruff check

uv cache
When you use uv, all tools and dependencies are stored in a cache. These commands remove all cached data:
  uv cache clean
  rm -r "$(uv python dir)"
  rm -r "$(uv tool dir)"

Commands
Here is a quick summary of popular uv commands used during a typical workflow. A comprehensive list can be found in the uv documentation.
  uv init --app				# Scaffold project
  uv python install 3.11		# Install Python
  uv venv --python 3.11			# Create virtual environment
  uv add requests			# Add dependencies
  uv add -D pytest			# Add dev dependencies
  uv sync --frozen			# Sync (locked)
  uv sync				# Sync (normal)
  uv run python main.py			# Run program
  uv run python -V			# Show Python
  uvx ruff check .			# Run tools ad‑hoc
  uv lock				# Update lockfile

Docker
A well-built uv Docker image simplifies deployment + ensures application runs consistently in environment:
  FROM python:3.11.11-slim
  # Install uv
  COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
  COPY . /app
  WORKDIR /app
  # Install dependencies and clear cache
  RUN uv sync --no-dev --frozen
  RUN rm -rf ~/.cache/uv
  CMD ["python", "-c", "print('Hello, World!')"]

This example Dockerfile snippet shows how to integrate uv! Build and run the image with these commands:
  docker build -t uv-hello .
  docker run --rm uv-hello

GitHub Actions
A common GitHub Actions pattern is to use UV to install only Prod dependencies during your build or deploy:
  name: Testing
  on:
    push:
      branches:
        - "main"
  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Install UV
          run: curl -LsSf https://astral.sh/uv/install.sh | sh
        - name: Sync only prod dependencies
          run: uv sync --no-group dev --group prod
        - name: Run tests
          run: uv run pytest

Summary
To summarize, uv has been pitched "One Tool for Everything" as uv replaces pip, virtualenv, pip-tools, pipx, poetry, pyenv, twine and is fast at every step. However, this has only scratched the surface as uv integrates well with other tools for fast reproducible Python workflows such as direnv, pre-commit hooks, pdm + more!
 uv  Handles Python installs, environment creation, and dependency resolution with locking
 direnv   Evaluates .envrc managing environment variables automatically applied to shell session
 pdm  Orchestrate workflows + provide unified tooling to build, version, and publish packages

Thursday, January 1, 2026

Retrospective XVII

Last year, I conducted a simple retrospective for 2024. Therefore, here is a retrospective for year 2025.

2025 Achievements
  • Document DevOps managed clusters provisioning setup E.g.: Azure AKS, AWS-EKS, GCP-GKE
  • Present + report The Evolution of Software Deployment Cloud CI/CD setup theory to practical
  • Consolidate pytest unit test framework features such as fixtures, factories, mocking, patching
  • Transition Python package management from pip to poetry and finally to uv for blazing speed
  • Introduction to MLOps theory learning to practical ML models and deployment implementation
  • Extend GitLab CI/CD experience to upskill to GitHub Actions executing on localhost using ACT
  • Resurrect PyBind knowledge on OpenAI Retro to Mac on CLion for Python to C/C++ examples
  • Reverse Engineer RetroAchievements project open source code for Simpsons Trivia integration

Note: reverse engineering and setting up the RetroAchievements project open source code is a big achievement!

2026 Objectives
  • Document Python package management move from pip to poetry to uv for new Python projects
  • Continue MLOps integration to bridge Software Engineering to Machine Learning knowledge gap
  • Register PyBind Python to C/C++ setup cross platform and explore debugging across languages
  • Harness prior OpenAI Retro and RetroAchievements experience and apply to future AI projects!

AI Demand
According to 2025 analysis, global AI and ML Engineer job openings number in the hundreds of thousands, with employers increasingly willing to hire based on demonstrable skills rather than advanced degrees. One market study forecasts the AI Engineering sector could grow from $17.4 billion in 2025 to $87.5 billion by 2030, implying a sustained growth rate for the next 5yrs.

Therefore, as a Professional Software Engineer, what are some ways to transition to a more focused AI role?

MLOps Engineer
By expanding traditional Software Development Life Cycle skills across DevOps as modern delivery practices have evolved, one next step is the progression into MLOps: enabling the deployment, monitoring and management of machine learning models built by dedicated ML Engineers. This provides immediate value to AI-driven teams while building ongoing machine learning expertise.

AI Engineer
An AI Engineer specializes in building, integrating, deploying and optimizing machine + deep learning systems, bridging the gap between research models and production-ready software. A Software Engineer focuses on application logic, architecture, performance and code quality, whereas an AI Engineer works at the intersection of Software Engineering and Data Science.

Future
In 2024 we checked out OpenAI Gym and Retro to gain early exposure to AI-centric principles, integrating Reinforcement Learning environments using Python and C/C++ to upskill in AI concepts such as agent training, environment design, reproducibility and iterative experimentation.

In 2025 we checked out the RetroAchievements ecosystem to further deepen these skills, dealing with emulation internals, memory inspection and state tracking, all highly relevant to RL and model-driven control systems.

Together this experience forms a connection between long-standing Professional Software Engineer expertise and the emerging demands of AI Engineering. Combining systems programming with complex environment instrumentation and practical RL work aligns with one of the fastest-growing roles in the technology sector!

Saturday, November 15, 2025

Simpsons Trivia Retro Achievements

In 2018, Simpsons Trivia, built for the Sega Master System, was one entry in the annual SMS Power! coding competition. The immediate forum feedback was super positive. In 2020, member FeRcHuLeS posted about enjoying the game and mastering it on retroachievements.org. I was unaware of this website, so I had to investigate!

Let's check it out!

RetroAchievements
RetroAchievements is an online service which provides users with fan-made achievement sets for many retro gaming platforms such as the Sega Master System. Navigate to the RetroAchievements website. Choose Sign up | Create an account. Download the RALibretro emulator, which facilitates achievements integration for retro games!

Launch RALibretro emulator | Choose RetroAchievements menu | Login | Enter RetroAchievements site info:


Cores
RetroAchievements cores are special versions of emulator cores. Essentially, a core is a plug-in or emulator that runs platform-specific games. Choose Settings menu | Manage Cores... | Master System and Download.


Simpsons Trivia
Download Simpsons Trivia v1.02. Choose RALibretro File menu | Select Core | Master System | Genesis Plus GX, which ensures audio support. Choose File menu | Load Game | SimpsonsTrivia-v1.02.sms. The game is ready to play:

IMPORTANT: Simpsons Trivia v1.02 must be selected as this has the correct Retro Achievements game hash! Choose arrow keys for Left, Right, Up, Down movement. Choose Z for Fire1, X for Fire2, P or Enter to Pause.

Implementation
The Simpsons Trivia game has 4x difficulty categories: Easy, Normal, Hard and Pro! plus 4x question rounds: 5, 10, 25, and 50. Thus 20x achievements are set up: a perfect win for 4x categories * 4x rounds, plus "Get between 80-100%" for 4x categories. Shout out to Bl4h8L4hBl4h for setting up the 20x achievements as per the documentation!

Also, shout out to BenGhazi for explaining how the RALibretro emulator processes achievements, e.g. each achievement has 4x important pieces of data stored in RAM: Difficulty [0x00], Selected Rounds [0x05], Correct Answers [0x05] and ScreenID [0x0C]. When all values align on the Game Over screen, the achievement is set!

IMPORTANT: here is Tag 1.02 source code excerpt which corroborates Game Over screen definition [0x0C]:
  // Screen type.
  #define SCREEN_TYPE_NONE	0
  #define SCREEN_TYPE_SPLASH	1
  #define SCREEN_TYPE_TITLE	2
  #define SCREEN_TYPE_INTRO	3
  #define SCREEN_TYPE_DIFF	4
  #define SCREEN_TYPE_LONG	5
  #define SCREEN_TYPE_READY	6
  #define SCREEN_TYPE_LEVEL	7
  #define SCREEN_TYPE_NUMBER	8
  #define SCREEN_TYPE_PLAY	9
  #define SCREEN_TYPE_QUIZ	10
  #define SCREEN_TYPE_SCORE	11
  #define SCREEN_TYPE_OVER	12
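The trigger logic described above can be sketched in Python (a hypothetical model for illustration, not RetroAchievements code): the achievement fires only when the RAM values align on the Game Over screen.

```python
# Hypothetical model of the trigger: a perfect win unlocks only when the
# Game Over screen is shown and every selected round was answered correctly.
SCREEN_TYPE_OVER = 12  # matches SCREEN_TYPE_OVER in the source excerpt

def perfect_win_unlocked(selected_rounds: int, correct_answers: int,
                         screen_id: int) -> bool:
    return screen_id == SCREEN_TYPE_OVER and correct_answers == selected_rounds

assert perfect_win_unlocked(10, 10, SCREEN_TYPE_OVER)      # perfect game
assert not perfect_win_unlocked(10, 9, SCREEN_TYPE_OVER)   # one miss
assert not perfect_win_unlocked(10, 10, 9)                 # still playing
```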

GitHub
RetroAchievements GitHub includes the following source code repositories for game achievement integration:
 rcheevos  Library to parse and evaluate achievements and leaderboards for RetroAchievements
 RAInterface  Enables RetroAchievements emulators to interact with the server via RA_Integration.dll
 RAIntegration   The DLL responsible to integrate emulators with RetroAchievements.org
 RALibretro  RALibretro is the multi-emulator used to develop RetroAchievements

Clone
On Windows, launch Terminal. Create C:\GitHub\RetroAchievements directory. Git clone the following repos:
 mkdir -p C:\GitHub\RetroAchievements
 cd C:\GitHub\RetroAchievements
 git clone --quiet https://github.com/RetroAchievements/rcheevos.git
 git clone --quiet https://github.com/RetroAchievements/RAInterface.git
 git clone --recursive --depth 1 -q https://github.com/RetroAchievements/RAIntegration.git 
 git clone --recursive --depth 1 -q https://github.com/RetroAchievements/RALibretro.git

IMPORTANT: use the --quiet [or -q] flag in order to suppress all the superfluous submodule logging output!


rcheevos
rcheevos is a C library that makes it easier to process Retro Achievements data. Launch Visual Studio 2022. Open rcheevos-test.sln. Choose Debug | x64. Rebuild Solution. Open test/test.c. Press F5 to Debug source:


RAInterface
RAInterface is a submodule which provides emulator hooks to integrate with RA server via RA_Integration.dll

RAIntegration
RAIntegration is the main DLL used for interfacing with retroachievements.org. Launch Visual Studio 2022. Open RA_Integration.sln. Choose Debug | x64. Rebuild Solution. Install CppUnitTest Test Adapter to run all Interface + Integration tests: Extensions menu | Manage Extensions | Test Adapter CppUnitTest Framework

Launch Test Explorer | Test menu | Test Explorer | Right click specific test e.g. RA_Interface.Tests | Debug:


RALibretro
RALibretro is the multi-emulator used to develop RetroAchievements and earn them. RALibretro uses libretro cores to do the actual emulation with RAIntegration DLL to connect with the site. Launch Visual Studio 2022. Open RALibretro.sln. Choose Debug | x64. Rebuild Solution. Open src/main.cpp. Press F5 to Debug source:


Overview
Here is an overview of the 1x C project rcheevos and 3x C++ projects to interface + integrate + emulate:
 Project  Role  Dependencies
 rcheevos  Core engine: parse + evaluate achievements / leaderboards  None
 RAInterface  Interface library for emulators to interact with the RA stack  rcheevos
 RAIntegration   Full DLL integration [login/server/achievement submission]  RAInterface + rcheevos 
 RALibretro  Multi-emulator that uses RA stack to support achievements  RAIntegration

libretro
libretro is an API designed for retro games and emulators to be compiled as DLLs which can be used in front ends like RALibretro that implement the libretro API. This is reminiscent of the OpenAI Retro work we did in 2024 which also uses libretro cores but exposes them in Python with Reinforcement Learning environments.

Similar to RetroAchievements OpenAI Retro uses its own Game Integration tool to inspect memory for points of interest e.g. starting state, reward function + done condition whereas RALibretro uses Memory Inspector:


Summary
To summarize, the RetroAchievements integration into Simpsons Trivia game has been awesome as gamers continue to play this game and top the achievements more than seven years after it was published! In fact, many hard core gamers have published YouTube videos showcasing their achievements for others to follow:

The next step would be to research the documentation fully and create some RetroAchievements sets myself!

Monday, September 15, 2025

Pytest Setup Cheat Sheet

In 2020, we checked out the Python Setup Cheat Sheet for Python, an interpreted high-level programming language, with all code samples' unit tests using the unittest package TestCase class. However, since then we have learned that pytest allows writing shorter, more readable tests with less boilerplate. Plus we would like to include mocks!

Let's check it out!

Frameworks
When developing code in Python there are typically five Top Python Testing Frameworks that are favored:
 NAME  MONIKER   DESCRIPTION
 unittest  PyUnit  The default Python testing framework built-in with the Python Standard Library
 pytest  Pytest  Popular testing framework known for simplicity, flexibility + powerful features
 noseTest  Nose2  Enhanced unittest version offering additional plugins to support test execution
 DocTest  DocTest  Python Standard Library module generates tests within source code DocString
 Robot  Robot  Acceptance testing keyword-driven module that simplifies testcase automation

Here are some reasons why pytest currently seems to be the most popular Python unit test framework out there:
  1. Simple and Readable Syntax
     You write plain Python functions instead of creating large verbose classes
     Assertions use plain assert statements which provide more detailed output
  2. Rich Plugin Ecosystem
     Plugins like pytest-mock, pytest-asyncio, pytest-cov, and more
     Easy to extend pytest plugins or write your own custom plugins
  3. Powerful Fixtures
     Allows for clean and re-usable setup and teardown using fixtures
     Supports various test level scopes, autouse, and parametrization
  4. Test Discovery
     Automatically discovers tests in files named test_*.py
     No need to manually register tests or use loader classes
  5. Great Reporting
     Colored output, diffs for failing assertions, and optional verbosity
     Integrates easily with tools like coverage, tox, and CI/CD systems
  6. Supports Complex Testing Needs
     Parameterized tests (@pytest.mark.parametrize)
     parallel test execution (pytest-xdist) + hooks

pytest
  pip install pytest

Setup
Depending on your stack, here is some great documentation to set up pytest on PyCharm, VS Code or Poetry.

Configuration
In pytest, pytest.ini is the main configuration file used to customize and control pytest behavior across the unit test suite. pytest.ini hosts pytest options, test paths, plugin settings and markers to attach to the test functions to categorize, filter or modify their behavior. Here is a sample pytest.ini configuration file as base:
  [pytest]  
  addopts = -ra -q
  testpaths = tests
  markers =
      slow: marks tests as slow (deselect with '-m "not slow"')
      db: marks tests requiring database
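A marker declared above can then be attached to test functions; a minimal sketch using the slow marker (the test body is illustrative):

```python
import pytest

# Deselect this test on fast runs with: pytest -m "not slow"
@pytest.mark.slow
def test_heavy_computation():
    assert sum(range(1000)) == 499500
```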

Fixtures
Fixtures are functions in pytest that provide a fixed baseline for tests to run. Fixtures can be used to set up preconditions for tests, provide data, or perform teardown after tests finish, via the @pytest.fixture decorator.

Scope
Fixtures have scope: Function, Class, Module + Session, which defines how long the fixture remains available during testing:
 SCOPE DESCRIPTION
 Function Fixture created once per test function and destroyed at end of test function
 Class Fixture created once per test class and destroyed at the end of test class
 Module Fixture created once per test module and destroyed at end of test module
 Session Fixture created once per test session and destroyed at end of test session

conftest
In pytest, the conftest.py file is used to share fixtures across multiple tests. All fixtures in conftest.py are automagically detected without needing to import them. conftest.py is typically scoped at the test root directory.
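A minimal sketch (the fixture and file names are illustrative):

```python
# conftest.py: fixtures defined here are auto-discovered by every test
# module in this directory tree, with no import required.
import pytest

@pytest.fixture
def sample_user():
    return {"name": "Ada", "email": "ada@example.com"}

# In a test module, the fixture is requested simply by parameter name:
def test_sample_user(sample_user):
    assert sample_user["email"].endswith("@example.com")
```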

Dependencies
Dependency Injection: fixtures can be requested by other fixtures, although this adds complexity to tests!

autouse
A simple trick to avoid requesting a fixture in each test: use the autouse=True flag to apply the fixture to all tests.

yield
When you use yield in a fixture function, setup code executes before the yield and teardown code executes after it:
  import pytest  
  @pytest.fixture
  def my_fixture(): 
      # setup code
      yield "fixture value"
      # teardown code

Arguments
Use pytest fixtures with arguments to write re-usable fixtures that can easily be shared across tests, also known as parameterized fixtures, using the @pytest.fixture(params=[0, 1, 2]) syntax. Note: these fixtures should not be confused with the @pytest.mark.parametrize decorator, which specifies inputs and outputs!
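A short sketch contrasting the two (names are illustrative): a parametrized fixture re-runs every test that requests it once per param, while @pytest.mark.parametrize pairs explicit inputs with expected outputs.

```python
import pytest

# Parametrized fixture: tests requesting `number` run once per param.
@pytest.fixture(params=[0, 1, 2])
def number(request):
    return request.param

def test_square_non_negative(number):
    assert number * number >= 0

# parametrize marker: explicit input/output pairs on the test itself.
@pytest.mark.parametrize("x, expected", [(2, 4), (3, 9)])
def test_square(x, expected):
    assert x * x == expected
```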

Factories
Factories, in the context of pytest fixtures, are fixtures that return functions used to create test data or objects with specific configuration in a re-usable manner:
  conftest.py
  @pytest.fixture
  def user_creds():
      def _user_creds(name: str, email: str):
          return {"name": name, "email": email}
      return _user_creds

  test_user.py
  def test_user_creds(user_creds):
      assert user_creds("John", "x@abc.com") == {
          "name": "John",
          "email": "x@abc.com",
      }

Best practices for organizing tests include: Organizing Tests by Testing Pyramid, Structure Should Mirror Application Code, Group or Organize Fixtures and Organize Tests Outside Application Code for scalability.

Mocking
Mocking is a technique that allows you to isolate the code being tested from its dependencies so the test can focus on the code under test in isolation. The unittest.mock package offers Mock and MagicMock objects:

Mock
A mock object simulates the behavior of the object it replaces by creating attributes and methods on-the-fly.

MagicMock
Subclass of Mock with default implementations for most magic methods (__len__, __getitem__, etc.). Useful when mocking objects that interact with Python's dunder methods that enable custom behaviors for common operations.
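The difference is easy to demonstrate (a minimal sketch using only unittest.mock):

```python
from unittest.mock import Mock, MagicMock

magic = MagicMock()
magic.__len__.return_value = 3     # dunder methods are pre-configured on MagicMock
assert len(magic) == 3

plain = Mock()
try:
    len(plain)                     # plain Mock has no __len__
except TypeError:
    print("Mock does not support len()")
```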

Patching
Patching is a technique that temporarily replaces real objects in code with mock objects during test execution. Patching helps ensure external systems do not affect test outcomes, so tests are consistent and repeatable.

IMPORTANT - Mocks are NOT stubs!
When we combine the @patch decorator with return_value or side_effect, the result behaves like a stub, but built from the mock package!
 METHOD DESCRIPTION
 return_value Specify a single value for the Mock object to return every time the method is called
 side_effect Specify an iterable of values to return on successive calls (or an exception to raise)
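Both behaviors in a short sketch (the fetch method name is arbitrary):

```python
from unittest.mock import Mock

stub = Mock()
stub.fetch.return_value = 42                  # same value on every call
assert stub.fetch() == 42
assert stub.fetch() == 42

# side_effect: successive calls consume the iterable; an exception instance is raised
stub.fetch.side_effect = [1, 2, ValueError("boom")]
assert stub.fetch() == 1
assert stub.fetch() == 2
try:
    stub.fetch()                              # third call raises ValueError
except ValueError:
    print("raised as configured")
```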

Difference
In pytest, Mock and patch are both tools for simulating or replacing parts of your code during testing. Mock creates mock objects, while patch temporarily replaces real objects with mocks during tests to isolate code. Note: patch targets the name where the object is *looked up*, so the example patches and calls external_function through its module:
 Mock
  from unittest.mock import Mock

  mock_obj = Mock()
  mock_obj.some_method.return_value = 42
  result = mock_obj.some_method()
  assert result == 42

 patch
  from unittest.mock import patch

  import module_name  # module defining external_function

  @patch('module_name.external_function')
  def test_function(mock_external):
      mock_external.return_value = "Mock data"
      result = module_name.external_function()  # patched lookup
      assert result == "Mock data"
IMPORTANT
When creating mocks, it is critical to ensure mock objects accurately reflect the objects they replace. Thus, it is best practice to use autospec=True so mocks respect the signatures of the functions being replaced!
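The same protection is available via create_autospec (send_email is a hypothetical function for the sketch): calls that violate the real signature raise TypeError instead of silently passing:

```python
from unittest.mock import create_autospec

def send_email(to: str, subject: str) -> bool:
    ...  # real implementation irrelevant for the sketch (hypothetical function)

mock_send = create_autospec(send_email, return_value=True)
assert mock_send("a@b.com", "Hi") is True       # matching signature: OK

try:
    mock_send("a@b.com", "Hi", "too", "many")   # wrong arity is rejected
except TypeError:
    print("autospec caught a bad call signature")
```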

Assertions
For completeness, here is a list of assertion methods to verify how a mock object was called during tests:
 METHOD DESCRIPTION
 assert_called verify specific method on mock object has been called during a test
 assert_called_once verify specific method on mock object has been called only one time
 assert_called_once_with verify specific method on mock object called once with specific args
 assert_called_with verify the most recent call to the mock was made with the given arguments
 assert_not_called verify specific method on mock object was not called during the test
 assert_has_calls verify the order in which specific method on mock object was called
 assert_any_call verify specific method on mock object has been called at least once
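Most of these can be shown in one short sketch (the service mock and its save/delete methods are arbitrary names):

```python
from unittest.mock import Mock, call

service = Mock()
service.save("alice")
service.save("bob")

service.save.assert_called()                                  # at least once
service.save.assert_called_with("bob")                        # most recent call
service.save.assert_any_call("alice")                         # any call matches
service.save.assert_has_calls([call("alice"), call("bob")])   # in this order
service.delete.assert_not_called()                            # never invoked
```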

Monkeypatch
Monkeypatching is a technique used to modify code behavior at runtime, especially where certain dependencies or settings, for example environment variables or system paths, make it challenging to isolate functionality:
  app.py
  import os

  def get_app_mode() -> str:
      app_mode = os.getenv("APP_MODE")
      return app_mode.lower()

  test_app.py
  from app import get_app_mode

  def test_get_app_mode(monkeypatch):
      """Test behavior when APP_MODE is set."""
      monkeypatch.setenv("APP_MODE", "Testing")
      assert get_app_mode() == "testing"

pytest-mock
pytest-mock is a pytest plugin built on top of unittest.mock that provides an easy-to-use mocker fixture for creating mock objects and patching functions. The default behavior of mocker.patch() is to replace the target with a MagicMock().
  pip install pytest-mock

  app.py
  import requests
  from http import HTTPStatus
  
  def get_user_name(user_id: int) -> str:
      response = requests.get(f"https://api.example.com/users/{user_id}")
      return response.json()['name'] if response.status_code == HTTPStatus.OK else None

  test_app.py
  from http import HTTPStatus
  from app import get_user_name
  
  def test_get_user_name(mocker):
      mock_response = mocker.Mock()
      mock_response.status_code = HTTPStatus.OK
      mock_response.json.return_value = {'name': 'Test'}
      mocker.patch('app.requests.get', return_value=mock_response)
      result = get_user_name(1)
      assert result == 'Test'

Legacy
In many legacy Python codebases you may find references to Mock(), MagicMock() and the @patch decorator from unittest.mock used alongside pytest. Teams often keep the old style unless there is a compelling reason to refactor it.

Recommendation
However, here are some recommendations that prefer pytest-mock and the mocker fixture for future unit testing:
  1. Prefer pytest-mock and the mocker fixture
     Cleaner syntax than unittest.mock.patch
     Automatically cleaned up after each test
     Plays well with other pytest fixtures
     Centralizes all patching into one fixture (mocker)
  2. Use monkeypatch for patching env vars, system paths, etc.
     Prefer monkeypatch for clarity and idiomatic pytest style
     e.g. os.environ, system paths, or patching open()
  3. Avoid @patch decorators unless migrating old tests
     Can be harder to read or stack with multiple patches
     Better to use mocker.patch() inline as cleaner syntax
  4. Use autospec=True when mocking complex or external APIs
     Ensure mocks behave like the real objects (catch bad call signatures)
  5. Use fixtures to share mocks across tests
     When you have mock used by multiple tests then define it as a fixture
tl;dr
Prefer pytest-mock (mocker fixture) for readability and less boilerplate. Import tools like MagicMock, Mock, call, ANY from unittest.mock when needed. Avoid @patch unless needed — inline mocker.patch() is usually cleaner. Keep everything in one style within a test module for consistency.

pytest-asyncio
Concurrency allows a program to execute its tasks efficiently and asynchronously, i.e. making progress while other tasks are waiting. pytest-asyncio simplifies handling event loops and managing async fixtures during unit testing.
  pip install pytest-asyncio

  app.py
  import asyncio

  async def fetch_data():
      # Simulate I/O operation.
      await asyncio.sleep(1)
      return {"status": "OK", "data": [42]}

  test_app.py
  import pytest
  from app import fetch_data

  @pytest.mark.asyncio
  async def test_fetch_data():
      result = await fetch_data()
      assert result["status"] == "OK"
      assert result["data"] == [42]
Relatedly, AsyncMock from unittest.mock allows you to mock asynchronous functions and coroutines.
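A minimal sketch: unlike a plain Mock, an AsyncMock returns an awaitable when called, so it can stand in for an async def:

```python
import asyncio
from unittest.mock import AsyncMock

# AsyncMock returns an awaitable when called, so it can replace an async def
fetch = AsyncMock(return_value={"status": "OK"})

async def main():
    return await fetch()      # a plain Mock here would fail: not awaitable

assert asyncio.run(main()) == {"status": "OK"}
fetch.assert_awaited_once()
```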

CI/CD
GitHub Actions is a feature-rich CI/CD platform that offers an easy and flexible way to automate your testing processes. GitHub Actions workflows are defined in YAML files; a workflow file contains one or more jobs, each consisting of a sequence of steps. Here is a sample YAML file that triggers the workflow on git push:
  .github/workflows/run_test.yml
  name: Run Unit Test via Pytest
  on: [push]
  jobs:
    build:
      runs-on: ubuntu-latest
      strategy:
        matrix:
          python-version: ["3.10"]
      steps:
        - uses: actions/checkout@v3
        - name: Set up Python ${{ matrix.python-version }}
          uses: actions/setup-python@v4
          with:
            python-version: ${{ matrix.python-version }}
        - name: Install dependencies
          run: |
            python -m pip install --upgrade pip
            pip install pytest coverage
            if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
        - name: Lint with Ruff
          run: |
            pip install ruff
            ruff check --output-format=github --target-version=py310 .
          continue-on-error: true
        - name: Test with pytest
          run: |
            coverage run -m pytest  -v -s
        - name: Generate Coverage Report
          run: |
            coverage report -m

Summary
To summarize, we have set up pytest for more robust unit testing with mocks and stubs via patching. Looking forward, there are additional ways to improve the unit test development experience with pytest:
  1. Use Markers To Prioritise Tests
     Organize tests in such a way that prioritizes key functionalities first
     Running tests with critical functionality first provides faster feedback
  2. Do More With Less (Parametrized Testing)
     Parametrized testing allows you to test multiple scenarios in a single test function
     Feed different parameters into the same test logic, covering more scenarios with less code
  3. Profiling Tests
     Identify the slow-running unit tests using the --durations=XXX flag
     Use the pytest-profiling plugin to generate tabular and heat graphs
  4. Run Tests In Parallel (Use pytest-xdist)
     Use the pytest-xdist plugin to distribute tests across multiple CPUs
     Tests run in parallel, use resources better, provide faster feedback!
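Points 1 and 2 combine naturally; as a sketch (the "critical" marker name is hypothetical and should be registered under markers in pytest.ini to avoid warnings):

```python
import pytest

# Hypothetical "critical" marker -- run only these tests with: pytest -m critical
@pytest.mark.critical
@pytest.mark.parametrize("raw,expected", [("A", "a"), ("B", "b")])
def test_normalise(raw, expected):
    # one test function covers every (raw, expected) pair
    assert raw.lower() == expected
```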

Sunday, August 31, 2025

Cloud CI-CD Cheat Sheet II

In the previous post, we checked out the Cloud CI/CD Cheat Sheet to transition from the 1990s to modern-day CI/CD. Now let's integrate GitLab with the GitFlow SDLC to demonstrate the benefits of a Kubernetes CI/CD pipeline.

Let's check it out!

GitLab CI/CD
   Create .gitlab-ci.yml at the root of the project
   This is the driver file that co-ordinates the stages:
   Build / Lint / Deploy

gitlab-ci.yml
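As a hedged sketch of what the driver file might contain (the stage, job, chart and image names are assumptions, not the project's actual values; $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab predefined variables):

```yaml
stages:
  - build
  - lint
  - deploy

build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

lint:
  stage: lint
  script:
    - pip install ruff
    - ruff check .

deploy-dev:
  stage: deploy
  environment: dev
  script:
    - helm upgrade --install my-app ./chart -f values-dev.yaml
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
```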


Variables
   Generic variables used in all environments, plus environment-specific variables to build the software
   Rules that automate deployments to "lower" environments vs. manual deployments to higher ones
   YAML that builds the Docker image and pushes it to the container registry of the developer's choice
   YAML with instructions on how to deploy the latest built Docker image to the Kubernetes cluster

 environments.yml  deployment-rules.yml

Artefacts
   YAML files that contain Helm chart artefacts, such as the Deployment and Service YAML
   YAML files that contain Values to be injected, including environment-specific variables

 deployment.yaml  service.yaml

NOTE: Hardcoded non-sensitive variables, including all environment variables, are stored in Values YAML files, whereas sensitive information is stored in Kubernetes Secret resources and injected at deployment time.

GitFlow SDLC
Development
   The GitLab source code repo has a main branch for all Prod deployments
   The GitLab source code repo has a develop branch as the integration branch
   The develop branch is used for feature development and deployment to DEV / UAT
   GitFlow: to keep the develop branch stable, cut feature branches off develop

Deployment
   Submit Pull Request | Merge to develop branch | Trigger build
   Auto-deploy to DEV | Manual deploy to UAT [when QA ready]

Testing
   Once features are complete on DEV and preliminary testing passes on UAT, cut a release branch off develop
   Deploy the release branch to UAT; complete feature testing and regression testing
   For any bugs found in the UAT release candidate, cut a bugfix branch off the release branch
   Fix bug | Submit Pull Request | Merge to release branch | Re-deploy to UAT [manually]

Release
   Once the release candidate is stable and all bugs are fixed, submit a Pull Request from the release branch to main
   This action triggers the build pipeline but does NOT deploy! Manually deploy to Prod when stakeholders agree!


Alignment
   Finally, after deploying to Prod from main, submit a PR from main to develop for alignment
   Hotfixes work similarly to bugfixes: cut a hotfix branch from main, submit a PR and deploy to Prod
   After the hotfix is merged to main and deployed to Prod, submit a PR from main to develop for alignment


Kubernetes Management: Rancher
Q. What is Rancher?
Open source platform that simplifies the deployment, scaling and management of your Kubernetes clusters:
   Kubernetes: open source orchestration platform that automates management of containerized apps
   Rancher: open source container platform built on top of Kubernetes to simplify cluster management
   Download Kubernetes cluster configuration kubeconfig files from Rancher to connect to your clusters


Kubernetes kubeconfig
   The kubeconfig file is a YAML configuration defining the clusters, users and contexts used to connect to Kubernetes
   Download DEV kubeconfig file from Rancher to localhost ~/.kube/dev-config
   Download UAT kubeconfig file from Rancher to localhost ~/.kube/uat-config

SETUP
  # Setup the global KUBECONFIG environment variable
  export KUBECONFIG=~/.kube/config:~/.kube/dev-config:~/.kube/uat-config
  # Flatten multiple kubeconfig files into one "master" kubeconfig file
  kubectl config view --flatten > one-config.yaml
  # Rename accordingly
  mv one-config.yaml ~/.kube/config
  # Confirm cluster configuration update
  kubectl config get-contexts


Deployment Verification
Monitor cluster - What is kubectl?
   kubectl is a command-line tool to run commands against Kubernetes clusters, communicating via the Kubernetes API
   Post-deployment, use kubectl commands to verify cluster health, ensuring all pods have re-spawned
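Typical post-deployment checks might look like the following (the context, namespace and deployment names are placeholders for your own values):

```shell
# Post-deployment health checks (resource names are hypothetical)
kubectl --context dev get pods -n my-namespace            # all pods Running?
kubectl rollout status deployment/flask-api -n my-namespace
kubectl logs deployment/flask-api -n my-namespace --tail=50
kubectl describe pod <pod-name> -n my-namespace           # inspect events on failure
```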


TEST Deployment
Finally, test endpoint(s) via curl or in Postman:
  # Test endpoint
  kubectl port-forward service/flask-api-service 8080:80
  curl http://localhost:8080/api/v1 --header "Content-Type: application/json"
  # RESPONSE
  {"message": "Hello World (Python)!"}


CI/CD Pipeline Benefits
Four benefits of CI/CD: a successful pipeline strategy helps your team deliver higher-quality software faster!
   Increased speed of innovation + automation = deployments that are faster and more regular
   Code in Production adds immediate value instead of sitting in a deployment queue!
   Engineers become more productive instead of focusing on boring / mundane manual tasks
   Higher code quality due to continuous automated build, test, deploy, rinse + repeat cycles

Summary
To summarize, we have now highlighted the back story of transitioning from the 1990s to modern-day CI/CD and outlined the integration with the GitFlow SDLC to demonstrate the benefits of a Kubernetes CI/CD pipeline!