Tuesday, May 5, 2026

PyTorch Setup Cheat Sheet II

In the previous post, we checked out PyTorch Setup as an open-source machine learning framework widely used for building + training neural networks. There we installed and configured our nVidia driver and CUDA Toolkit to build some simple PyTorch programs. However, we built the PyTorch3d + TorchSparse packages from source each time. Now we would like to pre-build custom wheels housed on localhost for reuse in future examples.
Let's check it out!

Example I
All examples listed are Python 3.10.19 compatible. Launch PyCharm | New Project. Enter the following info:

 Location: ~/HelloPyTorch3dWheels
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter. If not, enter the following commands:
  uv venv --python 3.10
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.10.19

In the PyCharm Terminal | Enter the following commands for UV to install packages to build custom wheels:
  uv pip install twine
  uv pip install numpy==1.26.4
  uv pip install --index-url https://download.pytorch.org/whl/cu121 \
      "torch==2.2.0+cu121" "torchvision==0.17.0"  "torchaudio==2.2.0"

Create a directory to house the custom wheels: mkdir -p wheelhouse-cu121. Write shell scripts containing the wheel-build logic and make them executable:
 PyTorch3D chmod +x steveprobuild_pytorch3d_wheel.sh
 TorchSparse chmod +x steveprobuild_torchsparse_wheel.sh

Execute the shell scripts to build the custom wheels. These will be the custom wheels used locally in the next examples:
 COMMAND ARTIFACT
 bash steveprobuild_pytorch3d_wheel.sh stevepropytorch3d-0.7.7-cp310-cp310-linux_x86_64.whl
 bash steveprobuild_torchsparse_wheel.sh steveprotorchsparse-2.0.0b0-cp310-cp310-linux_x86_64.whl
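The cp310 tags in the artifact names must match the interpreter inside the virtual environment, otherwise pip/uv will refuse the wheel. A small hedged sketch (the helper name is hypothetical) that checks a wheel filename against an interpreter tag:

```python
import sys

def wheel_matches(filename, py_tag=None):
    """Check that a wheel filename carries the given cpXY interpreter tag."""
    # Wheel names follow: name-version-pytag-abitag-platform.whl
    py_tag = py_tag or f"cp{sys.version_info.major}{sys.version_info.minor}"
    parts = filename[:-4].split("-")   # drop the ".whl" suffix, split on dashes
    return py_tag in parts

print(wheel_matches("stevepropytorch3d-0.7.7-cp310-cp310-linux_x86_64.whl", "cp310"))
```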

Example II
Repeat previous exercise using locally built wheels. Launch PyCharm | New Project. Enter the following info:
 Location: ~/HelloPyTorch3dWheels
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

Copy the custom wheels built in the previous example into a new directory:
  mkdir -p wheelhouse-cu121
  cp ../01-Example/wheelhouse-cu121/*.whl wheelhouse-cu121/
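For uv sync to pick up the locally built wheels instead of building from source, the project's pyproject.toml presumably references the wheelhouse directory. A hedged sketch; find-links under [tool.uv] is an assumption based on uv's settings reference and may vary by uv version:

```toml
[project]
name = "hellopytorch3dwheels"
version = "0.1.0"
requires-python = ">=3.10"

[tool.uv]
# hypothetical: resolve wheels from the local directory before any index
find-links = ["wheelhouse-cu121"]
```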

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  export MAX_JOBS=1
  export NVCC_THREADS=1
  uv lock
  uv sync

Create the following file: main.py. Enter the command uv run main.py. Verify PyTorch packages installed!
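The main.py used for verification might look like the following hedged sketch; the helper name is hypothetical and only a few of the packages are shown:

```python
import importlib

def report(names):
    """Return 'name version' lines for each package, tolerating absences."""
    lines = []
    for name in names:
        try:
            mod = importlib.import_module(name)
            lines.append(f"{name} {getattr(mod, '__version__', 'unknown')}")
        except ImportError:
            lines.append(f"{name} not installed")
    return lines

if __name__ == "__main__":
    print("Hello PyTorch3d")
    for line in report(["torch", "pytorch3d", "torchsparse"]):
        print(line)
```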

  Example II
  Hello PyTorch3d
  torch 2.2.0+cu121
  pytorch3d 0.7.7
  cuda 12.1
  cuda True
  torch_geometric 2.7.0
  torch_scatter 2.1.2+pt22cu121
  torch_sparse 0.6.18+pt22cu121
  torch_cluster 1.6.3+pt22cu121
  torch_spline_conv 1.2.2+pt22cu121
  torchsparse 2.0.0b

Example III
Repeat previous exercise but upload custom wheels. Launch PyCharm | New Project. Enter the following info:
 Location: ~/HelloPyTorch3dWheels
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

Copy the custom wheels built in the previous example into a new directory:
  mkdir -p wheelhouse-cu121
  cp ../01-Example/wheelhouse-cu121/*.whl wheelhouse-cu121/

In the PyCharm Terminal | Enter the following commands to install devpi packages to upload wheels locally:
  uv pip install devpi-server
  uv pip install devpi-client
  uv pip install devpi-web
  uv pip install twine

Launch Terminal #1. Initialize the devpi-server state, start the server on localhost port 3141, then navigate to the homepage URL:
  devpi-init					# One-time server state initialization
  devpi-server --host 127.0.0.1 --port 3141	# Then browse http://localhost:3141

Launch Terminal #2. Connect with the devpi-client and log in. Create a custom cuda-wheels index and activate it:
  devpi use http://localhost:3141
  devpi login root --password=''
  devpi index -c cuda-wheels bases=root/pypi  
  devpi use root/cuda-wheels

Finally, upload the two custom wheel files to devpi-web, available at http://localhost:3141/root/cuda-wheels:
  twine upload --repository-url http://localhost:3141/root/cuda-wheels/ wheelhouse-cu121/*  
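devpi serves a plain HTML page of links for each index (the +simple/ suffix is devpi's simple-index convention, e.g. http://localhost:3141/root/cuda-wheels/+simple/), so the upload can be verified programmatically by fetching that page with urllib and parsing the anchors. A hedged sketch with hypothetical helper names:

```python
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collect href attributes from anchor tags in a simple-index page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def parse_links(html):
    """Return the wheel links advertised on a simple-index HTML page."""
    parser = LinkParser()
    parser.feed(html)
    return parser.links
```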



Example IV
Repeat previous exercise using uploaded wheels. Launch PyCharm | New Project. Enter the following info:
 Location: ~/HelloPyTorch3dWheels
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

In the PyCharm Terminal | Enter the following commands to install devpi packages to consume local wheels:
  uv pip install devpi-server
  uv pip install devpi-client
  uv pip install devpi-web
  uv pip install twine

Launch Terminal #1. Initialize the devpi-server state, start the server on localhost port 3141, then navigate to the homepage URL:
  devpi-init					# One-time server state initialization
  devpi-server --host 127.0.0.1 --port 3141	# Then browse http://localhost:3141
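For uv lock and uv sync in this example to resolve the custom wheels from the local devpi index rather than PyPI, the project's pyproject.toml presumably points at that index. A hedged sketch; the exact setting names depend on your uv version:

```toml
[[tool.uv.index]]
# hypothetical index name; URL matches the devpi index created earlier
name = "cuda-wheels"
url = "http://localhost:3141/root/cuda-wheels/+simple/"
```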

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  export MAX_JOBS=1
  export NVCC_THREADS=1
  uv lock
  uv sync

Create the following file: main.py. Enter the command uv run main.py. Verify PyTorch packages installed!

  Example IV
  Hello PyTorch3d
  torch 2.2.0+cu121
  pytorch3d 0.7.7
  cuda 12.1
  cuda True
  torch_geometric 2.7.0
  torch_scatter 2.1.2+pt22cu121
  torch_sparse 0.6.18+pt22cu121
  torch_cluster 1.6.3+pt22cu121
  torch_spline_conv 1.2.2+pt22cu121
  torchsparse 2.0.0b

Example V
Repeat previous exercise but wrap logic as Azure ML endpoint. Launch PyCharm | New Project. Enter info:

 Location: ~/HelloAzureML
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter. If not, enter the following commands:
  uv venv --python 3.10
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.10.19

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  export MAX_JOBS=1
  export NVCC_THREADS=1
  uv lock
  uv sync

Create the directories app and tests. Enter all code for app/scoring.py. Test all code using pytest.
Launch terminals. Enter the following commands to build the Docker image, run the container and submit a request:

 Terminal #1
  docker build -t azml-gpu-local:latest .
  docker run --gpus all -p 5001:5001 azml-gpu-local:latest

 Terminal #2
  curl --location --request POST 'http://localhost:5001/score' \
    --header 'Content-Type: application/json' \
    --data-raw '{
    "points": [
      [0.0, 0.0, 1.0],
      [0.1, 0.0, 0.99],
      [-0.1, 0.0, 0.99],
      [0.0, 0.1, 0.99],
      [0.0, -0.1, 0.99],
      [0.7, 0.7, 0.0],
      [-0.7, 0.7, 0.0],
      [0.7, -0.7, 0.0],
      [-0.7, -0.7, 0.0],
      [0.0, 0.0, -1.0]
    ]
  }'


  Example V
  {
      "num_vertices": 1524,
      "num_faces": 2892,
      "mean_pixel": 0.9926620125770569,
      "image_shape": [
  		128,
  		128,
  		3
  	],
      "device": "cuda:0"
  }
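The app/scoring.py here might follow the same init()/run() contract as the earlier azmlinfsrv example; a minimal hedged sketch that only echoes the point count (the real version presumably builds and renders a PyTorch3D mesh to produce num_vertices, num_faces and mean_pixel as shown above):

```python
# app/scoring.py - hedged sketch, not the full mesh/render pipeline
import json

def init():
    # Called once when the server starts; load models/assets here
    pass

def run(raw_data):
    # raw_data is the JSON request body as a string
    payload = json.loads(raw_data) if raw_data else {}
    points = payload.get("points", [])
    return {"num_points": len(points), "device": "cpu"}
```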

Clean up as necessary: stop and remove the Docker containers, especially if port 5001 remains in use:
  docker stop $(docker ps -q)
  docker rm -f $(docker ps -aq)

Summary
To summarize, we have built the PyTorch3d and TorchSparse packages from source and pre-built custom wheels housed on localhost for reuse across examples. We are now in an excellent position to drive Deep Learning education!

Saturday, April 4, 2026

PyTorch Setup Cheat Sheet

PyTorch is an open-source machine learning framework widely used for building + training neural networks. While PyTorch provides the core building blocks for tensors, neural networks, and training loops, most real-world machine learning problems involve domain-specific data: images, audio, graphs + 3D geometry!
Let's check it out!

The PyTorch ecosystem extends into specialized libraries for different domains [vision, audio, 3D, sparse data], making it the go-to choice for everything from prototyping research ideas to deploying production-grade deep learning systems.

PACKAGES
 torch Core PyTorch library providing tensors, autograd and neural network building blocks
 torchaudio Tools + datasets for audio processing, waveform transformations and speech models
 torchvision Computer Vision utilities for datasets, pretrained models and image transformations
 pytorch3d 3D Deep Learning library for working with meshes, point clouds and rendering pipelines
 torchsparse High-performance sparse tensor library for 3D geometry e.g. LiDAR and voxel grids
 torch-scatter Optimized operations for scattering and aggregating neural network tensor values
 torch-sparse Sparse matrix operations for PyTorch + often used in 3D Geometric Deep Learning
 torch-cluster Clustering and neighborhood query operations e.g. kNN and radius graphs
 torch-spline-conv  Spline-based convolution layers for Geometric Deep Learning on graphs and meshes

CUDA
CUDA [Compute Unified Device Architecture] is a proprietary parallel computing platform and API that allows software to use GPUs for accelerated general-purpose processing in high-performance scientific computing.

Therefore, to leverage PyTorch computations on GPUs, one must install nVidia GPU drivers that support your CUDA version and the CUDA Toolkit, which includes the nvcc compiler needed to build PyTorch-compatible packages.

Development setup example: Ubuntu 20.04 | Linux kernel 5.15.0-139 | nVidia GPU Driver v535 | CUDA 12.1

INSTALLATION
The complete nVidia CUDA Toolkit installation guide can be found here but is summarized in the following steps:
  uname -r				# 5.15.0-139-generic

Step #1 - Remove partial CUDA installs
  sudo apt remove -y cuda* nvidia-cuda-toolkit
  sudo apt autoremove -y
  sudo rm -rf /usr/local/cuda*

Step #2 - Install nVidia Driver 535
  sudo apt update
  sudo apt install -y nvidia-driver-535
  sudo reboot

Step #3 - Verify nVidia Driver and CUDA version - nvidia-smi


Step #4 - Install CUDA 12.1 Toolkit
  sudo mkdir -p /etc/apt/keyrings
  
  curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub \
   | sudo gpg --dearmor -o /etc/apt/keyrings/cuda-archive-keyring.gpg
  
  echo "deb [signed-by=/etc/apt/keyrings/cuda-archive-keyring.gpg] \
  https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /" \
  | sudo tee /etc/apt/sources.list.d/cuda.list
  
  sudo apt update
  sudo apt install -y cuda-toolkit-12-1

Step #5 - Configure Environment Variables in ~/.bashrc then run source ~/.bashrc
  export CUB_HOME=~/cub
  export CUDA_HOME=/usr/local/cuda-12.1
  export PATH=$CUDA_HOME/bin:$PATH
  export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

Step #6 - Verify CUDA 12.1 Toolkit - nvcc --version
 Kernel 5.15.0-139
 Driver nvidia-driver-535
 CUDA 12.1 /usr/local/cuda-12.1
 CUDA toolkit /usr/local/cuda-12.1/bin/nvcc
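Beyond nvcc --version, a quick Python check confirms PyTorch itself can see CUDA. A hedged sketch that degrades gracefully when torch is not installed:

```python
import importlib

def cuda_summary():
    """Report whether PyTorch can see the CUDA toolkit and a GPU."""
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return "torch not installed"
    # torch.version.cuda is the CUDA version torch was built against;
    # torch.cuda.is_available() checks driver + device at runtime
    return f"cuda {torch.version.cuda} available={torch.cuda.is_available()}"

print(cuda_summary())
```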

Example I
All examples listed are Python 3.10.19 compatible. Launch PyCharm | New Project. Enter the following info:

 Location: ~/HelloPyTorch3d
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter. If not, enter the following commands:
  uv venv --python 3.10
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.10.19

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  export MAX_JOBS=1
  export NVCC_THREADS=1
  uv lock
  uv sync

Create the following file: main.py. Enter the command uv run main.py. Verify PyTorch packages installed!

  Example I
  Hello PyTorch3d
  torch 2.2.0+cu121
  pytorch3d 0.7.7
  cuda 12.1
  cuda True
  torch_geometric 2.7.0
  torch_scatter 2.1.2+pt22cu121
  torch_sparse 0.6.18+pt22cu121
  torch_cluster 1.6.3+pt22cu121
  torch_spline_conv 1.2.2+pt22cu121
  torchsparse 2.0.0b

Example II
Repeat previous exercise but wrap logic as Azure ML endpoint. Launch PyCharm | New Project. Enter info:

 Location: ~/HelloAzureML
 Interpreter type:  uv
 Python version: 3.10
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter. If not, enter the following commands:
  uv venv --python 3.10
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.10.19

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  export MAX_JOBS=1
  export NVCC_THREADS=1
  uv lock
  uv sync

Create the following files: app/scoring.py and .env. Enter the command which azmlinfsrv. Edit the PyCharm run configuration and paste the azmlinfsrv path as the script value. Enter the following info to launch the Azure ML endpoint:

 script value ~/HelloAzureML/.venv/bin/azmlinfsrv
 Parameters --entry app/scoring.py
 Working directory ~/HelloAzureML
 Path to .env files ~/HelloAzureML/.env
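The app/scoring.py entry script follows the azmlinfsrv contract of an init() called once at startup and a run() called per request. A minimal hedged sketch (the real version reports the package versions shown in the output below):

```python
# app/scoring.py - hedged sketch of an azmlinfsrv entry script
import json

def init():
    # Called once when the inference server starts; load models here
    pass

def run(raw_data):
    # raw_data is the request body; '{}' in the curl example
    _ = json.loads(raw_data) if raw_data else {}
    return {"message": "Hello PyTorch3d"}
```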

Alternatively, launch terminals. Enter following commands to launch Azure ML endpoint and submit request:

 Terminal #1
  set -a; source .env; set +a
  ~/HelloAzureML/.venv/bin/azmlinfsrv --entry app/scoring.py

 Terminal #2
  curl --location --request POST 'http://localhost:5001/score' \
    --header 'Content-Type: application/json' \
    --data-raw '{}'

  Example II
  {
      "message": "Hello PyTorch3d",
      "torch_version": "2.2.0+cu121",
      "pytorch3d_version": "0.7.7",
      "cuda_version": "12.1",
      "cuda_available": true,
      "torch_geometric_version": "2.7.0",
      "torch_scatter_version": "2.1.2+pt22cu121",
      "torch_sparse_version": "0.6.18+pt22cu121",
      "torch_cluster_version": "1.6.3+pt22cu121",
      "torch_spline_conv_version": "1.2.2+pt22cu121",
      "torchsparse_version": "2.0.0b"
  }

Clean up as necessary: find and kill processes using TCP port 5001, especially if the port remains in use:
  sudo fuser -k 5001/tcp			# Linux
  sudo lsof -ti tcp:5001 | xargs kill -9	# MacOS

Summary
To summarize, we have installed and configured our nVidia driver and CUDA Toolkit and built some simple PyTorch programs. However, we have built the PyTorch3d + TorchSparse packages from source each time. Ideally, we would like to pre-build the custom wheels once and house them on localhost for reuse in future examples!
This will be the topic of the next post.

Tuesday, March 17, 2026

PyBind Setup Cheat Sheet

In 2024, we checked out OpenAI Retro Cheat Sheet as an open-source project that provides an interface to interact with various retro video games for the purpose of Reinforcement Learning research. This project uses C/C++ for high performance but leverages pybind11 to create bindings for Python client code consumption.

Let's check it out!

pybind11
A lightweight header-only library that integrates C++ with Python by creating bindings that expose C++ functions to Python. Client code written in Python can then invoke the underlying C++ code.

Installation
All examples here are executed on Ubuntu Linux. Therefore install pybind11 globally to begin the examples:
 sudo apt-get update
 sudo apt-get install pybind11-dev
 sudo apt install build-essential g++

Example I
Create an example that exposes C++ function to Python with pybind11. Launch terminal | Enter commands:
  mkdir -p ~/HelloPyBind
  cd ~/HelloPyBind
  python  -m venv .venv
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.8.10
  pip install pybind11
  pip install --upgrade pip


Create the following files: example.cpp, setup.py, test.py. Enter the following C++ and Python source code:
  example.cpp
  #include <pybind11/pybind11.h>
  
  int add(int x, int y)
  {
      return x + y;
  }
  
  PYBIND11_MODULE(example, m)
  {
      // optional module docstring
      m.doc() = "pybind11 example plugin";
      m.def("add", &add, "A function which adds two numbers");
  }

  setup.py
  from setuptools import setup, Extension
  import pybind11
  
  ext_modules = [
      Extension(
          "example",
          ["example.cpp"],
          include_dirs=[pybind11.get_include()],
          language="c++"
      ),
  ]
  
  setup(
      name="example",
      version="0.1",
      ext_modules=ext_modules,
  )

  test.py
  import example
  
  result = example.add(1, 2)
  print(f"1 + 2 = {result}")

Build the C++ code using setup.py build_ext --inplace. Finally execute python test.py for Python to execute the C++ code!
  python setup.py build_ext --inplace		# example.cpython-38-x86_64-linux-gnu.so
  python test.py				# OUTPUT	1 + 2 = 3


Example II
Repeat previous exercise but prefer PyCharm IDE. Launch PyCharm | New Project. Enter the following info:

 Location: ~/HelloPyBind
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

PyCharm should set up the UV virtual environment and configure the Python interpreter. If not, enter the following commands:
  uv venv --python 3.11
  source .venv/bin/activate           # OR .\.venv\Scripts\activate
  which python
  `which python` --version            # Python 3.11.11

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  uv add pybind11
  uv add setuptools
  uv lock
  uv sync

Create the following files: example.cpp, setup.py, test.py. Enter C++ and Python code similar to Example I (extended with a subtract function, per the output). Build the C++ code using setup.py build and install. Finally execute uv run test.py for Python to execute the C++ code!
  uv run setup.py build
  uv run setup.py install		# example.cpython-311-x86_64-linux-gnu.so
  uv run test.py			# OUTPUT	3 + 5 = 8	9 - 5 = 4


Example III
Repeat previous exercise but prefer CMake to build C++ code via CMakeLists.txt. Create PyCharm Project:
 Location: ~/HelloPyBind
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
  uv add pybind11
  uv sync

Create the following files: example.cpp, CMakeLists.txt, test.py. Enter code similar to Example II but update:
  CMakeLists.txt
  cmake_minimum_required(VERSION 3.16)
  project(example)
  
  # Find the Python 3.11-specific pybind11 CONFIG from pip
  execute_process(
          COMMAND ${Python3_EXECUTABLE} -m pybind11 --cmakedir
          OUTPUT_VARIABLE pybind11_DIR
          OUTPUT_STRIP_TRAILING_WHITESPACE
  )
  find_package(pybind11 REQUIRED CONFIG)
  pybind11_add_module(example example.cpp)

Build C++ code using cmake and make. In the PyCharm Terminal | Enter the following commands to build:
  mkdir -p build
  cd build
  cmake -DPython3_EXECUTABLE=$(which python) ..
  make -j$(grep -c ^processor /proc/cpuinfo)

Enter commands to copy library to be used. Finally execute uv run test.py for Python to execute C++ code!
  python -c "import sysconfig, shutil, glob; \
  dst = sysconfig.get_paths()['platlib']; \
  so = glob.glob('*.so')[0]; \
  shutil.copy2(so, dst)"
  cd ..
  uv run test.py			# OUTPUT	Hello, World!

IMPORTANT
The first three examples worked but tightly coupled Python and C++ without the ability to debug each side separately!

Example IV
Repeat previous exercise but prefer to modify the project layout to separate top level Python and C++ code:
  ~/HelloPyBind/
  ├── cpp/
  │   ├── src/
  │   │   ├── api/
  │   │   │   ├── my_api.h
  │   │   │   └── my_api.cpp
  │   │   ├── bindings/
  │   │   │   └── pybind_module.cpp       # pybind11 bindings
  │   │   ├── CMakeLists.txt
  │   │   └── main.cpp                    # C++ executable entry point
  │   ├── tests/
  │   │   ├── CMakeLists.txt
  │   │   └── test_api.cpp
  │   └── CMakeLists.txt                  # top-level C++ (CLion entry point)
  │
  └── python/
      ├── .venv/
      │   └── lib/
      │       └── python3.11/
      │           └── site-packages/
      │               └── my_api_py.cpython-311-x86_64-linux-gnu.so
      ├── test.py
      ├── pyproject.toml
      └── README.md
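The python/test.py in this layout might look like the following hedged sketch; my_api_py and its add function are assumptions based on the layout and the output in this example:

```python
# test.py - hedged sketch; my_api_py is the compiled module from the layout
import importlib.util

def have_bindings(name="my_api_py"):
    """True once CMake has copied the .so into site-packages."""
    return importlib.util.find_spec(name) is not None

if have_bindings():
    import my_api_py
    print("1 + 2 =", my_api_py.add(1, 2))
else:
    print("my_api_py not built yet - run the CLion build first")
```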

Create PyCharm Project. Setup virtual environment as before then create CLion project to build C++ code.
 Location: ~/HelloPyBind/python
 Interpreter type:  uv
 Python version: 3.11
 Path to uv:  ~/.local/bin/uv

Launch CLion | New Project. Create C++ Executable using C++ 17. Enter the following CLion information:
 C++ C++ Executable
 Location: ~/HelloPyBind/cpp
 Language standard: C++17

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory

Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

IMPORTANT
CMakeLists.txt files are configured to copy the shared object (.so) file into the Python .venv virtual environment

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  uv run test.py				# OUTPUT	1 + 2 = 3


Example V
Repeat the previous exercise but add more complexity: build C++ code with classes consumed by Python

Create a PyCharm Project. Set up the virtual environment as before then create a CLion project to build the C++ code. Launch CLion | New Project. Create a C++ Executable using C++17. Enter the same CLion information as before.

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory. Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  uv run test.py		# OUTPUT	
  # Guitar: 'Fender' [6-string] = $1500.0
  # Guitar: 'Ibanez' [7-string] = $1200.0
  # Guitar: 'Gibson' [6-string] = $2400.0


Example VI
Repeat the previous exercise but add more complexity: build C++ code with templates consumed by Python

Create a PyCharm Project. Set up the virtual environment as before then create a CLion project to build the C++ code. Launch CLion | New Project. Create a C++ Executable using C++17. Enter the same CLion information as before.

Set build directory in CLion. File menu | Settings... | Build, Execution, Deployment | CMake | Build directory. Setup folder layout as above. Enter all C++ source code and tests. Rebuild entire solution in Debug mode.

Launch PyCharm | Complete the test runner. Finally execute uv run test.py for Python to execute C++ code!
  # OUTPUT	
  # Container[0] = 0.0
  # Container[1] = 1.0
  # Container[2] = 2.0
  # Container[3] = 3.0
  # Container[4] = 4.0
  # OUTPUT	
  # Container[5] = 5.0
  # Container[6] = 6.0
  # Container[7] = 7.0
  # Container[8] = 8.0
  # Container[9] = 9.0


Example VII
Repeat the previous exercise but prefer Visual Studio 2022 on Windows to build a growing C++ code base:
  ~/HelloPyBind/
  ├── cpp/
  │   ├── src/
  │   │   ├── core/			  # Core API implementation
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── math/			  # Math-related API
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── mesh/			  # Mesh-related API
  │   │   │   ├── *.h
  │   │   │   └── *.cpp
  │   │   ├── bindings/
  │   │   │   └── pybind_module.cpp       # pybind11 bindings
  │   │   ├── CMakeLists.txt
  │   │   └── main.cpp                    # C++ executable entry point
  │   ├── tests/
  │   │   ├── CMakeLists.txt
  │   │   ├── test_matrix.cpp
  │   │   ├── test_vector.cpp
  │   │   ├── test_mesh.cpp
  │   │   ├── test_mesh_algorithms.cpp
  │   │   └── test_mesh_processor.cpp
  │   └── CMakeLists.txt                  # top-level C++ (CLion entry point)
  └── python/

Launch Visual Studio 2022 | Continue without code. File | Open | CMake... Navigate to cpp/CMakeLists.txt. Build menu | Build All. Finally, choose Test menu | Test Explorer. Run All Tests in View or choose to Debug:


Summary
To summarize, we have demonstrated various PyBind examples in which C++ library code is consumed by a single Python API exclusively. However, in the future there may be instances in which the C++ library needs to be consumed by multiple languages. In this case ctypes.cdll.LoadLibrary() may be better than PyBind!

Tuesday, February 3, 2026

Python Package Cheat Sheet

In 2020, we checked out Python Setup Cheat Sheet: an interpreted high-level programming language that runs on Windows, Mac OS/X and Linux using pip as the de facto standard package-management system. However, while this worked, requirements.txt is brittle with package dependencies + versioning. Poetry was introduced to deliver deterministic builds but at slower dependency resolution. Enter uv for blazing speed + reliability!

Let's check it out!

History
Python Setup Cheat Sheet detailed how to install pip as the de facto standard package-management system on Windows, Mac OS/X and Linux. However, requirements.txt does not lock down transitive dependencies + versioning which becomes brittle. Poetry solved this problem using pyproject.toml configuration and lock file.

uv
Replicating Poetry's deterministic builds using pyproject.toml configuration and a lock file, uv is built in Rust by Astral as a full rethinking of Python packaging designed for speed and simplicity. uv is pitched as "A single tool to replace pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv and more". Here is a detailed article comparing the Python Packaging Landscape: Pip vs. Poetry vs. UV from a developer's standpoint.

IMPORTANT
First check out the traditional way using python and pip to compare and illustrate the benefits of now using uv:

Virtual Environment
A virtual environment isolates a Python project's interpreter and installed packages from the system and other Python projects, which means each project has its own environment, interpreter version and dependencies.

Create a virtual environment in the traditional way using python and pip then activate virtual environment:
 python -m venv .venv
 Linux OR Mac OS/X  source .venv/bin/activate
 Windows  .\.venv\Scripts\activate

Next, install packages using python, pip and requirements.txt OR poetry with pyproject.toml configuration:
Brittle and/or slow [transitive] dependency resolution!

Installation
Download and install uv for Linux, Mac OS/X or Windows OR Launch PyCharm and install from home page:
 Linux  curl -LsSf https://astral.sh/uv/install.sh | sh
 Mac OS/X   brew update && brew install uv
 Windows   powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

uv init
Create a new directory and execute the following commands to initialize a Python project with an interpreter version

Launch Terminal | Execute the following commands:
  mkdir HelloUV
  cd HelloUV
  uv init --python 3.11.11

uv venv
Navigate into Python project and execute following commands to create the virtual environment and activate

Launch Terminal | Execute the following commands:
  uv venv --python 3.11.11
  source .venv/bin/activate		# Windows: .\.venv\Scripts\activate

uv python
At this point you have a Python project initialized with the virtual environment activated. Confirm the correct version of the Python interpreter is installed and active. When using PyCharm ensure the IDE interpreter path is aligned!

Inside PyCharm Terminal | Execute the following commands:
  which python
  `which python` --version

uv add
Install Python packages using either uv pip install or uv add. Prefer uv add because it updates the pyproject.toml file automagically, so you can then execute uv sync to install the dependencies.

Inside PyCharm Terminal | Execute the following commands:
  uv pip list
  uv add requests
  uv sync

uv tree
After executing uv add you can verify what is installed by checking the pyproject.toml file or executing uv pip list, but another useful method is uv tree, which shows the hierarchy of all your project's dependencies and relationships.

Inside PyCharm Terminal | Execute the following command: uv tree

uv sync
Execute uv add or update pyproject.toml to install dependencies. Execute uv sync to update environment.

uv lock
The uv.lock file records the exact versions of all project dependencies that uv resolves as compatible from the pyproject.toml file, ensuring deterministic builds with the same dependencies each time, locally and in CI/CD.

uv tool
uv tool installs persistent tools into their own isolated environments so they are available across Python projects:

Inside PyCharm Terminal | Execute the following commands:
  uv tool list
  uv tool install ruff
  uv tool run ruff check
  uv tool upgrade --all
  uv tool uninstall ruff
  uv tool list

uvx tool
Finally, uvx is an alias for uv tool run, designed to run a tool immediately without installing it persistently:

Inside PyCharm Terminal | Execute the following command: uvx ruff check

uv cache
When you use uv, all tools and dependencies are stored in a cache. These commands remove all cached data:
  uv cache clean
  rm -r "$(uv python dir)"
  rm -r "$(uv tool dir)"

Commands
Here is a quick summary of popular uv commands used during the workflow. A comprehensive list can be found here.
  uv init --app				# Scaffold project
  uv python install 3.11		# Install Python
  uv venv --python 3.11			# Create virtual environment
  uv add requests			# Add dependencies
  uv add -D pytest			# Add dev dependencies
  uv sync --frozen			# Sync (locked)
  uv sync				# Sync (normal)
  uv run python main.py			# Run program
  uv run python -V			# Show Python
  uvx ruff check .			# Run tools ad‑hoc
  uv lock				# Update lockfile

Docker
A well-built uv Docker image simplifies deployment + ensures the application runs consistently in any environment:
  FROM python:3.11.11-slim
  # Install uv
  COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
  COPY . /app
  WORKDIR /app
  # Install dependencies and clear cache
  RUN uv sync --no-dev --frozen
  RUN rm -rf ~/.cache/uv
  CMD ["python", "-c", "print('Hello, World!')"]

This example Dockerfile snippet shows how to integrate uv! Build and run it with the following commands:
  docker build -t uv-hello .
  docker run --rm uv-hello

GitHub Actions
A common GitHub Actions pattern is to use UV to install only Prod dependencies during your build or deploy:
  name: Testing
  on:
    push:
      branches:
        - "main"
  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Install UV
          run: curl -LsSf https://astral.sh/uv/install.sh | sh
        - name: Sync only prod dependencies
          run: uv sync --no-group dev --group prod
        - name: Run tests
          run: uv run pytest

Summary
To summarize, uv has been pitched "One Tool for Everything" as uv replaces pip, virtualenv, pip-tools, pipx, poetry, pyenv, twine and is fast at every step. However, this has only scratched the surface as uv integrates well with other tools for fast reproducible Python workflows such as direnv, pre-commit hooks, pdm + more!
 uv  Handles Python installs, environment creation, and dependency resolution with locking
 direnv   Evaluates .envrc managing environment variables automatically applied to shell session
 pdm  Orchestrate workflows + provide unified tooling to build, version, and publish packages

Thursday, January 1, 2026

Retrospective XVII

Last year, I conducted a simple retrospective for 2024. Therefore, here is a retrospective for year 2025.

2025 Achievements
  • Document DevOps managed clusters provisioning setup E.g.: Azure AKS, AWS-EKS, GCP-GKE
  • Present + report The Evolution of Software Deployment Cloud CI/CD setup theory to practical
  • Consolidate pytest unit test framework features such as fixtures, factories, mocking, patching
  • Transition Python package management from pip to poetry and finally to uv for blazing speed
  • Introduction to MLOps theory learning to practical ML models and deployment implementation
  • Extend GitLab CI/CD experience to upskill to GitHub Actions executing on localhost using ACT
  • Resurrect PyBind knowledge on OpenAI Retro to Mac on CLion for Python to C/C++ examples
  • Reverse Engineer RetroAchievements project open source code for Simpsons Trivia integration

Note: Reverse Engineering and setup RetroAchievements project open source code is a big achievement!

2026 Objectives
  • Document Python package management move from pip to poetry to uv for new Python projects
  • Continue MLOps integration to bridge Software Engineering to Machine Learning knowledge gap
  • Register PyBind Python to C/C++ setup cross platform and explore debugging across languages
  • Harness prior OpenAI Retro and RetroAchievements experience and apply to future AI projects!

AI Demand
According to 2025 analysis, global AI and ML Engineer job openings number in the hundreds of thousands, with employers increasingly willing to hire based on demonstrable skills rather than advanced degrees. One market study forecasts the AI Engineering sector could grow from $17.4 billion in 2025 to $87.5 billion by 2030, implying a sustained growth rate over the next five years.
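
The growth rate implied by that forecast is easy to verify with a quick compound annual growth rate [CAGR] calculation:

```python
# Implied compound annual growth rate (CAGR) for the forecast above:
# $17.4B in 2025 growing to $87.5B by 2030, i.e. over 5 years.
start, end, years = 17.4, 87.5, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 38% per year
```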

Therefore, as a Professional Software Engineer, what are some ways to transition to a more focused AI role?

MLOps Engineer
By expanding traditional Software Development Life Cycle skills across DevOps, as modern delivery practices have evolved, one next step is the progression into MLOps: enabling the deployment, monitoring, and management of machine learning models built by dedicated ML Engineers. This provides immediate value to AI-driven teams while supporting ongoing machine learning expertise.

AI Engineer
An AI Engineer specializes in building, integrating, deploying, and optimizing machine + deep learning systems, bridging the gap between research models and production-ready software. A Software Engineer focuses on application logic, architecture, performance, and code quality, whereas an AI Engineer works at the intersection of Software Engineering and Data Science.

Future
In 2024 we checked out OpenAI Gym and Retro to provide early exposure to AI-centric principles, integrating Reinforcement Learning environments using Python and C/C++ to upskill AI concepts such as agent training, environment design, reproducibility, and iterative experimentation.

In 2025 we checked out the RetroAchievements ecosystem to further deepen these skills, dealing with emulation internals, memory inspection, and state tracking, all highly relevant to RL and model-driven control systems.

Together this experience forms a connection between long-standing Professional Software Engineer expertise and the emerging demands of AI Engineering. Combining systems programming with complex environment instrumentation and practical RL work aligns with one of the fastest-growing roles in the technology sector!

Saturday, November 15, 2025

Simpsons Trivia Retro Achievements

In 2018, Simpsons Trivia, built for the Sega Master System, was one entry in the annual SMS Power! coding competition. The immediate forum feedback was super positive. In 2020, member FeRcHuLeS posted about enjoying the game and mastering it on retroachievements.org. I was unaware of this website, so I had to investigate!

Let's check it out!

RetroAchievements
RetroAchievements is an online service which provides users with fan-made achievement sets for many retro gaming platforms, such as the Sega Master System. Navigate to the RetroAchievements website. Choose Sign up | Create an account. Download the RALibretro emulator, which facilitates achievements integration for retro games!

Launch RALibretro emulator | Choose RetroAchievements menu | Login | Enter RetroAchievements site info:


Cores
RetroAchievements cores are special versions of emulator cores. Essentially, a core is a plug-in or emulator that runs platform-specific games. Choose Settings menu | Manage Cores... | Master System and Download


Simpsons Trivia
Download Simpsons Trivia v1.02. Choose RALibretro File menu | Select Core | Master System | Genesis Plus GX, which ensures audio support. Choose File menu | Load Game | SimpsonsTrivia-v1.02.sms. The game is ready to play:

IMPORTANT: Simpsons Trivia v1.02 must be selected as this has the correct RetroAchievements game hash! Use the arrow keys for Left, Right, Up, Down movement. Press Z for Fire1, X for Fire2, P or Enter to Pause.

Implementation
The Simpsons Trivia game has 4x difficulty categories: Easy, Normal, Hard, and Pro! plus 4x question rounds: 5, 10, 25, and 50. Thus 20x achievements are set up: a perfect win for each of the 4x categories * 4x rounds [16x] plus "Get between 80-100%" for the 4x categories [4x]. Shout out to Bl4h8L4hBl4h for setting up the 20x achievements as per documentation!
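
The 20-achievement layout above can be enumerated in a few lines [the achievement names here are illustrative placeholders, not the actual set titles on retroachievements.org]:

```python
# Sketch of the 20-achievement layout described above: a perfect win
# for every category/round combination, plus a near-perfect award per
# category. Names are illustrative, not the real achievement titles.
categories = ["Easy", "Normal", "Hard", "Pro!"]
rounds = [5, 10, 25, 50]

perfect = [f"Perfect {r}-round win on {c}" for c in categories for r in rounds]
near = [f"Get between 80-100% on {c}" for c in categories]

print(len(perfect) + len(near))  # 4*4 + 4 = 20 achievements
```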

Also, shout out to BenGhazi for explaining how the RALibretro emulator processes achievements, e.g. each achievement checks 4x important pieces of data stored in RAM: Difficulty [0x00], Selected Rounds [0x05], Correct Answers [0x05] and ScreenID [0x0C]. When all values align on the Game Over screen then the achievement is set!

IMPORTANT: here is Tag 1.02 source code excerpt which corroborates Game Over screen definition [0x0C]:
  // Screen type.
  #define SCREEN_TYPE_NONE	0
  #define SCREEN_TYPE_SPLASH	1
  #define SCREEN_TYPE_TITLE	2
  #define SCREEN_TYPE_INTRO	3
  #define SCREEN_TYPE_DIFF	4
  #define SCREEN_TYPE_LONG	5
  #define SCREEN_TYPE_READY	6
  #define SCREEN_TYPE_LEVEL	7
  #define SCREEN_TYPE_NUMBER	8
  #define SCREEN_TYPE_PLAY	9
  #define SCREEN_TYPE_QUIZ	10
  #define SCREEN_TYPE_SCORE	11
  #define SCREEN_TYPE_OVER	12
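
The trigger described above can be sketched as a simple predicate over those four RAM values. This is a hypothetical illustration only: in practice rcheevos expresses the condition as a memory-address condition string, not Python:

```python
# Hypothetical sketch of the achievement trigger described above:
# the achievement fires when the four tracked RAM values line up on
# the Game Over screen (SCREEN_TYPE_OVER = 12, i.e. 0x0C).
SCREEN_TYPE_OVER = 12

def perfect_win_unlocked(difficulty, selected_rounds, correct, screen_id,
                         target_difficulty, target_rounds):
    """True when a perfect run at the target difficulty/rounds ends."""
    return (screen_id == SCREEN_TYPE_OVER
            and difficulty == target_difficulty
            and selected_rounds == target_rounds
            and correct == selected_rounds)  # every answer correct

# Example: Easy (0x00), 5 rounds selected, 5 correct, Game Over screen.
print(perfect_win_unlocked(0x00, 0x05, 0x05, 0x0C, 0x00, 5))  # True
```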

GitHub
RetroAchievements GitHub includes the following source code repositories for game achievement integration:
 rcheevos  Library to parse and evaluate achievements and leaderboards for RetroAchievements
 RAInterface  Enables RetroAchievements emulators to interact with the server via RA_Integration.dll
 RAIntegration   The DLL responsible to integrate emulators with RetroAchievements.org
 RALibretro  RALibretro is the multi-emulator used to develop RetroAchievements

Clone
On Windows, launch Terminal. Create C:\GitHub\RetroAchievements directory. Git clone the following repos:
 mkdir -p C:\GitHub\RetroAchievements
 cd C:\GitHub\RetroAchievements
 git clone --quiet https://github.com/RetroAchievements/rcheevos.git
 git clone --quiet https://github.com/RetroAchievements/RAInterface.git
 git clone --recursive --depth 1 -q https://github.com/RetroAchievements/RAIntegration.git 
 git clone --recursive --depth 1 -q https://github.com/RetroAchievements/RALibretro.git

IMPORTANT: use the --quiet [or -q] flag in order to suppress all the superfluous submodule logging output!


rcheevos
rcheevos is a C library that makes it easier to process RetroAchievements data. Launch Visual Studio 2022. Open rcheevos-test.sln. Choose Debug | x64. Rebuild Solution. Open test/test.c. Press F5 to Debug source:


RAInterface
RAInterface is a submodule which provides emulator hooks to integrate with RA server via RA_Integration.dll

RAIntegration
RAIntegration is the main DLL used for interfacing with retroachievements.org. Launch Visual Studio 2022. Open RA_Integration.sln. Choose Debug | x64. Rebuild Solution. Install CppUnitTest Test Adapter to run all Interface + Integration tests: Extensions menu | Manage Extensions | Test Adapter CppUnitTest Framework

Launch Test Explorer | Test menu | Test Explorer | Right click specific test e.g. RA_Interface.Tests | Debug:


RALibretro
RALibretro is the multi-emulator used to develop RetroAchievements and earn them. RALibretro uses libretro cores to do the actual emulation with RAIntegration DLL to connect with the site. Launch Visual Studio 2022. Open RALibretro.sln. Choose Debug | x64. Rebuild Solution. Open src/main.cpp. Press F5 to Debug source:


Overview
Here is an overview of the 1x C project rcheevos and 3x C++ projects to interface + integrate + emulate:
 Project  Role  Dependencies
 rcheevos  Core engine: parse + evaluate achievements / leaderboards  None
 RAInterface  Interface library for emulators to interact with the RA stack  rcheevos
 RAIntegration   Full DLL integration [login/server/achievement submission]  RAInterface + rcheevos 
 RALibretro  Multi-emulator that uses RA stack to support achievements  RAIntegration

libretro
libretro is an API designed for retro games and emulators to be compiled as DLLs which can be used in front ends, like RALibretro, that implement the libretro API. This is reminiscent of the OpenAI Retro work we did in 2024, which also uses libretro cores but exposes them in Python with Reinforcement Learning environments.

Similar to RetroAchievements, OpenAI Retro uses its own Game Integration tool to inspect memory for points of interest, e.g. starting state, reward function + done condition, whereas RALibretro uses the Memory Inspector:
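
Both tools rely on the same basic technique: snapshot RAM, change something in-game, then filter for addresses whose values changed as expected. Here is a toy sketch of that narrowing process in pure Python, with no emulator involved [the RAM contents and the "score" address are made up for illustration]:

```python
# Toy memory-inspection sketch: narrow down which RAM address holds a
# value of interest by filtering candidates across snapshots. This is
# the manual workflow the Memory Inspector and the OpenAI Retro
# integration tool support interactively.
def narrow(candidates, snapshot, expected):
    """Keep only addresses whose current value matches the expectation."""
    return {addr for addr in candidates if snapshot[addr] == expected}

# Two fake 8-byte RAM snapshots: the "score" actually lives at address 3.
ram_before = [0, 7, 1, 5, 9, 2, 5, 0]   # score shown on screen: 5
ram_after  = [0, 7, 1, 6, 9, 2, 5, 0]   # score shown on screen: 6

candidates = narrow(set(range(8)), ram_before, 5)   # {3, 6} both hold 5
candidates = narrow(candidates, ram_after, 6)       # only 3 changed to 6
print(sorted(candidates))  # [3]
```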


Summary
To summarize, the RetroAchievements integration into the Simpsons Trivia game has been awesome as gamers continue to play this game and top the achievements more than seven years after it was published! In fact, many hardcore gamers have published YouTube videos showcasing their achievements for others to follow:

The next step would be to research the documentation fully and create some RetroAchievements sets myself!