Let's check it out!
The PyTorch ecosystem extends into specialized libraries for different domains [vision, audio, 3D, sparse data], making it the go-to choice for everything from prototyping research ideas to deploying production-grade deep learning systems.
PACKAGES
| torch | Core PyTorch library providing tensors, autograd and neural network building blocks |
| torchaudio | Tools and datasets for audio processing, waveform transformations and speech models |
| torchvision | Computer vision utilities: datasets, pretrained models and image transformations |
| pytorch3d | 3D deep learning library for working with meshes, point clouds and rendering pipelines |
| torchsparse | High-performance sparse tensor library for 3D geometry, e.g. LiDAR and voxel grids |
| torch-scatter | Optimized scatter and segment operations for aggregating tensor values in neural networks |
| torch-sparse | Sparse matrix operations for PyTorch, often used in 3D geometric deep learning |
| torch-cluster | Clustering and neighborhood query operations, e.g. kNN and radius graphs |
| torch-spline-conv | Spline-based convolution layers for geometric deep learning on graphs and meshes |
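For a flavor of what these libraries provide under the hood, here is a pure-Python sketch of the scatter-add primitive that torch-scatter implements as optimized CPU/CUDA kernels over tensors; the function and values below are illustrative only, not the library's API:

```python
def scatter_add(src, index, dim_size):
    """Pure-Python sketch of scatter-add: out[index[i]] += src[i].

    torch-scatter provides the same semantics as vectorized CPU/CUDA
    kernels operating on whole tensors instead of Python lists.
    """
    out = [0.0] * dim_size
    for value, target in zip(src, index):
        out[target] += value
    return out

# e.g. aggregating four edge messages onto three graph nodes
print(scatter_add([1.0, 2.0, 3.0, 4.0], [0, 2, 0, 1], 3))  # [4.0, 4.0, 2.0]
```

This aggregation pattern (messages scattered onto node indices) is the core building block of the graph and point-cloud layers in torch-geometric.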
CUDA
CUDA [Compute Unified Device Architecture] is a proprietary parallel computing platform and API that allows software to use nVidia GPUs for accelerated general-purpose processing in high-performance scientific computing.
Therefore, to run PyTorch computations on GPUs you must install an nVidia GPU driver that supports your CUDA version, along with the CUDA Toolkit, which includes the nvcc compiler needed to build PyTorch-compatible packages.
Development setup example: Ubuntu 20.04 | Linux kernel 5.15.0-9 | nVidia GPU Driver v535 | CUDA 12.1
INSTALLATION
The complete nVidia CUDA Toolkit installation guide can be found here, but it can be summarized in the following steps:
uname -r   # 5.15.0-139-generic
Step #1 - Remove partial CUDA installs
sudo apt remove -y cuda* nvidia-cuda-toolkit
sudo apt autoremove -y
sudo rm -rf /usr/local/cuda*
Step #2 - Install nVidia Driver 535
sudo apt update
sudo apt install -y nvidia-driver-535
sudo reboot
Step #3 - Verify nVidia Driver and CUDA version - nvidia-smi
Step #4 - Install CUDA 12.1 Toolkit
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub \
  | sudo gpg --dearmor -o /etc/apt/keyrings/cuda-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cuda-archive-keyring.gpg] \
https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /" \
  | sudo tee /etc/apt/sources.list.d/cuda.list
sudo apt update
sudo apt install -y cuda-toolkit-12-1
Step #5 - Configure Environment Variables - source ~/.bashrc
export CUB_HOME=~/cub
export CUDA_HOME=/usr/local/cuda-12.1
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
Step #6 - Verify CUDA 12.1 Toolkit - nvcc --version
| Kernel | 5.15.0-139 |
| Driver | nvidia-driver-535 |
| CUDA 12.1 | /usr/local/cuda-12.1 |
| CUDA toolkit | /usr/local/cuda-12.1/bin/nvcc |
Example I
All examples listed are compatible with Python 3.10.19. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloPyTorch3d |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
PyCharm should set up the UV virtual environment and configure the Python interpreter; if not, enter the following commands:
uv venv --python 3.10
source .venv/bin/activate    # OR .\.venv\Scripts\activate
which python
`which python` --version     # Python 3.10.19
In the PyCharm Terminal, enter the following commands for UV to install and sync the package dependencies:
export MAX_JOBS=1
export NVCC_THREADS=1
uv lock
uv sync
Create the following file: main.py. Enter the command uv run main.py. Verify the PyTorch packages are installed!
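The post does not reproduce main.py itself; a minimal sketch along these lines would produce a version report like the Example I output. The helper name get_version is an assumption, and the real script additionally reports torch.version.cuda and torch.cuda.is_available():

```python
# main.py - a sketch, not the post's exact script: report each package's
# version, degrading gracefully when a package failed to build or install.
def get_version(module_name):
    """Return a module's __version__, or a marker string if unavailable."""
    try:
        module = __import__(module_name)
        return getattr(module, "__version__", "unknown")
    except ImportError:
        return "not installed"

if __name__ == "__main__":
    print("Hello PyTorch3d")
    for name in ["torch", "pytorch3d", "torch_geometric", "torch_scatter",
                 "torch_sparse", "torch_cluster", "torch_spline_conv",
                 "torchsparse"]:
        print(name, get_version(name))
```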
| Example I |
Hello PyTorch3d
torch 2.2.0+cu121
pytorch3d 0.7.7
cuda 12.1
cuda True
torch_geometric 2.7.0
torch_scatter 2.1.2+pt22cu121
torch_sparse 0.6.18+pt22cu121
torch_cluster 1.6.3+pt22cu121
torch_spline_conv 1.2.2+pt22cu121
torchsparse 2.0.0b
Example II
Repeat the previous exercise, but wrap the logic as an Azure ML endpoint. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloAzureML |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
PyCharm should set up the UV virtual environment and configure the Python interpreter; if not, enter the following commands:
uv venv --python 3.10
source .venv/bin/activate    # OR .\.venv\Scripts\activate
which python
`which python` --version     # Python 3.10.19
In the PyCharm Terminal, enter the following commands for UV to install and sync the package dependencies:
export MAX_JOBS=1
export NVCC_THREADS=1
uv lock
uv sync
Create the following files: app/scoring.py and .env. Enter the command which azmlinfsrv. Open Edit Configurations, then copy and paste the azmlinfsrv path as the script value. Enter the following info to launch the Azure ML endpoint:
| script value | ~/HelloAzureML/.venv/bin/azmlinfsrv |
| Parameters | --entry app/scoring.py |
| Working directory | ~/HelloAzureML |
| Path to .env files | ~/HelloAzureML/.env |
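The contents of app/scoring.py are not listed in the post. As a sketch, the entry script azmlinfsrv expects must define init() and run(); the body below is illustrative and only returns the greeting:

```python
# app/scoring.py - a sketch of the azmlinfsrv entry-script contract:
# the server calls init() once at startup and run() once per request.

def init():
    # One-time setup: load models, warm caches. Nothing needed for this demo.
    pass

def run(raw_data):
    # raw_data is the request body; the return value is serialized to JSON
    # and sent back to the client. The real script would import torch et al.
    # and build the full version report shown in the Example II output.
    return {"message": "Hello PyTorch3d"}
```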
Alternatively, launch two terminals. Enter the following commands to launch the Azure ML endpoint and submit a request:
Terminal #1
set -a; source .env; set +a
~/HelloAzureML/.venv/bin/azmlinfsrv --entry app/scoring.py
Terminal #2
curl --location --request POST 'http://localhost:5001/score' \
  --header 'Content-Type: application/json' \
  --data-raw '{}'
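As a stdlib-Python alternative to curl, here is a small client sketch; it assumes the endpoint from Terminal #1 is listening on localhost:5001, and the helper name score is hypothetical:

```python
import json
import urllib.request

def score(payload, url="http://localhost:5001/score"):
    """POST a JSON payload to the scoring endpoint and decode the reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# usage (with the server running): score({})
```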
| Example II |
{
"message": "Hello PyTorch3d",
"torch_version": "2.2.0+cu121",
"pytorch3d_version": "0.7.7",
"cuda_version": "12.1",
"cuda_available": true,
"torch_geometric_version": "2.7.0",
"torch_scatter_version": "2.1.2+pt22cu121",
"torch_sparse_version": "0.6.18+pt22cu121",
"torch_cluster_version": "1.6.3+pt22cu121",
"torch_spline_conv_version": "1.2.2+pt22cu121",
"torchsparse_version": "2.0.0b"
}
|
Clean up as necessary: find and kill any process using TCP port 5001, especially if the port remains in use:
sudo fuser -k 5001/tcp                    # Linux
sudo lsof -ti tcp:5001 | xargs kill -9    # macOS
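A cross-platform way to confirm the port is actually free again is a quick socket probe; this helper is a sketch and not part of the original setup:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, an errno otherwise
        return sock.connect_ex((host, port)) == 0

print(port_in_use(5001))  # True while azmlinfsrv still holds the port
```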
Summary
To summarize, we have installed and configured our nVidia driver and CUDA Toolkit and built some simple PyTorch programs. However, we have built the PyTorch3d + TorchSparse packages from source each time. Ideally, we would pre-build custom wheels once and host them on localhost for reuse in future examples!
This will be the topic of the next post.