Let's check it out!
Example I
All examples listed are Python 3.10.19 compatible. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloPyTorch3dWheels |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
PyCharm should set up the UV virtual environment and configure the Python interpreter; if not, enter these commands:
uv venv --python 3.10
source .venv/bin/activate   # OR .\.venv\Scripts\activate
which python
`which python` --version    # Python 3.10.19
In the PyCharm Terminal | Enter the following commands for UV to install packages to build custom wheels:
uv pip install twine
uv pip install numpy==1.26.4
uv pip install --index-url https://download.pytorch.org/whl/cu121 \
  "torch==2.2.0+cu121" "torchvision==0.17.0" "torchaudio==2.2.0"
Create a directory to house the custom wheels: mkdir -p wheelhouse-cu121. Write shell scripts containing the wheel-build logic, and make them executable:
| PyTorch3D | chmod +x steveprobuild_pytorch3d_wheel.sh |
| TorchSparse | chmod +x steveprobuild_torchsparse_wheel.sh |
Execute the shell scripts to build the custom wheels. These wheels will be reused locally in the following examples:
| COMMAND | ARTIFACT |
| bash steveprobuild_pytorch3d_wheel.sh | stevepropytorch3d-0.7.7-cp310-cp310-linux_x86_64.whl |
| bash steveprobuild_torchsparse_wheel.sh | steveprotorchsparse-2.0.0b0-cp310-cp310-linux_x86_64.whl |
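The artifact names above follow the standard wheel naming scheme (PEP 427): distribution name, version, Python tag, ABI tag, and platform tag, joined by hyphens. A small sketch for pulling those fields apart (the helper name is my own; the optional build tag is not handled):

```python
def parse_wheel_filename(filename: str) -> dict:
    """Split a PEP 427 wheel filename into its tagged components."""
    stem = filename.removesuffix(".whl")
    # name-version-pythontag-abitag-platformtag (optional build tag omitted)
    name, version, py_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "name": name,
        "version": version,
        "python": py_tag,        # e.g. cp310 -> CPython 3.10
        "abi": abi_tag,
        "platform": platform_tag,
    }

info = parse_wheel_filename("stevepropytorch3d-0.7.7-cp310-cp310-linux_x86_64.whl")
print(info["version"], info["python"], info["platform"])  # -> 0.7.7 cp310 linux_x86_64
```

This makes it easy to confirm that a built artifact matches the interpreter and platform you expect before uploading it anywhere.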
Example II
Repeat the previous exercise using the locally built wheels. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloPyTorch3dWheels |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
Copy the custom wheels built in the previous example into a new wheelhouse-cu121 directory:
mkdir -p wheelhouse-cu121
cp ../01-Example/wheelhouse-cu121/*.whl wheelhouse-cu121/
In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
export MAX_JOBS=1
export NVCC_THREADS=1
uv lock
uv sync
Create the following file: main.py. Enter the command uv run main.py and verify the PyTorch packages are installed:
| Example II |
Hello PyTorch3d
torch              2.2.0+cu121
pytorch3d          0.7.7
cuda               12.1
cuda               True
torch_geometric    2.7.0
torch_scatter      2.1.2+pt22cu121
torch_sparse       0.6.18+pt22cu121
torch_cluster      1.6.3+pt22cu121
torch_spline_conv  1.2.2+pt22cu121
torchsparse        2.0.0b
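The source does not show main.py itself; a minimal sketch that would produce a report like the one above (the package list is taken from that output, and the CUDA lines are noted as a comment since they come from torch-specific attributes) could be:

```python
import importlib

def report_versions(package_names):
    """Map each package name to its __version__ (or a marker if absent/missing)."""
    report = {}
    for name in package_names:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[name] = "not installed"
    return report

if __name__ == "__main__":
    print("Hello PyTorch3d")
    # torch.version.cuda and torch.cuda.is_available() would supply
    # the two "cuda" lines shown in the sample output.
    packages = ["torch", "pytorch3d", "torch_geometric", "torch_scatter",
                "torch_sparse", "torch_cluster", "torch_spline_conv", "torchsparse"]
    for name, version in report_versions(packages).items():
        print(f"{name:18s} {version}")
```

Because missing packages are reported rather than raising, the same script doubles as a quick diagnostic when an install step was skipped.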
Example III
Repeat the previous exercise but upload the custom wheels. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloPyTorch3dWheels |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
Copy the custom wheels built in the previous example into a new wheelhouse-cu121 directory:
mkdir -p wheelhouse-cu121
cp ../01-Example/wheelhouse-cu121/*.whl wheelhouse-cu121/
In the PyCharm Terminal | Enter the following commands to install devpi packages to upload wheels locally:
uv pip install devpi-server
uv pip install devpi-client
uv pip install devpi-web
uv pip install twine
Launch Terminal #1. Initialize and start devpi-server on localhost port 3141, then navigate to the homepage URL:
devpi-init
devpi-server --host 127.0.0.1 --port 3141
# Launch browser and navigate to http://localhost:3141
Launch Terminal #2. Connect with the devpi client and log in. Create the custom cuda-wheels index and activate it:
devpi use http://localhost:3141
devpi login root --password=''
devpi index -c cuda-wheels bases=root/pypi
devpi use root/cuda-wheels
Finally, upload the two custom wheel files to devpi, browsable at http://localhost:3141/root/cuda-wheels:
twine upload --repository-url http://localhost:3141/root/cuda-wheels/ wheelhouse-cu121/*
Example IV
Repeat the previous exercise using the uploaded wheels. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloPyTorch3dWheels |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
In the PyCharm Terminal | Enter the following commands to install devpi packages to consume local wheels:
uv pip install devpi-server
uv pip install devpi-client
uv pip install devpi-web
uv pip install twine
Launch Terminal #1. Initialize and start devpi-server on localhost port 3141, then navigate to the homepage URL:
devpi-init
devpi-server --host 127.0.0.1 --port 3141
# Launch browser and navigate to http://localhost:3141
In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
export MAX_JOBS=1
export NVCC_THREADS=1
uv lock
uv sync
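The steps above do not show how uv is pointed at the local devpi index; one plausible approach (an assumption, not shown in the source) is to register the index in pyproject.toml so that uv lock and uv sync resolve the custom wheels from devpi's simple API:

```toml
# pyproject.toml fragment (assumed; index name and URL must match your devpi setup)
[[tool.uv.index]]
name = "cuda-wheels"
url = "http://localhost:3141/root/cuda-wheels/+simple/"
```

Because the index was created with bases=root/pypi, packages not found in cuda-wheels fall through to the PyPI mirror.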
Create the following file: main.py. Enter the command uv run main.py and verify the PyTorch packages are installed:
| Example IV |
Hello PyTorch3d
torch              2.2.0+cu121
pytorch3d          0.7.7
cuda               12.1
cuda               True
torch_geometric    2.7.0
torch_scatter      2.1.2+pt22cu121
torch_sparse       0.6.18+pt22cu121
torch_cluster      1.6.3+pt22cu121
torch_spline_conv  1.2.2+pt22cu121
torchsparse        2.0.0b
Example V
Repeat the previous exercise but wrap the logic as an Azure ML endpoint. Launch PyCharm | New Project. Enter the following info:
| Location: | ~/HelloAzureML |
| Interpreter type: | uv |
| Python version: | 3.10 |
| Path to uv: | ~/.local/bin/uv |
PyCharm should set up the UV virtual environment and configure the Python interpreter; if not, enter these commands:
uv venv --python 3.10
source .venv/bin/activate   # OR .\.venv\Scripts\activate
which python
`which python` --version    # Python 3.10.19
In the PyCharm Terminal | Enter the following commands for UV to install and sync package dependencies:
export MAX_JOBS=1
export NVCC_THREADS=1
uv lock
uv sync
Create the following directories: app and tests. Enter all code for app/scoring.py and test it using pytest.
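The full app/scoring.py is not reproduced here; a minimal skeleton following the Azure ML scoring convention (an init()/run() pair), with the PyTorch3D mesh and rendering logic left as a stub since it is not shown in the source, might look like:

```python
import json

def init():
    # The real scoring script would select the device (cuda:0 when available)
    # and set up the PyTorch3D pipeline here; omitted in this sketch.
    pass

def run(raw_data: str) -> dict:
    """Parse the request body and summarize the submitted point cloud."""
    payload = json.loads(raw_data)
    points = payload["points"]  # list of [x, y, z] coordinates
    # ... surface reconstruction + rendering with PyTorch3D would go here,
    # producing num_vertices, num_faces, mean_pixel, image_shape, device ...
    return {
        "num_points": len(points),
        "dimensions": len(points[0]) if points else 0,
    }
```

Keeping run() as a pure string-in/dict-out function makes it straightforward to exercise with pytest before wrapping it in the Docker image.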
Launch terminals. Enter the following commands to build the Docker image, run the Docker container, and submit a request:
Terminal #1 |
docker build -t azml-gpu-local:latest .
docker run --gpus all -p 5001:5001 azml-gpu-local:latest
Terminal #2 |
curl --location --request POST 'http://localhost:5001/score' \
--header 'Content-Type: application/json' \
--data-raw '{
"points": [
[0.0, 0.0, 1.0],
[0.1, 0.0, 0.99],
[-0.1, 0.0, 0.99],
[0.0, 0.1, 0.99],
[0.0, -0.1, 0.99],
[0.7, 0.7, 0.0],
[-0.7, 0.7, 0.0],
[0.7, -0.7, 0.0],
[-0.7, -0.7, 0.0],
[0.0, 0.0, -1.0]
]
}'
| Example V |
{
"num_vertices": 1524,
"num_faces": 2892,
"mean_pixel": 0.9926620125770569,
"image_shape": [
128,
128,
3
],
"device": "cuda:0"
}
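A quick client-side sanity check of the response (key names taken from the sample output above) can guard a test suite against schema drift:

```python
import json

EXPECTED_KEYS = {"num_vertices", "num_faces", "mean_pixel", "image_shape", "device"}

def check_score_response(body: str) -> dict:
    """Parse a /score response and verify the expected keys are present."""
    data = json.loads(body)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    return data

sample = ('{"num_vertices": 1524, "num_faces": 2892, "mean_pixel": 0.99, '
          '"image_shape": [128, 128, 3], "device": "cuda:0"}')
print(check_score_response(sample)["device"])  # -> cuda:0
```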
Clean up as necessary: stop and remove the Docker container, especially if it is still running:
docker stop $(docker ps -q)
docker rm -f $(docker ps -q)
Summary
To summarize, we built the PyTorch3d and TorchSparse packages from source, and housed the resulting custom wheels on localhost for reuse across the examples. We are now in an excellent position to drive Deep Learning education!