Installation Guide¤

This guide provides comprehensive instructions for installing Workshop on various platforms and configurations.

System Requirements¤

Minimum Requirements¤

  • Python: 3.10 or higher
  • RAM: 8GB (16GB recommended)
  • Disk Space: 2GB for installation, 10GB+ for datasets
  • Operating System: Linux (Ubuntu 20.04+), macOS (10.15+), Windows 10/11 (via WSL2)

Recommended Requirements¤

  • Python: 3.10 or 3.11 (tested and recommended)
  • RAM: 16GB+ for training models
  • GPU: NVIDIA GPU with 8GB+ VRAM (for GPU acceleration)
    • Compute Capability 7.0+ (V100, RTX 20xx, RTX 30xx, RTX 40xx, A100)
    • CUDA 12.0 or higher
  • Disk Space: 50GB+ for large-scale experiments
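
A quick way to sanity-check some of these requirements from Python (a standard-library-only sketch, not part of Workshop; the GPU check simply looks for nvidia-smi on your PATH):

# Quick, informal requirements check (not part of Workshop)
import os
import shutil
import sys

print("Python:", sys.version.split()[0])
assert sys.version_info >= (3, 10), "Python 3.10 or higher is required"

print("CPU cores:", os.cpu_count())
print("Free disk space (GB):", round(shutil.disk_usage(".").free / 1e9, 1))

# Rough GPU check: is the NVIDIA driver tool on PATH?
print("nvidia-smi found:", shutil.which("nvidia-smi") is not None)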

Installation Options¤

Workshop provides multiple installation methods to suit different use cases:

Installing from PyPI is the simplest way to get started:

# CPU-only installation
pip install workshop-generative

# With GPU support
pip install workshop-generative[cuda]

# With all extras (documentation, development tools)
pip install workshop-generative[all]

Note: PyPI package coming soon. For now, install from source.

uv is a fast Python package manager. Workshop provides an automated setup script that handles everything:

Quick Setup (Recommended):

# Clone the repository
git clone https://github.com/mahdi-shafiei/workshop.git
cd workshop

# Run unified setup script (auto-detects GPU)
./setup.sh

# Activate environment
source ./activate.sh

What setup.sh does:

  • Installs uv package manager if not present
  • Detects GPU/CUDA availability automatically
  • Creates .venv virtual environment
  • Installs all dependencies (uv sync --extra all for GPU or --extra dev for CPU)
  • Creates .env file with:
    • CUDA library paths (GPU mode)
    • JAX platform configuration
    • Environment variables for optimal performance
  • Generates activate.sh script for easy activation
  • Verifies installation with JAX GPU tests

What activate.sh does:

  • Activates the .venv virtual environment
  • Loads environment variables from .env
  • Configures CUDA paths (GPU mode)
  • Verifies JAX installation and GPU detection
  • Displays environment status and helpful commands
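
The JAX verification step is roughly equivalent to the following Python check (a sketch, not the script's literal code):

import jax

print("JAX version:", jax.__version__)
print("Default backend:", jax.default_backend())  # 'gpu' in CUDA mode, otherwise 'cpu'
print("Devices:", jax.devices())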

Setup Options:

# CPU-only setup (skip GPU detection)
./setup.sh --cpu-only

# Clean setup (removes caches)
./setup.sh --deep-clean

# Force reinstall
./setup.sh --force

# Verbose output
./setup.sh --verbose

Manual Setup (Advanced):

# Install uv if needed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install Workshop with all dependencies
uv sync --all-extras

# Or install specific extras
uv sync --extra cuda-dev  # CUDA development environment
uv sync --extra dev       # Development tools only

Using standard pip with Workshop's setup script:

Quick Setup with Scripts:

# Clone the repository
git clone https://github.com/mahdi-shafiei/workshop.git
cd workshop

# Run setup script (installs uv automatically)
./setup.sh

# Activate environment
source ./activate.sh

The setup script works with both uv and pip. It will:

  • Auto-detect and install uv if needed
  • Create the virtual environment and configure CUDA
  • Generate an activation script with the proper environment setup

Manual pip Installation:

# Clone the repository
git clone https://github.com/mahdi-shafiei/workshop.git
cd workshop

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install in development mode
pip install -e .

# Or with extras
pip install -e '.[dev]'        # Development tools
pip install -e '.[cuda]'       # CUDA support
pip install -e '.[all]'        # Everything

# For GPU support, manually configure environment
# (expand the glob so each NVIDIA library directory ends up on the path):
export LD_LIBRARY_PATH=$(echo $PWD/.venv/lib/python3.*/site-packages/nvidia/*/lib | tr ' ' ':'):$LD_LIBRARY_PATH
export JAX_PLATFORMS="cuda,cpu"
export XLA_PYTHON_CLIENT_PREALLOCATE="false"
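After exporting these variables, you can confirm from Python that JAX picked up the GPU configuration (a minimal check; nothing here is Workshop-specific):

import os

print("JAX_PLATFORMS =", os.environ.get("JAX_PLATFORMS"))
print("XLA_PYTHON_CLIENT_PREALLOCATE =", os.environ.get("XLA_PYTHON_CLIENT_PREALLOCATE"))

import jax  # imported after the environment variables are in place

print("Devices:", jax.devices())  # should list CUDA devices if the paths are correct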

Note: Using ./setup.sh is recommended even for pip users as it properly configures CUDA paths and environment variables automatically.

For containerized deployment:

# Pull the latest image
docker pull ghcr.io/mahdi-shafiei/workshop:latest

# Run with GPU support
docker run --gpus all -it ghcr.io/mahdi-shafiei/workshop:latest

# Run with volume mount for data
docker run --gpus all -v $(pwd)/data:/workspace/data \
  -it ghcr.io/mahdi-shafiei/workshop:latest

# Or build locally
docker build -t workshop:local .
docker run --gpus all -it workshop:local

Local Virtual Environment Setup¤

Workshop's setup.sh and activate.sh scripts provide automated local development environment configuration with GPU support.

Need Hardware-Specific Configuration?

For detailed information on customizing the setup for different GPUs (NVIDIA, AMD), TPUs, Apple Silicon, or multi-GPU systems, see the Hardware Setup Guide. This comprehensive guide explains how setup.sh and dot_env_template work and how to customize them for your specific hardware.

Understanding the Setup Process¤

When you run ./setup.sh, it creates a complete, self-contained development environment:

Files Created:

workshop/
├── .venv/                    # Virtual environment
│   ├── bin/activate         # Standard venv activation
│   ├── lib/python3.*/       # Python packages
│   └── ...
├── .env                      # Environment configuration
├── activate.sh              # Unified activation script
└── uv.lock                  # Dependency lock file

The .env File (GPU Mode):

# CUDA library paths (from local venv installation)
export LD_LIBRARY_PATH=".venv/lib/python3.*/site-packages/nvidia/*/lib:$LD_LIBRARY_PATH"

# JAX GPU configuration
export JAX_PLATFORMS="cuda,cpu"
export XLA_PYTHON_CLIENT_PREALLOCATE="false"
export XLA_PYTHON_CLIENT_MEM_FRACTION="0.8"

# Project paths
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}$(pwd)"
export PYTEST_CUDA_ENABLED="true"

The .env File (CPU Mode):

# JAX CPU configuration
export JAX_PLATFORMS="cpu"
export JAX_ENABLE_X64="0"

# Project paths
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}$(pwd)"
export PYTEST_CUDA_ENABLED="false"
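
These variables need to be visible before JAX initializes its backend. If you want to override one of them for a single run without editing .env, setting it at the top of a script before importing jax is the simplest way (a sketch; variable names as shown above):

import os

# Must happen before `import jax`, otherwise the existing settings apply.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.5"   # use at most 50% of GPU memory
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"

import jax

print(jax.default_backend(), jax.devices())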

Using the Environment¤

First Time Setup:

# Run setup once
./setup.sh

# Activate environment
source ./activate.sh

Daily Workflow:

# Simply activate the environment
source ./activate.sh

# Your environment is now ready with:
# - Virtual environment activated
# - CUDA paths configured (if GPU)
# - JAX optimally configured
# - Project in PYTHONPATH

# Work on your code...
uv run pytest tests/
python your_script.py

# Deactivate when done
deactivate

Activation Script Features¤

The activate.sh script provides:

  1. Smart Process Detection: Checks for running processes before deactivation
  2. GPU Verification: Tests GPU availability and displays device info
  3. Environment Status: Shows Python version, JAX backend, available devices
  4. Helpful Commands: Displays common development commands
  5. Error Handling: Provides clear messages if setup is incomplete

Example activation output:

🚀 Activating Workshop Development Environment
=============================================
✅ Virtual environment activated
✅ Environment configuration loaded
   🎮 GPU Mode: CUDA enabled

🔍 Environment Status:
   Python: Python 3.11.5
   Virtual Environment: /path/to/workshop/.venv

🧪 JAX Configuration:
   JAX version: 0.4.35
   Default backend: gpu
   Available devices: 1 total
   🎉 GPU devices: 1 (['cuda(id=0)'])
   ✅ CUDA acceleration ready!

🚀 Ready for Development!

Setup Script Options¤

Customize setup for different scenarios:

# Standard setup (auto-detects GPU)
./setup.sh

# CPU-only setup (laptop/CI)
./setup.sh --cpu-only

# Clean installation (remove all caches)
./setup.sh --deep-clean

# Force reinstall over existing environment
./setup.sh --force

# Verbose output for debugging
./setup.sh --verbose

# Combine options
./setup.sh --force --deep-clean --verbose

Troubleshooting Local Setup¤

Problem: ./setup.sh: Permission denied

# Make script executable
chmod +x setup.sh activate.sh
./setup.sh

Problem: GPU not detected after setup

# Check NVIDIA drivers
nvidia-smi

# Re-run setup with force
./setup.sh --force

# Check activation output for GPU status
source ./activate.sh

Problem: Environment variables not loaded

# Verify .env file exists
cat .env

# Use 'source', not 'bash' (a separate bash process cannot modify your current shell)
source ./activate.sh  # ✅ Correct
bash activate.sh      # ❌ Runs in a subshell; environment changes are lost when it exits

GPU Setup¤

CUDA Installation¤

Workshop requires CUDA 12.0+ for GPU acceleration.

Linux (Ubuntu/Debian)¤

# Check if CUDA is already installed
nvcc --version
nvidia-smi

# If not installed, download CUDA Toolkit 12.0+
# Visit: https://developer.nvidia.com/cuda-downloads

# Example for Ubuntu 22.04
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-0

# Add CUDA to PATH
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

macOS¤

CUDA is not supported on macOS. Use the experimental Metal backend or CPU mode.

Windows (WSL2)¤

Windows users should use WSL2 with Ubuntu:

# In PowerShell (Admin)
wsl --install

# Follow Ubuntu installation steps inside WSL2

Automated GPU Setup (Linux)¤

Workshop provides automated CUDA setup through the main setup script:

# Complete environment setup with CUDA (recommended)
./setup.sh

# This automatically:
# - Detects GPU and CUDA availability
# - Installs JAX with CUDA support
# - Configures CUDA library paths
# - Sets up JAX environment variables
# - Creates activation script with GPU verification
# - Tests GPU functionality

# For CPU-only systems:
./setup.sh --cpu-only

What gets configured automatically:

  • LD_LIBRARY_PATH: Points to CUDA libraries in .venv/lib/python3.*/site-packages/nvidia/*/lib
  • JAX_PLATFORMS: Set to "cuda,cpu" for GPU or "cpu" for CPU-only
  • XLA_PYTHON_CLIENT_PREALLOCATE: Disabled for dynamic memory allocation
  • XLA_PYTHON_CLIENT_MEM_FRACTION: Set to 0.8 (use 80% of GPU memory)
  • JAX CUDA plugins: Automatically matched to JAX version

See Local Virtual Environment Setup for detailed explanation.

JAX GPU Installation¤

After CUDA is installed:

# Install JAX with CUDA support
pip install "jax[cuda12_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html

# Verify GPU is detected
python -c "import jax; print(jax.devices())"
# Should print: [cuda(id=0)] or similar
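
Beyond device detection, a quick matrix-multiply timing gives a rough signal that the GPU is actually doing the work (an informal sanity check, not a benchmark):

import time

import jax
import jax.numpy as jnp

x = jax.random.normal(jax.random.PRNGKey(0), (4096, 4096))
(x @ x).block_until_ready()          # first call includes compilation

start = time.perf_counter()
(x @ x).block_until_ready()
print(f"4096x4096 matmul took {time.perf_counter() - start:.4f}s on {jax.default_backend()}")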

Troubleshooting GPU Installation¤

Problem: jax.devices() returns [cpu(id=0)] instead of GPU

Solutions:

  1. Set LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  2. Re-run the setup script:
./setup.sh --force
  3. Reinstall JAX with CUDA:
pip uninstall jax jaxlib
pip install "jax[cuda12_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
  4. Check CUDA installation:
nvcc --version          # Should show CUDA version
nvidia-smi              # Should show GPU info
python -c "import jax; print(jax.default_backend())"  # Should print 'gpu'

Problem: CUDA out of memory

Solutions:

  • Reduce batch size
  • Enable mixed precision training (BF16/FP16)
  • Use gradient accumulation
  • Clear GPU cache: jax.clear_caches()
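
Gradient accumulation, mentioned above, can be sketched in plain JAX as follows (loss_fn and the shapes here are hypothetical placeholders, not Workshop's API):

from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Hypothetical quadratic loss standing in for your model's loss.
    return jnp.mean((batch @ params) ** 2)

@partial(jax.jit, static_argnames="n_chunks")
def accumulated_grads(params, big_batch, n_chunks=4):
    # Process the batch in smaller chunks and average the gradients,
    # trading a little extra compute for a lower peak memory footprint.
    grads = [jax.grad(loss_fn)(params, chunk)
             for chunk in jnp.split(big_batch, n_chunks)]
    return jax.tree_util.tree_map(lambda *g: sum(g) / len(g), *grads)

params = jnp.ones((16, 1))
big_batch = jax.random.normal(jax.random.PRNGKey(0), (256, 16))
print(accumulated_grads(params, big_batch).shape)  # (16, 1)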

Problem: Slow training on GPU

Solutions:

  • Ensure XLA is enabled (automatic with JAX)
  • Use JIT compilation: @jax.jit
  • Check GPU utilization: nvidia-smi dmon
  • Increase batch size for better GPU utilization
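
As an illustration of the JIT point above, compiling a step function typically gives a large speedup after the first call (step here is a stand-in computation, not Workshop code):

import time

import jax
import jax.numpy as jnp

def step(x):
    # Stand-in for a training step: a few chained matmuls and nonlinearities.
    for _ in range(10):
        x = jnp.tanh(x @ x)
    return x

x = jax.random.normal(jax.random.PRNGKey(0), (1024, 1024))
jitted_step = jax.jit(step)
jitted_step(x).block_until_ready()   # first call compiles

start = time.perf_counter(); step(x).block_until_ready()
print(f"eager : {time.perf_counter() - start:.4f}s")
start = time.perf_counter(); jitted_step(x).block_until_ready()
print(f"jitted: {time.perf_counter() - start:.4f}s")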

TPU Setup¤

For Google Cloud TPU:

# Install JAX with TPU support
pip install "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html

# Verify TPU is detected
python -c "import jax; print(jax.devices())"
# Should print TPU devices

Verification¤

After installation, verify your setup:

# test_installation.py
import jax
import jax.numpy as jnp
from flax import nnx
from workshop.generative_models.factories import create_vae
from workshop.generative_models.core.configuration import ModelConfiguration

print("JAX version:", jax.__version__)
print("JAX backend:", jax.default_backend())
print("Available devices:", jax.devices())

# Test simple computation
x = jnp.array([1.0, 2.0, 3.0])
print("JAX computation test:", jnp.sum(x))

# Test Workshop imports
config = ModelConfiguration(
    model_type="vae",
    latent_dim=32,
    input_shape=(28, 28, 1),
    encoder_features=[64, 128],
    decoder_features=[128, 64]
)
rngs = nnx.Rngs(0)
model = create_vae(config, rngs=rngs)
print("Workshop model created successfully!")

# Test forward pass
batch = jax.random.normal(jax.random.PRNGKey(0), (4, 28, 28, 1))
outputs = model(batch, rngs=rngs)
print("Forward pass successful!")
print("Output keys:", outputs.keys())

Run the verification:

python test_installation.py

Expected output:

JAX version: 0.4.35
JAX backend: gpu (or cpu)
Available devices: [cuda(id=0)] (or [CpuDevice(id=0)])
JAX computation test: 6.0
Workshop model created successfully!
Forward pass successful!
Output keys: dict_keys(['reconstructed', 'reconstruction', 'mean', 'log_var', 'logvar', 'z'])

Development Installation¤

For contributing to Workshop:

Quick Development Setup:

# Clone repository
git clone https://github.com/mahdi-shafiei/workshop.git
cd workshop

# Run setup script (includes dev tools)
./setup.sh

# Activate environment
source ./activate.sh

# Install pre-commit hooks
uv run pre-commit install

# Verify development setup
uv run pytest tests/ -x              # Run tests
uv run ruff check src/               # Linting
uv run ruff format src/              # Formatting
uv run pyright src/                  # Type checking

What's included in development setup:

  • All core dependencies (uv sync --extra all or --extra dev)
  • Development tools: pytest, ruff, pyright, pre-commit
  • GPU support (if available)
  • Documentation tools (mkdocs, mkdocstrings)
  • Benchmarking utilities
  • Test coverage tools

Daily development workflow:

# Start your work session
source ./activate.sh

# Make changes to code
# ...

# Run tests before committing
uv run pytest tests/ -x

# Run pre-commit checks
uv run pre-commit run --all-files

# Commit your changes
git add .
git commit -m "Your commit message"

# Deactivate when done
deactivate

Platform-Specific Notes¤

Linux¤

  • Recommended: Ubuntu 20.04 LTS or Ubuntu 22.04 LTS
  • Ensure NVIDIA drivers are up-to-date: sudo ubuntu-drivers autoinstall
  • For multi-GPU: Set CUDA_VISIBLE_DEVICES="0,1"
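
CUDA_VISIBLE_DEVICES can also be set from Python, provided it happens before jax is imported (a minimal sketch):

import os

# Restrict JAX to the first two GPUs; must be set before importing jax.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import jax

print(jax.device_count(), jax.devices())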

macOS¤

  • Apple Silicon (M1/M2/M3): JAX has experimental support
pip install jax-metal  # For Apple M1/M2/M3
  • Intel Macs: CPU-only mode is fully supported
  • XCode Command Line Tools required: xcode-select --install

Windows¤

  • WSL2 Required: Native Windows support is limited
  • Install Ubuntu 22.04 from Microsoft Store
  • Follow Linux installation inside WSL2
  • GPU support requires Windows 11 + CUDA in WSL2

Environment Variables¤

Configure Workshop behavior with environment variables:

# JAX Configuration
export JAX_PLATFORMS=gpu              # or 'cpu', 'tpu'
export XLA_PYTHON_CLIENT_PREALLOCATE=false  # Disable memory preallocation
export XLA_PYTHON_CLIENT_MEM_FRACTION=0.75  # Use 75% of GPU memory

# Workshop Configuration
export WORKSHOP_CACHE_DIR=~/.cache/workshop  # Cache directory
export WORKSHOP_DATA_DIR=~/workshop_data     # Data directory
export WORKSHOP_LOG_LEVEL=INFO               # Logging level

# Development
export ENABLE_BLACKJAX_TESTS=1      # Enable expensive BlackJAX tests

Add to ~/.bashrc or ~/.zshrc for persistence.
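
To confirm which of these variables are actually visible to your Python processes, a quick standard-library check (variable names as listed above):

import os

for name in (
    "JAX_PLATFORMS",
    "XLA_PYTHON_CLIENT_PREALLOCATE",
    "XLA_PYTHON_CLIENT_MEM_FRACTION",
    "WORKSHOP_CACHE_DIR",
    "WORKSHOP_DATA_DIR",
    "WORKSHOP_LOG_LEVEL",
):
    print(f"{name} = {os.environ.get(name, '<not set>')}")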

Common Issues¤

Import Errors¤

Problem: ModuleNotFoundError: No module named 'workshop'

Solutions:

# Verify installation
pip list | grep workshop

# Reinstall
pip install -e .

# Check Python path
python -c "import sys; print(sys.path)"
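
If the module still cannot be found, checking which interpreter and which copy of the package Python is actually using usually pinpoints the problem (the importable name workshop matches the imports used elsewhere in this guide):

import sys

print("Interpreter:", sys.executable)   # should point into your .venv

import workshop

print("Loaded from:", workshop.__file__)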

Version Conflicts¤

Problem: Dependency version conflicts

Solutions:

# Clean install
deactivate
rm -rf .venv uv.lock
uv venv
source .venv/bin/activate
uv sync --all-extras

Memory Issues¤

Problem: Out of memory during installation

Solutions:

# Install without the pip cache to reduce memory usage
pip install --no-cache-dir -e .

# Or use uv which is more memory efficient
uv sync --all-extras

Updating Workshop¤

Keep Workshop up-to-date:

# From PyPI (when available)
pip install --upgrade workshop-generative

# From source
cd workshop
git pull origin main
uv sync --all-extras  # or: pip install -e .

Uninstallation¤

Remove Workshop completely:

# If installed from PyPI
pip uninstall workshop-generative

# If installed from source
pip uninstall workshop

# Remove cache and data (optional)
rm -rf ~/.cache/workshop
rm -rf ~/workshop_data

# Remove virtual environment
deactivate
rm -rf .venv

Next Steps¤

After successful installation:

  1. Quick Start: Follow the Quickstart Guide to run your first model
  2. Core Concepts: Learn about Workshop architecture
  3. First Model: Build a VAE from scratch with the First Model Tutorial
  4. Examples: Explore ready-to-run Examples

Getting Help¤

If you encounter issues not covered by the troubleshooting sections above, open an issue on the GitHub repository with your platform details and the output of source ./activate.sh.

Hardware Recommendations¤

For Research/Development¤

  • CPU: 8+ cores (Intel i7/i9 or AMD Ryzen 7/9)
  • RAM: 16-32GB
  • GPU: NVIDIA RTX 3060 (12GB) or better
  • Storage: SSD with 100GB+ free space

For Production¤

  • CPU: 16+ cores (Intel Xeon or AMD EPYC)
  • RAM: 64GB+
  • GPU: NVIDIA A100 (40/80GB) or H100
  • Storage: NVMe SSD with 500GB+ free space

For Large-Scale Training¤

  • Multi-GPU: 4-8x NVIDIA A100 or H100
  • RAM: 256GB+
  • Network: High-speed interconnect (NVLink, InfiniBand)
  • Storage: Parallel file system (Lustre, GPFS)

Last Updated: 2025-10-13

Installation Support: For installation help, open an issue.