# CUDA Dependency Updates Tracker
This page tracks workarounds and version pins that exist only because external packages have incomplete CUDA 13 support. As package maintainers release updated versions with better CUDA compatibility, revisit and update the items below.
## Current Workarounds

### FAISS GPU Wheel for CUDA 13

**Status:** ⚠️ Requires Action

**Current Workaround:** Use `faiss-cpu` in `requirements-cuda13.txt`

**File:** `requirements-cuda13.txt`

**Issue:**
- FAISS GPU wheels are unavailable for CUDA 13 + Python 3.13 due to limited build coverage from Meta
- The CPU variant is a functional fallback but lacks GPU acceleration for vector similarity search
**Action Required When Fixed:**

- Monitor FAISS releases for CUDA 13 + Python 3.13 wheel availability
- Test `faiss-gpu` with CUDA 13.x in a CI environment
- Replace `faiss-cpu` with `faiss-gpu` in `requirements-cuda13.txt`
- Update the documentation in `docs/getting-started/installation.md` and `docs/getting-started/environments.md` to remove the fallback note
**Related Issue:** Meta/FAISS issue tracker for CUDA 13 wheel builds
### ONNX Runtime GPU Version Pinning

**Status:** 📌 Version Pinned

**Current Pin:** `onnxruntime-gpu==1.24.1` in `requirements-cuda.txt`

**File:** `requirements-cuda.txt`

**Reason for Pinning:**

- Version 1.24.1 is known to work with CUDA 12 user-space library linkage (`libcublasLt.so.12`, etc.) across both CUDA 12.x and 13.x environments
- Later versions may have changed their CUDA 12 binary compatibility or introduced stricter version requirements
**Action Required When Fixed:**

- Monitor ONNX Runtime releases for CUDA 12–13 compatibility improvements
- Test newer versions (1.25.x, 1.26.x+) with:
    - CUDA 12.x environments
    - CUDA 13.x environments (with conda CUDA 12 runtime libs for linkage)
    - CPU provider fallback behavior
- If newer versions offer better compatibility or performance, update to `onnxruntime-gpu>=1.24.1,<2.0` (or a specific newer pin)
- Update CI/CD testing to verify linkage across versions
- Document the upgrade path in the changelog
**Related Issue:** Check ONNX Runtime issues for "CUDA 13" and "CUDA compatibility"
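The CPU-provider fallback behavior in the list above can also be exercised programmatically. `get_available_providers()` and the provider names are real ONNX Runtime identifiers; the selection helper is an illustrative sketch.

```python
# Sketch: prefer the CUDA execution provider when the pinned
# onnxruntime-gpu build can load its CUDA 12 user-space libraries,
# falling back to CPU otherwise. Provider names are real ONNX Runtime
# identifiers; the helper itself is illustrative.

PREFERRED = ["CUDAExecutionProvider", "CPUExecutionProvider"]


def pick_providers(available: list[str]) -> list[str]:
    """Return the preferred providers that are actually available, in order."""
    chosen = [p for p in PREFERRED if p in available]
    # CPUExecutionProvider is always present in practice, but guard anyway.
    return chosen or ["CPUExecutionProvider"]
```

In a real session this would be used as `ort.InferenceSession(model_path, providers=pick_providers(ort.get_available_providers()))`, which degrades gracefully when the CUDA provider fails to link.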
### CuPy Prerelease on CUDA 13

**Status:** 🔧 Prerelease Required

**Current Configuration:** Uses the `--pre` flag and `https://pip.cupy.dev/pre` in `requirements-cuda13.txt`

**File:** `requirements-cuda13.txt`

**Reason for Prerelease:**

- Stable `cupy-cuda13x` wheels for CUDA 13 may not be available from the main PyPI index
- Prerelease wheels from CuPy's development server ensure CUDA 13 support
**Action Required When Fixed:**

- Monitor CuPy releases for CUDA 13 stable wheel availability
- When stable wheels are published:
    - Remove the `--pre` flag from `requirements-cuda13.txt`
    - Remove the custom index URL `https://pip.cupy.dev/pre`
    - Update to a pinned stable version (e.g., `cupy-cuda13x==X.Y.Z`)
    - Test in CI against the stable index to confirm compatibility
- Update `docs/getting-started/installation.md` to note that CUDA 13 installation is now fully stable
**Related Issue:** Monitor CuPy GitHub Releases for a CUDA 13 stable tag
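A quick sanity check after switching off the prerelease index is to confirm which CUDA runtime the installed CuPy targets. `cupy.cuda.runtime.runtimeGetVersion()` is the real CuPy API and returns CUDA's packed version integer (major*1000 + minor*10); the decoder and check below are an illustrative sketch.

```python
# Sketch: decode the packed CUDA runtime version reported by CuPy
# (e.g. cupy.cuda.runtime.runtimeGetVersion() -> 13000 for CUDA 13.0).

def cuda_version_tuple(encoded: int) -> tuple[int, int]:
    """Split CUDA's packed version integer into (major, minor)."""
    return encoded // 1000, (encoded % 1000) // 10


def check_cupy_cuda(expected_major: int = 13) -> bool:
    """Return True if the installed CuPy targets the expected CUDA major."""
    import cupy  # requires a cupy-cuda13x wheel to be installed

    major, _ = cuda_version_tuple(cupy.cuda.runtime.runtimeGetVersion())
    return major == expected_major
```

Running `check_cupy_cuda(13)` in the CUDA 13 CI environment would catch a wheel that silently resolved to an older CUDA build.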
### TensorRT Version-Specific Pins

**Status:** ✅ Versioned, May Improve

**Current Configuration:** Separate `tensorrt-cu12` and `tensorrt-cu13` packages (unversioned) in version-specific files

**Files:** `requirements-cuda12.txt`, `requirements-cuda13.txt`

**Considerations:**

- If NVIDIA publishes unified TensorRT wheels with better version compatibility, we may simplify to a single `tensorrt` package
- CUDA 13 support may improve with new releases; consider stricter pinning if compatibility issues emerge
**Action Required When Fixed:**

- Monitor NVIDIA TensorRT releases for unified CUDA version support
- If NVIDIA publishes CUDA-version-agnostic wheels:
    - Move TensorRT to `requirements-cuda.txt` (the shared base)
    - Remove the version-specific TensorRT packages
    - Simplify the requirements structure
- If CUDA 13 TensorRT stability improves, consider pinning to a specific version range
**Related Issue:** NVIDIA TensorRT issue tracker
### PyTorch Index URLs

**Status:** 📦 Version-Specific, Monitor

**Current Configuration:**

- CUDA 12: `--extra-index-url https://download.pytorch.org/whl/cu128`
- CUDA 13: `--extra-index-url https://download.pytorch.org/whl/cu130`

**Files:** `requirements-cuda12.txt`, `requirements-cuda13.txt`

**Considerations:**

- PyTorch uses a custom index URL distribution strategy; check for future changes to their build/distribution infrastructure
- CUDA 13 might be integrated into the main PyPI wheels in a future major version, eliminating the need for custom indexes
**Action Required When Fixed:**

- Monitor the PyTorch installation docs for index URL changes
- If PyTorch integrates CUDA variants into standard PyPI:
    - Remove the custom index URLs from both CUDA requirement files
    - Simplify to pinned PyTorch versions (e.g., `torch>=2.1.0`)
    - Update CI to verify installation without custom indexes
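The index URL suffixes above (`cu128`, `cu130`) follow PyTorch's convention of "cu" plus the CUDA major and minor digits with no dot. The helpers below are an illustrative sketch of that mapping; `torch.version.cuda` is the real attribute reporting a build's CUDA version as a `"major.minor"` string.

```python
# Sketch: derive PyTorch's wheel-index tag (e.g. "cu130") from a CUDA
# version string such as the one reported by torch.version.cuda ("13.0").
# The naming convention is PyTorch's; these helpers are illustrative.

def torch_index_tag(cuda_version: str) -> str:
    """Map a 'major.minor' CUDA version to PyTorch's cuXYZ index tag."""
    major, minor = cuda_version.split(".")[:2]
    return f"cu{major}{minor}"


def index_url(cuda_version: str) -> str:
    """Build the --extra-index-url value for a given CUDA version."""
    return f"https://download.pytorch.org/whl/{torch_index_tag(cuda_version)}"
```

This makes it easy for a CI check to confirm that, say, the CUDA 13 file points at `index_url("13.0")` rather than a stale index.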
## Monitoring Checklist

When updating dependencies:
- Check FAISS releases for CUDA 13 wheel availability
- Review ONNX Runtime release notes and test compatibility
- Monitor CuPy stable CUDA 13 wheel status
- Evaluate TensorRT unified wheel roadmap
- Check PyTorch for index URL or distribution strategy changes
- Run full CI/CD suite with new dependencies
- Update this document with resolved items
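The checklist above lends itself to a small audit script that flags tracked workarounds still present in a requirements file. The marker strings mirror this page's entries; the helper itself is a hypothetical sketch, not part of the repository's tooling.

```python
# Sketch: flag tracked workarounds that are still present in a requirements
# file. Marker strings mirror this tracker page; the helper is illustrative.

TRACKED = {
    "faiss-cpu": "FAISS CPU fallback still in place",
    "--pre": "CuPy prerelease flag still in place",
    "onnxruntime-gpu==1.24.1": "ONNX Runtime pin still in place",
}


def scan_requirements(text: str) -> list[str]:
    """Return a note for each tracked workaround found in the file text."""
    findings = []
    for raw in text.splitlines():
        line = raw.strip()
        for marker, note in TRACKED.items():
            if line == marker or line.startswith(marker + " ") or marker in line.split():
                findings.append(note)
    return findings
```

Running this over `requirements-cuda13.txt` during a dependency update quickly shows which items on this page are still unresolved.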
## Summary Table

| Package | Current Fix | File | Priority | Check Frequency |
|---|---|---|---|---|
| FAISS | `faiss-cpu` fallback | `requirements-cuda13.txt` | High | Monthly |
| ONNX Runtime | Version pin `1.24.1` | `requirements-cuda.txt` | High | Quarterly |
| CuPy | Prerelease CUDA 13 | `requirements-cuda13.txt` | Medium | Monthly |
| TensorRT | Version-specific pins | `requirements-cudaX.txt` | Medium | Quarterly |
| PyTorch | Custom index URLs | `requirements-cudaX.txt` | Low | Quarterly |
## How to Report Resolved Items

When any of these items is resolved:

- Update the relevant requirements file
- Update this document, marking the item as ✅ Resolved
- Update the related documentation (`installation.md`, `environments.md`)
- Add a note to `CHANGELOG.md` under the new release
- Remove the resolved item from the monitoring checklist