[P] PyTorch-Universal-Docker-Template: Build any Version of PyTorch from Source on any Version of CUDA/cuDNN and increase Speeds x10
https://github.com/veritas9872/PyTorch-Universal-Docker-Template
Dear fellow PyTorch aficionados, I present to you the [PyTorch-Universal-Docker-Template: The Docker Template for Universal PyTorch Source-Builds].
As of the time of writing, the majority of deep learning research is carried out using the PyTorch library. However, the pre-built conda/pip packages of PyTorch are designed for broad compatibility rather than maximum performance.
Building PyTorch from source thus often increases GPU compute speeds dramatically; on some benchmarks I have even seen an x4 increase, though in practice an x2 increase is more typical.
However, building PyTorch from source is a time-consuming and bug-prone process that is difficult to get right. Because of this, many practitioners use the official PyTorch images from PyTorch Docker Hub, NVIDIA NGC, AWS ECR, etc.
However, these images have fixed CUDA/cuDNN environments and many pre-installed packages, making it difficult to integrate them into pre-existing projects.
Moreover, many in the deep learning community are unfamiliar with Docker and would prefer to use their local environments.
The PyTorch Universal Docker Template solves all of the above problems. It builds any desired version of PyTorch and its subsidiary libraries (TorchVision, TorchText, TorchAudio) against any combination of CUDA and cuDNN versions. The build generates wheels (`.whl` files) that can be extracted and used in local projects without any dependency on Docker. Windows users can also use this project via WSL.
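As a quick sanity check after installing the extracted wheels into a local environment (the actual build and extraction commands live in the repository's README and are not reproduced here), you can confirm that the binary was built against the CUDA/cuDNN versions you intended:

```python
import torch

# Report the versions the installed PyTorch binary was built against.
print("PyTorch:", torch.__version__)
print("CUDA:   ", torch.version.cuda)                # None on a CPU-only build
print("cuDNN:  ", torch.backends.cudnn.version())    # None if cuDNN is absent
print("GPU available:", torch.cuda.is_available())
```

If the CUDA/cuDNN versions printed here do not match the ones you selected for the build, the wrong wheel is on your path.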
Combined with techniques such as AMP and cuDNN benchmarking (which an astonishing number of practitioners neglect), using this template may increase compute speeds by as much as x10, though that is an optimistic estimate. I believe that my project will benefit researchers in both academia and industry. Please star the repository if you are interested, and good luck!
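P.S. For anyone unfamiliar with the two techniques mentioned above, both are one-liners in stock PyTorch. Here is a minimal sketch; the model and input are placeholders, and the `device_type="cpu"` / `bfloat16` combination is used only so the snippet also runs on machines without a GPU (on a GPU you would use `device_type="cuda"`):

```python
import torch
from torch import nn

# cuDNN benchmarking: let cuDNN auto-tune convolution algorithms for your
# input shapes (takes effect when the model actually runs on a CUDA device).
torch.backends.cudnn.benchmark = True

model = nn.Linear(8, 4)   # placeholder model
x = torch.randn(2, 8)     # placeholder input batch

# AMP (automatic mixed precision): run eligible ops in reduced precision.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.shape)  # torch.Size([2, 4])
```

On a GPU you would typically pair `torch.autocast(device_type="cuda")` with `torch.cuda.amp.GradScaler` during training to keep gradients numerically stable.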