Faster PyTorch

This repository is a collection of techniques for speeding up PyTorch training. For each technique, we cover its underlying principles and how to use it with PyTorch (most examples use PyTorch Lightning, which we also introduce).

  1. Experiment monitoring with Weights & Biases (wandb) in 01Wandb
  2. The Lightning deep learning framework in 02Lightning
  3. Distributed training with DeepSpeed in 03Deepspeed
  4. Fully sharded training with FairScale in 04Fairscale
  5. Accelerating existing models with Fabric or Accelerate in 05FabricAndAccelerate