5 Lightning Trainer Flags to take your PyTorch Project to the Next Level

Aaron (Ari) Bornstein
PyTorch Lightning Developer Blog

--

Lightning is a lightweight PyTorch wrapper for high-performance AI research that aims to abstract Deep Learning boilerplate while providing you full control and flexibility over your code. With Lightning, you scale your models, not the boilerplate.

In a previous post, I explained why I think all PyTorch Projects should be leveraging Lightning.

In this post, I’ll walk through a few of my favorite Lightning Trainer Flags that will enable your projects to take advantage of best practices without any code changes.

1. Ensure Reproducibility using Deterministic
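A minimal sketch of what this looks like in practice: seed every random number generator, then pass the deterministic flag to the Trainer. (The seed value is arbitrary; any fixed integer works.)

```python
import pytorch_lightning as pl

# Seed Python, NumPy, and PyTorch RNGs in one call for reproducible runs.
pl.seed_everything(42)

# deterministic=True forces CUDA operations to use deterministic algorithms,
# so two runs with the same seed produce identical results.
trainer = pl.Trainer(deterministic=True)
```

Note that deterministic algorithms can be slower than their non-deterministic counterparts, so this flag is best suited to debugging and experiments you need to reproduce exactly.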

2. Speed up model training with Benchmark

This flag is likely to increase the speed of your training if your input sizes do not change. The speedup comes from allowing the cuDNN auto-tuner to find the best algorithm for your hardware.
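Enabling it is a one-flag change (a configuration sketch; under the hood this sets `torch.backends.cudnn.benchmark`):

```python
import pytorch_lightning as pl

# benchmark=True lets the cuDNN auto-tuner try several convolution
# algorithms on the first batches and cache the fastest one.
# Only worthwhile when input shapes stay constant across batches.
trainer = pl.Trainer(benchmark=True)
```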

3. Automatic Mixed Precision

The precision flag enables 16-bit GPU training, and the amp_level flag sets the optimization level to use, with NVIDIA Apex under the hood.
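A sketch of the relevant flags, assuming a GPU is available (the `amp_backend`/`amp_level` arguments apply to Lightning versions from around the time of writing; newer releases default to PyTorch's native AMP):

```python
import pytorch_lightning as pl

# precision=16 enables mixed-precision training on GPU; amp_backend selects
# NVIDIA Apex, and amp_level="O2" is Apex's "almost FP16" optimization level.
trainer = pl.Trainer(gpus=1, precision=16, amp_backend="apex", amp_level="O2")
```

Mixed precision typically cuts memory usage roughly in half and speeds up training on GPUs with Tensor Cores.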

4. Gradient Clipping

Gradient clipping helps prevent exploding gradients by capping their magnitude during backpropagation: when clipping by norm, gradients whose norm exceeds a threshold are rescaled down to that threshold; when clipping by value, each gradient component is clamped into a fixed range.
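In Lightning this is a single flag (a configuration sketch; the threshold 0.5 is an arbitrary example, and norm clipping is the default behavior):

```python
import pytorch_lightning as pl

# Rescale gradients whose norm exceeds 0.5 before each optimizer step.
# Newer Lightning versions also accept gradient_clip_algorithm="value"
# to clamp individual gradient components instead.
trainer = pl.Trainer(gradient_clip_val=0.5)
```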

5. Early Stopping

The EarlyStopping callback can be used to monitor a validation metric and stop the training when no improvement is observed.
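A minimal sketch of wiring it up. The monitored key `val_loss` is an assumption: use whatever metric name your LightningModule logs with `self.log`:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# Stop training when the logged "val_loss" has not improved for
# 3 consecutive validation runs; mode="min" means lower is better.
early_stop = EarlyStopping(monitor="val_loss", patience=3, mode="min")
trainer = pl.Trainer(callbacks=[early_stop])
```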

Grid

You can get started with Grid.ai for free with just a GitHub or Google account.

Grid.ai enables you to scale training from your laptop to the cloud without having to modify a single line of code. Grid supports all the classic machine learning frameworks, such as TensorFlow, Keras, and PyTorch. Leveraging Lightning features such as Early Stopping, Integrated Logging, Automatic Checkpointing, and the CLI enables you to make the traditional MLOps behind model training seem invisible.

Next Steps

If you have any questions about PyTorch Lightning feel free to reach out to me in the comments or on Twitter or LinkedIn.

About the Author

Aaron (Ari) Bornstein is an AI researcher with a passion for history, engaging with new technologies, and computational medicine. As Head of Developer Advocacy at Grid.ai, he collaborates with the machine learning community to solve real-world problems with game-changing technologies that are then documented, open-sourced, and shared with the rest of the world.
