Morteza Mirzaei

Welcome to my blog. Here, I will share my thoughts.

Scaling Deep Learning with Distributed Training: Data Parallelism to Ring AllReduce

In this post, we’ll explore the crucial role of distributed training in scaling Deep Neural Networks (DNNs) to handle large datasets and complex models. We’ll take an in-depth look at data-parallel training, the most widely used technique in this domain, and dive into its implementation to build an intuitive understanding of how it improves efficiency in deep learning. Why Do We Need Distributed Training? There are several advantages to distributed training, but in my opinion, these two are the most important, and they are the ones we’ll focus on:...

August 11, 2024 · 9 min · Morteza Mirzaei