11/6/23, 9:00 AM
- Meet the instructor.
- Create an account at courses.nvidia.com/join
11/6/23, 9:15 AM
Learn the significance of stochastic gradient descent when training on multiple GPUs
- Understand the issues with sequential single-thread data processing and the theory behind speeding up applications with parallel processing.
- Understand loss function, gradient descent, and stochastic gradient descent (SGD); a minimal SGD sketch follows this list.
- Understand the effect of batch size on accuracy and training time with an eye...
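As a quick illustration of the loss/SGD/batch-size ideas this session covers, here is a minimal sketch, assuming a hypothetical toy linear model and synthetic data (none of these names come from the workshop materials):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 64  # example value; the knob this session studies

# Hypothetical synthetic data: y = 3x + noise, standing in for a real dataset.
x = torch.randn(1024, 1)
y = 3 * x + 0.1 * torch.randn(1024, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=BATCH_SIZE, shuffle=True)

model = torch.nn.Linear(1, 1)
loss_fn = torch.nn.MSELoss()                       # the loss function
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # stochastic gradient descent

for epoch in range(5):
    for xb, yb in loader:            # each minibatch yields a noisy gradient
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()              # gradients of the loss w.r.t. the weights
        opt.step()                   # descend along the (stochastic) gradient
```

Raising BATCH_SIZE reduces the number of optimizer steps per epoch, which is exactly the accuracy-versus-training-time trade-off the session examines.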
11/6/23, 12:15 PM
Learn to convert single GPU training to multiple GPUs using PyTorch Distributed Data Parallel
- Understand how DDP coordinates training among multiple GPUs.
- Refactor single-GPU training programs to run on multiple GPUs with DDP; a refactoring sketch follows this list.
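For orientation, here is a minimal sketch of such a refactor, assuming a launch via `torchrun --nproc_per_node=<num_gpus> train.py` and a hypothetical synthetic dataset; it is not the workshop's own training program:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group(backend="nccl")      # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
torch.cuda.set_device(local_rank)

# Hypothetical synthetic dataset standing in for the workshop's data.
dataset = TensorDataset(torch.randn(1024, 1), torch.randn(1024, 1))
sampler = DistributedSampler(dataset)        # shards the data across ranks
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

model = torch.nn.Linear(1, 1).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])  # gradients all-reduced across GPUs
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for epoch in range(5):
    sampler.set_epoch(epoch)                 # reshuffle the shards each epoch
    for xb, yb in loader:
        xb, yb = xb.cuda(local_rank), yb.cuda(local_rank)
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()    # DDP syncs gradients here
        opt.step()

dist.destroy_process_group()
```

Compared with a single-GPU loop, the changes are the process-group setup, the DistributedSampler, and the DDP wrapper; DDP all-reduces gradients during backward() so every rank applies the same update.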
11/6/23, 2:30 PM
Understand and apply key algorithmic considerations to retain accuracy when training on multiple GPUs
- Understand what might cause accuracy to decrease when parallelizing training on multiple GPUs.
- Learn and understand techniques for maintaining accuracy when scaling training to multiple GPUs; a learning-rate scaling sketch follows this list.
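One widely used technique in this area is the linear learning-rate scaling rule with warmup (Goyal et al., 2017): scaling to N GPUs multiplies the effective batch size by N, so the learning rate is scaled to match and ramped up gradually. The sketch below assumes hypothetical example values for the base rate, world size, and warmup length:

```python
import torch

base_lr = 0.1
world_size = 8                    # e.g., 8 GPUs -> 8x the effective batch size
scaled_lr = base_lr * world_size  # linear scaling rule
warmup_steps = 500                # example warmup length

model = torch.nn.Linear(1, 1)     # hypothetical placeholder model
opt = torch.optim.SGD(model.parameters(), lr=scaled_lr)

def warmup_factor(step: int) -> float:
    # Ramp the LR linearly from ~0 to scaled_lr over the first warmup_steps,
    # avoiding divergence from taking large steps while the model is cold.
    return min(1.0, (step + 1) / warmup_steps)

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=warmup_factor)
# Per training step: call opt.step(), then sched.step() to advance the warmup.
```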
11/6/23, 4:00 PM
Use what you have learned during the workshop: complete the workshop assessment to earn a certificate of competency
11/6/23, 4:30 PM
- Review key learnings and wrap up with questions.
- Take the workshop survey.