Contribution List

  1. 11/6/23, 9:00 AM
    • Meet the instructor.
    • Create an account at courses.nvidia.com/join
  2. 11/6/23, 9:15 AM

    Learn the significance of stochastic gradient descent when training on multiple GPUs

    • Understand the issues with sequential single-thread data processing and the theory behind speeding up applications with parallel processing.
    • Understand loss function, gradient descent, and stochastic gradient descent (SGD); a minimal training-loop sketch follows this item.
    • Understand the effect of batch size on accuracy and training time with an eye...
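
    For orientation, here is a minimal sketch of the kind of single-GPU SGD training loop this session builds on, written in PyTorch. The toy model, data, and hyperparameters are illustrative assumptions, not workshop materials:

```python
import torch

# Toy data and model, chosen only for illustration.
X = torch.randn(1024, 10)   # 1024 samples, 10 features
y = torch.randn(1024, 1)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

batch_size = 32  # trades gradient noise against per-step cost
for epoch in range(5):
    for i in range(0, len(X), batch_size):
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        optimizer.zero_grad()           # clear gradients from the previous step
        loss = loss_fn(model(xb), yb)   # forward pass and loss
        loss.backward()                 # gradients of the loss w.r.t. parameters
        optimizer.step()                # SGD update: w <- w - lr * grad
```

    The batch size here is the knob the rest of the workshop scales: with more GPUs, the effective batch per update grows, which affects both training time and accuracy.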
  3. 11/6/23, 12:15 PM

    Learn to convert single GPU training to multiple GPUs using PyTorch Distributed Data Parallel

    • Understand how DDP coordinates training among multiple GPUs.
    • Refactor single-GPU training programs to run on multiple GPUs with DDP; a minimal sketch follows this item.
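
    A rough sketch of the refactor this session covers, using PyTorch DistributedDataParallel. It assumes a torchrun launch, and the toy model and data are placeholders rather than the workshop's code:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).to(local_rank)
    # DDP wraps the model; backward() all-reduces gradients across GPUs.
    ddp_model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    # Toy batch; a real program would shard data with a DistributedSampler
    # so that each rank sees a different slice of the dataset.
    xb = torch.randn(32, 10, device=local_rank)
    yb = torch.randn(32, 1, device=local_rank)

    optimizer.zero_grad()
    loss = loss_fn(ddp_model(xb), yb)
    loss.backward()   # gradients are synchronized across ranks here
    optimizer.step()  # every rank applies the same averaged update

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

    A command such as `torchrun --nproc_per_node=4 train.py` (file name assumed) starts one process per GPU; each rank runs the same script, and DDP keeps the model replicas in sync.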
  4. 11/6/23, 2:30 PM

    Understand and apply key algorithmic considerations to retain accuracy when training on multiple GPUs

    • Understand what might cause accuracy to decrease when parallelizing training on multiple GPUs.
    • Learn and understand techniques for maintaining accuracy when scaling training to multiple GPUs; one common technique is sketched after this item.
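
    One widely used technique in this area is the linear learning-rate scaling rule with gradual warmup; whether it matches this session's exact content is an assumption, and all values below are illustrative:

```python
import torch

base_lr = 0.1       # learning rate tuned for a single GPU (assumed value)
world_size = 4      # number of GPUs, so the effective batch grows 4x
warmup_steps = 500  # assumed warmup length

model = torch.nn.Linear(10, 1)
# Linear scaling rule: scale the learning rate with the effective batch size.
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr * world_size)

# Gradual warmup ramps from the single-GPU rate up to the scaled rate,
# which helps avoid divergence early in large-batch training.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0 / world_size, total_iters=warmup_steps
)
for step in range(warmup_steps):
    # ... forward, backward, and a real parameter update would go here ...
    optimizer.step()  # placeholder update so the scheduler ordering is valid
    warmup.step()     # advance the learning rate toward base_lr * world_size
```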
  5. 11/6/23, 4:00 PM

    Apply what you have learned during the workshop by completing the assessment to earn a certificate of competency

  6. 11/6/23, 4:30 PM
    • Review key learnings and address remaining questions.
    • Take the workshop survey.