Week 8-2-9-Copy-0
● Distributes a model over multiple machines: the model parameters are partitioned across a cluster, which makes it feasible to train models too large to fit on a single machine.
● Offers two operations:
○ Pull: Workers can query parts of the model from the Parameter Server.
○ Push: Workers can update parts of the model by pushing their computed
gradients to the Parameter Server.
Together, pull and push provide the coordination and synchronization primitives on which distributed machine learning algorithms are built.
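The pull/push loop above can be sketched in a few lines. This is a minimal single-process illustration, not any particular library's API: the `ParameterServer` class, its `pull`/`push` methods, and the SGD update rule are all assumptions made for the example.

```python
class ParameterServer:
    """Sketch of a parameter server: holds model parameters keyed by name.

    Workers pull current values and push gradients; the server applies
    a plain SGD step (w <- w - lr * grad) on each push.
    """

    def __init__(self, params, lr=0.1):
        self.params = dict(params)  # parameter name -> value
        self.lr = lr                # server-side learning rate

    def pull(self, keys):
        """Workers query parts of the model by key."""
        return {k: self.params[k] for k in keys}

    def push(self, grads):
        """Workers send computed gradients; the server updates the model."""
        for k, g in grads.items():
            self.params[k] -= self.lr * g


# Two synchronous rounds of one worker minimizing f(w) = w^2 (gradient 2w)
ps = ParameterServer({"w": 4.0}, lr=0.25)
for _ in range(2):
    w = ps.pull(["w"])["w"]   # pull the current parameter
    ps.push({"w": 2 * w})     # push the computed gradient
print(ps.params["w"])         # prints 1.0 (moving toward the optimum at 0)
```

In a real deployment the parameters would be sharded across many server nodes and many workers would pull and push concurrently, but the two-operation interface is exactly this simple.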