[DB Seminar] Spring 2016: Wei Dai
In this talk I will first give a brief overview of Petuum, which encompasses a set of distributed machine learning principles as well as our open-source implementations. By discussing the high-level ideas and performance highlights, I hope to show that Big ML systems can benefit greatly from ML-rooted statistical and algorithmic insights.
In the second part I will dive into a key component of Petuum: the Bosen parameter server (PS), with particular interest in how consistency models that allow delayed synchronization affect ML programs. I will discuss the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms under existing PS consistency models. I will then use the gleaned insights to improve a consistency model with an "eager" PS communication mechanism, and implement it as a new PS system that enables ML programs to reach their solutions more quickly.
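To make the idea of delayed synchronization concrete, the toy sketch below illustrates one such consistency model, bounded staleness (often called stale synchronous parallel): each worker buffers updates during an iteration, flushes them at a clock boundary, and may only read parameters while it is at most a fixed number of clocks ahead of the slowest worker. This is a minimal single-process illustration under assumed semantics, not Bosen's actual API; the class and method names (ToyStaleSyncServer, inc, clock, get) are invented for this example.

    # Toy illustration of a bounded-staleness (stale synchronous parallel)
    # consistency model. Hypothetical sketch only; not Bosen's real interface.
    from collections import defaultdict

    class ToyStaleSyncServer:
        def __init__(self, num_workers, staleness):
            self.staleness = staleness        # max allowed clock gap between workers
            self.clocks = [0] * num_workers   # per-worker iteration counters
            self.params = defaultdict(float)  # shared model parameters
            # Updates buffered locally until the worker's next clock()
            self.pending = [defaultdict(float) for _ in range(num_workers)]

        def inc(self, worker, key, delta):
            """Buffer an additive update; it becomes visible at the next clock()."""
            self.pending[worker][key] += delta

        def clock(self, worker):
            """End of an iteration: flush buffered updates, advance the worker's clock."""
            for key, delta in self.pending[worker].items():
                self.params[key] += delta
            self.pending[worker].clear()
            self.clocks[worker] += 1

        def get(self, worker, key):
            """Read a parameter, allowed only while the caller is at most
            `staleness` clocks ahead of the slowest worker; a real PS would block."""
            if self.clocks[worker] - min(self.clocks) > self.staleness:
                raise RuntimeError("worker must wait: staleness bound exceeded")
            return self.params[key]

    # Example: two workers, staleness bound of 1.
    ps = ToyStaleSyncServer(num_workers=2, staleness=1)
    ps.inc(worker=0, key="w3", delta=0.5)
    ps.clock(worker=0)                    # worker 0 finishes iteration 0
    print(ps.get(worker=0, key="w3"))     # ok: clock gap is 1 <= staleness

Under this model a fast worker can run ahead using slightly stale parameters instead of waiting at a barrier every iteration; the "eager" communication mechanism discussed in the talk concerns how aggressively such buffered updates are propagated within the staleness bound.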