
Computation Scheduling for Distributed Machine Learning with Straggling Workers. (arXiv:1810.09992v1 [cs.DC])

Source: arXiv
We study the scheduling of computation tasks across $n$ workers in a large-scale distributed learning problem. Computation speeds of the workers are assumed to be heterogeneous and unknown to the master, and redundant computations are assigned to the workers in order to tolerate stragglers. We consider sequential computation and instantaneous communication from each worker to the master, and each computation round, which can model a single iteration of the stochastic gradient descent algorithm, is completed once the master receives $k$ distinct computations from the workers. Our goal is to characterize the average completion time as a function of the computation load, which denotes the portion of the dataset available at each worker. We propose two computation scheduling schemes that specify the computation tasks assigned to each worker, as well as their computation schedule, i.e., the order of execution, and derive the corresponding average completion time in closed form. We also … Read the full text >>
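
The abstract only sketches the setup, so here is a minimal simulation sketch of the round-completion rule it describes: $n$ workers hold redundant data partitions, process them sequentially, and a round ends once the master has received $k$ distinct computations. The cyclic assignment, the exponential per-task times, and the parameter values below are illustrative assumptions, not the paper's proposed schemes.

# A minimal sketch (not the paper's schemes) of one straggler-tolerant round:
# redundant partitions are assigned to workers, each worker computes its
# partitions sequentially, and the round ends when k distinct results arrive.
import random

def simulate_round(n, k, load, seed=0):
    """Simulate one computation round with n workers.

    load: number of data partitions stored at each worker (computation load).
    Returns the time at which the master has received k distinct computations.
    """
    rng = random.Random(seed)
    # Assumed cyclic redundant assignment: worker w holds partitions
    # w, w+1, ..., w+load-1 (mod k).
    assignments = [[(w + j) % k for j in range(load)] for w in range(n)]
    # Heterogeneous, unknown speeds: assumed random per-task time per worker.
    task_time = [rng.expovariate(1.0) + 0.1 for _ in range(n)]

    # Sequential computation, instantaneous communication: worker w delivers
    # its j-th assigned partition at time (j + 1) * task_time[w].
    arrivals = []  # (finish_time, partition_index)
    for w in range(n):
        for j, part in enumerate(assignments[w]):
            arrivals.append(((j + 1) * task_time[w], part))
    arrivals.sort()

    received = set()
    for t, part in arrivals:
        received.add(part)
        if len(received) == k:   # k distinct computations collected: round done
            return t
    return float("inf")          # assignment did not cover k distinct partitions

# Average completion time over many simulated rounds, as a function of the load.
n, k = 10, 8
for load in (1, 2, 4):
    times = [simulate_round(n, k, load, seed=s) for s in range(2000)]
    print(load, sum(times) / len(times))

Sweeping the load in this toy model illustrates the trade-off the paper analyzes: more redundancy per worker reduces the wait for $k$ distinct results at the cost of a larger computation load at each worker.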