
A simplified convergence theory for Byzantine resilient stochastic gradient descent

In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training such as stochastic gradient descent (SGD) fail to converge.
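To make the failure mode concrete, here is a minimal sketch (not the paper's method) of distributed SGD on a least-squares problem. The server aggregates worker gradients either by plain averaging, which a single Byzantine worker can derail, or by a coordinate-wise median, one simple example of a Byzantine-resilient aggregator. The worker count, local data, and attack below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, n_byzantine = 5, 10, 2
x_star = rng.normal(size=d)  # ground-truth model (illustrative)
A = [rng.normal(size=(20, d)) for _ in range(n_workers)]
b = [Ai @ x_star + 0.01 * rng.normal(size=20) for Ai in A]

def local_grad(i, x):
    """Mini-batch stochastic gradient of worker i's least-squares loss."""
    Ai, bi = A[i], b[i]
    idx = rng.integers(0, Ai.shape[0], size=5)
    return Ai[idx].T @ (Ai[idx] @ x - bi[idx]) / len(idx)

def run_sgd(aggregate, steps=500, lr=0.05):
    x = np.zeros(d)
    for _ in range(steps):
        grads = np.stack([local_grad(i, x) for i in range(n_workers)])
        # Byzantine workers may send arbitrary vectors; here they send
        # large adversarial noise instead of honest gradients.
        grads[:n_byzantine] = 100.0 * rng.normal(size=(n_byzantine, d))
        x -= lr * aggregate(grads)
    return np.linalg.norm(x - x_star)

print("mean aggregation error:  ", run_sgd(lambda g: g.mean(axis=0)))
print("median aggregation error:", run_sgd(lambda g: np.median(g, axis=0)))
```

Running this, the mean-aggregated iterates are dominated by the adversarial updates and never approach the true model, while the median-aggregated run converges close to it, since the median ignores a minority of outlier coordinates.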