Distributed optimization with inexact oracle
In this paper, we study the distributed optimization problem using approximate first-order information. We suppose that each agent can repeatedly call an inexact first-order oracle of its individual objective function and exchange information with its time-varying neighbors. We revisit the distributed subgradient method in this setting and show its suboptimality under square-summable but not summable step sizes. We also present several conditions on the inexactness of the local oracles to ensure...
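For concreteness, the classical distributed subgradient iteration that this setting perturbs can be sketched as follows; the exact recursion, mixing weights $w_{ij}(k)$, and inexact subgradient notation $\tilde g_i(k)$ used here are assumptions rather than the paper's stated formulation:
$$
x_i(k+1) \;=\; \sum_{j=1}^{n} w_{ij}(k)\, x_j(k) \;-\; \alpha_k\, \tilde g_i(k),
\qquad
\sum_{k\ge 0} \alpha_k = \infty,
\quad
\sum_{k\ge 0} \alpha_k^2 < \infty,
$$
where $\tilde g_i(k)$ is the output of agent $i$'s inexact first-order oracle at $x_i(k)$ (for example, an approximate subgradient of its local objective $f_i$), and $w_{ij}(k)$ are consensus weights over the time-varying neighbor graph. The square-summable but not summable condition is satisfied, for instance, by $\alpha_k = 1/(k+1)$.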