Adv Neural Inf Process Syst. Author manuscript; available in PMC 2016 June 15.
Published in final edited form as:
Adv Neural Inf Process Syst. 2015 December; 28: 2656–2664.
PMCID: PMC4907892
NIHMSID: NIHMS771641

Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms

Abstract

Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD’s runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.

1 Introduction

Many problems in machine learning can be written as a stochastic optimization problem

\text{minimize} \quad \mathbb{E}[\tilde{f}(x)] \quad \text{over} \quad x \in \mathbb{R}^n,

where f̃ is a random objective function. One popular method to solve this is stochastic gradient descent (SGD), an iterative method which, at each timestep t, chooses a random objective sample f̃_t and updates

x_{t+1} = x_t - \alpha \nabla \tilde{f}_t(x_t),
(1)

where α is the step size. For most problems, this update step is easy to compute, and perhaps because of this SGD is a ubiquitous algorithm with a wide range of applications in machine learning [1], including neural network back-propagation [2, 3, 13], recommendation systems [8, 19], and optimization [20]. For non-convex problems, SGD is popular—in particular, it is widely used in deep learning—but its success is poorly understood theoretically.
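To make the update concrete, here is a minimal sketch of iteration (1) in Python on a toy least-squares objective; the objective, constants, and variable names are ours, chosen for illustration rather than taken from any experiment in this paper.

```python
import numpy as np

# Toy instance of update (1): f(x) = E[(a^T x - a^T x*)^2] with Gaussian samples a.
rng = np.random.default_rng(0)
n, alpha, T = 10, 0.01, 5000
x_star = rng.standard_normal(n)        # the optimum we hope to approach
x = np.zeros(n)

for t in range(T):
    a = rng.standard_normal(n)         # random sample defining f_t
    grad = 2.0 * (a @ (x - x_star)) * a    # unbiased sample of the gradient
    x = x - alpha * grad               # the SGD step (1)

print(np.linalg.norm(x - x_star))      # distance to x*; should be small
```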

Given SGD’s success in industry, practitioners have developed methods to speed up its computation. One popular method to speed up SGD and related algorithms is using asynchronous execution. In an asynchronous algorithm, such as Hogwild! [17], multiple threads run an update rule such as Equation 1 in parallel without locks. Hogwild! and other lock-free algorithms have been applied to a variety of uses, including PageRank approximations (FrogWild! [16]), deep learning (Dogwild! [18]) and recommender systems [24]. Many asynchronous versions of other stochastic algorithms have been individually analyzed, such as stochastic coordinate descent (SCD) [14, 15] and accelerated parallel proximal coordinate descent (APPROX) [6], producing rate results that are similar to those of Hogwild! Recently, Gupta et al. [9] gave an empirical analysis of the effects of a low-precision variant of SGD on neural network training. Other variants of stochastic algorithms have been proposed [5, 11, 12, 21, 22, 23]; only a fraction of these algorithms have been analyzed in the asynchronous case. Unfortunately, a new variant of SGD (or a related algorithm) may violate the assumptions of existing analyses, and hence there are gaps in our understanding of these techniques.
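The lock-free pattern itself is simple to sketch. The following Python fragment mimics Hogwild!-style execution: several workers apply sparse updates to a shared iterate with no synchronization. Since CPython's global interpreter lock serializes bytecode, this illustrates the racy access pattern rather than a real speedup (production implementations are written in C/C++), and the toy objective and constants are ours.

```python
import threading
import numpy as np

# Hogwild!-style sketch: workers update one shared iterate without locks.
rng = np.random.default_rng(0)
n, alpha, steps = 100, 0.05, 20000
x_star = rng.standard_normal(n)
x = np.zeros(n)                       # shared iterate, written with no locking

def worker(seed):
    r = np.random.default_rng(seed)
    for _ in range(steps):
        i = r.integers(n)             # sparse sample touching one coordinate
        g = x[i] - x_star[i]          # gradient of 0.5 * (x_i - x*_i)^2
        x[i] -= alpha * g             # racy read-modify-write, by design

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for th in threads: th.start()
for th in threads: th.join()
print(np.linalg.norm(x - x_star))     # near 0 despite the data races
```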

One approach to filling this gap is to analyze each purpose-built extension from scratch: an entirely new model for each type of asynchrony, each type of precision, etc. In a practical sense, this may be unavoidable, but ideally there would be a single technique that could analyze many models. In this vein, we prove a martingale-based result that enables us to treat many different extensions as different forms of noise within a unified model. We demonstrate our technique with three results:

  • For the convex case, Hogwild! requires strict sparsity assumptions. Using our techniques, we are able to relax these assumptions and still derive convergence rates. Moreover, under Hogwild!’s stricter assumptions, we recover the previous convergence rates.
  • We derive convergence results for an asynchronous SGD algorithm for a non-convex matrix completion problem. We derive the first rates for asynchronous SGD following the recent (synchronous) non-convex SGD work of De Sa et al. [4].
  • We derive convergence rates in the presence of quantization errors such as those introduced by fixed-point arithmetic. We validate our results experimentally, and show that Buckwild! can achieve speedups of up to 2.3× over Hogwild!-based algorithms for logistic regression.

One can combine these different methods both theoretically and empirically. We begin with our main result, which describes our martingale-based approach and our model.

2 Main Result

Analyzing asynchronous algorithms is challenging because, unlike in the sequential case where there is a single copy of the iterate x, in the asynchronous case each core has a separate copy of x in its own cache. Writes from one core may take some time to be propagated to another core’s copy of x, which results in race conditions where stale data is used to compute the gradient updates. This difficulty is compounded in the non-convex case, where a series of unlucky random events—bad initialization, inauspicious steps, and race conditions—can cause the algorithm to get stuck near a saddle point or in a local minimum.

Broadly, we analyze algorithms that repeatedly update x by running an update step

x_{t+1} = x_t - \tilde{G}_t(x_t),
(2)

for some i.i.d. update function G̃_t. For example, for SGD, we would have G̃(x) = α∇f̃(x). The goal of the algorithm is to produce an iterate in some success region S—for example, a ball centered at the optimum x*. For any T, after running the algorithm for T timesteps, we say that the algorithm has succeeded if x_t ∈ S for some t ≤ T; otherwise, we say that the algorithm has failed, and we denote this failure event as F_T.
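For intuition, this success/failure formalism can be exercised directly by Monte Carlo; the sketch below estimates P(F_T) for the toy objective used earlier, with a ball as the success region. All constants are arbitrary choices of ours.

```python
import numpy as np

# Estimate P(F_T): the algorithm fails if x_t never enters the ball S of
# squared radius eps around x* within T steps. Constants are illustrative.
rng = np.random.default_rng(1)
n, alpha, T, eps, trials = 10, 0.02, 400, 0.05, 200

def succeeded():
    x_star = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(T):
        a = rng.standard_normal(n)
        x -= alpha * 2.0 * (a @ (x - x_star)) * a   # step of G(x) = alpha*grad
        if np.sum((x - x_star) ** 2) <= eps:        # x_t entered S
            return True
    return False                                    # failure event F_T

print("estimated P(F_T) =", np.mean([not succeeded() for _ in range(trials)]))
```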

Our main result is a technique that allows us to bound the convergence rates of asynchronous SGD and related algorithms, even for some non-convex problems. We use martingale methods, which have produced elegant convergence rate results for both convex and some non-convex [4] algorithms. Martingales enable us to model multiple forms of error—for example, from stochastic sampling, random initialization, and asynchronous delays—within a single statistical model. Compared to standard techniques, they also allow us to analyze algorithms that sometimes get stuck, which is useful for non-convex problems. Our core contribution is that a martingale-based proof for the convergence of a sequential stochastic algorithm can be easily modified to give a convergence rate for an asynchronous version.

A supermartingale [7] is a stochastic process Wt such that E[Wt+1|Wt] ≤ Wt. That is, the expected value is non-increasing over time. A martingale-based proof of convergence for the sequential version of this algorithm must construct a supermartingale Wt(xt, xt−1, …, x0) that is a function of both the time and the current and past iterates; this function informally represents how unhappy we are with the current state of the algorithm. Typically, it will have the following properties.

Definition 1

For a stochastic algorithm as described above, a non-negative process W_t : ℝ^{n×t} → ℝ is a rate supermartingale with horizon B if the following conditions are true. First, it must be a supermartingale; that is, for any sequence x_t, …, x_0 and any t ≤ B,

\mathbb{E}\left[ W_{t+1}(x_t - \tilde{G}_t(x_t), x_t, \ldots, x_0) \right] \le W_t(x_t, x_{t-1}, \ldots, x_0).
(3)

Second, for all times T ≤ B and for any sequence x_T, …, x_0, if the algorithm has not succeeded by time T (that is, x_t ∉ S for all t < T), it must hold that

W_T(x_T, x_{T-1}, \ldots, x_0) \ge T.
(4)

This represents the fact that we are unhappy with running for many iterations without success.

Using this, we can easily bound the convergence rate of the sequential version of the algorithm.

Statement 1

Assume that we run a sequential stochastic algorithm, for which W is a rate supermartingale. For any TB, the probability that the algorithm has not succeeded by time T is

P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{T}.

Proof

In what follows, we let Wt denote the actual value taken on by the function in a process defined by (2). That is, Wt = Wt(xt, xt−1, …, x0). By applying (3) recursively, for any T,

\mathbb{E}[W_T] \le \mathbb{E}[W_0] = \mathbb{E}[W_0(x_0)].

By the law of total expectation applied to the failure event FT,

\mathbb{E}[W_0(x_0)] \ge \mathbb{E}[W_T] = P(F_T)\, \mathbb{E}[W_T \mid F_T] + P(\neg F_T)\, \mathbb{E}[W_T \mid \neg F_T].

Applying (4), i.e. E [WT|FT] ≥ T, and recalling that W is nonnegative results in

\mathbb{E}[W_0(x_0)] \ge P(F_T)\, T;

rearranging terms produces the result in Statement 1.

This technique is very general; in subsequent sections we show that rate supermartingales can be constructed for SGD on all convex problems and for some algorithms for non-convex problems.

2.1 Modeling Asynchronicity

The behavior of an asynchronous SGD algorithm depends both on the problem it is trying to solve and on the hardware it is running on. For ease of analysis, we assume that the hardware has the following characteristics. These are basically the same assumptions used to prove the original Hogwild! result [17].

  • There are multiple threads running iterations of (2), each with their own cache. At any point in time, these caches may hold different values for the variable x, and they communicate via some cache coherency protocol.
  • There exists a central store S (typically RAM) at which all writes are serialized. This provides a consistent value for the state of the system at any point in real time.
  • If a thread performs a read R of a previously written value X, and then writes another value Y (dependent on R), then the write that produced X will be committed to S before the write that produced Y.
  • Each write from an iteration of (2) is to only a single entry of x and is done using an atomic read-add-write instruction. That is, there are no write-after-write races (handling these is possible, but complicates the analysis).

Notice that, if we let x_t denote the value of the vector x in the central store S after t writes have occurred, then since the writes are atomic, the value of x_{t+1} is solely dependent on the single thread that produces the write that is serialized next in S. If we let G̃_t denote the update function sample that is used by that thread for that write, and ṽ_t denote the cached value of x used by that write, then

x_{t+1} = x_t - \tilde{G}_t(\tilde{v}_t).
(5)

Our hardware model further constrains the value of ṽ_t: all the read elements of ṽ_t must have been written to S at some time before t. Therefore, for some nonnegative variable τ̃_{i,t},

e_i^T \tilde{v}_t = e_i^T x_{t - \tilde{\tau}_{i,t}},
(6)

where e_i is the ith standard basis vector. We can think of τ̃_{i,t} as the delay in the ith coordinate caused by the parallel updates.

We can conceive of this system as a stochastic process with two sources of randomness: the noisy update function samples G̃_t and the delays τ̃_{i,t}. We assume that the G̃_t are independent and identically distributed—this is reasonable because they are sampled independently by the updating threads. It would be unreasonable, though, to assume the same for the τ̃_{i,t}, since delays may very well be correlated in the system. Instead, we assume that the delays are bounded from above by some random variable τ̃. Specifically, if ℱ_t, the filtration, denotes all random events that occurred before timestep t, then for any i, t, and k,

P\left( \tilde{\tau}_{i,t} \ge k \mid \mathcal{F}_t \right) \le P\left( \tilde{\tau} \ge k \right).
(7)

We let τ = E[τ̃], and call τ the worst-case expected delay.
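This delay model is easy to simulate: keep the full write history x_0, …, x_t and, at each step, assemble the stale read ṽ_t coordinate by coordinate as in (6), with delays dominated by a fixed distribution as in (7). A sketch, with a geometric delay distribution and constants of our choosing:

```python
import numpy as np

# Simulate reads (6) under bound (7): each coordinate of v_t is a stale copy
# x_{t - tau_{i,t}}, with i.i.d. geometric delays standing in for tau.
rng = np.random.default_rng(0)
n, T, alpha = 5, 1000, 0.05
x_star = rng.standard_normal(n)
history = [np.zeros(n)]                       # x_0, x_1, ... in the store S

for t in range(T):
    delays = rng.geometric(p=0.5, size=n) - 1            # tau_{i,t} >= 0
    v = np.array([history[max(t - d, 0)][i]              # stale reads, eq. (6)
                  for i, d in enumerate(delays)])
    i = rng.integers(n)                                  # single-entry write,
    x_new = history[-1].copy()                           # as in the model above
    x_new[i] -= alpha * (v[i] - x_star[i])               # gradient sample at v
    history.append(x_new)

print(np.linalg.norm(history[-1] - x_star))              # still converges
```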

2.2 Convergence Rates for Asynchronous SGD

Now that we are equipped with a stochastic model for the asynchronous SGD algorithm, we show how we can use a rate supermartingale to give a convergence rate for asynchronous algorithms. To do this, we need some continuity and boundedness assumptions; we collect these into a definition, and then state the theorem.

Definition 2

An algorithm with rate supermartingale W is (H, R, ξ)-bounded if the following conditions hold. First, W must be Lipschitz continuous in the current iterate with parameter H; that is, for any t, u, v, and sequence x_t, …, x_0,

\left| W_t(u, x_{t-1}, \ldots, x_0) - W_t(v, x_{t-1}, \ldots, x_0) \right| \le H \left\| u - v \right\|.
(8)

Second, G̃ must be Lipschitz continuous in expectation with parameter R; that is, for any u and v,

\mathbb{E}\left[ \left\| \tilde{G}(u) - \tilde{G}(v) \right\| \right] \le R \left\| u - v \right\|_1.
(9)

Third, the expected magnitude of the update must be bounded by ξ. That is, for any x,

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right] \le \xi.
(10)

Theorem 1

Assume that we run an asynchronous stochastic algorithm with the above hardware model, for which W is a (H, R, ξ)-bounded rate supermartingale with horizon B. Further assume that HRξτ < 1. For any TB, the probability that the algorithm has not succeeded by time T is

P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{(1 - HR\xi\tau)\, T}.

Note that this rate depends only on the worst-case expected delay τ and not on any other properties of the hardware model. Compared to the result of Statement 1, the probability of failure has increased only by a factor of (1 − HRξτ)^{-1}. In most practical cases, HRξτ ≪ 1, so this increase is negligible.

Since the proof of this theorem is simple, but uses non-standard techniques, we outline it here. First, notice that the process W_t, which was a supermartingale in the sequential case, is not in the asynchronous case because of the delayed updates. Our strategy is to use W to produce a new process V_t that is a supermartingale in this case. For any t and any sequence x_t, …, x_0, if x_u ∉ S for all u < t, we define

V_t(x_t, \ldots, x_0) = W_t(x_t, \ldots, x_0) - HR\xi\tau t + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m).

Compared with W, there are two additional terms here. The first term is negative, and cancels out some of the unhappiness from (4) that we ascribed to running for many iterations. We can interpret this as us accepting that we may need to run for more iterations than in the sequential case. The second term measures the distance between recent iterates; we would be unhappy if this becomes large because then the noise from the delayed updates would also be large. On the other hand, if x_u ∈ S for some u < t, then we define

V_t(x_t, \ldots, x_u, \ldots, x_0) = V_u(x_u, \ldots, x_0).

We call Vt a stopped process because its value doesn’t change after success occurs. It is straightforward to show that Vt is a supermartingale for the asynchronous algorithm. Once we know this, the same logic used in the proof of Statement 1 can be used to prove Theorem 1.

Theorem 1 gives us a straightforward way of bounding the convergence time of any asynchronous stochastic algorithm. First, we find a rate supermartingale for the problem; this is typically no harder than proving sequential convergence. Second, we find parameters such that the problem is (H, R, ξ)-bounded; this is typically easy for well-behaved problems, using differentiation to bound the Lipschitz constants. Third, we apply Theorem 1 to get a rate for asynchronous SGD. Using this method, analyzing an asynchronous algorithm is really no more difficult than analyzing its sequential analog.
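As plain arithmetic, the last step of the recipe looks like the following; the constants below are made up solely to show how the bound is evaluated.

```python
# Evaluating the Theorem 1 bound for hypothetical constants (all made up).
H, R, xi, tau = 2.0, 0.01, 0.05, 8.0    # (H, R, xi)-bound and expected delay
EW0, T = 500.0, 10_000                  # E[W_0(x_0)] and the time horizon

slack = 1.0 - H * R * xi * tau          # Theorem 1 requires this be positive
assert slack > 0, "need H*R*xi*tau < 1"
print("P(F_T) <=", EW0 / (slack * T))   # 0.0504, vs. EW0/T = 0.05 sequentially
```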

3 Applications

Now that we have proved our main result, we turn our attention to applications. We show, for a couple of algorithms, how to construct a rate supermartingale. We demonstrate that doing this allows us to recover known rates for Hogwild! algorithms as well as analyze cases where no known rates exist.

3.1 Convex Case, High Precision Arithmetic

First, we consider the simple case of using asynchronous SGD to minimize a convex function f(x) using unbiased gradient samples ∇f̃(x). That is, we run the update rule

x_{t+1} = x_t - \alpha \nabla \tilde{f}_t(x_t).
(11)

We make the standard assumption that f is strongly convex with parameter c; that is, for all x and y

(x - y)^T \left( \nabla f(x) - \nabla f(y) \right) \ge c \left\| x - y \right\|^2.
(12)

We also assume continuous differentiability of ∇f̃ with 1-norm Lipschitz constant L,

\mathbb{E}\left[ \left\| \nabla\tilde{f}(x) - \nabla\tilde{f}(y) \right\| \right] \le L \left\| x - y \right\|_1.
(13)

We require that the second moment of the gradient sample is also bounded for some M > 0 by

\mathbb{E}\left[ \left\| \nabla\tilde{f}(x) \right\|^2 \right] \le M^2.
(14)

For some ε > 0, we let the success region be

S = \left\{ x \,:\, \left\| x - x^* \right\|^2 \le \varepsilon \right\}.

Under these conditions, we can construct a rate supermartingale for this algorithm.

Lemma 1

There exists a W_t where, if the algorithm hasn't succeeded by timestep t,

W_t(x_t, \ldots, x_0) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2} \log\left( e \left\| x_t - x^* \right\|^2 \varepsilon^{-1} \right) + t,

such that W_t is a rate supermartingale for the above algorithm with horizon B = ∞. Furthermore, it is (H, R, ξ)-bounded with parameters H = 2√ε (2αcε − α²M²)^{-1}, R = αL, and ξ = αM.

Using this and Theorem 1 gives us a direct bound on the failure rate of convex Hogwild! SGD.

Corollary 1

Assume that we run an asynchronous version of the above SGD algorithm, where for some constant ϑ ∈ (0, 1) we choose step size

\alpha = \frac{c \varepsilon \vartheta}{M^2 + 2LM\tau\sqrt{\varepsilon}}.

Then for any T, the probability that the algorithm has not succeeded by time T is

P(F_T) \le \frac{M^2 + 2LM\tau\sqrt{\varepsilon}}{c^2 \varepsilon \vartheta T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right).

This result is more general than the result in Niu et al. [17]. The main differences are: that we make no assumptions about the sparsity structure of the gradient samples; and that our rate depends only on the second moment of G̃ and the expected value of τ̃, as opposed to requiring absolute bounds on their magnitude. Under their stricter assumptions, the result of Corollary 1 recovers their rate.
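To see how the statement behaves quantitatively, one can plug hypothetical constants into Corollary 1 and watch the step size shrink and the required T grow as the expected delay τ increases; all numbers below are invented for illustration.

```python
import math

# Corollary 1 with made-up constants: alpha and the T needed for a 10%
# failure bound, as functions of the worst-case expected delay tau.
c, L, M, eps, theta = 1.0, 1.0, 10.0, 1e-2, 0.5
x0_sq = 100.0                                   # ||x_0 - x*||^2

for tau in [0.0, 1.0, 10.0, 100.0]:
    denom = M**2 + 2 * L * M * tau * math.sqrt(eps)
    alpha = c * eps * theta / denom
    T = denom / (c**2 * eps * theta * 0.10) * math.log(math.e * x0_sq / eps)
    print(f"tau={tau:6.1f}  alpha={alpha:.2e}  T for P(F_T)<=0.1: {T:.2e}")
```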

3.2 Convex Case, Low Precision Arithmetic

One of the ways Buckwild! achieves high performance is by using low-precision fixed-point arithmetic. This introduces additional noise to the system in the form of round-off error. We consider this error to be part of the Buckwild! hardware model. We assume that the round-off error can be modeled by an unbiased rounding function operating on the update samples. That is, for some chosen precision factor κ, there is a random quantization function Q such that, for any x ∈ ℝ, it holds that E[Q(x)] = x, and the round-off error is bounded by |Q(x) − x| < ακM. Using this function, we can write a low-precision asynchronous update rule for convex SGD as

x_{t+1} = x_t - \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(\tilde{v}_t) \right),
(15)

where Q̃_t operates only on the single nonzero entry of ∇f̃_t(ṽ_t). In the same way as we did in the high-precision case, we can use these properties to construct a rate supermartingale for the low-precision version of the convex SGD algorithm, and then use Theorem 1 to bound the failure rate of convex Buckwild!
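One standard way to realize such an unbiased Q is to round to one of the two neighboring points of a fixed grid, with probabilities proportional to proximity; the sketch below is our own illustration of this scheme, not code from Buckwild!

```python
import numpy as np

# Unbiased stochastic rounding to a grid of spacing delta: E[Q(x)] = x and
# |Q(x) - x| < delta, matching the properties assumed of Q in the text.
def quantize(x, delta, rng):
    lo = np.floor(x / delta) * delta        # nearest grid point below x
    p_up = (x - lo) / delta                 # rounding up w.p. p_up is unbiased
    return lo + delta * (rng.random(np.shape(x)) < p_up)

rng = np.random.default_rng(0)
samples = quantize(np.full(100_000, 0.3137), delta=0.1, rng=rng)
print(samples.mean())                       # ~0.3137, confirming E[Q(x)] = x
```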

Corollary 2

Assume that we run asynchronous low-precision convex SGD, and for some ϑ ∈ (0, 1) we choose step size

\alpha = \frac{c \varepsilon \vartheta}{M^2(1 + \kappa^2) + LM\tau(2 + \kappa^2)\sqrt{\varepsilon}},

then for any T, the probability that the algorithm has not succeeded by time T is

P(F_T) \le \frac{M^2(1 + \kappa^2) + LM\tau(2 + \kappa^2)\sqrt{\varepsilon}}{c^2 \varepsilon \vartheta T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right).

Typically, we choose a precision such that κ ≪ 1; in this case, the increased error compared to the result of Corollary 1 will be negligible and we will converge in a number of samples that is very similar to the high-precision, sequential case. Since each Buckwild! update runs in less time than an equivalent Hogwild! update, this result means that an execution of Buckwild! will produce same-quality output in less wall-clock time compared with Hogwild!

3.3 Non-Convex Case, High Precision Arithmetic

Many machine learning problems are non-convex, but are still solved in practice with SGD. In this section, we show that our technique can be adapted to analyze non-convex problems. Unfortunately, there are no general convergence results that provide rates for SGD on non-convex problems, so it would be unreasonable to expect a general proof of convergence for non-convex Hogwild! Instead, we focus on a particular problem, low-rank least-squares matrix completion,

\text{minimize} \quad \mathbb{E}\left[ \left\| \tilde{A} - x x^T \right\|_F^2 \right] \quad \text{subject to} \quad x \in \mathbb{R}^n,
(16)

for which there exists a sequential SGD algorithm with a martingale-based rate that has already been proven. This problem arises in general data analysis, subspace tracking, principal component analysis, recommendation systems, and other applications [4]. In what follows, we let A = E[Ã]. We assume that A is symmetric, and has unit eigenvectors u_1, u_2, …, u_n with corresponding eigenvalues λ_1 > λ_2 ≥ ⋯ ≥ λ_n. We let Δ, the eigengap, denote Δ = λ_1 − λ_2.

De Sa et al. [4] provide a martingale-based rate of convergence for a particular SGD algorithm, Alecton, running on this problem. For simplicity, we focus on only the rank-1 version of the problem, and we assume that, at each timestep, a single entry of A is used as a sample. Under these conditions, Alecton uses the update rule

x_{t+1} = \left( I + \eta n^2 e_{\tilde{i}_t} e_{\tilde{i}_t}^T A e_{\tilde{j}_t} e_{\tilde{j}_t}^T \right) x_t,
(17)

where ĩ_t and j̃_t are randomly-chosen indices in [1, n]. It initializes x_0 uniformly on a sphere of some radius centered at the origin. We can equivalently think of this as a stochastic power iteration algorithm. For any ε > 0, we define the success set S to be

S = \left\{ x \,:\, (u_1^T x)^2 \ge (1 - \varepsilon) \left\| x \right\|^2 \right\}.
(18)

That is, we are only concerned with the direction of x, not its magnitude; this algorithm only recovers the dominant eigenvector of A, not its eigenvalue. In order to show convergence for this entrywise sampling scheme, De Sa et al. [4] require that the matrix A satisfy a coherence bound [10].

Definition 3

A matrix A ∈ ℝ^{n×n} is incoherent with parameter μ if for every standard basis vector e_j, and for all unit eigenvectors u_i of the matrix, (e_j^T u_i)² ≤ μ² n^{-1}.

They also require that the step size be set, for some constants 0 < γ ≤ 1 and 0 < ϑ < (1 + ε)^{-1}, as

\eta = \frac{\Delta \varepsilon \gamma \vartheta}{2 n \mu^4 \left\| A \right\|_F^2}.

For ease of analysis, we add the additional assumption that our algorithm runs in some bounded space; that is, for some constant C, at all times t, 1 ≤ ‖x_t‖ and ‖x_t‖_1 ≤ C. As in the convex case, by following the martingale-based approach of De Sa et al. [4], we are able to generate a rate supermartingale for this algorithm; to save space, we only state its initial value and not the full expression.

Lemma 2

For the problem above, choose any horizon B such that ηγεΔB ≤ 1. Then there exists a function W_t such that W_t is a rate supermartingale for the above non-convex SGD algorithm with parameters H = 8n η^{-1} γ^{-1} Δ^{-1} ε^{-1/2}, R = ημ‖A‖_F, and ξ = ημ‖A‖_F C, and

\mathbb{E}\left[ W_0(x_0) \right] \le 2 \eta^{-1} \Delta^{-1} \log\left( e n \gamma^{-1} \varepsilon^{-1} \right) + B \sqrt{2\pi\gamma}.

Note that the analysis parameter γ allows us to trade off between B, which determines how long we can run the algorithm, and the initial value of the supermartingale E [W0(x0)]. We can now produce a corollary about the convergence rate by applying Theorem 1 and setting B and T appropriately.

Corollary 3

Assume that we run Hogwild! Alecton under these conditions for T timesteps, as defined below. Then the probability of failure, P (FT), will be bounded as below.

T = \frac{4 n \mu^4 \left\| A \right\|_F^2}{\Delta^2 \varepsilon \gamma \vartheta \sqrt{2\pi\gamma}} \log\left( \frac{en}{\gamma\varepsilon} \right), \qquad P(F_T) \le \frac{\sqrt{8\pi\gamma}\, \mu^2}{\mu^2 - 4C\vartheta\tau\sqrt{\varepsilon}}.

The fact that we are able to use our technique to analyze a non-convex algorithm illustrates its generality. Note that it is possible to combine our results to analyze asynchronous low-precision non-convex SGD, but the resulting formulas are complex, so we do not include them here.
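Before moving on, here is a small numerical demonstration of the rank-1 Alecton update (17) and the success criterion (18). The step size and eigenvalues are picked by hand for a quick run rather than via the η assignment above, and the periodic rescaling of x is harmless because membership in S depends only on the direction of x; all of this is our own illustrative sketch.

```python
import numpy as np

# Rank-1 Alecton with entrywise sampling: update (17), checked against (18).
# Constants are hand-picked for a demo, not set by the theory's eta.
rng = np.random.default_rng(0)
n, eta, T, eps = 20, 1e-4, 200_000, 0.05
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[2.0], np.linspace(1.0, 0.5, n - 1)])   # eigengap = 1
A = U @ np.diag(lam) @ U.T
u1 = U[:, 0]

x = rng.standard_normal(n)                   # random initialization
for t in range(T):
    i, j = rng.integers(n), rng.integers(n)  # sample one entry A[i, j]
    x[i] += eta * n * n * A[i, j] * x[j]     # update (17): only entry i moves
    if t % 1000 == 0:
        x /= np.linalg.norm(x)               # rescale; (18) is scale-invariant

align = (u1 @ x) ** 2 / (x @ x)
print(f"(u1^T x)^2 / ||x||^2 = {align:.3f}, in S: {bool(align >= 1 - eps)}")
```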

4 Experiments

We validate our theoretical results for both asynchronous non-convex matrix completion and Buckwild!, a Hogwild! implementation with lower-precision arithmetic. Like Hogwild!, a Buckwild! algorithm has multiple threads running an update rule (2) in parallel without locking. Compared with Hogwild!, which uses 32-bit floating point numbers to represent input data, Buckwild! uses limited-precision arithmetic by rounding the input data to 8-bit or 16-bit integers. This not only decreases the memory usage, but also allows us to take advantage of single-instruction-multiple-data (SIMD) instructions for integers on modern CPUs.
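The arithmetic benefit comes from storing data as small integers with a shared scale and accumulating inner products in 32-bit integer registers, which is what integer SIMD units do; the scaling scheme in this sketch is ours, for illustration only.

```python
import numpy as np

# 8-bit storage sketch: each vector is int8 entries times one float scale;
# the dot product accumulates in 32-bit integers, as SIMD hardware would.
def to_int8(v):
    scale = np.abs(v).max() / 127.0
    return np.clip(np.rint(v / scale), -127, 127).astype(np.int8), scale

rng = np.random.default_rng(0)
a, x = rng.standard_normal(1000), rng.standard_normal(1000)
a8, sa = to_int8(a)
x8, sx = to_int8(x)
approx = sa * sx * (a8.astype(np.int32) @ x8.astype(np.int32))
print(float(a @ x), float(approx))    # close, at a quarter of the memory
```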

We verified our main claims by running Hogwild! and Buckwild! algorithms on the discussed applications. Table 1 shows how the training loss of SGD for logistic regression, a convex problem, varies as the precision is changed. We ran SGD with step size α = 0.0001; however, results are similar across a range of step sizes. We analyzed all four datasets reported in DimmWitted [25] that favored Hogwild!: Reuters and RCV1, which are text classification datasets; Forest, which arises from remote sensing; and Music, which is a music classification dataset. We implemented all GLM models reported in DimmWitted, including SVM, Linear Regression, and Logistic Regression, and report Logistic Regression because other models have similar performance. The results illustrate that there is almost no increase in training loss as the precision is decreased for these problems. We also investigated 4-bit and 1-bit computation: the former was slower than 8-bit due to a lack of 4-bit SIMD instructions, and the latter discarded too much information to produce good quality results.

Table 1
Training loss of SGD as a function of arithmetic precision for logistic regression.

Figure 1(a) displays the speedup of Buckwild! running on the dense version of the RCV1 dataset compared to both full-precision sequential SGD (left axis) and best-case Hogwild! (right axis). Experiments ran on a machine with two Xeon X650 CPUs, each with six hyperthreaded cores, and 24GB of RAM. This plot illustrates that incorporating low-precision arithmetic into our algorithm allows us to achieve significant speedups over both sequential and Hogwild! SGD. (Note that we don’t get full linear speedup because we are bound by the available memory bandwidth; beyond this limit, adding additional threads provides no benefits while increasing conflicts and thrashing the L1 and L2 caches.) This result, combined with the data in Table 1, suggests that by doing low-precision asynchronous updates, we can get speedups of up to 2.3× on these sorts of datasets without a significant increase in error.

Figure 1
Experiments compare the training loss, performance, and convergence of Hogwild! and Buckwild! algorithms with sequential and/or high-precision versions.

Figure 1(b) compares the convergence trajectories of Hogwild! and sequential versions of the non-convex Alecton matrix completion algorithm on a synthetic data matrix A ∈ ℝ^{n×n} with ten random eigenvalues λ_i > 0. Each plotted series represents a different run of Alecton; the trajectories differ somewhat because of the randomness of the algorithm. The plot shows that the sequential and asynchronous versions behave qualitatively similarly, and converge to the same noise floor. For this dataset, sequential Alecton took 6.86 seconds to run while 12-thread Hogwild! Alecton took 1.39 seconds, a 4.9× speedup.

5 Conclusion

This paper presented a unified theoretical framework for producing results about the convergence rates of asynchronous and low-precision random algorithms such as stochastic gradient descent. We showed how a martingale-based rate of convergence for a sequential, full-precision algorithm can be easily leveraged to give a rate for an asynchronous, low-precision version. We also introduced Buckwild!, a strategy for SGD that is able to take advantage of modern hardware resources for both task and data parallelism, and showed that it achieves near linear parallel speedup over sequential algorithms.

Acknowledgments

The Buckwild! name arose out of conversations with Benjamin Recht. Thanks also to Madeleine Udell for helpful conversations.

The authors acknowledge the support of: DARPA FA8750-12-2-0335; NSF IIS-1247701; NSF CCF-1111943; DOE 108845; NSF CCF-1337375; DARPA FA8750-13-2-0039; NSF IIS-1353606; ONR N000141210041 and N000141310129; NIH U54EB020405; Oracle; NVIDIA; Huawei; SAP Labs; Sloan Research Fellowship; Moore Foundation; American Family Insurance; Google; and Toshiba.

Appendix

A Proof of Theorem 1

Proof of Theorem 1

This proof is a more detailed version of the argument outlined in Section 2.2. First, we restate the definition of the process Vt from the body of the paper. As long as the algorithm hasn’t succeeded yet,

V_t(x_t, \ldots, x_0) = W_t(x_t, \ldots, x_0) - HR\xi\tau t + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m).

At the next timestep, we will have x_{t+1} = x_t − G̃(ṽ_t), and so

V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) = W_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) - HR\xi\tau(t+1) + HR \left\| \tilde{G}(\tilde{v}_t) \right\| \sum_{m=1}^{\infty} P(\tilde{\tau} \ge m) + HR \sum_{k=2}^{\infty} \left\| x_{t-k+2} - x_{t-k+1} \right\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m).

Re-indexing the second sum and applying the definition of τ produces

V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) = W_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) - HR\xi\tau(t+1) + HR\tau \left\| \tilde{G}(\tilde{v}_t) \right\| + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m).

Applying the Lipschitz continuity assumption (8) for W results in

V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \le W_{t+1}(x_t - \tilde{G}(x_t), x_t, \ldots, x_0) + H \left\| \tilde{G}(\tilde{v}_t) - \tilde{G}(x_t) \right\| - HR\xi\tau(t+1) + HR\tau \left\| \tilde{G}(\tilde{v}_t) \right\| + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m).

Taking the expected value of both sides produces

\mathbb{E}\left[ V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \right] \le \mathbb{E}\left[ W_{t+1}(x_t - \tilde{G}(x_t), x_t, \ldots, x_0) \right] + H\, \mathbb{E}\left[ \left\| \tilde{G}(\tilde{v}_t) - \tilde{G}(x_t) \right\| \right] - HR\xi\tau(t+1) + HR\tau\, \mathbb{E}\left[ \left\| \tilde{G}(\tilde{v}_t) \right\| \right] + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m).

Applying the rate supermartingale property (3) of W,

\mathbb{E}\left[ V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \right] \le W_t(x_t, \ldots, x_0) + H\, \mathbb{E}\left[ \left\| \tilde{G}(\tilde{v}_t) - \tilde{G}(x_t) \right\| \right] - HR\xi\tau(t+1) + HR\tau\, \mathbb{E}\left[ \left\| \tilde{G}(\tilde{v}_t) \right\| \right] + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m).

Applying the Lipschitz continuity assumption (9) for G,

\mathbb{E}\left[ V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \right] \le W_t(x_t, \ldots, x_0) + HR\, \mathbb{E}\left[ \left\| \tilde{v}_t - x_t \right\|_1 \right] - HR\xi\tau(t+1) + HR\tau\, \mathbb{E}\left[ \left\| \tilde{G}(\tilde{v}_t) \right\| \right] + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m).

Finally, applying the update distance bound (10),

\begin{aligned}
\mathbb{E}\left[ V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \right] &\le W_t(x_t, \ldots, x_0) + HR\, \mathbb{E}\left[ \left\| \tilde{v}_t - x_t \right\|_1 \right] - HR\xi\tau(t+1) + HR\xi\tau + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k+1}^{\infty} P(\tilde{\tau} \ge m) \\
&= W_t(x_t, \ldots, x_0) - HR\xi\tau t + HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m) + HR\, \mathbb{E}\left[ \left\| \tilde{v}_t - x_t \right\|_1 \right] - HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| P(\tilde{\tau} \ge k) \\
&= V_t(x_t, \ldots, x_0) + HR\, \mathbb{E}\left[ \left\| \tilde{v}_t - x_t \right\|_1 \right] - HR \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| P(\tilde{\tau} \ge k).
\end{aligned}

Now, by the definition of ṽ_t,

\left\| \tilde{v}_t - x_t \right\|_1 = \sum_{i=1}^{n} \left| e_i^T x_t - e_i^T \tilde{v}_t \right| = \sum_{i=1}^{n} \left| e_i^T x_t - e_i^T x_{t - \tilde{\tau}_{i,t}} \right| \le \sum_{i=1}^{n} \sum_{k=1}^{\tilde{\tau}_{i,t}} \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right|.

Furthermore, using the bound on τ̃_{i,t} from (7) gives us

\mathbb{E}\left[ \left\| \tilde{v}_t - x_t \right\|_1 \right] \le \sum_{i=1}^{n} \sum_{k=1}^{\infty} \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right| P(\tilde{\tau}_{i,t} \ge k) \le \sum_{i=1}^{n} \sum_{k=1}^{\infty} \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right| P(\tilde{\tau} \ge k) = \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\|_1 P(\tilde{\tau} \ge k) = \sum_{k=1}^{\infty} \left\| x_{t-k+1} - x_{t-k} \right\| P(\tilde{\tau} \ge k),

where the 1-norm is equal to the 2-norm here because each step updates only a single entry of x. Substituting this result into the above equation allows us to conclude that, if the algorithm hasn't succeeded by time t,

\mathbb{E}\left[ V_{t+1}(x_t - \tilde{G}(\tilde{v}_t), x_t, \ldots, x_0) \right] \le V_t(x_t, \ldots, x_0).
(19)

On the other hand, if it has succeeded, this statement will be vacuously true, since Vt does not change after success occurs. Therefore, (19) will hold for all times.

In what follows, as in the proof of Statement 1, we let Vt denote the actual value taken on by the function during execution of the algorithm. That is, Vt = Vt(xt, xt−1, …, x0). By applying (19) recursively, for any T < B, we can show that

\mathbb{E}[V_T] \le \mathbb{E}[V_0].

Since we assumed as part of our hardware model that xt = x0 for t < 0,

\mathbb{E}[V_0] = \mathbb{E}[W_0(x_0)].

Therefore, by the law of total expectation

\begin{aligned}
\mathbb{E}[W_0(x_0)] \ge \mathbb{E}[V_T] &= \mathbb{E}[V_T \mid F_T]\, P(F_T) + \mathbb{E}[V_T \mid \neg F_T]\, P(\neg F_T) \\
&\ge \mathbb{E}[V_T \mid F_T]\, P(F_T) \\
&= \mathbb{E}\left[ W_T(x_T, \ldots, x_0) - HR\xi\tau T + HR \sum_{k=1}^{\infty} \left\| x_{T-k+1} - x_{T-k} \right\| \sum_{m=k}^{\infty} P(\tilde{\tau} \ge m) \,\middle|\, F_T \right] P(F_T) \\
&\ge \left( \mathbb{E}\left[ W_T(x_T, \ldots, x_0) \mid F_T \right] - HR\xi\tau T \right) P(F_T).
\end{aligned}

Since Wt is a rate supermartingale, we can apply (4) to get

\mathbb{E}[W_0(x_0)] \ge (T - HR\xi\tau T)\, P(F_T),

and solving for P (FT) produces

P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{(1 - HR\xi\tau)\, T},

as desired.

B Proofs for Convex Case

First, we state the rate supermartingale lemma for the low-precision convex SGD algorithm.

Lemma 3

There exists a Wt with

W_0(x_0) \le \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right)

such that W_t is a rate supermartingale for the above convex SGD algorithm with horizon B = ∞. Furthermore, it is (H, R, ξ)-bounded with parameters R = αL, ξ² = α²(1 + κ²)M², and

H = \frac{2\sqrt{\varepsilon}}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)}.

We note that, including this lemma, the results in Section 3.1 are the same as the results in Section 3.2, except that the quantization factor is set to κ = 0. It therefore suffices to prove only the lemma and corollary of Section 3.2; this is what we do here.

In order to prove the results in this section, we will need some definitions and lemmas, which we state now.

Definition 4

(Piecewise Logarithm). For the purposes of this document, we define the piecewise logarithm function to be

\log(x) = \begin{cases} \log(ex) & : x \ge 1 \\ x & : x \le 1 \end{cases}

Lemma 4

The piecewise logarithm function is differentiable and concave. Also, if x ≥ 1, then for any Δ,

\log\left( x (1 + \Delta) \right) \le \log(x) + \Delta.

Proof

The first part of the lemma follows from the fact that log(x) is a piecewise function, where the pieces are both increasing and concave, and the fact that the function is differentiable at x = 1. The second part of the lemma follows from the fact that a first-order approximation always overestimates a concave function.
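For reference, the piecewise logarithm and the Lemma 4 bound are easy to check numerically; this tiny script is ours.

```python
import math

# The piecewise logarithm of Definition 4, with a spot check of Lemma 4:
# plog(x * (1 + d)) <= plog(x) + d whenever x >= 1.
def plog(x):
    return math.log(math.e * x) if x >= 1.0 else x

for x, d in [(1.0, 0.5), (2.0, -0.3), (10.0, 1.0)]:
    assert plog(x * (1 + d)) <= plog(x) + d + 1e-12
print("Lemma 4 spot check passed")
```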

Armed with this definition, we prove Lemma 3.

Proof of Lemma 3

First, we note that, at any timestep t, if we evaluate the distance to the optimum at the next timestep using the low-precision update rule (15), then

\begin{aligned}
\left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 &= \left\| x_t - x^* \right\|^2 - 2 (x_t - x^*)^T \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(x_t) \right) + \left\| \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(x_t) \right) \right\|^2 \\
&= \left\| x_t - x^* \right\|^2 - 2 (x_t - x^*)^T \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(x_t) \right) + \alpha^2 \left\| \nabla \tilde{f}_t(x_t) \right\|^2 + 2 \alpha \nabla \tilde{f}_t(x_t)^T \left( \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(x_t) \right) - \alpha \nabla \tilde{f}_t(x_t) \right) + \left\| \tilde{Q}_t\left( \alpha \nabla \tilde{f}_t(x_t) \right) - \alpha \nabla \tilde{f}_t(x_t) \right\|^2.
\end{aligned}

Taking the expected value and applying (14), and the bounds on the properties of Q̃_t, produces

\mathbb{E}\left[ \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right] \le \left\| x_t - x^* \right\|^2 - 2\alpha (x_t - x^*)^T \nabla f(x_t) + \alpha^2 M^2 + \delta^2.

Since we assigned δ ≤ ακM,

\begin{aligned}
\mathbb{E}\left[ \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right] &\le \left\| x_t - x^* \right\|^2 - 2\alpha (x_t - x^*)^T \nabla f(x_t) + \alpha^2 M^2 (1 + \kappa^2) \\
&= \left\| x_t - x^* \right\|^2 - 2\alpha (x_t - x^*)^T \left( \nabla f(x_t) - \nabla f(x^*) \right) + \alpha^2 M^2 (1 + \kappa^2).
\end{aligned}

Applying the strong convexity assumption (12),

\begin{aligned}
\mathbb{E}\left[ \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right] &\le \left\| x_t - x^* \right\|^2 - 2\alpha c \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 (1 + \kappa^2) \\
&= (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 (1 + \kappa^2).
\end{aligned}

Now, if we haven't succeeded yet, then ‖x_t − x*‖² > ε. Under these conditions,

\mathbb{E}\left[ \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right] \le \left\| x_t - x^* \right\|^2 \left( 1 - 2\alpha c + \alpha^2 M^2 (1 + \kappa^2) \varepsilon^{-1} \right).

Multiplying both sides by ε^{-1} and taking the piecewise logarithm, by Jensen's inequality,

\mathbb{E}\left[ \log\left( \varepsilon^{-1} \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right) \right] \le \log\left( \mathbb{E}\left[ \varepsilon^{-1} \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right] \right) \le \log\left( \varepsilon^{-1} \left\| x_t - x^* \right\|^2 \left( 1 - 2\alpha c + \alpha^2 M^2 (1 + \kappa^2) \varepsilon^{-1} \right) \right).

Since ε^{-1}‖x_t − x*‖² > 1, we can apply Lemma 4, which gives us

\mathbb{E}\left[ \log\left( \varepsilon^{-1} \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right) \right] \le \log\left( \varepsilon^{-1} \left\| x_t - x^* \right\|^2 \right) - 2\alpha c + \alpha^2 M^2 (1 + \kappa^2) \varepsilon^{-1}.

Now, we define the rate supermartingale Wt such that, if we haven’t succeeded up to time t, then

W_t(x_t, \ldots, x_0) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x_t - x^* \right\|^2 \right) + t;

otherwise, if u is a time such that x_u ∈ S, then for all t > u,

W_t(x_t, \ldots, x_0) = W_u(x_u, \ldots, x_0).

The first rate supermartingale property (3) is true because if success hasn’t occurred,

\begin{aligned}
\mathbb{E}\left[ W_{t+1}(x_t - \tilde{G}_t(x_t), \ldots, x_0) \right] &= \mathbb{E}\left[ \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right) + (t+1) \right] \\
&= \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \mathbb{E}\left[ \log\left( \varepsilon^{-1} \left\| x_t - \tilde{G}_t(x_t) - x^* \right\|^2 \right) \right] + (t+1) \\
&\le \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \left( \log\left( \varepsilon^{-1} \left\| x_t - x^* \right\|^2 \right) - 2\alpha c + \alpha^2 M^2 (1 + \kappa^2) \varepsilon^{-1} \right) + (t+1) \\
&= \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x_t - x^* \right\|^2 \right) - 1 + (t+1) \\
&= W_t(x_t, \ldots, x_0);
\end{aligned}

it is vacuously true if success has occurred, because the value of W_t does not change after x_u ∈ S for some u < t. The second rate supermartingale property (4) holds because, if success hasn't occurred by time T,

W_T(x_T, \ldots, x_0) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x_T - x^* \right\|^2 \right) + T \ge T;

this follows from the non-negativity of the log function for non-negative arguments.

We have now shown that Wt is a rate supermartingale for this algorithm. Next, we verify that the bound on W0 given in the lemma statement holds. At time 0, by the definition of the log function, since we assume that success has not occurred yet,

W_0(x_0) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x_0 - x^* \right\|^2 \right) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right);

this is the bound given in the lemma statement.

Next, we show that this rate supermartingale is (H, R, ξ)-bounded, for the values of H, R, and ξ given in the lemma statement. First, for any x, t, and sequence xt−1, …, x0,

\nabla_x W_t(x, x_{t-1}, \ldots, x_0) = \nabla_x \left( \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( \varepsilon^{-1} \left\| x - x^* \right\|^2 \right) \right) = \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \cdot 2 \varepsilon^{-1} (x - x^*) \log'\left( \varepsilon^{-1} \left\| x - x^* \right\|^2 \right).

Now, by the definition of log, we can conclude that log′(u) = min(1, u^{-1}). Therefore,

\nabla_x W_t(x, x_{t-1}, \ldots, x_0) = \frac{2}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} (x - x^*) \min\left( 1, \varepsilon \left\| x - x^* \right\|^{-2} \right),

and taking the norm of both sides,

\left\| \nabla_x W_t(x, x_{t-1}, \ldots, x_0) \right\| = \frac{2}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \min\left( \left\| x - x^* \right\|, \varepsilon \left\| x - x^* \right\|^{-1} \right).

Clearly, this expression is maximized when ‖x − x*‖² = ε. Therefore,

\left\| \nabla_x W_t(x, x_{t-1}, \ldots, x_0) \right\| \le \frac{2\sqrt{\varepsilon}}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)}.

The Lipschitz continuity expression with H in the lemma statement now follows from the mean value theorem.

Next, we bound the Lipschitz continuity expression for R. We have that, for any x and y, if the single nonzero entry of ∇f̃ is at index i, then

\mathbb{E}\left[ \left\| \tilde{G}(x) - \tilde{G}(y) \right\| \right] = \mathbb{E}\left[ \left\| \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) - \tilde{Q}\left( \alpha \nabla \tilde{f}(y) \right) \right\| \right] = \mathbb{E}\left[ \left| \tilde{Q}\left( \alpha e_i^T \nabla \tilde{f}(x) \right) - \tilde{Q}\left( \alpha e_i^T \nabla \tilde{f}(y) \right) \right| \right].

Without loss of generality, we assume that Q̃ is non-decreasing, and that e_i^T ∇f̃(x) ≥ e_i^T ∇f̃(y). Thus, by the unbiasedness of Q̃,

\mathbb{E}\left[ \left\| \tilde{G}(x) - \tilde{G}(y) \right\| \right] = \mathbb{E}\left[ \tilde{Q}\left( \alpha e_i^T \nabla \tilde{f}(x) \right) - \tilde{Q}\left( \alpha e_i^T \nabla \tilde{f}(y) \right) \right] = \mathbb{E}\left[ \alpha e_i^T \nabla \tilde{f}(x) - \alpha e_i^T \nabla \tilde{f}(y) \right] = \alpha\, \mathbb{E}\left[ \left\| \nabla \tilde{f}(x) - \nabla \tilde{f}(y) \right\| \right].

Finally, applying (13),

\mathbb{E}\left[ \left\| \tilde{G}(x) - \tilde{G}(y) \right\| \right] \le \alpha L \left\| x - y \right\|_1.

Finally, we bound the update expression with ξ. We have,

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right]^2 = \mathbb{E}\left[ \left\| \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) \right\| \right]^2 \le \mathbb{E}\left[ \left\| \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) \right\|^2 \right] = \mathbb{E}\left[ \alpha^2 \left\| \nabla \tilde{f}(x) \right\|^2 + 2\alpha \left( \nabla \tilde{f}(x) \right)^T \left( \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) - \alpha \nabla \tilde{f}(x) \right) + \left\| \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) - \alpha \nabla \tilde{f}(x) \right\|^2 \right].

Applying the bounds on the rounding error,

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right]^2 \le \mathbb{E}\left[ \alpha^2 \left\| \nabla \tilde{f}(x) \right\|^2 + 2\alpha \left( \nabla \tilde{f}(x) \right)^T \left( \tilde{Q}\left( \alpha \nabla \tilde{f}(x) \right) - \alpha \nabla \tilde{f}(x) \right) + \delta^2 \right].

Applying (14) and the unbiasedness of Q̃ produces

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right]^2 \le \alpha^2 M^2 + \delta^2.

Applying the assignment δ = ακM results in

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right]^2 \le \alpha^2 M^2 (1 + \kappa^2),

which is the desired expression.

So, we have proved all the statements in the lemma.

Proof of Corollary 2

Applying Theorem 1 directly to the result of Lemma 3 produces

\begin{aligned}
P(F_T) &\le \frac{\mathbb{E}[W_0(x_0)]}{(1 - HR\xi\tau)\, T} \\
&= \frac{\varepsilon}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right) \left( \left( 1 - \frac{2\sqrt{\varepsilon}}{2\alpha c \varepsilon - \alpha^2 M^2 (1 + \kappa^2)} \cdot \alpha L \cdot \alpha M \sqrt{1 + \kappa^2} \cdot \tau \right) T \right)^{-1} \\
&= \frac{\varepsilon}{\left( 2\alpha c \varepsilon - \alpha^2 \left( M^2 (1 + \kappa^2) + 2LM\tau \sqrt{(1 + \kappa^2)\,\varepsilon} \right) \right) T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right) \\
&\le \frac{\varepsilon}{\left( 2\alpha c \varepsilon - \alpha^2 \left( M^2 (1 + \kappa^2) + LM\tau (2 + \kappa^2) \sqrt{\varepsilon} \right) \right) T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right).
\end{aligned}

Substituting the chosen value of α,

\begin{aligned}
P(F_T) &\le \frac{\varepsilon}{\left( \dfrac{2 c^2 \varepsilon^2 \vartheta}{M^2(1+\kappa^2) + LM\tau(2+\kappa^2)\sqrt{\varepsilon}} - \dfrac{c^2 \varepsilon^2 \vartheta^2}{M^2(1+\kappa^2) + LM\tau(2+\kappa^2)\sqrt{\varepsilon}} \right) T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right) \\
&\le \frac{\varepsilon \left( M^2(1+\kappa^2) + LM\tau(2+\kappa^2)\sqrt{\varepsilon} \right)}{c^2 \varepsilon^2 \vartheta T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right) \\
&= \frac{M^2(1+\kappa^2) + LM\tau(2+\kappa^2)\sqrt{\varepsilon}}{c^2 \varepsilon \vartheta T} \log\left( e \left\| x_0 - x^* \right\|^2 \varepsilon^{-1} \right),
\end{aligned}

as desired.

C Proofs for Non-Convex Case

In order to accomplish this proof, we make use of some definitions and lemmas that appear in De Sa et al. [4]. We state them here before proceeding to the proof.

First, we define a function

\tau(x) = \frac{(u_1^T x)^2}{(1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2}.

Clearly, 0 ≤ τ(x) ≤ 1. Using this function, De Sa et al. [4] prove the following lemma. While their version of the lemma applies to higher-rank problems and multiple distributions, we state here a version that is specialized for the rank-1, entrywise sampling case we study in this paper. (This is a combination of Lemma 2 and Lemma 12 from De Sa et al. [4].)

Lemma 5

(τ-bound). If we run the Alecton update rule using entrywise sampling under the conditions in Section 3.3, including the incoherence and step size assignment, then for any x ∉ S,

\mathbb{E}\left[ \tau(x + \eta \tilde{A} x) \right] \ge \tau(x) \left( 1 + \eta \Delta (1 - \tau(x)) \right).

We also use another lemma from De Sa et al. [4]. This is a combination of their Lemmas 1 and 7.

Lemma 6

(Expected value of τ (x0)). If we initialize x0 with a uniform random angle (as done in Alecton), then

\mathbb{E}\left[ 1 - \tau(x_0) \right] \le \sqrt{\frac{\pi\gamma}{2}}.

Now, we prove Lemma 2.

Proof of Lemma 2

First, if x ∉ S, then (u_1^T x)² < (1 − ε)‖x‖². Therefore,

\tau(x) = \frac{(u_1^T x)^2}{(1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2} < \frac{1 - \varepsilon}{(1 - \gamma n^{-1})(1 - \varepsilon) + \gamma n^{-1}} = \frac{1 - \varepsilon}{1 - \varepsilon + \gamma n^{-1} \varepsilon},

and so

1 - \tau(x) > \frac{\gamma n^{-1} \varepsilon}{1 - \varepsilon + \gamma n^{-1} \varepsilon} \ge \gamma n^{-1} \varepsilon.

From the result of Lemma 5, for any x ∉ S,

\mathbb{E}\left[ \tau(x + \eta \tilde{A} x) \right] \ge \tau(x) \left( 1 + \eta \Delta (1 - \tau(x)) \right).

Therefore,

\mathbb{E}\left[ 1 - \tau(x + \eta \tilde{A} x) \right] \le \left( 1 - \tau(x) \right) \left( 1 - \eta \Delta \tau(x) \right).

Therefore, by Jensen's inequality and Lemma 4, since γ^{-1}nε^{-1}(1 − τ(x)) > 1,

\mathbb{E}\left[ \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x + \eta \tilde{A} x) \right) \right) \right] \le \log\left( \mathbb{E}\left[ \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x + \eta \tilde{A} x) \right) \right] \right) \le \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x) \right) \left( 1 - \eta \Delta \tau(x) \right) \right) \le \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x) \right) \right) - \eta \Delta \tau(x).

Now, we define our rate supermartingale. First, define

Z = \left\{ x \,:\, \tau(x) \le \tfrac{1}{2} \right\},

and let B > 0 be any constant. Let W_t be defined such that, if x_u ∉ S ∪ Z for all u ≤ t, then

W_t(x_t, \ldots, x_0) = \frac{2}{\eta \Delta} \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_t) \right) \right) + 2B \left( 1 - \tau(x_t) \right) + t.

On the other hand, if x_u ∈ S ∪ Z for some u, then for all t > u, we define

W_t(x_t, \ldots, x_0) = W_u(x_u, \ldots, x_0).

That is, once x_t enters S ∪ Z, the process W stops changing.

We verify that W_t is a rate supermartingale. First, (3) is true: in the case that the process has stopped, it holds vacuously; in the case that it hasn't stopped (i.e. x_u ∉ S ∪ Z for all u ≤ t),

\begin{aligned}
\mathbb{E}\left[ W_{t+1}(x_t + \eta \tilde{A}_t x_t, x_t, \ldots, x_0) \right] &= \mathbb{E}\left[ \frac{2}{\eta \Delta} \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_t + \eta \tilde{A}_t x_t) \right) \right) + 2B \left( 1 - \tau(x_t + \eta \tilde{A}_t x_t) \right) + t + 1 \right] \\
&= \frac{2}{\eta \Delta} \mathbb{E}\left[ \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_t + \eta \tilde{A}_t x_t) \right) \right) \right] + 2B\, \mathbb{E}\left[ 1 - \tau(x_t + \eta \tilde{A}_t x_t) \right] + t + 1 \\
&\le \frac{2}{\eta \Delta} \left( \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_t) \right) \right) - \eta \Delta \tau(x_t) \right) + 2B \left( 1 - \tau(x_t) \right) + t + 1 \\
&= W_t(x_t, \ldots, x_0) - 2\tau(x_t) + 1.
\end{aligned}

Since x_t ∉ Z, it follows that 2τ(x_t) ≥ 1. Therefore,

\mathbb{E}\left[ W_{t+1}(x_t + \eta \tilde{A}_t x_t, x_t, \ldots, x_0) \right] \le W_t(x_t, \ldots, x_0).

And so (3) holds in all cases.

The second rate supermartingale property (4) holds because, if success hasn't occurred by time T < B, then there are two possibilities: either the process hasn't stopped yet, or it stopped at a timestep u at which x_u ∈ Z. In the former case, by the non-negativity of the log function,

W_T(x_T, \ldots, x_0) = \frac{2}{\eta \Delta} \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_T) \right) \right) + 2B \left( 1 - \tau(x_T) \right) + T \ge T.

In the latter case,

W_T(x_T, \ldots, x_0) = W_u(x_u, \ldots, x_0) \ge 2B \left( 1 - \tau(x_u) \right) \ge B \ge T.

Therefore (4) holds.

We have now shown that Wt is a rate supermartingale for Alecton. Next, we show that our bound on the initial value of the supermartingale holds. At time 0,

W_0(x_0) = \frac{2}{\eta \Delta} \log\left( \gamma^{-1} n \varepsilon^{-1} \left( 1 - \tau(x_0) \right) \right) + 2B \left( 1 - \tau(x_0) \right) \le \frac{2}{\eta \Delta} \log\left( \gamma^{-1} n \varepsilon^{-1} \right) + 2B \left( 1 - \tau(x_0) \right) = \frac{2}{\eta \Delta} \log\left( \frac{en}{\gamma\varepsilon} \right) + 2B \left( 1 - \tau(x_0) \right).

Therefore, applying Lemma 6,

\mathbb{E}\left[ W_0(x_0) \right] \le \frac{2}{\eta \Delta} \log\left( \frac{en}{\gamma\varepsilon} \right) + 2B\, \mathbb{E}\left[ 1 - \tau(x_0) \right] \le \frac{2}{\eta \Delta} \log\left( \frac{en}{\gamma\varepsilon} \right) + B \sqrt{2\pi\gamma}.

This is the value given in the lemma.

Now, we show that Wt is (H, R, ξ)-bounded. First, we give the H bound. To do so, we first differentiate τ(x).

\begin{aligned}
\nabla \tau(x) &= \frac{2 u_1 u_1^T x \left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right) - 2 (u_1^T x)^2 \left( (1 - \gamma n^{-1}) u_1 u_1^T x + \gamma n^{-1} x \right)}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^2} \\
&= \frac{2 u_1 u_1^T x \, \gamma n^{-1} \left\| x \right\|^2 - 2 (u_1^T x)^2 \gamma n^{-1} x}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^2} \\
&= \frac{2 \gamma n^{-1} \left( u_1 u_1^T x \left\| x \right\|^2 - x (u_1^T x)^2 \right)}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^2}.
\end{aligned}

Therefore,

\begin{aligned}
\left\| \nabla \tau(x) \right\|^2 &= \frac{4 \gamma^2 n^{-2} \left( (u_1^T x)^2 \left\| x \right\|^4 - (u_1^T x)^4 \left\| x \right\|^2 \right)}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^4} = \frac{4 \gamma n^{-1} (u_1^T x)^2 \left\| x \right\|^2 \left( 1 - \tau(x) \right)}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^3} \\
&\le \frac{4 \gamma n^{-1} \left\| x \right\|^2 \left( 1 - \tau(x) \right)}{\left( (1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2 \right)^2} \le \frac{4 \left( 1 - \tau(x) \right)}{(1 - \gamma n^{-1})(u_1^T x)^2 + \gamma n^{-1} \left\| x \right\|^2} \le \frac{4 n \left( 1 - \tau(x) \right)}{\gamma \left\| x \right\|^2}.
\end{aligned}

Applying the assumption that ‖x‖² ≥ 1,

\left\| \nabla \tau(x) \right\|^2 \le \frac{4 n \left( 1 - \tau(x) \right)}{\gamma}.

Now, differentiating Wt with respect to τ produces

\frac{dW}{d\tau} = -\frac{2n}{\eta \gamma \varepsilon \Delta} \log'\left( \gamma^{-1} n \varepsilon^{-1} (1 - \tau) \right) - 2B.

So, it follows that

\left\| \nabla_x W_t(x, x_{t-1}, \ldots, x_0) \right\| \le \left| \frac{dW}{d\tau} \right| \left\| \nabla \tau(x) \right\| \le \left( \frac{2n}{\eta \gamma \varepsilon \Delta} \log'\left( \gamma^{-1} n \varepsilon^{-1} (1 - \tau(x)) \right) + 2B \right) \sqrt{\frac{4 n \left( 1 - \tau(x) \right)}{\gamma}}.

Applying our assumption that ηγεΔB ≤ 1, it is clear that this expression is maximized when γ^{-1}nε^{-1}(1 − τ(x)) = 1. Therefore,

\left\| \nabla_x W_t(x, x_{t-1}, \ldots, x_0) \right\| \le \left( \frac{2n}{\eta \gamma \varepsilon \Delta} + 2B \right) 2\sqrt{\varepsilon} \le \frac{8n}{\eta \gamma \Delta \sqrt{\varepsilon}},

which is our given value for H.

Next, we give the R bound. For Alecton, we have

\tilde{G}(x) = -\eta \tilde{A} x = -\eta n^2 e_{\tilde{i}} e_{\tilde{i}}^T A e_{\tilde{j}} e_{\tilde{j}}^T x.

Therefore,

\begin{aligned}
\mathbb{E}\left[ \left\| \tilde{G}(x) - \tilde{G}(y) \right\| \right] &= \eta n^2\, \mathbb{E}\left[ \left\| e_{\tilde{i}} e_{\tilde{i}}^T A e_{\tilde{j}} e_{\tilde{j}}^T (x - y) \right\| \right] = \eta n^2\, \mathbb{E}\left[ \left| e_{\tilde{i}}^T A e_{\tilde{j}} \right| \left| e_{\tilde{j}}^T (x - y) \right| \right] \\
&= \eta \sum_{i=1}^{n} \sum_{j=1}^{n} \left| e_i^T A e_j \right| \left| e_j^T (x - y) \right| = \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \left( \sum_{i=1}^{n} \left| e_i^T A e_j \right| \right) \\
&\le \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \sqrt{n} \left( \sum_{i=1}^{n} (e_i^T A e_j)^2 \right)^{\frac{1}{2}} = \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \sqrt{n} \left( e_j^T A^2 e_j \right)^{\frac{1}{2}} \\
&= \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \sqrt{n} \left( \sum_{k=1}^{n} \lambda_k^2 (u_k^T e_j)^2 \right)^{\frac{1}{2}}.
\end{aligned}

Applying the incoherence bound,

\mathbb{E}\left[ \left\| \tilde{G}(x) - \tilde{G}(y) \right\| \right] \le \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \sqrt{n} \left( \sum_{k=1}^{n} \lambda_k^2 \mu^2 n^{-1} \right)^{\frac{1}{2}} = \eta \sum_{j=1}^{n} \left| e_j^T (x - y) \right| \sqrt{n} \left( \mu^2 n^{-1} \left\| A \right\|_F^2 \right)^{\frac{1}{2}} = \eta \mu \left\| A \right\|_F \sum_{j=1}^{n} \left| e_j^T (x - y) \right| = \eta \mu \left\| A \right\|_F \left\| x - y \right\|_1.

This agrees with our assignment of R = ημ‖A‖_F.

Finally, we give our ξ bound on the magnitude of the updates. By the same argument as above, we will have

\mathbb{E}\left[ \left\| \tilde{G}(x) \right\| \right] = \eta n^2\, \mathbb{E}\left[ \left\| e_{\tilde{i}} e_{\tilde{i}}^T A e_{\tilde{j}} e_{\tilde{j}}^T x \right\| \right] \le \eta \mu \left\| A \right\|_F \left\| x \right\|_1.

Applying the assumption that ‖x‖_1 ≤ C produces the bound given in the lemma, ξ = ημ‖A‖_F C.

This completes the proof of the lemma.

Next, we prove the corollary that gives a bound on the failure probability of asynchronous Alecton.

Proof of Corollary 3

By Theorem 1, we know that for the constants defined in Lemma 2,

P(F_T) \le \frac{\mathbb{E}[W_0(x_0)]}{(1 - HR\xi\tau)\, T}.

If we choose B = T for the horizon in Lemma 2, and substitute in the given constants,

\begin{aligned}
P(F_T) &\le \left( \frac{2}{\eta \Delta} \log\left( \frac{en}{\gamma\varepsilon} \right) + T \sqrt{2\pi\gamma} \right) \left( \left( 1 - \frac{8n}{\eta \gamma \Delta \sqrt{\varepsilon}} \cdot \eta \mu \left\| A \right\|_F \cdot \eta \mu \left\| A \right\|_F C \cdot \tau \right) T \right)^{-1} \\
&= \left( \frac{2}{\eta \Delta T} \log\left( \frac{en}{\gamma\varepsilon} \right) + \sqrt{2\pi\gamma} \right) \left( 1 - \frac{8 \eta n \mu^2 \left\| A \right\|_F^2 C \tau}{\gamma \Delta \sqrt{\varepsilon}} \right)^{-1}.
\end{aligned}

Now, for the given value of η, we will have

\frac{8 \eta n \mu^2 \left\| A \right\|_F^2 C \tau}{\gamma \Delta \sqrt{\varepsilon}} = \frac{\Delta \varepsilon \gamma \vartheta}{2 n \mu^4 \left\| A \right\|_F^2} \cdot \frac{8 n \mu^2 \left\| A \right\|_F^2 C \tau}{\gamma \Delta \sqrt{\varepsilon}} = \frac{4 C \vartheta \tau \sqrt{\varepsilon}}{\mu^2}.

Also, for the given values of η and T, we will have

\frac{2}{\eta \Delta T} \log\left( \frac{en}{\gamma\varepsilon} \right) = \frac{4 n \mu^4 \left\| A \right\|_F^2}{\Delta^2 \varepsilon \gamma \vartheta} \cdot \frac{\Delta^2 \varepsilon \gamma \vartheta \sqrt{2\pi\gamma}}{4 n \mu^4 \left\| A \right\|_F^2 \log\left( \frac{en}{\gamma\varepsilon} \right)} \cdot \log\left( \frac{en}{\gamma\varepsilon} \right) = \sqrt{2\pi\gamma}.

Substituting these results in produces

P(F_T) \le 2\sqrt{2\pi\gamma} \left( 1 - \frac{4 C \vartheta \tau \sqrt{\varepsilon}}{\mu^2} \right)^{-1} = \frac{\sqrt{8\pi\gamma}\, \mu^2}{\mu^2 - 4 C \vartheta \tau \sqrt{\varepsilon}},

which is the desired result.

D Simplified Convex Result

In this section, we provide a simplified proof for a result similar to our main result that only works in the convex case. This proof does not use any martingale results, and can therefore be considered more elementary than the proofs given above; however, it does not generalize to the non-convex case.

Theorem 2

Under the conditions given in Section 3.1, for any ε > 0, if for some ϑ ∈ (0, 1) we choose constant step size

\alpha = \frac{c \vartheta \varepsilon}{2LM\tau\sqrt{\varepsilon} + M^2},

then there exists a timestep

T \le \frac{2LM\tau\sqrt{\varepsilon} + M^2}{c^2 \vartheta \varepsilon} \log\left( \frac{\left\| x_0 - x^* \right\|^2}{\varepsilon} \right)

such that

\mathbb{E}\left[ \left\| x_T - x^* \right\|^2 \right] \le \varepsilon.

Proof

Our goal is to bound the square-distance to the optimum by showing that it generally decreases at each timestep. We can show algebraically that

\left\| x_{t+1} - x^* \right\|^2 = \left\| x_t - x^* \right\|^2 - 2\alpha (x_t - x^*)^T \nabla \tilde{f}_t(x_t) + 2\alpha (x_t - x^*)^T \left( \nabla \tilde{f}_t(x_t) - \nabla \tilde{f}_t(\tilde{v}_t) \right) + \alpha^2 \left\| \nabla \tilde{f}_t(\tilde{v}_t) \right\|^2.

We can think of these terms as representing, respectively: the current square-distance, the first-order change, the noise due to delayed updates, and the noise due to random sampling. Taking the expected value given ṽ_t and applying Cauchy–Schwarz, (12), (13), and (14) produces

\begin{aligned}
\mathbb{E}\left[ \left\| x_{t+1} - x^* \right\|^2 \,\middle|\, \mathcal{F}_t, \tilde{v}_t \right] &\le \left\| x_t - x^* \right\|^2 - 2\alpha c \left\| x_t - x^* \right\|^2 + 2\alpha L \left\| x_t - x^* \right\| \left\| x_t - \tilde{v}_t \right\|_1 + \alpha^2 M^2 \\
&= (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \left\| x_t - x^* \right\| \sum_{i=1}^{n} \left| e_i^T x_t - e_i^T x_{t - \tilde{\tau}_{i,t}} \right| \\
&\le (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \left\| x_t - x^* \right\| \sum_{i=1}^{n} \sum_{k=1}^{\tilde{\tau}_{i,t}} \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right|.
\end{aligned}

We can now take the full expected value given the filtration, which produces

\mathbb{E}\left[ \left\| x_{t+1} - x^* \right\|^2 \,\middle|\, \mathcal{F}_t \right] \le (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \left\| x_t - x^* \right\| \sum_{i=1}^{n} \sum_{k=1}^{\infty} P\left( \tilde{\tau}_{i,t} \ge k \right) \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right|.

Applying (7) results in

\begin{aligned}
\mathbb{E}\left[ \left\| x_{t+1} - x^* \right\|^2 \,\middle|\, \mathcal{F}_t \right] &\le (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \left\| x_t - x^* \right\| \sum_{i=1}^{n} \sum_{k=1}^{\infty} P(\tilde{\tau} \ge k) \left| e_i^T x_{t-k+1} - e_i^T x_{t-k} \right| \\
&= (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \left\| x_t - x^* \right\| \sum_{k=1}^{\infty} P(\tilde{\tau} \ge k) \left\| x_{t-k+1} - x_{t-k} \right\|_1,
\end{aligned}

and since at most one entry of x changes at each iteration, the 1-norm equals the 2-norm:

\mathbb{E}\left[ \left\| x_{t+1} - x^* \right\|^2 \,\middle|\, \mathcal{F}_t \right] \le (1 - 2\alpha c) \left\| x_t - x^* \right\|^2 + \alpha^2 M^2 + 2\alpha L \sum_{k=1}^{\infty} P(\tilde{\tau} \ge k) \left\| x_t - x^* \right\| \left\| x_{t-k+1} - x_{t-k} \right\|.

Finally, taking the full expected value, and applying Cauchy-Schwarz again,

\mathbb{E}\left[ \left\| x_{t+1} - x^* \right\|^2 \right] \le (1 - 2\alpha c)\, \mathbb{E}\left[ \left\| x_t - x^* \right\|^2 \right] + \alpha^2 M^2 + 2\alpha L \sum_{k=1}^{\infty} P(\tilde{\tau} \ge k) \sqrt{ \mathbb{E}\left[ \left\| x_t - x^* \right\|^2 \right] \mathbb{E}\left[ \left\| x_{t-k+1} - x_{t-k} \right\|^2 \right] }.

Noticing that, from (14),

\mathbb{E}\left[ \left\| x_{t-k+1} - x_{t-k} \right\|^2 \right] = \mathbb{E}\left[ \left\| \alpha \nabla \tilde{f}_{t-k}(\tilde{v}_{t-k}) \right\|^2 \right] \le \alpha^2 M^2,

if we let J_t = E[‖x_t − x*‖²], we get

J_{t+1} \le (1 - 2\alpha c) J_t + \alpha^2 M^2 + 2\alpha^2 L M \sum_{k=1}^{\infty} P(\tilde{\tau} \ge k) \sqrt{J_t} = (1 - 2\alpha c) J_t + \alpha^2 M^2 + 2\alpha^2 L M \tau \sqrt{J_t}.

For any ε > 0, as long as Jt > ε,

\log J_{t+1} \le \log J_t + \log\left( 1 - 2\alpha c + \alpha^2 M^2 \varepsilon^{-1} + 2\alpha^2 L M \tau \varepsilon^{-\frac{1}{2}} \right) < \log J_t - 2\alpha c + \alpha^2 M^2 \varepsilon^{-1} + 2\alpha^2 L M \tau \varepsilon^{-\frac{1}{2}}.

If we substitute the value of α chosen in the theorem statement, then

\log J_{t+1} < \log J_t - \frac{c^2 \vartheta \varepsilon}{2LM\tau\sqrt{\varepsilon} + M^2}.

Therefore, for any T, if J_t > ε for all t < T,

T < \frac{2LM\tau\sqrt{\varepsilon} + M^2}{c^2 \vartheta \varepsilon} \log\left( \frac{J_0}{J_T} \right),

which proves the theorem.

References

1. Bottou Léon. COMPSTAT'2010. Springer; 2010. Large-scale machine learning with stochastic gradient descent; pp. 177–186.
2. Bottou Léon. Neural Networks: Tricks of the Trade. Springer; 2012. Stochastic gradient descent tricks; pp. 421–436.
3. Bottou Léon, Bousquet Olivier. The tradeoffs of large scale learning. In: Platt JC, Koller D, Singer Y, Roweis S, editors. NIPS. Vol. 20. NIPS Foundation; 2008. pp. 161–168.
4. De Sa Christopher, Olukotun Kunle, Ré Christopher. Global convergence of stochastic gradient descent for some non-convex matrix problems. ICML. 2015
5. Duchi John C, Bartlett Peter L, Wainwright Martin J. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization. 2012;22(2):674–701.
6. Fercoq Olivier, Richtárik Peter. Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799. 2013
7. Fleming Thomas R, Harrington David P. Counting processes and survival analysis. Vol. 169. John Wiley & Sons; 1991. pp. 56–57.
8. Gupta Pankaj, Goel Ashish, Lin Jimmy, Sharma Aneesh, Wang Dong, Zadeh Reza. WTF: The who to follow service at twitter. WWW’ 13. 2013:505–514.
9. Gupta Suyog, Agrawal Ankur, Gopalakrishnan Kailash, Narayanan Pritish. Deep learning with limited numerical precision. ICML. 2015
10. Jain Prateek, Netrapalli Praneeth, Sanghavi Sujay. STOC. ACM; 2013. Low-rank matrix completion using alternating minimization; pp. 665–674.
11. Johansson Björn, Rabi Maben, Johansson Mikael. A randomized incremental subgradient method for distributed optimization in networked systems. SIAM Journal on Optimization. 2009;20(3):1157–1170.
12. Konecný Jakub, Qu Zheng, Richtárik Peter. S2cd: Semi-stochastic coordinate descent. NIPS Optimization in Machine Learning workshop. 2014
13. Le Cun Yann, Bottou Léon, Orr Genevieve B, Müller Klaus-Robert. Efficient backprop. Neural Networks, Tricks of the Trade. 1998
14. Liu Ji, Wright Stephen J. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIOPT. 2015;25(1):351–376.
15. Liu Ji, Wright Stephen J, Ré Christopher, Bittorf Victor, Sridhar Srikrishna. An asynchronous parallel stochastic coordinate descent algorithm. JMLR. 2015;16:285–322.
16. Mitliagkas Ioannis, Borokhovich Michael, Dimakis Alexandros G, Caramanis Constantine. Frogwild!: Fast pagerank approximations on graph engines. PVLDB. 2015
17. Niu Feng, Recht Benjamin, Re Christopher, Wright Stephen. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. NIPS. 2011:693–701.
18. Noel Cyprien, Osindero Simon. Dogwild!-Distributed Hogwild for CPU & GPU. 2014
19. Puthiya Parambath Shameem Ahamed. Matrix factorization methods for recommender systems. 2013
20. Rakhlin Alexander, Shamir Ohad, Sridharan Karthik. Making gradient descent optimal for strongly convex stochastic optimization. ICML. 2012
21. Richtárik Peter, Takáč Martin. Parallel coordinate descent methods for big data optimization. Mathematical Programming. 2012:1–52.
22. Tao Qing, Kong Kang, Chu Dejun, Wu Gaowei. Machine Learning and Knowledge Discovery in Databases. Springer; 2012. Stochastic coordinate descent methods for regularized smooth and nonsmooth losses; pp. 537–552.
23. Tappenden Rachael, Takáč Martin, Richtárik Peter. On the complexity of parallel coordinate descent. arXiv preprint arXiv:1503.03033. 2015
24. Yu Hsiang-Fu, Hsieh Cho-Jui, Si Si, Dhillon Inderjit S. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. ICDM. 2012:765–774.
25. Zhang Ce, Re Christopher. Dimmwitted: A study of main-memory statistical analytics. PVLDB. 2014