## A Coordinate Gradient Descent Method for Nonsmooth Nonseparable Minimization

Numer. Math. Theor. Meth. Appl., Vol. 2, No. 4, pp. 377-402, November 2009. doi: 10.4208/nmtma.2009.m9002s

Zheng-Jian Bai^1, Michael K. Ng^{2,*} and Liqun Qi^3

^1 School of Mathematical Sciences, Xiamen University, Xiamen 361005, China.
^2 Centre for Mathematical Imaging and Vision and Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong.
^3 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong.

Received 18 February 2009; Accepted (in revised version) 9 June 2009

Abstract. This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.

AMS subject classifications: 65F22, 65K05

Key words: Coordinate descent, global convergence, linear convergence rate.

1. Introduction

We consider a nonsmooth optimization problem of minimizing the sum of a smooth function and a convex nonseparable function:

$$\min_x \; F_c(x) \stackrel{\text{def}}{=} f(x) + cP(x), \qquad (1.1)$$

where $c > 0$, $P : \mathbb{R}^n \to (-\infty, \infty]$ is a proper, convex, lower semicontinuous (lsc) function, and $f$ is smooth (i.e., continuously differentiable) on an open subset of $\mathbb{R}^n$ containing $\operatorname{dom} P = \{x \mid P(x) < \infty\}$. In this paper, we assume that $P$ is a nonseparable function of the form $P(x) := \|Lx\|_1$, where $L \neq I$ is preferably a sparse matrix. In particular, we focus on a special case of (1.1) defined by

$$\min_x \; F_c(x) \stackrel{\text{def}}{=} f(x) + c\|Lx\|_1, \qquad (1.2)$$

*Corresponding author. Email addresses: zjbai@xmu.edu.cn (Z.-J. Bai), mng@math.hkbu.edu.hk (M. K. Ng), maqilq@polyu.edu.hk (L. Qi)

http://www.global-sci.org/nmtma


© 2009 Global-Science Press


where $L$ is a first-order or second-order differentiation matrix. Problem (1.1) with $P(x) = \|x\|_1$ and problem (1.2) arise in many applications, including compressed sensing [9, 13, 24], signal/image restoration [5, 19, 23], data mining/classification [3, 14, 21], and parameter estimation [8, 20]. There has been considerable discussion of problem (1.1); see, for instance, [2, 6, 7, 11, 15]. If $P$ is also smooth, a coordinate gradient descent method based on an Armijo-type rule was developed for the unconstrained minimization problem (1.1) in Karmanov [10, pp. 190-196 and pp. 246-250], where global convergence and a geometric convergence rate are obtained under the assumption that $F_c(x)$ is strongly convex. Recently, Tseng and Yun [22] gave a coordinate gradient descent method with stepsize chosen by an Armijo-type rule for problem (1.1) under the assumption that $P$ is (block) separable, where the coordinates are updated by the Gauss-Seidel rule, the Gauss-Southwell-r rule, or the Gauss-Southwell-q rule. Moreover, global convergence and linear convergence of this method were established. However, the method cannot be employed to solve (1.2) directly, since $P(x) = \|Lx\|_1$ is no longer a (block) separable function.

Recently, various methods have been considered for image restoration/deblurring/denoising problems with $\ell_1$-regularization; see, for instance, [5, 17, 19, 23, 25]. In particular, Fu et al. [5] gave a primal-dual interior point method for solving the following optimization problem with $\ell_1$-regularization:

$$\min_x \; \|Ax - b\|_2^2 + c\|Lx\|_1, \qquad (1.3)$$

where $A$ is a linear blurring operator, $x$ is the original true image, and $b$ is the observed blurred image. In each interior point iteration, the linear system is solved by a preconditioned conjugate gradient method. However, the number of conjugate gradient iterations is still large, since the linear system is ill-conditioned and the performance of the preconditioner depends on the support of the blurring function and on how fast this function decays in the spatial domain. Osher et al. [17, 25] presented the Bregman iterative algorithm for solving (1.3) with $L$ being the identity matrix or the first-order differentiation matrix. In each Bregman iteration, an unconstrained convex subproblem must be solved.

In this paper, we aim to provide a coordinate gradient descent method with stepsize chosen by an Armijo-type rule to solve problems (1.2) and (1.3) efficiently, especially when the problem dimension is large. Our idea is to find a coordinate-wise search direction as the minimizer of a subproblem obtained from a strictly convex quadratic approximation of $f$ plus a separable function term (derived from $P(x) = \|Lx\|_1$). Then, we update the current iterate in the direction of this coordinate-wise minimizer. We will show that it can be made sufficiently close to the coordinate-wise minimizer of the subproblem formed by the same strictly convex quadratic approximation of $f$ plus $P(x) = \|Lx\|_1$ itself, provided the parameter $c$ is small enough. This approach is simple to implement and capable of solving large-scale problems. We show that our algorithm converges globally if the coordinates are chosen by the Gauss-Seidel rule, the Gauss-Southwell-r rule, or the Gauss-Southwell-q rule. Moreover, we prove that our approach with the Gauss-Southwell-q rule converges at least linearly under a local Lipschitzian error bound assumption. Numerical tests (including image recovery examples) show the efficiency of the proposed method.

Throughout the paper, we use the following notations. $\mathbb{R}^n$ denotes the space of $n$-dimensional real column vectors, and $^T$ denotes transpose. For any $x \in \mathbb{R}^n$ and nonempty $J \subseteq N \stackrel{\text{def}}{=} \{1, \cdots, n\}$, $x_j$ denotes the $j$-th component of $x$, and $\|x\|_p = (\sum_{j=1}^n |x_j|^p)^{1/p}$ for $1 \le p \le \infty$. For simplicity, we write $\|x\| = \|x\|_2$. For $n \times n$ real symmetric matrices $A$ and $B$, we write $A \succeq B$ (respectively, $A \succ B$) to mean that $A - B$ is positive semidefinite (respectively, positive definite). $A_{JJ} = [A_{ij}]_{i,j \in J}$ denotes the principal submatrix of $A$ indexed by $J$. $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of $A$. We denote by $I$ the identity matrix and by $0_n$ the $n \times n$ matrix of zero entries. Unless otherwise specified, $\{x^k\}$ denotes the sequence $x^0, x^1, \cdots$ and, for any integer $\ell \ge 0$, $\{x^{k+\ell}\}_K$ denotes a subsequence $\{x^{k+\ell}\}_{k \in K}$ with $K \subseteq \{0, 1, \cdots\}$.

The paper is organized as follows. In Section 2, we give a coordinate gradient descent method for solving our problem. In Section 3, we establish the convergence of our method. In Section 4, numerical examples are presented to illustrate the efficiency of the proposed method, and in Section 5 we apply our approach to image restoration problems. Finally, concluding remarks are given in Section 6.

2. The coordinate gradient descent method

In this section, we present the coordinate gradient descent algorithm for solving problems (1.2) and (1.3). Since $f$ is assumed to be smooth, we replace $f$ by a second-order approximation based on $\nabla f(x)$ at $x$. Then we generate a search direction $d$ by coordinate descent. In particular, given a nonempty index subset $J \subseteq N$ and a symmetric matrix $H \succ 0_n$ (approximating the Hessian $\nabla^2 f(x)$), we determine a search direction $d = d_H(x; J)$ by

$$d_H(x; J) \stackrel{\text{def}}{=} \arg\min_d \left\{ \nabla f(x)^T d + \frac{1}{2} d^T H d + c\|L(x + d)\|_1 \;\Big|\; d_j = 0 \ \forall j \notin J \right\}. \qquad (2.1)$$

Then, we compute the new iterate: $x^+ := x + \alpha d$, where $\alpha > 0$ is a stepsize. For simplicity, we select the stepsize $\alpha$ by the Armijo rule as in [22]. We point out that $d_H(x; J)$ depends on $H$ only through $H_{JJ}$.

It is still difficult to solve (2.1), since $\|Lx\|_1$ is nonseparable. Therefore, it is desirable to find an approximation of $d_H(x; J)$ by replacing $\|L(x + d)\|_1$ in (2.1) with a separable convex function. In particular, for problem (1.2), we may approximate $d_H(x; J)$ by

$$\tilde{d}_H(x; J) \stackrel{\text{def}}{=} \arg\min_d \left\{ \nabla f(x)^T d + \frac{1}{2} d^T H d + c\|L\|_1 \|x + d\|_1 \;\Big|\; d_j = 0 \ \forall j \notin J \right\}. \qquad (2.2)$$


Remark 2.1. Suppose that $H \succ 0_n$ is a diagonal matrix. It follows from (2.2) that the $j$-th component of $\tilde{d}_H(x; J)$ is given by

$$\tilde{d}_H(x; J)_j = -\operatorname{mid}\{(\nabla f(x)_j - c\|L\|_1)/H_{jj}, \; x_j, \; (\nabla f(x)_j + c\|L\|_1)/H_{jj}\}, \quad j \in J,$$

where $\operatorname{mid}\{e, f, g\}$ means the median of the scalars $e$, $f$, $g$.

On the closeness between $\tilde{d}_H(x; J)$ and $d_H(x; J)$, we establish the following result.

Proposition 2.1. Given $x \in \mathbb{R}^n$, nonempty $J \subseteq N$ and $H \succ \lambda I \succ 0_n$. For problem (1.2), let $d = d_H(x; J)$ and $\tilde{d} = \tilde{d}_H(x; J)$ be the solutions of (2.1) and (2.2), respectively. Then

$$\|\tilde{d} - d\| \le \frac{2\sqrt{n}\,\|L\|_1}{\lambda}\, c. \qquad (2.3)$$
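Since the paper later evaluates the median formula of Remark 2.1 with a diagonal $H$ (Sections 4-5), it may help to see it spelled out. The following NumPy sketch is ours, not the authors'; the function name and calling convention are illustrative:

```python
import numpy as np

def dtilde(x, g, Hdiag, cL1, J):
    """Componentwise minimizer of (2.2) for diagonal H (Remark 2.1).

    x, g  : current iterate and gradient grad f(x)
    Hdiag : diagonal of H (all entries > 0)
    cL1   : the scalar c * ||L||_1
    J     : index array of coordinates allowed to move
    """
    d = np.zeros_like(x)
    lo = (g[J] - cL1) / Hdiag[J]
    hi = (g[J] + cL1) / Hdiag[J]
    # mid{lo, x_j, hi} is the elementwise median of the three values
    d[J] = -np.median(np.stack([lo, x[J], hi]), axis=0)
    return d
```

Each component solves the scalar problem $\min_d \; g_j d + \tfrac12 H_{jj} d^2 + c\|L\|_1 |x_j + d|$, whose minimizer is exactly $-\operatorname{mid}\{(g_j - c\|L\|_1)/H_{jj},\, x_j,\, (g_j + c\|L\|_1)/H_{jj}\}$.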

Proof. Let $g = \nabla f(x)$. By the definition of $d$ and $\tilde{d}$ and Fermat's rule [18, Theorem 10.1],

$$d \in \arg\min_w \; (g + Hd)^T w + c\|L(x + w)\|_1,$$

$$\tilde{d} \in \arg\min_w \; (g + H\tilde{d})^T w + c\|L\|_1 \|x + w\|_1.$$

Thus

$$(g + Hd)^T d + c\|L(x + d)\|_1 \le (g + Hd)^T \tilde{d} + c\|L(x + \tilde{d})\|_1,$$

$$(g + H\tilde{d})^T \tilde{d} + c\|L\|_1 \|x + \tilde{d}\|_1 \le (g + H\tilde{d})^T d + c\|L\|_1 \|x + d\|_1.$$

Adding the above two inequalities and simplifying gives rise to

$$(\tilde{d} - d)^T H (\tilde{d} - d) \le c\left(\|L(x + \tilde{d})\|_1 - \|L(x + d)\|_1\right) + c\|L\|_1 \left(\|x + d\|_1 - \|x + \tilde{d}\|_1\right) \le c\|L(\tilde{d} - d)\|_1 + c\|L\|_1 \|\tilde{d} - d\|_1 \le 2c\|L\|_1 \|\tilde{d} - d\|_1 \le 2c\sqrt{n}\,\|L\|_1 \|\tilde{d} - d\|.$$

It follows from $H \succ \lambda I$ and the above inequality that

$$\lambda \|\tilde{d} - d\|^2 \le 2c\sqrt{n}\,\|L\|_1 \|\tilde{d} - d\|.$$

Dividing both sides by $\lambda\|\tilde{d} - d\|$ yields (2.3). $\square$

Remark 2.2. From Proposition 2.1, we see that $\tilde{d}_H(x; J)$ is sufficiently close to $d_H(x; J)$ when the parameter $c$ is small enough. Therefore, in practice, we may replace $d_H(x; J)$ by $\tilde{d}_H(x; J)$, since the latter is easily computable.

Based on the convexity of the function $\|Lx\|_1$, we have the following descent lemma.


Lemma 2.1. For any $x \in \mathbb{R}^n$, nonempty $J \subseteq N$ and $H \succ 0_n$, let $d = d_H(x; J)$ and $g = \nabla f(x)$. Then

$$F_c(x + \alpha d) \le F_c(x) + \alpha \left( g^T d + c\|L(x + d)\|_1 - c\|Lx\|_1 \right) + o(\alpha) \quad \forall \alpha \in (0, 1], \qquad (2.4)$$

$$g^T d + c\|L(x + d)\|_1 - c\|Lx\|_1 \le -d^T H d. \qquad (2.5)$$

Proof. It follows from a proof similar to that of [22, Lemma 2.1]. $\square$

Next, we state the coordinate gradient descent (CGD) method for solving (1.2) as follows.

Algorithm 2.1. (CGD method)

Step 0. Given $x^0 \in \mathbb{R}^n$, set $k := 0$.

Step 1. Choose a nonempty $J^k \subseteq N$ and an $H^k \succ 0_n$.

Step 2. Solve (2.1) with $x = x^k$, $J = J^k$, $H = H^k$ to obtain $d^k = d_{H^k}(x^k; J^k)$.

Step 3. Choose $\alpha^k_{\text{init}} > 0$ and let $\alpha^k$ be the largest element of $\{\alpha^k_{\text{init}} \beta^j\}_{j=0,1,\cdots}$ satisfying

$$F_c(x^k + \alpha^k d^k) \le F_c(x^k) + \alpha^k \theta \Delta^k, \qquad (2.6)$$

where $0 < \beta < 1$, $0 < \theta < 1$, $0 \le \gamma < 1$, and

$$\Delta^k \stackrel{\text{def}}{=} \nabla f(x^k)^T d^k + \gamma (d^k)^T H^k d^k + c\|L(x^k + d^k)\|_1 - c\|Lx^k\|_1. \qquad (2.7)$$

Step 4. Define $x^{k+1} := x^k + \alpha^k d^k$. Then replace $k$ by $k + 1$ and go to Step 1.

In Algorithm 2.1, we need to choose an appropriate index subset $J^k$. As in [22], we may choose the index subset $J^k$ in a Gauss-Seidel manner. Based on the definition of $L$ in (1.2), let $J^0, J^1, \cdots$ cover $1, 2, \cdots, n$ every $s$ consecutive iterations, i.e.,

$$J^k \cup J^{k+1} \cup \cdots \cup J^{k+s-1} = \{1, 2, \cdots, n\}, \quad k = 0, 1, \cdots,$$

where $J^k$ includes the linear indices corresponding to the nonzero entries of the row vectors of the matrix $L$.

We may also choose the index subset $J^k$ by the Gauss-Southwell-r rule. That is, we select $J^k$ such that

$$\|d_{D^k}(x^k; J^k)\|_\infty \ge \nu \|d_{D^k}(x^k)\|_\infty,$$

where $0 < \nu \le 1$, $D^k \succ 0_n$ is diagonal, and $d_D(x) \stackrel{\text{def}}{=} d_D(x; N)$.

Finally, we can use the Gauss-Southwell-q rule to choose the index subset $J^k$, i.e.,

$$q_{D^k}(x^k; J^k) \le \nu\, q_{D^k}(x^k), \qquad (2.8)$$
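Algorithm 2.1, with the practical simplifications the paper itself recommends (a diagonal $H^k$, the surrogate direction (2.2), and $J^k = N$), can be sketched in a few lines of NumPy. This is our illustrative transcription, not the authors' code; the function name, defaults, and stopping test are assumptions:

```python
import numpy as np

def cgd_sketch(f, gradf, hessdiag, L, c, x0, max_iter=200,
               theta=0.1, beta=0.5, gamma=0.0, tol=1e-6):
    """Minimal sketch of Algorithm 2.1 for F_c(x) = f(x) + c*||Lx||_1.

    The direction subproblem (2.1) is replaced by its separable
    surrogate (2.2), which is justified for small c (Proposition 2.1);
    hessdiag(x) returns the diagonal of H^k, and J^k = N throughout.
    """
    cL1 = c * np.abs(L).sum(axis=0).max()            # c * ||L||_1
    Fc = lambda z: f(z) + c * np.abs(L @ z).sum()
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g, h = gradf(x), hessdiag(x)
        # Remark 2.1: d_j = -mid{(g_j - c||L||_1)/h_j, x_j, (g_j + c||L||_1)/h_j}
        d = -np.median(np.stack([(g - cL1) / h, x, (g + cL1) / h]), axis=0)
        if np.linalg.norm(d) < tol:
            break
        # Delta^k from (2.7), with the true (nonseparable) penalty
        delta = g @ d + gamma * d @ (h * d) \
            + c * (np.abs(L @ (x + d)).sum() - np.abs(L @ x).sum())
        alpha = 1.0                                   # Armijo rule (2.6)
        while Fc(x + alpha * d) > Fc(x) + alpha * theta * delta and alpha > 1e-15:
            alpha *= beta
        x = x + alpha * d
    return x
```

Note that acceptance is tested on the true objective $F_c$; only the direction uses the separable surrogate.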


where $0 < \nu \le 1$, $D^k \succ 0_n$ is diagonal, and

$$q_D(x; J) \stackrel{\text{def}}{=} \left[ \nabla f(x)^T d + \frac{1}{2} d^T D d + c\|L(x + d)\|_1 - c\|Lx\|_1 \right]_{d = d_D(x; J)}. \qquad (2.9)$$

Then $q_D(x; J)$ gives an estimate of the descent in $F_c$ from $x$ to $x + d_D(x; J)$. By Lemma 2.1, we have

$$q_D(x) \stackrel{\text{def}}{=} q_D(x; N) \le -\frac{1}{2} d_D(x)^T D d_D(x) \le 0.$$

Remark 2.3. We observe that it is still difficult to find an index subset $J^k$ satisfying condition (2.8), since $\|Lx\|_1$ is not separable. From Proposition 2.1, we know that $\tilde{d}_D(x; J) \to d_D(x; J)$ as $c \to 0$. Therefore, for small $c > 0$, we can choose the index subset $J^k$ such that

$$\tilde{q}_{D^k}(x^k; J^k) \le \mu\, \tilde{q}_{D^k}(x^k) \quad \text{for some } 0 < \mu \le 1, \qquad (2.10)$$

where

$$\tilde{q}_D(x; J) \stackrel{\text{def}}{=} \left[ \nabla f(x)^T d + \frac{1}{2} d^T D d + c\|L\|_1 \left( \|x + d\|_1 - \|x\|_1 \right) \right]_{d = \tilde{d}_D(x; J)}, \qquad (2.11)$$

and then check whether condition (2.8) holds for the selected $J^k$ and $d^k = \tilde{d}_{D^k}(x^k; J^k)$. The CGD method with the Gauss-Southwell-r rule is then very simple and can be used for large-scale applications in signal/image restoration, etc. From the numerical experiments in Sections 4 and 5, we observe that this is very efficient when the parameter $c$ comes near to zero (but is not necessarily too small).

Remark 2.4. In Step 2 of Algorithm 2.1, we need to solve (2.1) for $d^k = d_{H^k}(x^k; J^k)$. By Proposition 2.1, we may approximate $d_{H^k}(x^k; J^k)$ by $\tilde{d}_{H^k}(x^k; J^k)$ defined in (2.2) if the parameter $c$ is sufficiently small. The solution $\tilde{d}_{H^k}(x^k; J^k)$ of (2.2) has an explicit expression if $H^k \succ 0_n$ is diagonal; see Remark 2.1. From the numerical tests in Sections 4 and 5, one can see that $\tilde{d}_{H^k}(x^k; J^k)$ is an effective approximation to $d_{H^k}(x^k; J^k)$ when $c$ is as small as is acceptable in practice.

Finally, in Step 3 of Algorithm 2.1, we use the Armijo rule for choosing the stepsize $\alpha^k$. In this step, we need only function evaluations. In practice, we can keep the number of function evaluations small if $\alpha^k_{\text{init}}$ is chosen based on the previous stepsize $\alpha^{k-1}$.

We will see that all three of the above rules yield global convergence of the CGD method for problem (1.1). We will also show that the CGD method with the Gauss-Southwell-q rule attains at least a linear convergence rate under a local Lipschitzian error bound assumption.

2.1. Properties of search direction

In this subsection, we discuss the properties of the search direction $d_H(x; J)$. These properties will be employed in the proofs of global convergence and of the local convergence rate of the CGD method.


On the sensitivity of $d_H(x; J)$ with respect to the quadratic coefficient $H$, we have the following lemma. Since the proof is similar to that of Lemma 3 in [22], we omit it.

Lemma 2.2. For any $x \in \mathbb{R}^n$, nonempty $J \subseteq N$, and $H \succ 0_n$, $\tilde{H} \succ 0_n$, let $d = d_H(x; J)$ and $\tilde{d} = d_{\tilde{H}}(x; J)$. Then

$$\|\tilde{d}\| \le \frac{1}{2} \left( 1 + \lambda_{\max}(R) + \sqrt{1 - 2\lambda_{\min}(R) + \lambda_{\max}(R)^2} \right) \frac{\lambda_{\max}(H_{JJ})}{\lambda_{\min}(\tilde{H}_{JJ})} \|d\|,$$

where $R = H_{JJ}^{-1/2} \tilde{H}_{JJ} H_{JJ}^{-1/2}$. If $H_{JJ} \succ \tilde{H}_{JJ}$, then also

$$\|d\| \le \sqrt{\frac{\lambda_{\max}(H_{JJ} - \tilde{H}_{JJ})}{\lambda_{\min}(H_{JJ} - \tilde{H}_{JJ})}} \, \|\tilde{d}\|.$$

The following result concerns stepsizes satisfying the Armijo descent condition (2.6); it can be proved by following the proof of [22, Lemma 5(b)].

Lemma 2.3. For any $x \in \mathbb{R}^n$, $H \succ 0_n$, and nonempty $J \subseteq N$, let $d = d_H(x; J)$ and $g = \nabla f(x)$. For any $\gamma \in [0, 1)$, let $\Delta = g^T d + \gamma d^T H d + c\|L(x + d)\|_1 - c\|Lx\|_1$. If $f$ satisfies

$$\|\nabla f(y) - \nabla f(z)\| \le \zeta \|y - z\| \quad \forall y, z \in \mathbb{R}^n \qquad (2.12)$$

for some $\zeta > 0$, and $H \succeq \lambda I$, where $\lambda > 0$, then the descent condition $F_c(x + \alpha d) - F_c(x) \le \theta \alpha \Delta$ is satisfied for any $\theta \in (0, 1)$ whenever $0 \le \alpha \le \min\{1, 2\lambda(1 - \theta + \theta\gamma)/\zeta\}$.

3. Convergence analysis

In this section, we establish the global and linear convergence of Algorithm 2.1. As in [22], $x \in \mathbb{R}^n$ is called a stationary point of $F_c$ if $x \in \operatorname{dom} F_c$ and $F_c'(x; d) \ge 0$ for all $d \in \mathbb{R}^n$.

Assumption 3.1. $\lambda I \preceq H^k \preceq \bar{\lambda} I$ for all $k$, where $0 < \lambda \le \bar{\lambda}$.

By following arguments similar to those of [22, Theorem 1], we can show the following global convergence of Algorithm 2.1.

Lemma 3.1. Let $\{x^k\}$, $\{d^k\}$, $\{H^k\}$ be sequences generated by the CGD method under Assumption 3.1, where $\inf_k \alpha^k_{\text{init}} > 0$. Then the following results hold.

(a) $\{F_c(x^k)\}$ is nonincreasing and $\Delta^k$ given by (2.7) satisfies

$$-\Delta^k \ge (1 - \gamma)(d^k)^T H^k d^k \ge (1 - \gamma)\lambda \|d^k\|^2 \quad \forall k, \qquad (3.1)$$

$$F_c(x^{k+1}) - F_c(x^k) \le \theta \alpha^k \Delta^k \le 0 \quad \forall k. \qquad (3.2)$$


(b) If $\{x^k\}_K$ is a convergent subsequence of $\{x^k\}$, then $\{\alpha^k \Delta^k\} \to 0$ and $\{d^k\}_K \to 0$. If in addition $\delta I \preceq D^k \preceq \bar{\delta} I$ for all $k$, where $0 < \delta \le \bar{\delta}$, then $\{d_{D^k}(x^k; J^k)\}_K \to 0$.

(c) If $\{J^k\}$ is chosen by the Gauss-Southwell-q rule (2.8), $\delta I \preceq D^k \preceq \bar{\delta} I$ for all $k$, where $0 < \delta \le \bar{\delta}$, and either (1) $\|Lx\|_1$ is continuous on $\mathbb{R}^n$ or (2) $\inf_k \alpha^k > 0$ or (3) $\alpha^k_{\text{init}} = 1$ for all $k$, then every cluster point of $\{x^k\}$ is a stationary point of $F_c$.

(d) If $f$ satisfies (2.12) for some $\zeta \ge 0$, then $\inf_k \alpha^k > 0$. If furthermore $\lim_{k\to\infty} F_c(x^k) > -\infty$, then $\{\Delta^k\} \to 0$ and $\{d^k\} \to 0$.

Now we establish the convergence rate of the CGD method for $\{J^k\}$ chosen by the Gauss-Southwell-q rule (2.8). We need the following assumption as in [22]. In the following, $\bar{X}$ denotes the set of stationary points of $F_c$ and

$$\operatorname{dist}(x, \bar{X}) = \min_{\bar{x} \in \bar{X}} \|x - \bar{x}\| \quad \forall x \in \mathbb{R}^n.$$

Assumption 3.2. (a) $\bar{X} \neq \emptyset$ and, for any $\xi \ge \min_x F_c(x)$, there exist scalars $\tau > 0$ and $\varepsilon > 0$ such that

$$\operatorname{dist}(x, \bar{X}) \le \tau \|d_I(x)\| \quad \text{whenever } F_c(x) \le \xi, \ \|d_I(x)\| \le \varepsilon.$$

(b) There exists a scalar $\delta > 0$ such that

$$\|x - y\| \ge \delta \quad \text{whenever } x, y \in \bar{X}, \ F_c(x) \neq F_c(y).$$

On the asymptotic convergence rate of the CGD method under Assumption 3.2, we have the following theorem. The proof technique is taken from [22], but takes into account the nonseparability of $\|Lx\|_1$.

Theorem 3.1. Assume that $f$ satisfies (2.12) for some $\zeta \ge 0$. Let $\{x^k\}$, $\{H^k\}$, $\{d^k\}$ be sequences generated by the CGD method satisfying Assumption 3.1, where $\{J^k\}$ is chosen by the Gauss-Southwell-q rule (2.8) and $\delta I \preceq D^k \preceq \bar{\delta} I$ for all $k$ ($0 < \delta \le \bar{\delta}$). If $F_c$ satisfies Assumption 3.2, $\sup_k \alpha^k_{\text{init}} \le 1$, and $\inf_k \alpha^k_{\text{init}} > 0$, then either $\{F_c(x^k)\} \downarrow -\infty$ or $\{F_c(x^k)\}$ converges at least Q-linearly and $\{x^k\}$ converges at least R-linearly.

Proof. For each $k = 0, 1, \cdots$ and $d^k = d_{H^k}(x^k; J^k)$, by (2.7)-(2.9), we have

$$\Delta^k + \left( \frac{1}{2} - \gamma \right)(d^k)^T H^k d^k = (g^k)^T d^k + \frac{1}{2}(d^k)^T H^k d^k + c\|L(x^k + d^k)\|_1 - c\|Lx^k\|_1 \\ \le (g^k)^T \tilde{d}^k + \frac{1}{2}(\tilde{d}^k)^T H^k \tilde{d}^k + c\|L(x^k + \tilde{d}^k)\|_1 - c\|Lx^k\|_1 \\ \le q_{D^k}(x^k; J^k) + \frac{1}{2}(\tilde{d}^k)^T (H^k - D^k)\tilde{d}^k \\ \le q_{D^k}(x^k; J^k) + (\bar{\lambda} - \delta)\|\tilde{d}^k\|^2, \qquad (3.3)$$


where $\tilde{d}^k := d_{D^k}(x^k; J^k)$. For each $k$,

$$\frac{\delta}{\bar{\lambda}} I \preceq \delta (H^k_{J^k J^k})^{-1} \preceq (H^k_{J^k J^k})^{-1/2} D^k_{J^k J^k} (H^k_{J^k J^k})^{-1/2} \preceq \bar{\delta} (H^k_{J^k J^k})^{-1} \preceq \frac{\bar{\delta}}{\lambda} I.$$

Then, by Lemma 2.2, we obtain

$$\|\tilde{d}^k\| \le \frac{\left( 1 + \bar{\delta}/\lambda + \sqrt{1 - 2\delta/\bar{\lambda} + (\bar{\delta}/\lambda)^2} \right) \bar{\lambda}}{2\delta} \|d^k\|.$$

This, together with (3.3), implies that

$$\Delta^k + \left( \frac{1}{2} - \gamma \right)(d^k)^T H^k d^k \le q_{D^k}(x^k; J^k) + \omega \|d^k\|^2, \qquad (3.4)$$

where $\omega$ is a constant depending only on $\bar{\lambda}$, $\lambda$, $\bar{\delta}$, and $\delta$.

By Lemma 3.1(a), $\{F_c(x^k)\}$ is nonincreasing. Then either $\{F_c(x^k)\} \downarrow -\infty$ or $\lim_{k\to\infty} F_c(x^k) > -\infty$. Suppose the latter. Since $\inf_k \alpha^k_{\text{init}} > 0$, Lemma 3.1(d) implies that $\inf_k \alpha^k > 0$, $\{\Delta^k\} \to 0$, and $\{d^k\} \to 0$. Since $\{H^k\}$ is bounded by Assumption 3.1, by (3.4) we get $\liminf_{k\to\infty} q_{D^k}(x^k; J^k) \ge 0$. By (2.9) and (2.5) in Lemma 2.1, we have

$$q_{D^k}(x^k) \le -\frac{1}{2} d_{D^k}(x^k)^T D^k d_{D^k}(x^k) \le 0.$$

Also, using (2.8), we obtain $\{d_{D^k}(x^k)\} \to 0$. By Lemma 2.2 with $H = D^k$ and $\tilde{H} = I$,

$$\|d_I(x^k)\| \le \frac{\left( 1 + 1/\delta + \sqrt{1 - 2/\bar{\delta} + (1/\delta)^2} \right) \bar{\delta}}{2} \|d_{D^k}(x^k)\| \quad \forall k. \qquad (3.5)$$

Thus $\{d_I(x^k)\} \to 0$. Since $\{F_c(x^k)\}$ is nonincreasing, it follows that $F_c(x^k) \le F_c(x^0)$ and $\|d_I(x^k)\| \le \varepsilon$ for all $k \ge$ some $\bar{k}$. Then, by Assumption 3.2(a), we get

$$\|x^k - \bar{x}^k\| \le \tau \|d_I(x^k)\| \quad \forall k \ge \bar{k}, \qquad (3.6)$$

where $\tau > 0$ and $\bar{x}^k \in \bar{X}$ satisfies $\|x^k - \bar{x}^k\| = \operatorname{dist}(x^k, \bar{X})$. Since $\{d_I(x^k)\} \to 0$, this implies that $\{x^k - \bar{x}^k\} \to 0$. Since $\{x^{k+1} - x^k\} = \{\alpha^k d^k\} \to 0$, this and Assumption 3.2(b) imply that $\{\bar{x}^k\}$ eventually settles down on some isocost surface of $F_c$, i.e., there exist an index $\hat{k} \ge \bar{k}$ and a scalar $\bar{v}$ such that

$$F_c(\bar{x}^k) = \bar{v} \quad \forall k \ge \hat{k}. \qquad (3.7)$$

Fix any index $k$ with $k \ge \hat{k}$. Since $\bar{x}^k$ is a stationary point of $F_c$, we obtain

$$\nabla f(\bar{x}^k)^T (x^k - \bar{x}^k) + c\|Lx^k\|_1 - c\|L\bar{x}^k\|_1 \ge 0.$$

By the Mean Value Theorem, there exists some $\psi^k$ lying on the line segment joining $x^k$ with $\bar{x}^k$ such that

$$f(x^k) - f(\bar{x}^k) = \nabla f(\psi^k)^T (x^k - \bar{x}^k).$$


Since $x^k, \bar{x}^k \in \mathbb{R}^n$, so is $\psi^k$. Combining these two relations and using (3.7), we get

$$\bar{v} - F_c(x^k) \le \left( \nabla f(\bar{x}^k) - \nabla f(\psi^k) \right)^T (x^k - \bar{x}^k) \le \|\nabla f(\bar{x}^k) - \nabla f(\psi^k)\| \, \|x^k - \bar{x}^k\| \le \zeta \|x^k - \bar{x}^k\|^2,$$

where the last inequality uses (2.12), the convexity of $\mathbb{R}^n$, and $\|\psi^k - \bar{x}^k\| \le \|x^k - \bar{x}^k\|$. This, together with $\{x^k - \bar{x}^k\} \to 0$, shows that

$$\liminf_{k\to\infty} F_c(x^k) \ge \bar{v}. \qquad (3.8)$$

Fix any $k \ge \hat{k}$. Letting $J = J^k$, we have

$$F_c(x^{k+1}) - \bar{v} = f(x^{k+1}) + c\|Lx^{k+1}\|_1 - f(\bar{x}^k) - c\|L\bar{x}^k\|_1 \\ = \nabla f(\tilde{x}^k)^T (x^{k+1} - \bar{x}^k) + c\|Lx^{k+1}\|_1 - c\|L\bar{x}^k\|_1 \\ = (\nabla f(\tilde{x}^k) - g^k)^T (x^{k+1} - \bar{x}^k) - (D^k d^k)^T (x^{k+1} - \bar{x}^k) + \gamma^k,$$

where the second equality uses the Mean Value Theorem, with $\tilde{x}^k$ a point lying on the segment joining $x^{k+1}$ with $\bar{x}^k$, and

$$\gamma^k = (g^k + D^k d^k)^T (x^{k+1} - \bar{x}^k) + c\|Lx^{k+1}\|_1 - c\|L\bar{x}^k\|_1 \\ = (g^k + D^k d^k)^T (x^k - \bar{x}^k) + \alpha^k (g^k + D^k d^k)^T d^k + c\|Lx^{k+1}\|_1 - c\|L\bar{x}^k\|_1 \\ \le (g^k + D^k d^k)^T (x^k - \bar{x}^k) + \alpha^k c\|Lx^k\|_1 - \alpha^k c\|L(x^k + d^k)\|_1 + c\|Lx^{k+1}\|_1 - c\|L\bar{x}^k\|_1 \\ = (g^k + D^k d^k)^T (x^k - \bar{x}^k) + \alpha^k c\|Lx^k\|_1 - \alpha^k c\|L(x^k + d^k)\|_1 + c\|L(\alpha^k(x^k + d^k) + (1 - \alpha^k)x^k)\|_1 - c\|L\bar{x}^k\|_1 \\ \le (g^k + D^k d^k)^T (x^k - \bar{x}^k) + \alpha^k c\|Lx^k\|_1 - \alpha^k c\|L(x^k + d^k)\|_1 + \alpha^k c\|L(x^k + d^k)\|_1 + (1 - \alpha^k) c\|Lx^k\|_1 - c\|L\bar{x}^k\|_1 \\ = (g^k + D^k d^k)^T (x^k - \bar{x}^k) + c\|Lx^k\|_1 - c\|L\bar{x}^k\|_1 \\ = (D^k d^k)^T (x^k - \bar{x}^k) - \left( \tfrac{1}{2} D^k d_{D^k}(x^k) \right)^T (x^k - \bar{x}^k) + \left( g^k + \tfrac{1}{2} D^k d_{D^k}(x^k) \right)^T (x^k - \bar{x}^k) - c\|L\bar{x}^k\|_1 + c\|Lx^k\|_1.$$

In the second step above, we used $x^{k+1} - \bar{x}^k = x^k - \bar{x}^k + \alpha^k d^k$; the third step uses (2.5) in Lemma 2.1 (applied to $x^k$, $D^k$, and $J^k$); the fifth step uses the convexity of $\|Lx\|_1$ and $\alpha^k \le \alpha^k_{\text{init}} \le 1$.


Combining the above results, we obtain

$$F_c(x^{k+1}) - \bar{v} \le \zeta \|\tilde{x}^k - x^k\| \, \|x^{k+1} - \bar{x}^k\| + \bar{\delta} \|d^k\| \, \|x^{k+1} - \bar{x}^k\| + \bar{\delta} \|d^k\| \, \|x^k - \bar{x}^k\| + \frac{1}{2} \bar{\delta} \|d_{D^k}(x^k)\| \, \|x^k - \bar{x}^k\| - q_{D^k}(x^k), \qquad (3.9)$$

where (2.12), $0_n \prec D^k \preceq \bar{\delta} I$ and (2.9) have been used.

By the inequalities $\|\tilde{x}^k - x^k\| \le \|x^{k+1} - x^k\| + \|x^k - \bar{x}^k\|$, $\|x^{k+1} - \bar{x}^k\| \le \|x^{k+1} - x^k\| + \|x^k - \bar{x}^k\|$, and $\|x^{k+1} - x^k\| = \alpha^k \|d^k\|$, we observe from (3.6) and $D^k \succ 0_n$ that the right-hand side of (3.9) is bounded above by

$$C_1 \left( \|d^k\| + \|d_{D^k}(x^k)\| + \|d_I(x^k)\| \right)^2 - q_{D^k}(x^k) \qquad (3.10)$$

for all $k \ge \hat{k}$, where $C_1 > 0$ is some constant depending on $\zeta$, $\tau$, $\bar{\delta}$ only. By (3.5), the quantity in (3.10) is bounded above by

$$C_2 \|d^k\|^2 + C_2 \|d_{D^k}(x^k)\|^2 - q_{D^k}(x^k) \qquad (3.11)$$

for all $k \ge \hat{k}$, where $C_2 > 0$ is some constant depending on $\zeta$, $\tau$, $\delta$, $\bar{\delta}$ only. By (3.1), we have

$$\lambda \|d^k\|^2 \le (d^k)^T H^k d^k \le -\frac{1}{1 - \gamma} \Delta^k \quad \forall k. \qquad (3.12)$$

By (2.9) and $D^k \succeq \delta I$,

$$q_{D^k}(x^k) \le -\frac{1}{2} d_{D^k}(x^k)^T D^k d_{D^k}(x^k) \le -\frac{\delta}{2} \|d_{D^k}(x^k)\|^2 \quad \forall k.$$

Thus, the quantity in (3.11) is bounded above by

$$C_3 \left( -\Delta^k - q_{D^k}(x^k) \right) \qquad (3.13)$$

for all $k \ge \hat{k}$, where $C_3 > 0$ is some constant depending on $\zeta$, $\tau$, $\lambda$, $\delta$, $\bar{\delta}$, $\gamma$ only. By (2.8), we have

$$q_{D^k}(x^k; J^k) \le \nu\, q_{D^k}(x^k). \qquad (3.14)$$

By (3.4) and (3.12),

$$-q_{D^k}(x^k; J^k) \le -\Delta^k + \left( \gamma - \frac{1}{2} \right)(d^k)^T H^k d^k + \omega \|d^k\|^2 \\ \le -\Delta^k - \max\left\{ 0, \gamma - \frac{1}{2} \right\} \frac{1}{1 - \gamma} \Delta^k - \frac{\omega}{\lambda(1 - \gamma)} \Delta^k. \qquad (3.15)$$

Combining (3.14) and (3.15), the quantity in (3.13) is bounded above by

$$-C_4 \Delta^k \qquad (3.16)$$


for all $k \ge \hat{k}$, where $C_4 > 0$ is some constant depending on $\zeta$, $\tau$, $\lambda$, $\bar{\lambda}$, $\delta$, $\bar{\delta}$, $\gamma$, $\nu$ only. Therefore, the right-hand side of (3.9) is bounded above by $-C_4 \Delta^k$ for all $k \ge \hat{k}$. This, together with (3.2), (3.9), and $\inf_k \alpha^k > 0$, yields

$$F_c(x^{k+1}) - \bar{v} \le C_5 \left( F_c(x^k) - F_c(x^{k+1}) \right) \quad \forall k \ge \hat{k},$$

where $C_5 = C_4 / (\theta \inf_k \alpha^k)$. Upon rearranging terms and using (3.8), we get

$$0 \le F_c(x^{k+1}) - \bar{v} \le \frac{C_5}{1 + C_5} \left( F_c(x^k) - \bar{v} \right) \quad \forall k \ge \hat{k},$$

and so $\{F_c(x^k)\}$ converges to $\bar{v}$ at least Q-linearly. Finally, by (3.2), (3.12), and $x^{k+1} - x^k = \alpha^k d^k$, we obtain

$$\theta (1 - \gamma) \lambda \, \frac{\|x^{k+1} - x^k\|^2}{\alpha^k} \le F_c(x^k) - F_c(x^{k+1}) \quad \forall k \ge \hat{k}.$$

It follows that

$$\|x^{k+1} - x^k\| \le \sqrt{\frac{\sup_k \alpha^k}{\theta (1 - \gamma) \lambda} \left( F_c(x^k) - F_c(x^{k+1}) \right)} \quad \forall k \ge \hat{k}.$$

Since $\{F_c(x^k) - F_c(x^{k+1})\} \to 0$ at least R-linearly and $\sup_k \alpha^k \le 1$, $\{x^k\}$ converges at least R-linearly. $\square$

4. Numerical results

In this section, we present some numerical tests to illustrate the efficiency of Algorithm 2.1. All runs are carried out in MATLAB 7.6 on a notebook PC with a 2.5 GHz Intel(R) Core(TM)2 Duo CPU. We implement the CGD Algorithm 2.1 to solve the following minimization problem with $\ell_1$-regularization:

$$\min_x \; F_c(x) \stackrel{\text{def}}{=} f(x) + c\|Lx\|_1, \qquad (4.1)$$

where the matrix $L$ is given by

$$L = \begin{bmatrix} 1 & -1 & & & \\ & 1 & -1 & & \\ & & \ddots & \ddots & \\ & & & 1 & -1 \end{bmatrix} \in \mathbb{R}^{(n-1) \times n} \qquad (4.2)$$

or

$$L = \begin{bmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \end{bmatrix} \in \mathbb{R}^{n \times n}. \qquad (4.3)$$
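For reference, the two regularization matrices can be built in a few lines of NumPy. This is a dense illustrative sketch of ours; in practice one would use a sparse representation, since the paper's $L$ is sparse:

```python
import numpy as np

def first_difference(n):
    # L in (4.2): (n-1) x n forward-difference matrix with rows [1, -1]
    return np.eye(n - 1, n) - np.eye(n - 1, n, k=1)

def second_difference(n):
    # L in (4.3): n x n tridiagonal matrix with stencil [-1, 2, -1]
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```

For the first-difference matrix, the induced norm $\|L\|_1$ (maximum absolute column sum), which enters the surrogate subproblem (2.2), equals 2.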


Table 1: Nonlinear least squares test functions.

| Name | n | Description |
|------|------|-------------|
| BT | 1000 | Broyden tridiagonal function, nonconvex, with sparse Hessian |
| BAL | 1000 | Brown almost-linear function, nonconvex, with dense Hessian |
| TRIG | 1000 | Trigonometric function, nonconvex, with dense Hessian |
| LR1 | 1000 | $f(x) = \sum_{i=1}^n \big( i \sum_{j=1}^n j x_j - 1 \big)^2$, convex, with dense Hessian |
| LFR | 1000 | $f(x) = \sum_{i=1}^n \big( x_i - \frac{2}{n+1} \sum_{j=1}^n x_j - 1 \big)^2 + \big( \frac{2}{n+1} \sum_{j=1}^n x_j + 1 \big)^2$, strongly convex, with dense Hessian |
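As a concrete instance, the LFR entry of the table above can be transcribed directly into code (a NumPy sketch; the function name is ours):

```python
import numpy as np

def lfr(x):
    """LFR test function: strongly convex with a dense Hessian.

    f(x) = sum_i (x_i - (2/(n+1)) * sum_j x_j - 1)^2
           + ((2/(n+1)) * sum_j x_j + 1)^2
    """
    n = x.size
    s = 2.0 * x.sum() / (n + 1)     # the shared coupling term
    return ((x - s - 1.0) ** 2).sum() + (s + 1.0) ** 2
```

The shared term $s = \frac{2}{n+1}\sum_j x_j$ couples every coordinate, which is what makes the Hessian dense.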

For the function $f$ in (4.1), we choose several nonconvex and convex test functions with $n = 1000$ from the set of nonlinear least squares functions used by Moré et al. [16]. These functions are shown in Table 1. In our experiments, we choose the diagonal Hessian approximation

$$H^k = \operatorname{diag}\left[ \min\{\max\{\nabla^2 f(x^k)_{jj}, 10^{-2}\}, 10^9\} \right]_{j=1,\cdots,n}$$

for problem (4.1). When $\nabla^2 f$ is ill-conditioned, the Armijo descent condition (2.6) eventually cannot be satisfied by any $\alpha^k > 0$ due to cancellation error in the function evaluations. Also, in MATLAB 7.6, floating point subtraction is accurate up to 15 digits only. In these cases, no further progress is possible, so we exit the method when (2.6) remains unsatisfied after $\alpha^k$ reaches $10^{-15}$.

To use the CGD approach to solve our problems, we need to solve (2.1), which is not separable and thus poses computational challenges, especially for large-scale cases. By Remark 2.4, $\tilde{d}_H(x; J)$ defined in (2.2) is a good approximation to $d_H(x; J)$ defined in (2.1) if $c$ is close to zero. Thus we can replace $d_H(x; J)$ by $\tilde{d}_H(x; J)$ in practice. In addition, since the Hessian approximation $H \succ 0_n$ is chosen to be diagonal, $\tilde{d}_H(x; J)$ is given explicitly; see Remark 2.1.

On the other hand, the index subset $J^k$ is chosen by either (i) the Gauss-Seidel rule, where $J^k$ cycles through $\{1, 2\}, \{2, 3\}, \cdots, \{n-1, n\}$ in that order for $L$ as in (4.2) (for $L$ as in (4.3), $J^k$ cycles through $\{1, 2, 3\}, \{2, 3, 4\}, \cdots, \{n-2, n-1, n\}$), or (ii) the Gauss-Southwell-r rule ($D^k = H^k$)

$$J^k = \left\{ j \;\middle|\; |\tilde{d}_{D^k}(x^k; j)| \ge \nu^k \|\tilde{d}_{D^k}(x^k)\|_\infty \right\}, \quad \nu^{k+1} = \begin{cases} \max\{10^{-4}, \nu^k/10\} & \text{if } \alpha^k > 10^{-3}, \\ \min\{0.9, 50\nu^k\} & \text{if } \alpha^k < 10^{-6}, \\ \nu^k & \text{else} \end{cases} \qquad (4.4)$$

(initially $\nu^0 = 0.5$) or (iii) the Gauss-Southwell-q rule

$$J^k = \left\{ j \;\middle|\; \tilde{q}_{D^k}(x^k; j) \le \nu^k \min_i \tilde{q}_{D^k}(x^k; i) \right\}, \quad \nu^{k+1} = \begin{cases} \max\{10^{-4}, \nu^k/10\} & \text{if } \alpha^k > 10^{-3}, \\ \min\{0.9, 50\nu^k\} & \text{if } \alpha^k < 10^{-6}, \\ \nu^k & \text{else} \end{cases} \qquad (4.5)$$
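The adaptive $\nu^k$ update shared by (4.4) and (4.5), together with the rule (ii) index selection, can be sketched as follows. This is our transcription with illustrative names; the "‡" variant reported in Table 2 differs in that it lowers the floor to $10^{-5}$ and triggers on $\alpha^k \ge 10^{-3}$:

```python
import numpy as np

def update_nu(nu, alpha):
    """Adaptive update of nu^k common to rules (ii) and (iii), Eqs. (4.4)-(4.5):
    shrink nu after successful (large) steps, grow it after tiny steps."""
    if alpha > 1e-3:
        return max(1e-4, nu / 10)
    if alpha < 1e-6:
        return min(0.9, 50 * nu)
    return nu

def rule_ii_indices(d_tilde, nu):
    """Gauss-Southwell-r index set of (4.4): coordinates whose tentative
    step is at least nu times the largest componentwise step."""
    return np.where(np.abs(d_tilde) >= nu * np.abs(d_tilde).max())[0]
```

A small $\nu^k$ selects many coordinates per iteration, a large $\nu^k$ only the most promising ones, so the update trades work per iteration against progress per iteration.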


Table 2: Numerical results for the test functions with L as in (4.2) ("*" means that CGD exited due to an Armijo stepsize reaching 10^{-15}; "‡" means that ν^{k+1} = max{10^{-5}, ν^k/10} if α^k ≥ 10^{-3}). Each entry reports it./obj./cpu: iterations, final objective value, and CPU time (s).

| Name | c | Rule (i) | Rule (ii) | Rule (iii) |
|------|-------|------------------|-----------------|------------------|
| BT | 1 | 3991/11.5906/4.8 | 1/0.74036/0.08 | ‡8/0.73116/0.1 |
| | 0.1 | 6986/11.1572/8.0 | 1/0.04722/0.1 | ‡16/0.04722/0.1 |
| | 0.01 | 6986/13.5325/8.0 | 1/0.00433/0.1 | ‡25/0.00433/0.1 |
| | 0.001 | 6986/13.8458/8.0 | 1/0.00043/0.2 | ‡36/0.00043/0.2 |
| BAL | 1 | 5/122887/0.6 | 5/0.00135/0.5 | 5/0.00135/0.5 |
| | 0.1 | 5/122659/0.5 | 5/0.00018/0.4 | 5/0.00018/0.4 |
| | 0.01 | 5/122637/0.5 | 5/0.00001/0.5 | 5/0.00001/0.4 |
| | 0.001 | 5/122634/0.6 | 5/0.00000/0.4 | 5/0.00000/0.4 |
| TRIG | 1 | *1/0.00008/0.3 | 1/0.00000/0.3 | 1/0.00000/0.2 |
| | 0.1 | *1/0.00008/0.4 | 1/0.00000/0.3 | 1/0.00000/0.2 |
| | 0.01 | *1/0.00008/0.4 | 1/0.00000/0.2 | 1/0.00000/0.3 |
| | 0.001 | *1/0.00008/0.2 | 1/0.00000/0.3 | 1/0.00000/0.3 |
| LR1 | 1 | 14/224236/0.5 | 7/978562/0.7 | 5/254.1/0.5 |
| | 0.1 | 14/22648/0.6 | 7/978558/0.6 | 5/251.2/0.6 |
| | 0.01 | 14/2489/0.5 | 7/628509/0.8 | 5/251.0/0.6 |
| | 0.001 | 14/473/0.5 | 7/628509/0.7 | 5/250.9/0.7 |
| LFR | 1 | 999/1005/9.9 | 1/1001/0.08 | 1/1001/0.03 |
| | 0.1 | 999/15.2/9.7 | 1/11/0.05 | 1/11/0.06 |
| | 0.01 | 999/5.1/9.3 | 1/1.1/0.03 | 1/1.1/0.02 |
| | 0.001 | 999/5.0/9.2 | 1/1.0/0.05 | 1/1.0/0.03 |

(initially $\mu^0 = 0.5$), satisfying (2.8) with $d = \tilde{d}(x; J)$ for some $0 < \mu \le 1$, where $\tilde{q}_D(x; J)$ is defined by (2.11); the Gauss-Southwell-q rule is based on Remark 2.3. For simplicity, in Algorithm 2.1, we set $\theta = 0.1$, $\beta = 0.5$, $\gamma = 0$, $\alpha^0_{\text{init}} = 1$, and $\alpha^k_{\text{init}} = \min\{\alpha^{k-1}/\beta, 1\}$ for all $k \ge 1$. The stopping tolerance for Algorithm 2.1 is set to $\|x^{k+1} - x^k\| \le 10^{-4}$.

Now, we report the performance of Algorithm 2.1 using Rules (i), (ii) and (iii). Tables 2-3 display the number of CGD iterations, the final objective function value, and the CPU time (in seconds) for four different values of $c$. From Tables 2 and 3, we observe that Rules (ii) and (iii) behave better than Rule (i) for the test functions BT, BAL, TRIG, and LFR. However, for the test function LR1, whose Hessian is far from diagonally dominant, Rules (ii) and (iii) are slower than Rule (i) in CPU time, but Rule (iii) performs better than Rules (i) and (ii) in minimizing the objective function value.

5. Application to image restoration

In this section, we apply the CGD algorithm to image restoration examples.


Table 3: Numerical results for the test functions with L as in (4.3) ("*" means that CGD exited due to an Armijo stepsize reaching 10^{-15}). Each entry reports it./obj./cpu: iterations, final objective value, and CPU time (s).

| Name | c | Rule (i) | Rule (ii) | Rule (iii) |
|------|-------|------------------|-----------------|------------------|
| BT | 1 | 1997/11.9552/2.7 | *6/4.7574/0.02 | *8/3.5911/0.02 |
| | 0.1 | 3994/10.3078/4.8 | 14/0.10141/0.03 | 36/0.13934/0.2 |
| | 0.01 | 5990/10.1676/7.3 | 23/0.00996/0.2 | 90/0.01118/0.4 |
| | 0.001 | 7973/10.1573/9.1 | 32/0.00099/0.1 | 150/0.00101/0.5 |
| BAL | 1 | 16/122307/1.5 | 6/2.0055/0.6 | 6/2.0055/0.7 |
| | 0.1 | 17/120820/1.5 | 5/0.20033/0.5 | 5/0.20033/0.4 |
| | 0.01 | 17/120671/1.5 | 5/0.02003/0.5 | 5/0.02003/0.5 |
| | 0.001 | 17/120657/1.6 | 5/0.00200/0.5 | 5/0.00200/0.3 |
| TRIG | 1 | *1/0.00208/0.4 | 1/0.00000/0.3 | 1/0.00000/0.3 |
| | 0.1 | *1/0.00028/0.4 | 1/0.00000/0.3 | 1/0.00000/0.2 |
| | 0.01 | *1/0.00001/0.4 | 1/0.00000/0.3 | 1/0.00000/0.3 |
| | 0.001 | 2/0.00009/0.4 | 1/0.00000/0.3 | 1/0.00000/0.3 |
| LR1 | 1 | 12/415602/0.4 | 7/978567/0.8 | 5/257.6/0.5 |
| | 0.1 | 12/41795/0.4 | 7/978558/0.7 | 5/251.6/0.6 |
| | 0.01 | 12/4414/0.4 | 7/784234/0.7 | 5/251.0/0.5 |
| | 0.001 | 13/676/0.4 | 7/628509/0.9 | 5/250.9/0.5 |
| LFR | 1 | 997/1007/9.5 | 1/1001/0.1 | 1/1001/0.1 |
| | 0.1 | 997/45.5/9.5 | 1/41.2/0.06 | 1/41.2/0.06 |
| | 0.01 | 997/5.5/9.3 | 1/1.4/0.02 | 1/1.4/0.05 |
| | 0.001 | 997/5.0/9.7 | 1/1.0/0.03 | 1/1.0/0.03 |

Example 1. We test the $128 \times 128$ Cameraman image (see Fig. 1(a)). The blurring function is given by $a(y, z) = \exp[-0.5(y^2 + z^2)]$. The observed image is represented by a vector of the form $b = Ax^* + \eta$, where $A$ is a BTTB matrix and $x^*$ is a vector formed by row ordering of the original image. We use Algorithm 2.1 to solve the following minimization problem with $\ell_1$-regularization:

$$\min_x \; F_c(x) \stackrel{\text{def}}{=} \|Ax - b\|_2^2 + c\|Lx\|_1, \qquad (5.1)$$

where $L$ is given as in (4.2) or (4.3). Fu et al. [5] provided a primal-dual interior point method for image restoration by solving (5.1). In each interior point iteration, an ill-conditioned linear system must be solved. Osher et al. [17, 25] gave the Bregman iterative algorithm for solving (5.1) with $L$ being the identity matrix or the first-order differentiation matrix. In each Bregman iteration, an unconstrained convex subproblem must be solved. For our algorithm, if we choose a diagonal matrix $H \succ 0_n$ and the parameter $c$ is sufficiently small, then, by Proposition 2.1, the search direction $d_H(x; J)$ in (2.1) may be approximated by $\tilde{d}_H(x; J)$ defined in (2.2), which is easy to solve; see Remark 2.1. In addition, from Section 4, the index subset $J$ can

392

Z.-J. Bai, M. K. Ng and L. Qi

[Figure 1 panels: (a) Original Image; (b), (c) Observed Image; (d), (e) Initial Guess Image; (f), (g) Restored Image.]

Figure 1: Original, observed, initial guess, and restored images of Cameraman for L as in (4.2) (left) and (4.3) (right).


Table 4: Numerical results for the image restoration ("*" means that the condition (2.8) is not satisfied).

L is set to be (4.2)
c        Rule (i)                          Rule (ii)                          Rule (iii)
         it./hd./obj./pu./res0./res*.      it./hd./obj./pu./res0./res*.       it./hd./obj./pu./res0./res*.
0.1      81/6.0/734265/131/0.0892/0.0888   170/0.49/19591/266/0.0892/0.0489   *20/3.95/33616/32.9/0.0891/0.0648
0.01     81/6.2/724597/131/0.0891/0.0887   170/0.51/6574/266/0.0892/0.0491    *131/0.65/7405/215/0.0892/0.0513
0.001    81/6.3/723219/131/0.0892/0.0888   169/0.52/5291/275/0.0892/0.0493    171/0.52/5277/278/0.0892/0.0492
0.0001   81/6.2/723265/134/0.0892/0.0888   170/0.54/5282/277/0.0891/0.0492    171/0.53/5133/277/0.0892/0.0494

L is set to be (4.3)
c        Rule (i)                          Rule (ii)                          Rule (iii)
         it./hd./obj./pu./res0./res*.      it./hd./obj./pu./res0./res*.       it./hd./obj./pu./res0./res*.
0.1      208/7.4/724955/341/0.0891/0.0885  170/0.53/22096/274/0.0891/0.0489   *21/3.81/33322/34.2/0.0891/0.0647
0.01     209/6.0/715157/335/0.0891/0.0885  169/0.52/6743/271/0.0893/0.0492    *136/0.67/7318/221/0.0892/0.0512
0.001    209/6.1/714234/336/0.0891/0.0885  170/0.54/5450/279/0.0891/0.0491    170/0.55/5364/277/0.0892/0.0494
0.0001   209/5.9/714008/335/0.0892/0.0886  170/0.52/5219/271/0.0891/0.0492    171/0.51/5133/281/0.0892/0.0493

be chosen by Rule (i), (ii), or (iii), each of which is easy to check. Also, the main computational cost of our algorithm lies in the computation of ∇f(x) and the function evaluations Fc(x) (needed in the Armijo rule). These require the matrix-vector products Ax and A^T x, which can be computed by fast transforms. In the following, we present some numerical tests to show that our approach is effective for image restoration. In our tests, the noise η is set to Gaussian white noise with a noise-to-signal ratio of 40 dB. We set the initial guess image x^0 to be the solution of the linear equation

(c_0 I + A^* A) x^0 = A^* b,    (5.2)

where c_0 is a suitable parameter. For simplicity, we fix c_0 = 0.05 and solve this linear equation by the PCG method with the block-circulant preconditioner as in [12]. In the PCG method, we use the zero vector as the initial point, and the stopping criterion is ‖r_i‖_2 / ‖A^* b‖_2 ≤ 10^{-7}, where r_i is the residual after i iterations. In our experiments, we choose the diagonal Hessian approximation

H^k = diag[ min{ max{ ∇²f(x^k)_{jj}, 10^{-2} }, 10^9 } ]_{j=1,...,n}

for Problem (5.1) with f(x) = ‖Ax − b‖_2^2. For simplicity, in Algorithm 2.1, the index subset J^k is chosen by Rule (i), (ii), or (iii); the other parameters are set as in Section 4. We terminate Algorithm 2.1 when ‖x^{k+1} − x^k‖ / ‖b‖ ≤ 10^{-4} or when the number of CGD iterations reaches 500. Our numerical results are reported in Table 4, where it., hd., obj., pu., res0., and res*. stand for the number of CGD iterations, the value ‖H^k d_{H^k}(x^k)‖_∞ at the final iteration, the final objective value, the cpu time (in seconds), and the relative residuals ‖x^k − x^*‖ / ‖x^*‖ at the starting point x^0 and at the final iterate of our algorithm, respectively. From Table 4, it is clear that Rules (ii) and (iii) typically perform better than Rule (i) in terms of reducing the objective function value and the relative residual. However, when the parameter c is larger, Algorithm 2.1 with Rule (iii) may stop before convergence, since d̃_D(x; J) may be far from d_D(x; J). This agrees with our prediction.
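The initial guess (5.2) is just a regularized normal equation, so any SPD solver applies. The paper uses PCG with a block-circulant preconditioner; the sketch below (a minimal illustration, with a small random stand-in for the BTTB matrix A and the paper's c_0 = 0.05) solves it with a plain, unpreconditioned conjugate-gradient loop and the same relative-residual tolerance of 10^{-7}:

```python
import numpy as np

def cg(matvec, rhs, tol=1e-7, max_iter=500):
    """Conjugate gradient for an SPD operator given as a matvec,
    stopping when ||r_i|| / ||rhs|| <= tol."""
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(rhs):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for the blur matrix
b = rng.standard_normal(n)
c0 = 0.05

matvec = lambda v: c0 * v + A.T @ (A @ v)       # (c0 I + A^T A) v
x0 = cg(matvec, A.T @ b)                        # initial guess as in (5.2)

# relative residual of the normal equation should be below the tolerance
print(np.linalg.norm(matvec(x0) - A.T @ b) / np.linalg.norm(A.T @ b))
```

With the block-circulant preconditioner of [12] the same loop would simply apply the preconditioner to r before forming p; the unpreconditioned version is kept here for brevity.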


[Figure 2 axes: ‖x^k − x^*‖/‖x^*‖ versus number of CGD iterations; curves for Rule (i), Rule (ii), Rule (iii).]

Figure 2: Convergence history of the CGD method for Problem (5.1). (a) L as in (4.2); (b) L as in (4.3).

[Figure 3 axes: log Fc(x^k) versus number of CGD iterations; curves for Rule (i), Rule (ii), Rule (iii).]

Figure 3: The objective function value Fc(x^k) versus the number of CGD iterations for Problem (5.1). (a) L as in (4.2); (b) L as in (4.3).

Fig. 1 shows the original, observed, initial guess, and restored images of Cameraman for Rule (iii) with c = 0.001. To further illustrate the performance of the proposed method, we display in Figs. 2 and 3 the convergence history of Algorithm 2.1 for the different rules with c = 0.001. We see from Fig. 2 that the proposed algorithm with Rules (ii) and (iii) works more efficiently. We also plot the natural logarithm of the objective function value versus the number of CGD iterations in Fig. 3 for the different rules with c = 0.001. From Fig. 3, we observe that the objective function value with Rule (i) is almost unchanged, while the objective function value with Rules (ii) and (iii) decreases very fast. This shows that the direction d̃_D(x; J) is a viable approximation to d_D(x; J), especially when c is small.

Example 2. We consider the image deblurring problem. Let x ∈ R^n be the underlying


image. Then the observed blurred image b ∈ R^n is given by

b = Ax + η,    (5.3)

where η ∈ R^n is a vector of noise and A ∈ R^{n×n} is a linear blurring operator; see for instance [5]. We solve (5.3) in a tight frame domain. Let F be an n × m matrix whose column vectors form a tight frame in R^n, i.e., F F^T = I. Moreover, we suppose that the original image x has a sparse approximation under F. Thus (5.3) becomes b = AFu + η, and the underlying solution is given by x = Fu. For the purpose of demonstration, the tight frame F is generated from the piecewise linear B-spline framelet, and the coarsest decomposition level is set to 4. The three filters are given by

h_0 = (1/4)[1, 2, 1],   h_1 = (√2/4)[1, 0, −1],   h_2 = (1/4)[−1, 2, −1].

Furthermore, the matrix F is the reconstruction operator and F^T is the decomposition operator of the underlying tight framelet system; see [4] for the generation of the matrix F. We apply the CGD algorithm to solve the following minimization problem:

min_u Fc(u) := ‖AFu − b‖_2^2 + c_1‖LFu‖_1 + c_2‖u‖_1,    (5.4)

where c_1, c_2 > 0 and L is given as in (4.2) or (4.3). Similarly, the index subset J^k is chosen by Rule (i), (ii), or (iii) in Section 4 with x^k = u^k and f(u) := ‖AFu − b‖_2^2, where the approximate search direction d̃_D(u; J) is defined by

d̃_D(u; J) := arg min_d { ∇f(u)^T d + (1/2) d^T D d + (c_1‖L‖_1‖F‖_2 + c_2)‖u + d‖_1 | d_j = 0 ∀ j ∉ J },    (5.5)

and q̃_D(u; J) is given by

q̃_D(u; J) := [ ∇f(u)^T d + (1/2) d^T D d + (c_1‖L‖_1‖F‖_2 + c_2)(‖u + d‖_1 − ‖u‖_1) ]_{d = d̃_D(u; J)}.    (5.6)

The other parameters in Algorithm 2.1 are set as in Section 4.

Remark 5.1. Following the proof of Proposition 2.1 in Section 2, we can also show that, for D ⪰ δI ≻ 0_n,

‖d̃_D(u; J) − d_D(u; J)‖ ≤ (2√n / δ)(c_1‖L‖_1‖F‖_2 + c_2).

This shows that d̃_D(u; J) → d_D(u; J) as c_1, c_2 → 0.
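When D is a positive diagonal, the subproblem defining d̃_D(u; J) in (5.5) separates across the coordinates in J, and each coordinate reduces to a scalar soft-thresholding step. A minimal sketch (the helper name and the small test data are illustrative; λ stands for the combined weight c_1‖L‖_1‖F‖_2 + c_2):

```python
import numpy as np

def approx_direction(u, grad, D, lam, J):
    """Closed-form minimizer of g^T d + 0.5 d^T diag(D) d + lam*||u + d||_1
    over d with d_j = 0 for j not in J.  For each j in J, substituting
    v = u_j + d_j gives a scalar problem solved by soft-thresholding."""
    d = np.zeros_like(u)
    for j in J:
        z = u[j] - grad[j] / D[j]                          # quadratic minimizer
        v = np.sign(z) * max(abs(z) - lam / D[j], 0.0)     # soft-threshold
        d[j] = v - u[j]
    return d

# tiny sanity check: the returned d should beat a brute-force line search
rng = np.random.default_rng(1)
u = rng.standard_normal(4)
g = rng.standard_normal(4)
D = np.full(4, 2.0)
lam = 0.7
J = [0, 2]
d = approx_direction(u, g, D, lam, J)

def obj(dvec):
    return g @ dvec + 0.5 * dvec @ (D * dvec) + lam * np.abs(u + dvec).sum()

for t in np.linspace(-3, 3, 2001):
    trial = d.copy()
    trial[0] = t
    assert obj(d) <= obj(trial) + 1e-8   # coordinate 0 is already optimal
```

Because the objective is separable, each coordinate of d is globally optimal, which is what makes Rule (iii) so cheap per iteration.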

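Since F F^T = I, the squared magnitudes of the frequency responses of the three framelet filters h_0, h_1, h_2 above must sum to one at every frequency (the tight-frame identity behind the undecimated transform). A quick numerical check, a sketch with an arbitrary sampling grid:

```python
import numpy as np

# The three piecewise linear B-spline framelet filters from the text.
filters = [np.array([1.0, 2.0, 1.0]) / 4,
           np.array([1.0, 0.0, -1.0]) * np.sqrt(2) / 4,
           np.array([-1.0, 2.0, -1.0]) / 4]

# Frequency response H(w) = sum_n h[n] exp(-i n w); evaluate via a polynomial
# in z = exp(-i w).  The tight-frame identity is sum_k |H_k(w)|^2 = 1.
w = np.linspace(0.0, 2 * np.pi, 257)
z = np.exp(-1j * w)
total = sum(np.abs(np.polyval(h[::-1], z)) ** 2 for h in filters)

print(np.max(np.abs(total - 1.0)))   # machine-precision zero
```

Analytically, |H_0|² = cos⁴(w/2), |H_1|² = sin²(w)/2, and |H_2|² = sin⁴(w/2), whose sum collapses to (sin²(w/2) + cos²(w/2))² = 1.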

Table 5: Numerical results for the 256 × 256 Cameraman image convolved by a 15 × 15 Gaussian kernel of σ_b = 2 and contaminated by a Gaussian white noise of variance σ² ("*" means that the condition (2.8) is not satisfied).

σ = 2 and L is set to be (4.2)
c1       c2      Rule (i)                        Rule (ii)                         Rule (iii)
                 it./nz./obj./pu./res0./res*.    it./nz./obj./pu./res0./res*.      it./nz./obj./pu./res0./res*.
1        1       153/479/1.1e9/264/1/0.999       237/153326/1.1e7/517/1/0.134      *66/264107/1.2e7/240/1/0.146
0.1      0.1     153/2142/1.1e9/265/1/0.999      85/817471/1.7e6/190/1/0.132       84/679553/1.7e6/310/1/0.132
0.001    0.1     153/2736/1.1e9/265/1/0.999      82/1223976/1.7e6/187/1/0.132      82/927371/1.7e6/300/1/0.132
0.0001   0.01    153/4249/1.1e9/264/1/0.999      82/1916809/6.2e5/186/1/0.132      83/1226224/6.3e5/306/1/0.131
0.001    0.001   153/4753/1.1e9/265/1/0.999      82/2059906/5.1e5/189/1/0.131      83/1273851/5.1e5/309/1/0.132

σ = 2 and L is set to be (4.3)
c1       c2      Rule (i)                        Rule (ii)                         Rule (iii)
                 it./nz./obj./pu./res0./res*.    it./nz./obj./pu./res0./res*.      it./nz./obj./pu./res0./res*.
1        1       500/1253/1.1e9/863/1/0.997      297/106844/1.1e7/637/1/0.142      *68/193909/1.3e7/240/1/0.154
0.1      0.1     500/6710/1.1e9/870/1/0.997      92/622699/1.7e6/204/1/0.132       89/552058/1.7e6/316/1/0.132
0.001    0.1     500/10281/1.1e9/867/1/0.997     81/1216652/1.7e6/184/1/0.132      82/923758/1.7e6/299/1/0.132
0.0001   0.01    500/14772/1.1e9/869/1/0.997     82/1914421/6.2e5/188/1/0.132      82/1230000/6.3e5/301/1/0.132
0.001    0.001   500/15396/1.1e9/873/1/0.997     82/2018360/5.2e5/189/1/0.132      83/1264086/5.1e5/304/1/0.131

In our numerical tests, the noise η is set to be a Gaussian white noise of variance σ². The initial guess is given by u^0 = 0. As in [7], we choose D^k = ρ^k I, where

ρ^k = 100 if k ≤ 10, and ρ^k = 20 if k > 10.

For our problem, Algorithm 2.1 is stopped when ‖u^{k+1} − u^k‖ / ‖b‖ ≤ 5.0 × 10^{-4} or when the number of CGD iterations reaches 500. We now report the performance of Algorithm 2.1 with Rule (i), (ii), or (iii). Table 5 contains the numerical results for the 256 × 256 Cameraman image convolved by a 15 × 15 Gaussian kernel with σ_b = 2 (generated by the MATLAB function fspecial('Gaussian',15,2)) and contaminated by a Gaussian white noise of variance σ = 2 for different values of c_1 and c_2, where it., obj., pu., res0., and res*. are defined as above and nz. denotes the number of nonzero entries in the solution (an entry is regarded as nonzero if its absolute value exceeds 10^{-6}). From Table 5, it is clear that Algorithm 2.1 with Rules (ii) and (iii) performs more efficiently in terms of the objective function, the cpu time, and the relative residual. On the other hand, the solution to Problem (5.4) becomes sparser as the values of c_1 and c_2 grow larger. However, Algorithm 2.1 with Rule (iii) may stop early when the parameters c_1 and c_2 are larger. This shows that the direction d̃_D(u; J) may deviate from d_D(u; J) when c_1 and c_2 are too large; see Remark 5.1. Table 6 lists the ratios (%) between the number of nonzero entries (nz.) and the total number of entries (n.) in the solution obtained by Algorithm 2.1 for the same test image, kernel, and noise level. Here r1. = nz1./n. × 100, r2. = nz2./n. × 100, and r3. = nz3./n. × 100, where nz1., nz2., and nz3. denote the numbers of entries in the solution whose absolute values exceed 10^{-4}, 10^{-5}, and 10^{-6}, respectively.
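The blurring kernel of Table 5 comes from MATLAB's fspecial('Gaussian',15,2), i.e., a truncated, normalized 2-D Gaussian. A numpy equivalent (a sketch; the function name is illustrative):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    """Truncated 2-D Gaussian normalized to unit sum, matching the
    behaviour of MATLAB's fspecial('gaussian', size, sigma)."""
    r = (size - 1) / 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]       # centered coordinate grid
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return h / h.sum()                        # normalize so blur preserves mass

K = gaussian_kernel(15, 2.0)
assert abs(K.sum() - 1.0) < 1e-12             # kernel sums to one
```

Convolving the image with K (with suitable boundary conditions) yields the BTTB operator A used in the experiments.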


Table 6: Ratios of nonzero entries for the 256 × 256 Cameraman image convolved by a 15 × 15 Gaussian kernel of σ_b = 2 and contaminated by a Gaussian white noise of variance σ².

σ = 2 and L is set to be (4.2)
c1       c2      Rule (i)                   Rule (ii)                    Rule (iii)
                 r1./r2./r3. (%)            r1./r2./r3. (%)              r1./r2./r3. (%)
1        1       0.0221/0.0221/0.0221       7.0739/7.0823/7.0896         12.2093/12.2117/12.2120
0.1      0.1     0.0990/0.0990/0.0990       37.7247/37.7735/37.7988      31.4124/31.4209/31.4217
0.001    0.1     0.1262/0.1265/0.1265       56.4570/56.5484/56.5951      42.8772/42.8802/42.8805
0.0001   0.01    0.1932/0.1961/0.1965       88.4682/88.5923/88.6309      56.6983/56.6990/56.6991
0.001    0.001   0.2147/0.2190/0.2198       95.1364/95.2305/95.2475      58.9009/58.9012/58.9013

σ = 2 and L is set to be (4.3)
c1       c2      Rule (i)                   Rule (ii)                    Rule (iii)
                 r1./r2./r3. (%)            r1./r2./r3. (%)              r1./r2./r3. (%)
1        1       0.0579/0.0579/0.0579       4.9320/4.9371/4.9403         8.9642/8.9658/8.9661
0.1      0.1     0.3099/0.3103/0.3103       28.7400/28.7716/28.7928      25.5146/25.5252/25.5265
0.001    0.1     0.4745/0.4753/0.4754       56.1263/56.2103/56.2565      42.7092/42.7129/42.7134
0.0001   0.01    0.6778/0.6823/0.6830       88.3593/88.4825/88.5204      56.8730/56.8737/56.8737
0.001    0.001   0.7044/0.7109/0.7119       93.1963/93.3025/93.3265      58.4493/58.4497/58.4498

[Figure 4 panels: (a), (d) noisy blurred image, σ = 2; (b) deblurred using (4.2), c1 = c2 = 1.0; (c) deblurred using (4.3), c1 = c2 = 1.0; (e) deblurred using (4.2), c1 = c2 = 0.1; (f) deblurred using (4.3), c1 = c2 = 0.1.]

Figure 4: Noisy blurred (left) and deblurred images (center and right) of Cameraman convolved by a 15 × 15 Gaussian kernel of σ_b = 2 and contaminated by a Gaussian noise of variance σ².

Fig. 4 shows the noisy blurred and deblurred images of the 256 × 256 Cameraman image obtained by Algorithm 2.1 with Rule (iii) for c1 = c2 = 1.0 (see Figs. 4(a), (b), (c)) and c1 = c2 = 0.1 (see Figs. 4(d), (e), (f)), where the blurring kernel is a


[Figure 5 panels: (a), (d), (g) noisy blurred images with σ = 2, 5, 10; (b), (e), (h) deblurred using (4.2); (c), (f), (i) deblurred using (4.3).]

Figure 5: Noisy blurred (left) and deblurred images (center and right) of Cameraman convolved by a 15 × 15 Gaussian kernel of σ_b = 2 and contaminated by a Gaussian noise of variance σ².

15 × 15 Gaussian kernel with σ_b = 2. We observe from Fig. 4 that the restored image becomes worse as the values of c1 and c2 grow larger. Figs. 5 and 6 display the noisy blurred and deblurred images of the 256 × 256 Cameraman and 260 × 260 Bridge images obtained by Algorithm 2.1 with Rule (iii) for c1 = c2 = 0.001, where the blurring kernels are a 15 × 15 Gaussian kernel with σ_b = 2 and a 7 × 7 disk kernel (generated by the MATLAB function fspecial('disk',3)). From Figs. 5 and 6, it is easy to see that Algorithm 2.1 with Rule (iii) is very effective. The algorithm is also robust to noise, since it still gives good restored images when the noise level is as high as σ = 10.


[Figure 6 panels: (a), (d), (g) noisy blurred images with σ = 2, 5, 10; (b), (e), (h) deblurred using (4.2); (c), (f), (i) deblurred using (4.3).]

Figure 6: Noisy blurred (left) and deblurred images (center and right) of Bridge convolved by a 7 × 7 disk kernel and contaminated by a Gaussian white noise of variance σ².

To further illustrate the performance of the proposed method, we plot in Figs. 7 and 8, respectively, the convergence history of Algorithm 2.1 and the natural logarithm of the objective function value for the 256 × 256 Cameraman image convolved by a 15 × 15 Gaussian kernel with σ_b = 2 and contaminated by a Gaussian white noise of variance σ = 2, with c1 = c2 = 0.001. From Fig. 7, we observe that the proposed algorithm with Rules (ii) and (iii) works more efficiently. Fig. 8 shows that the objective function value with Rule (i) is almost unchanged, while the objective function value with Rules (ii) and (iii) decreases very fast. This indicates that the direction d̃_D(u; J) is an effective approximation to d_D(u; J) when both c1 and c2 are small.


[Figure 7 axes: ‖x^k − x^*‖/‖x^*‖ versus number of CGD iterations; curves for Rule (i), Rule (ii), Rule (iii).]

Figure 7: Convergence history of the CGD method for Problem (5.4). (a) L as in (4.2); (b) L as in (4.3).

[Figure 8 axes: log Fc(u^k) versus number of CGD iterations; curves for Rule (i), Rule (ii), Rule (iii).]

Figure 8: The objective function value Fc(u^k) versus the number of CGD iterations for Problem (5.4). (a) L as in (4.2); (b) L as in (4.3).

6. Concluding remarks

We have proposed an efficient coordinate gradient descent algorithm for solving nonsmooth nonseparable minimization problems. We find the search direction from a subproblem obtained by a second-order approximation of the smooth function with a separable convex function added. We show that our algorithm converges globally if the coordinates are chosen by the Gauss-Seidel rule, the Gauss-Southwell-r rule, or the Gauss-Southwell-q rule. We also prove that our approach with the Gauss-Southwell-q rule converges at least linearly under a local Lipschitzian error bound assumption. We have reported numerical tests that demonstrate the efficiency of the proposed method.

Acknowledgments. The research of the first author is partially supported by NSFC Grant 10601043, NCETXMU, and SRF for ROCS, SEM. The research of the second author is


supported in part by RGC 201508 and HKBU FRGs. The research of the third author is supported by the Hong Kong Research Grant Council.

References

[1] S. Alliney and S. Ruzinsky, An algorithm for the minimization of mixed ℓ1 and ℓ2 norms with application to Bayesian estimation, IEEE Trans. Signal Process., 42 (1994), pp. 618–627.
[2] A. Auslender, Minimisation de fonctions localement lipschitziennes: applications à la programmation mi-convexe, mi-différentiable, in: O. L. Mangasarian, R. R. Meyer, and S. M. Robinson (eds.), Nonlinear Programming, vol. 3, pp. 429–460, Academic Press, New York, 1978.
[3] P. S. Bradley, U. M. Fayyad, and O. L. Mangasarian, Mathematical programming for data mining: formulations and challenges, INFORMS J. Comput., 11 (1999), pp. 217–238.
[4] J.-F. Cai, R. H. Chan, and Z. Shen, A framelet-based image inpainting algorithm, Appl. Comput. Harmon. Anal., 24 (2008), pp. 131–149.
[5] H. Y. Fu, M. K. Ng, M. Nikolova, and J. L. Barlow, Efficient minimization methods of mixed ℓ2-ℓ1 and ℓ1-ℓ1 norms for image restoration, SIAM J. Sci. Comput., 27 (2006), pp. 1881–1902.
[6] M. Fukushima, A successive quadratic programming method for a class of constrained nonsmooth optimization problems, Math. Program., 49 (1990), pp. 231–251.
[7] M. Fukushima and H. Mine, A generalized proximal point algorithm for certain non-convex minimization problems, Int. J. Syst. Sci., 12 (1981), pp. 989–1000.
[8] A. Guitton and D. J. Verschuur, Adaptive subtraction of multiples using the ℓ1-norm, Geophys. Prospecting, 52 (2004), pp. 1–27.
[9] E. T. Hale, W. Yin, and Y. Zhang, A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing, CAAM Technical Report TR07-07, Department of Computational and Applied Mathematics, Rice University, July 2007.
[10] V. G. Karmanov, Mathematical Programming, Mir Publishers, Moscow, 1989.
[11] K. C. Kiwiel, A method for minimizing the sum of a convex function and a continuously differentiable function, J. Optim. Theory Appl., 48 (1986), pp. 437–449.
[12] F. R. Lin, M. K. Ng, and W. K. Ching, Factorized banded inverse preconditioners for matrices with Toeplitz structure, SIAM J. Sci. Comput., 26 (2005), pp. 1852–1870.
[13] M. Lustig, D. Donoho, and J. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magn. Reson. Med., 58 (2007), pp. 1182–1195.
[14] O. L. Mangasarian and D. R. Musicant, Large scale kernel regression via linear programming, Mach. Learn., 46 (2002), pp. 255–269.
[15] H. Mine and M. Fukushima, A minimization method for the sum of a convex function and a continuously differentiable function, J. Optim. Theory Appl., 33 (1981), pp. 9–23.
[16] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Softw., 7 (1981), pp. 17–41.
[17] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), pp. 460–489.
[18] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, New York, 1998.
[19] L. Rudin, S. J. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), pp. 259–268.
[20] S. Ruzinsky, Sequential Least Absolute Deviation Estimation of Autoregressive Parameters, Ph.D. thesis, Illinois Institute of Technology, Chicago, IL, 1989.
[21] S. Sardy, A. Bruce, and P. Tseng, Robust wavelet denoising, IEEE Trans. Signal Process., 49 (2001), pp. 1146–1152.


[22] P. Tseng and S. Yun, A coordinate gradient descent method for nonsmooth separable minimization, Math. Program., Ser. B, 117 (2008), pp. 387–423.
[23] C. R. Vogel and M. E. Oman, Iterative methods for total variation denoising, SIAM J. Sci. Comput., 17 (1996), pp. 227–238.
[24] M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, An architecture for compressive imaging, in Proceedings of the International Conference on Image Processing (ICIP), Atlanta, Georgia, 2006.
[25] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), pp. 143–168.