Difference of two norms-regularizations for Q-Lasso

Abdellatif Moudafi (Aix-Marseille Université, L.I.S UMR CNRS 7020, Domaine Universitaire de Saint-Jérôme, Marseille, France)

Applied Computing and Informatics

ISSN: 2634-1964

Article publication date: 3 August 2020

Issue publication date: 4 January 2021

Abstract

The focus of this paper is on Q-Lasso, introduced in Alghamdi et al. (2013), which extends the Lasso of Tibshirani (1996). The closed convex subset $Q$ of a Euclidean $m$-space, $m\in\mathbb{N}$, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference of convex functions (DC for short) programming, where the DC objective functions are the difference of the $\ell_1$ and $\ell_{\sigma_q}$ norms and the difference of the $\ell_1$ and $\ell_r$ norms with $r>1$. By means of a generalized $q$-term shrinkage operator exploiting the special structure of the $\ell_{\sigma_q}$ norm, we design a proximal gradient algorithm for handling the DC $\ell_1-\ell_{\sigma_q}$ model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC $\ell_1-\ell_r$ model. The convergence results of our new algorithms are presented as well. We would like to emphasize that extensive simulation results in the case $Q=\{b\}$ show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art $\ell_1$ and $\ell_p$ ($p\in(0,1)$) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated, which also uses the largest-$q$ norm $\ell_{\sigma_q}$ and presents numerical results showing the efficiency of its DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017).

Citation

Moudafi, A. (2021), "Difference of two norms-regularizations for Q-Lasso", Applied Computing and Informatics, Vol. 17 No. 1, pp. 79-89. https://doi.org/10.1016/j.aci.2018.07.002

Publisher: Emerald Publishing Limited

Copyright © 2018, Abdellatif Moudafi

License

Published in Applied Computing and Informatics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction and preliminaries

The process of compressive sensing (CS) [8], which consists of encoding and decoding, has consolidated rapidly year after year due to the blooming of large datasets, which have become increasingly important and available. The encoding process involves taking a set of (linear) measurements, $b=Ax$, where $A$ is a matrix of size $m\times n$. If $m<n$, we can compress the signal $x\in\mathbb{R}^n$, whereas the decoding process is to recover $x$ from $b$, where $x$ is assumed to be sparse. It can be formulated as an optimization problem, namely

(1.1) $\min \|x\|_0 \quad\text{subject to}\quad Ax=b,$
where $\|\cdot\|_0$ is the $\ell_0$ norm, which counts the number of nonzero entries of $x$; namely
(1.2) $\|x\|_0=|\{x_i;\ x_i\neq 0\}|$
with $|\cdot|$ being here the cardinality, i.e., the number of elements of a set. Hence minimizing the $\ell_0$ norm amounts to finding the sparsest solution. One of the difficulties in CS is solving the decoding problem above, since $\ell_0$ optimization is NP-hard. An approach that has gained popularity is to replace $\ell_0$ by the convex norm $\ell_1$, since it often gives a satisfactory sparse solution and has been applied in many different fields such as geology and ultrasound imaging.
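As a quick illustration (ours, not from the paper), the count in (1.2) and its convex surrogate $\ell_1$ can be computed in a few lines of NumPy; the vector below is arbitrary:

```python
import numpy as np

# Toy illustration of definition (1.2): the l0 "norm" counts nonzero entries.
x = np.array([0.0, 3.0, 0.0, -1.0, 0.0])
l0 = np.count_nonzero(x)   # |{x_i ; x_i != 0}| = 2
l1 = np.sum(np.abs(x))     # convex surrogate ||x||_1 = 4.0
print(l0, l1)
```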

More recently, nonconvex metrics have been used as alternatives to $\ell_1$, especially the nonconvex metric $\ell_p$ for $p\in(0,1)$ in [6], which can be interpreted as a continuation strategy approximating $\ell_0$ as $p\to 0$. A great deal of research has been conducted on $\ell_p$ problems, including all kinds of variants and related algorithms; see [4] and the references therein. Compared with the convex $\ell_1$ relaxation, the nonconvex problem ($\ell_p$) is generally more difficult to handle. However, it was shown in [12] that the potential reduction method can solve this special nonconvex problem in polynomial time with arbitrarily given accuracy.

Most recently, the majority of such sparsity-inducing functions have been unified under the notion of DC programming in [9], including the log-sum, the smoothly clipped absolute deviation and the capped-$\ell_1$ penalties. Generally, a DC programming problem can be solved through a primal–dual convex relaxation algorithm which is well known in the DC programming literature [11]. Other algorithms have appeared for solving applied DC programming problems in the areas of finance and insurance, data analysis, machine learning as well as signal processing. However, as noted in [18], most of the above-mentioned DC programming approaches for sparse reconstruction mainly preserve the separability properties of the $\ell_0$ and $\ell_1$ norms.

To begin with, let us recall that the lasso of Tibshirani [16] is given by the following minimization problem

(1.3) $\min_{x\in\mathbb{R}^n}\ \frac12\|Ax-b\|_2^2+\gamma\|x\|_1,$

$A$ being an $m\times n$ real matrix, $b\in\mathbb{R}^m$ and $\gamma>0$ a tuning parameter. The latter is nothing else than the basis pursuit (BP) of Chen et al. [7], namely

(1.4) $\min_{x\in\mathbb{R}^n}\ \|x\|_1\quad\text{such that}\quad Ax=b.$

However, the constraint $Ax=b$ being inexact due to measurement errors, problem (1.4) can be reformulated as

(1.5) $\min_{x\in\mathbb{R}^n}\ \|x\|_1\quad\text{subject to}\quad \|Ax-b\|_p\le\varepsilon,$
where $\varepsilon>0$ is the tolerance level of errors and $p$ is often $1$, $2$ or $\infty$. It is noticed in [1] that (1.5) can be rewritten as
(1.6) $\min_{x\in\mathbb{R}^n}\ \|x\|_1\quad\text{subject to}\quad Ax\in Q,$
in the case when $Q:=B_\varepsilon(b)$, the closed ball in $\mathbb{R}^m$ with center $b$ and radius $\varepsilon$.

Now, when $Q$ is a nonempty closed convex subset of $\mathbb{R}^m$ and $P_Q$ is the orthogonal projection from $\mathbb{R}^m$ onto $Q$, observing that the constraint $Ax\in Q$ is equivalent to the condition $Ax-P_Q(Ax)=0$ leads to the following Lagrangian formulation

(1.7) $\min_{x\in\mathbb{R}^n}\ \frac12\|(I-P_Q)Ax\|_2^2+\gamma\|x\|_1,$

$\gamma>0$ being a Lagrangian multiplier.
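For concreteness, here is a small sketch (our own illustration, not code from the paper) of how the objective in (1.7) can be evaluated once a projection onto $Q$ is available; we take $Q$ to be the closed ball $B_\varepsilon(b)$ of (1.6), whose projection has a simple closed form, and the function and parameter names are ours:

```python
import numpy as np

def project_ball(y, b, eps):
    """Orthogonal projection of y onto the closed ball Q = B_eps(b)."""
    d = y - b
    nd = np.linalg.norm(d)
    return y if nd <= eps else b + (eps / nd) * d

def q_lasso_objective(x, A, b, eps, gamma):
    """Objective of (1.7): 0.5*||(I - P_Q)Ax||^2 + gamma*||x||_1."""
    Ax = A @ x
    residual = Ax - project_ball(Ax, b, eps)   # (I - P_Q)Ax
    return 0.5 * residual @ residual + gamma * np.sum(np.abs(x))

# toy usage on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
print(q_lasso_objective(np.zeros(50), A, b, eps=0.1, gamma=0.5))
```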

A link is also made in [1] with split feasibility problems [5], which consist in finding $x$ satisfying

(1.8) $x\in C,\quad Ax\in Q,$
with $C$ and $Q$ two nonempty closed convex subsets of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. An equivalent formulation of (1.8) as a minimization problem is given by
(1.9) $\min_{x\in C}\ \frac12\|(I-P_Q)Ax\|_2^2,$
and its $\ell_1$-regularization is
(1.10) $\min_{x\in C}\ \frac12\|(I-P_Q)Ax\|_2^2+\gamma\|x\|_1,$
with $\gamma>0$ a regularization parameter.

This convex relaxation approach has been frequently employed; see for example [1,20] and the references therein. As the level curves of $\ell_1-\ell_2$ are closer to those of $\ell_0$ than those of $\ell_1$, this motivated us in [14] to propose a regularization of split feasibility problems by means of the nonconvex $\ell_1-\ell_2$, namely

(1.11) $\min_{x\in C}\ \frac12\|(I-P_Q)Ax\|_2^2+\gamma\big(\|x\|_1-\|x\|_2\big),$
and to present three algorithms with their convergence properties [14]. Unlike the separable sparsity-inducing functions involved in the aforementioned DC programming for problem ($\ell_0$), in the first two sections of this work we are interested in two specific types of DC programs with non-separable objective functions, which take the form of a difference between two norms: the new norm $\ell_{\sigma_q}$, denoting the sum of the $q$ largest elements of a vector in magnitude (i.e., the $\ell_1$ norm of the best $q$-term approximation of a vector), introduced in [18], and the classical $\ell_r$ norm with $r>1$. Obviously $\ell_{\sigma_q}$ and $\ell_r$ ($r>1$) are regular convex norms. The corresponding DC programs are as follows:
(1.12) $\min_{x\in\mathbb{R}^n}\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\ :\ Ax=b\big),$
and
(1.13) $\min_{x\in\mathbb{R}^n}\big(\|x\|_1-\varepsilon\|x\|_r\ :\ Ax=b\big),$
where $\varepsilon\in(0,1]$, $\|x\|_{\sigma_q}$ is defined as the sum of the $q$ largest elements of $x$ in magnitude, $q\in\{1,2,\dots,n\}$ and $r>1$. We would like to emphasize that the following least-squares variants of (1.12) and (1.13) were studied in the recent work by Wang [18]:
(1.14) $\min_x\Big(f(x):=\frac12\|Ax-b\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\big)\Big),$
where $\mu>0$ and $\varepsilon\in(0,1)$, and
(1.15) $\min_x\Big(\tilde f(x):=\frac12\|Ax-b\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_r\big)\Big),$
where $r>1$ and $\varepsilon\in(0,1)$.

This paper proposes the following generalizations to Q-Lasso, namely

$\min_x\Big(f(x):=\frac12\|(I-P_Q)Ax\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\big)\Big),$
where $\mu>0$ and $\varepsilon\in(0,1)$, as well as
$\min_x\Big(\tilde f(x):=\frac12\|(I-P_Q)Ax\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_r\big)\Big),$
where $r>1$ and $\varepsilon\in(0,1)$, and our attention will be focused on the algorithmic aspect.
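Since the $\ell_{\sigma_q}$ norm is central to what follows, here is a minimal sketch (ours, with an illustrative function name) of how it can be evaluated directly from its definition as the $\ell_1$ norm of the best $q$-term approximation:

```python
import numpy as np

def largest_q_norm(x, q):
    """l_{sigma_q} norm: sum of the q largest entries of x in magnitude,
    i.e. the l1 norm of the best q-term approximation of x."""
    a = np.abs(np.asarray(x, dtype=float))
    return np.sort(a)[-q:].sum()

print(largest_q_norm([0.5, -3.0, 0.1, 2.0], q=2))   # |-3.0| + |2.0| = 5.0
```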

The rest of the paper is organized as follows. In Sections 2 and 3, two DC-penalty methods are proposed instead of conventional methods such as $\ell_1$ or $\ell_1-\ell_2$ minimization, and their convergence to a stationary point is analyzed. The first iterative minimization method is based on the proximal gradient algorithm, and the second one is designed by means of the majorized penalty strategy. Furthermore, relying on DCA (difference of convex functions algorithm), two other algorithms are proposed and their convergence results are established in Section 4.

2. Proximal gradient algorithm

First, we recall that the subdifferential of a convex function $\varphi$ is given by

(2.1) $\partial\varphi(x):=\{u\in\mathbb{R}^n;\ \varphi(y)\ge\varphi(x)+\langle u,y-x\rangle\ \ \forall y\in\mathbb{R}^n\}.$

Each element of $\partial\varphi(x)$ is called a subgradient. If $\varphi(x)=\frac12\|(I-P_Q)Ax\|^2$, it is well known that

(2.2) $\partial\varphi(x)=\nabla\varphi(x)=A^T(I-P_Q)Ax,$
and when $\varphi(x)=\|x\|_1$, we have
(2.3) $(\partial\varphi(x))_i=\begin{cases}\operatorname{sgn}(x_i)&\text{if }x_i\neq0;\\ [-1,1]&\text{if }x_i=0.\end{cases}$

The indicator function of a set $C\subset\mathbb{R}^n$ is defined by

(2.4) $i_C(x)=\begin{cases}0&\text{if }x\in C;\\ +\infty&\text{otherwise.}\end{cases}$

Moreover, the normal cone of a set $C$ at $x\in C$, denoted by $N_C(x)$, is defined as

(2.5) $N_C(x):=\{d\in\mathbb{R}^n\ |\ \langle d,y-x\rangle\le0\ \ \forall y\in C\}.$

The connection between the above definitions is given by the key relation $\partial i_C=N_C$.

In this section our interest is in solving the DC program

(2.6) $\min_x\Big(f(x):=\frac12\|(I-P_Q)Ax\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\big)\Big),$
where $\mu>0$ and $\varepsilon\in(0,1)$.

Similarly to the $\ell_1$ norm, the $\ell_2$ norm, etc., we adopt the notation $\|x\|_{\sigma_q}$ to denote the $\ell_{\sigma_q}$ norm defined just after (1.13), and we design an iterative algorithm based both on a generalized $q$-term shrinkage operator and on the proximal gradient algorithm framework.

At this stage, observe that the restriction on $\varepsilon$ guarantees that $f(x)\ge0$ for all $x$. To solve (2.6), we consider the following standard proximal gradient algorithm:

  • 1. Initialization: Let $x_0$ be given and set $L>\lambda_{\max}(A^TA)$, with $\lambda_{\max}(A^TA)$ the maximal eigenvalue of $A^TA$.

  • 2. For $k=0,1,\dots$, find

(2.7) $x_{k+1}\in\operatorname*{Argmin}_{x\in\mathbb{R}^n}\Big(\langle A^T(I-P_Q)Ax_k,\,x-x_k\rangle+\frac{L}{2}\|x-x_k\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\big)\Big).$

Observe that subproblem (2.7) can be equivalently formulated as

(2.8) $\min_x\Big(\frac{L}{2}\Big\|x-\Big(x_k-\frac1L A^T(I-P_Q)Ax_k\Big)\Big\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_{\sigma_q}\big)\Big).$

Thus, it suffices to consider the solutions of the following minimization problem

(2.9) $\min_x\Big(\frac12\|x-y\|^2+\lambda_1\|x\|_1-\lambda_2\|x\|_{\sigma_q}\Big),$
with a given vector $y$ and positive numbers $\lambda_1>\lambda_2>0$. An explicit solution of this problem is given by the following result; see [18].

Proposition 2.1 Let $\{i_1,\dots,i_n\}$ be the indices such that

$|y_{i_1}|\ge|y_{i_2}|\ge\dots\ge|y_{i_n}|.$

Then $x^*:=\operatorname{prox}_{\lambda_1\|\cdot\|_1-\lambda_2\|\cdot\|_{\sigma_q}}(y)$ with

(2.10) $x^*_i=\begin{cases}\operatorname{sign}(y_i)\max\{|y_i|-(\lambda_1-\lambda_2),\,0\}&\text{if }i=i_1,i_2,\dots,i_q;\\ \operatorname{sign}(y_i)\max\{|y_i|-\lambda_1,\,0\}&\text{otherwise}\end{cases}$
is a solution of (2.9).
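A direct NumPy transcription of (2.10) might look as follows (a sketch under the assumption $\lambda_1>\lambda_2>0$; the function name is ours):

```python
import numpy as np

def q_term_shrinkage(y, lam1, lam2, q):
    """Generalized q-term shrinkage operator of Proposition 2.1: a solution of
    min_x 0.5*||x - y||^2 + lam1*||x||_1 - lam2*||x||_{sigma_q}, lam1 > lam2 > 0."""
    y = np.asarray(y, dtype=float)
    top_q = np.argsort(np.abs(y))[::-1][:q]   # indices of the q largest magnitudes
    thresh = np.full_like(y, lam1)
    thresh[top_q] = lam1 - lam2               # smaller threshold on the q largest entries
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

print(q_term_shrinkage([3.0, -0.5, 1.2, 0.1], lam1=1.0, lam2=0.6, q=2))
# roughly [2.6, 0., 0.8, 0.]
```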

The proximal operator above (called the generalized $q$-term shrinkage operator in [18]) allows the algorithm to be written as follows:

Proximal Gradient Algorithm:

  1. Start: Let $x_0$ be given and set $L>\lambda_{\max}(A^TA)$, with $\lambda_{\max}(A^TA)$ the maximal eigenvalue of $A^TA$.

  2. For $k=0,1,\dots$, compute

$y_{k+1}=x_k-\frac1L A^T(I-P_Q)Ax_k,$

sort $y_{k+1}$ as $|y_{i_1}|\ge|y_{i_2}|\ge\dots\ge|y_{i_n}|$, and set

(2.11) $(x_{k+1})_i=\begin{cases}\operatorname{sign}(y_i)\max\{|y_i|-\mu(1-\varepsilon)/L,\,0\}&\text{if }i=i_l,\ l=1,\dots,q;\\ \operatorname{sign}(y_i)\max\{|y_i|-\mu/L,\,0\}&\text{otherwise.}\end{cases}$

End.
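The following is a compact sketch of the whole loop (our own illustration, with $Q$ taken to be the ball $B_\varepsilon(b)$ of (1.6) so that $P_Q$ is explicit; the names `eps` for the radius of $Q$ and `dc_eps` for the DC parameter $\varepsilon$ are ours). The shrinkage step applies the two thresholds $\mu(1-\varepsilon)/L$ and $\mu/L$ coming from (2.8) and Proposition 2.1.

```python
import numpy as np

def project_ball(y, b, eps):
    """Orthogonal projection onto Q = B_eps(b)."""
    d = y - b
    nd = np.linalg.norm(d)
    return y if nd <= eps else b + (eps / nd) * d

def prox_grad_qlasso_sigmaq(A, b, eps, mu, dc_eps, q, n_iter=500):
    """Sketch of the Proximal Gradient Algorithm for (2.6) with Q = B_eps(b):
    a gradient step on 0.5*||(I-P_Q)Ax||^2 followed by the generalized q-term
    shrinkage (2.10) with lam1 = mu/L and lam2 = mu*dc_eps/L."""
    n = A.shape[1]
    x = np.zeros(n)
    L = 1.01 * np.linalg.norm(A, 2) ** 2        # L > lambda_max(A^T A)
    lam1, lam2 = mu / L, mu * dc_eps / L
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - project_ball(Ax, b, eps))   # A^T (I - P_Q) A x_k
        y = x - grad / L
        top_q = np.argsort(np.abs(y))[::-1][:q]
        thresh = np.full(n, lam1)
        thresh[top_q] = lam1 - lam2
        x = np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)
    return x

# toy usage on a synthetic sparse instance
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = prox_grad_qlasso_sigmaq(A, b, eps=0.1, mu=0.1, dc_eps=0.5, q=3)
print(np.round(x_hat[[3, 17, 42]], 2))
```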

Now, we are in a position to show the following convergence result of the scheme (2.7):

Proposition 2.2 The sequence $(x_k)$ generated by the Proximal Gradient Algorithm above converges to a stationary point of problem (2.6).

Proof. Remember that $h(x)=\frac12\|(I-P_Q)Ax\|^2$ is differentiable and its gradient $\nabla h(x)=A^T(I-P_Q)Ax$ is Lipschitz continuous with constant $\tilde L:=\lambda_{\max}(A^TA)$. By [3, Proposition A.24], we have

$f(x_{k+1})\le\frac12\|(I-P_Q)Ax_k\|^2+\langle A^T(I-P_Q)Ax_k,\,x_{k+1}-x_k\rangle+\frac{\tilde L}{2}\|x_{k+1}-x_k\|^2+\mu\big(\|x_{k+1}\|_1-\varepsilon\|x_{k+1}\|_{\sigma_q}\big).$

Combining this with the definition of $x_{k+1}$, we obtain

(2.12) $f(x_{k+1})\le f(x_k)-\frac{L-\tilde L}{2}\|x_{k+1}-x_k\|^2.$

Since $L>\tilde L$, we see immediately that $f(x_{k+1})\le f(x_k)$, and thus the sequence $(f(x_k))$ is convergent since $f$ is a non-negative function. Furthermore, by summing (2.12) from $k=0$ to $\infty$, we obtain that $\sum_k\|x_{k+1}-x_k\|^2<+\infty$. As a further consequence, we note that

$\mu(1-\varepsilon)\|x_k\|_1\le\mu\big(\|x_k\|_1-\varepsilon\|x_k\|_{\sigma_q}\big)\le f(x_k)\le f(x_0).$

Since $\mu(1-\varepsilon)>0$, the sequence $(x_k)$ is bounded. Moreover, the objective function $f$ is a quadratic term plus a piecewise linear function, which ensures that $f$ is semi-algebraic and hence satisfies the Kurdyka–Łojasiewicz inequality. [2, Theorem 5.1] is then applicable, and we obtain that $(x_k)$ converges to a stationary point of (2.6). □

3. Majorized penalty algorithm

Consider the following minimization problem

(3.1) $\min_x\Big(\tilde f(x):=\frac12\|(I-P_Q)Ax\|^2+\mu\big(\|x\|_1-\varepsilon\|x\|_r\big)\Big),$
where $A\in\mathbb{R}^{m\times n}$, $Q$ is a nonempty closed convex subset of $\mathbb{R}^m$, $r>1$ and $\varepsilon\in(0,1)$.

First, observe again that the condition on $\varepsilon$ guarantees that $\tilde f(x)\ge0$ for all $x$. We will now describe an algorithm for solving (3.1) based on the majorized penalty approach; see, for example, [18] and the references therein. Following the same lines as in [18], we start by constructing a majorization of $\tilde f$. To that end, let $L>\lambda_{\max}(A^TA)$; then for any $x,y\in\mathbb{R}^n$ we have

$\frac12\|(I-P_Q)Ax\|_2^2\le\frac12\|(I-P_Q)Ay\|_2^2+\langle A^T(I-P_Q)Ay,\,x-y\rangle+\frac{L}{2}\|x-y\|_2^2.$

Moreover, by invoking the convexity of the norm $\|x\|_r$ and the definition of its subdifferential, we also have

$\|x\|_r\ge\|y\|_r+\langle g(y),x-y\rangle\quad\text{with}\quad g(y)\in\partial\|y\|_r,$
where
(3.2) $[g(y)]_i=\begin{cases}\dfrac{\operatorname{sign}(y_i)\,|y_i|^{r-1}}{\|y\|_r^{r-1}}&\text{if }y\neq0;\\ 0&\text{otherwise.}\end{cases}$
Hence, if we define
$F(x,y)=\frac12\|(I-P_Q)Ay\|_2^2+\langle A^T(I-P_Q)Ay,\,x-y\rangle+\frac{L}{2}\|x-y\|^2+\mu\big(\|x\|_1-\varepsilon\|y\|_r-\varepsilon\langle g(y),x-y\rangle\big),$
then, for every $x,y\in\mathbb{R}^n$, we get
$F(x,y)\ge\tilde f(x)\quad\text{and}\quad F(y,y)=\tilde f(y).$

Starting with an initial iterate $x_0$, the majorized penalty approach above updates $x_k$ by solving

(3.3) $x_{k+1}=\operatorname*{argmin}_x F(x,x_k).$

This leads to the following explicit formulation of $x_{k+1}$ by means of the proximity (shrinkage) operator of $\|x\|_1$:

$\begin{aligned}x_{k+1}&=\operatorname*{argmin}_x\Big(\langle A^T(I-P_Q)Ax_k,\,x-x_k\rangle+\frac{L}{2}\|x-x_k\|^2+\mu\big(\|x\|_1-\varepsilon\langle g(x_k),x-x_k\rangle\big)\Big)\\ &=\operatorname*{argmin}_x\Big(\frac{L}{2}\Big\|x-x_k+\frac1L\big(A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k)\big)\Big\|^2+\mu\|x\|_1\Big)\\ &=\operatorname{prox}_{\frac{\mu}{L}\|\cdot\|_1}\Big(x_k-\frac1L\big(A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k)\big)\Big)\\ &=\operatorname{sgn}(v_k)\max\Big\{|v_k|-\frac{\mu}{L},\,0\Big\},\end{aligned}$
where
$v_k=x_k-\frac1L\big(A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k)\big)\quad\text{with}\quad g(x_k)\in\partial\|x_k\|_r.$

We summarize the algorithm as follows:

Majorized Penalty Algorithm:

  • 1. Initialization: Let $x_0$ be given and set $L>\lambda_{\max}(A^TA)$.

  • 2. For $k=0,1,\dots$, take $g(x_k)\in\partial\|x_k\|_r$, set $v_k=x_k-\frac1L\big(A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k)\big)$ and compute

(3.4) $x_{k+1}=\operatorname{sgn}(v_k)\max\Big\{|v_k|-\frac{\mu}{L},\,0\Big\}.$

End.
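A minimal sketch of this scheme (ours, not code from the paper), again with $Q=B_\varepsilon(b)$ so that $P_Q$ is explicit and with illustrative names (`dc_eps` stands for the DC parameter $\varepsilon$), could read:

```python
import numpy as np

def project_ball(y, b, eps):
    """Orthogonal projection onto Q = B_eps(b)."""
    d = y - b
    nd = np.linalg.norm(d)
    return y if nd <= eps else b + (eps / nd) * d

def lr_subgradient(x, r):
    """g(x) as in (3.2): a subgradient of ||.||_r at x (r > 1), and 0 at x = 0."""
    nrm = np.linalg.norm(x, r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return np.sign(x) * np.abs(x) ** (r - 1) / nrm ** (r - 1)

def majorized_penalty_qlasso(A, b, eps, mu, dc_eps, r=2.0, n_iter=500):
    """Sketch of the Majorized Penalty Algorithm for (3.1) with Q = B_eps(b):
    each step is the soft-thresholding update (3.4) applied to v_k."""
    n = A.shape[1]
    x = np.zeros(n)
    L = 1.01 * np.linalg.norm(A, 2) ** 2        # L > lambda_max(A^T A)
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - project_ball(Ax, b, eps))   # A^T (I - P_Q) A x_k
        v = x - (grad - mu * dc_eps * lr_subgradient(x, r)) / L
        x = np.sign(v) * np.maximum(np.abs(v) - mu / L, 0.0)
    return x

# toy usage on a synthetic sparse instance
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 60, 77]] = [2.0, -1.0, 1.5]
b = A @ x_true
x_hat = majorized_penalty_qlasso(A, b, eps=0.05, mu=0.1, dc_eps=0.5, r=2.0)
print(np.round(x_hat[[5, 60, 77]], 2))
```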

The following proposition contains the convergence result of this Penalty Algorithm.

Proposition 3.1 Let $(x_k)$ be the sequence generated by the Majorized Penalty Algorithm above. Then

(3.5) $\frac{L}{2}\|x_k-x_{k+1}\|^2\le\tilde f(x_k)-\tilde f(x_{k+1}).$

Furthermore, the sequence $(x_k)$ is bounded and any cluster point is a stationary point of problem (3.1).

Proof. Since $x_{k+1}$ minimizes $F(x,x_k)$, thanks to the first-order optimality condition we can write

(3.6) $0\in A^T(I-P_Q)Ax_k+L(x_{k+1}-x_k)+\mu\,\partial\|x_{k+1}\|_1-\mu\varepsilon g(x_k),$

$g(x_k)$ being a subgradient of $\|x\|_r$ at $x_k$. This, combined with the definition of the subdifferential of $\|x\|_1$ at $x_{k+1}$, gives

$\begin{aligned}\mu\|x_k\|_1-\mu\|x_{k+1}\|_1&\ge\big\langle -A^T(I-P_Q)Ax_k-L(x_{k+1}-x_k)+\mu\varepsilon g(x_k),\ x_k-x_{k+1}\big\rangle\\ &=\big\langle -A^T(I-P_Q)Ax_k+\mu\varepsilon g(x_k),\ x_k-x_{k+1}\big\rangle+L\|x_{k+1}-x_k\|_2^2\\ &=\big\langle A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k),\ x_{k+1}-x_k\big\rangle+L\|x_{k+1}-x_k\|_2^2.\end{aligned}$

Hence

$\mu\|x_{k+1}\|_1-\mu\|x_k\|_1+\big\langle A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k),\ x_{k+1}-x_k\big\rangle\le-L\|x_{k+1}-x_k\|_2^2.$

This, together with the definition of $F$, for any $k\ge1$, leads to

$\begin{aligned}\tilde f(x_{k+1})-\tilde f(x_k)&\le F(x_{k+1},x_k)-\tilde f(x_k)\\ &=\langle A^T(I-P_Q)Ax_k,\,x_{k+1}-x_k\rangle+\frac{L}{2}\|x_{k+1}-x_k\|_2^2+\mu\big(\|x_{k+1}\|_1-\|x_k\|_1-\varepsilon\langle g(x_k),x_{k+1}-x_k\rangle\big)\\ &=\frac{L}{2}\|x_{k+1}-x_k\|_2^2+\mu\|x_{k+1}\|_1-\mu\|x_k\|_1+\big\langle A^T(I-P_Q)Ax_k-\mu\varepsilon g(x_k),\ x_{k+1}-x_k\big\rangle\\ &\le\frac{L}{2}\|x_{k+1}-x_k\|_2^2-L\|x_{k+1}-x_k\|_2^2.\end{aligned}$

Consequently,

(3.7) $\tilde f(x_{k+1})-\tilde f(x_k)\le-\frac{L}{2}\|x_{k+1}-x_k\|_2^2.$

Hence $\tilde f(x_{k+1})\le\tilde f(x_k)$, and thus the sequence $(\tilde f(x_k))$ is convergent since $\tilde f$ is a non-negative function. Furthermore, the sequence $(x_k)$ is such that

$\sum_{k=0}^\infty\|x_{k+1}-x_k\|^2<+\infty.$

Indeed, by summing (3.7) from $k=0$ to $\infty$, we obtain that

$\frac{L}{2}\sum_{k=0}^\infty\|x_{k+1}-x_k\|_2^2\le\tilde f(x_0)-\lim_{k\to+\infty}\tilde f(x_k)\le\tilde f(x_0)<+\infty.$

Consequently, the sequence $(x_k)$ is asymptotically regular, i.e., $\lim_{k\to+\infty}\|x_k-x_{k+1}\|=0$. On the other hand, observe that the definition of $\tilde f$, for any $k\ge1$, leads to

$\mu\big(\|x_k\|_1-\varepsilon\|x_k\|_r\big)\le\frac12\|(I-P_Q)Ax_k\|_2^2+\mu\big(\|x_k\|_1-\varepsilon\|x_k\|_r\big)=\tilde f(x_k)\le\tilde f(x_0).$
Since $\|x_k\|_1\ge\|x_k\|_r$, we obtain that $\mu(1-\varepsilon)\|x_k\|_r\le\tilde f(x_0)$. This implies that $(x_k)$ is bounded since $0<\varepsilon<1$. To conclude, we prove that every cluster point of $(x_k)$ is a stationary point of (3.1). Let $x^*$ be a cluster point of $(x_k)$, so that $x^*=\lim_{v\to\infty}x_{k_v}$ for some subsequence $(x_{k_v})$ of $(x_k)$. By passing to the limit in (3.6) along the subsequence $(x_{k_v})$, using the asymptotic regularity of $(x_k)$ and the upper semicontinuity of (Clarke) subdifferentials, we obtain the desired result, namely
$0\in A^T(I-P_Q)Ax^*+\mu\,\partial\|x^*\|_1-\mu\varepsilon g(x^*),$
which is nothing else than the first-order optimality condition of (3.1). □

4. DCA algorithm

Now we turn our attention to a DC Algorithm (DCA), where the dual step at each iteration can be efficiently carried out thanks to the accessible subgradients of the largest-$q$ norm $\|\cdot\|_{\sigma_q}$ and of the $\ell_r$ norm $\|\cdot\|_r$. Remember that, to find critical points of $f:=\varphi-\psi$, the DCA consists in designing sequences $(x_k)$ and $(y_k)$ by the following rules

(4.1) $\begin{cases}y_k\in\partial\psi(x_k);\\ x_{k+1}=\operatorname*{argmin}_{x\in\mathbb{R}^n}\Big(\varphi(x)-\big(\psi(x_k)+\langle y_k,x-x_k\rangle\big)\Big).\end{cases}$

Note that by the definition of the subdifferential, we can write

$\psi(x_{k+1})\ge\psi(x_k)+\langle y_k,x_{k+1}-x_k\rangle.$

Since $x_{k+1}$ minimizes $\varphi(x)-\big(\psi(x_k)+\langle y_k,x-x_k\rangle\big)$, we also have

$\varphi(x_{k+1})-\big(\psi(x_k)+\langle y_k,x_{k+1}-x_k\rangle\big)\le\varphi(x_k)-\psi(x_k).$

Combining the last inequalities, we obtain

$f(x_k)=\varphi(x_k)-\psi(x_k)\ge\varphi(x_{k+1})-\big(\psi(x_k)+\langle y_k,x_{k+1}-x_k\rangle\big)\ge f(x_{k+1}).$

Therefore, the DCA leads to a monotonically decreasing sequence $(f(x_k))$ that converges as long as the objective function $f$ is bounded below.
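To fix ideas, here is a minimal, generic DCA loop (our own toy illustration, not taken from the paper), applied to the one-dimensional DC function $f(x)=x^2-|x|$ with $\varphi(x)=x^2$ and $\psi(x)=|x|$; for this choice the convex subproblem in (4.1) has the closed-form solution $x_{k+1}=y_k/2$:

```python
import numpy as np

def dca(x0, subgrad_psi, solve_subproblem, n_iter=20):
    """Generic DCA loop (4.1) for f = phi - psi: pick y_k in the subdifferential
    of psi at x_k, then minimize phi(x) - psi(x_k) - <y_k, x - x_k>."""
    x = x0
    for _ in range(n_iter):
        y = subgrad_psi(x)          # dual step: y_k in d(psi)(x_k)
        x = solve_subproblem(x, y)  # primal step: convex subproblem of (4.1)
    return x

# Toy DC function f(x) = x^2 - |x|, with phi(x) = x^2 and psi(x) = |x|.
# Here the subproblem argmin_x x^2 - |x_k| - y_k*(x - x_k) reduces to x = y_k/2.
x_star = dca(x0=0.8,
             subgrad_psi=lambda x: float(np.sign(x)),
             solve_subproblem=lambda x, y: y / 2.0)
print(x_star)   # 0.5, a stationary point of f
```

Each pass through the loop reproduces the monotone decrease argument above: the linearized subproblem majorizes $f$, so $f(x_{k+1})\le f(x_k)$.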

Now, we can decompose the objective function in (2.6) as follows

(4.2) $\min_x\Big(f(x):=\Big(\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1\Big)-\mu\varepsilon\|x\|_{\sigma_q}\Big),$
where $\mu>0$ and $\varepsilon\in(0,1)$; here $\varphi(x)=\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1$ and $\psi(x)=\mu\varepsilon\|x\|_{\sigma_q}$.

At each iteration, DCA solves the convex subproblem obtained by linearizing the concave term $-\varepsilon\|x\|_{\sigma_q}$, until a convergence condition is satisfied. More precisely, we have

(4.3) $\begin{cases}y_k\in\mu\varepsilon\,\partial\|x_k\|_{\sigma_q};\\ x_{k+1}=\operatorname*{argmin}_{x\in\mathbb{R}^n}\Big(\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1-\big(\mu\varepsilon\|x_k\|_{\sigma_q}+\langle y_k,x-x_k\rangle\big)\Big).\end{cases}$

In particular, if either the function $\varphi$ or $\psi$ is polyhedral, the DCA is said to be polyhedral and terminates in finitely many iterations [15]. Note that our proposed DCA is polyhedral since the largest-$q$ norm term $\varepsilon\|x\|_{\sigma_q}$ can be expressed as a pointwise maximum of $2^qC_n^q$ linear functions, see [10]. On the other hand, the subdifferential of $\|x\|_{\sigma_q}$ at a point $x_k$ is given (see for example [19]) by

(4.4) $\partial\|x_k\|_{\sigma_q}=\operatorname*{argmax}_y\Big(\sum_{i=1}^n|[x_k]_i|\,y_i\ :\ \sum_{i=1}^n y_i=q,\ 0\le y_i\le1,\ i=1,\dots,n\Big),$

that is

$\partial\|x_k\|_{\sigma_q}=\big\{(y_1,\dots,y_n):\ y_{i_1}=\dots=y_{i_q}=1,\ y_{i_{q+1}}=\dots=y_{i_n}=0\big\},$
where $y_{i_j}$ denotes the element of $y$ corresponding to $[x_k]_{i_j}$ in the linear program (4.4). Observe that a subgradient $y\in\partial\|x_k\|_{\sigma_q}$ can be computed efficiently by first sorting the elements $|[x_k]_i|$ in decreasing order, namely $|[x_k]_{i_1}|\ge|[x_k]_{i_2}|\ge\dots\ge|[x_k]_{i_n}|$, and then assigning $1$ to the components $y_i$ corresponding to $[x_k]_{i_1},\dots,[x_k]_{i_q}$.
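In code, this recipe reads as follows (a sketch; we return the signed indicator of the $q$ largest magnitudes, which combines the solution of the linear program (4.4) with the sign pattern of $x_k$ and is a valid subgradient of $\|\cdot\|_{\sigma_q}$; the function name is ours):

```python
import numpy as np

def sigma_q_subgradient(x, q):
    """A subgradient of the largest-q norm ||.||_{sigma_q} at x: signed indicator
    of the q entries of largest magnitude (cf. the linear program (4.4))."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    top_q = np.argsort(np.abs(x))[::-1][:q]
    y[top_q] = np.sign(x[top_q])
    return y

print(sigma_q_subgradient([0.5, -3.0, 0.1, 2.0], q=2))   # [0., -1., 0., 1.]
```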

To conclude, let us consider the following DC formulation of (3.1):

(4.5) $\min_x\Big(\tilde f(x):=\Big(\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1\Big)-\mu\varepsilon\|x\|_r\Big),$
where $r>1$ and $\varepsilon\in(0,1)$; here $\varphi(x)=\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1$ and $\psi(x)=\mu\varepsilon\|x\|_r$.

The subgradient $y\in\partial\|x_k\|_r$ is also available via formula (3.2), and the DCA in this context takes the following form

(4.6) $\begin{cases}y_k\in\mu\varepsilon\,\partial\|x_k\|_r;\\ x_{k+1}=\operatorname*{argmin}_{x\in\mathbb{R}^n}\Big(\frac12\|(I-P_Q)Ax\|^2+\mu\|x\|_1-\big(\mu\varepsilon\|x_k\|_r+\langle y_k,x-x_k\rangle\big)\Big),\end{cases}$
where $y_k=\mu\varepsilon\,g(x_k)$ with
$[g(x_k)]_i=\begin{cases}\dfrac{\operatorname{sign}([x_k]_i)\,|[x_k]_i|^{r-1}}{\|x_k\|_r^{r-1}}&\text{if }x_k\neq0;\\ 0&\text{otherwise.}\end{cases}$

For the details of DCA convergence properties, see [15].
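As an illustration of (4.6), the sketch below (ours; it again assumes $Q=B_\varepsilon(b)$ and solves each convex subproblem approximately with an inner proximal-gradient loop, since no closed form is available) shows one possible implementation:

```python
import numpy as np

def project_ball(y, b, eps):
    """Orthogonal projection onto Q = B_eps(b)."""
    d = y - b
    nd = np.linalg.norm(d)
    return y if nd <= eps else b + (eps / nd) * d

def lr_subgradient(x, r):
    """g(x_k) of (3.2)/(4.6): sign(x_i)|x_i|^(r-1) / ||x||_r^(r-1), and 0 at x = 0."""
    nrm = np.linalg.norm(x, r)
    if nrm == 0.0:
        return np.zeros_like(x)
    return np.sign(x) * np.abs(x) ** (r - 1) / nrm ** (r - 1)

def dca_qlasso_lr(A, b, eps, mu, dc_eps, r=2.0, n_outer=30, n_inner=100):
    """Sketch of the DCA (4.6) with Q = B_eps(b). The convex subproblem
    min_x 0.5*||(I-P_Q)Ax||^2 + mu*||x||_1 - <y_k, x> is solved approximately
    by an inner proximal-gradient (ISTA-type) loop."""
    n = A.shape[1]
    x = np.zeros(n)
    L = 1.01 * np.linalg.norm(A, 2) ** 2        # bound on the Lipschitz constant
    for _ in range(n_outer):
        y_k = mu * dc_eps * lr_subgradient(x, r)     # y_k = mu*dc_eps*g(x_k)
        z = x.copy()
        for _ in range(n_inner):                     # inner ISTA iterations
            Az = A @ z
            grad = A.T @ (Az - project_ball(Az, b, eps)) - y_k
            v = z - grad / L
            z = np.sign(v) * np.maximum(np.abs(v) - mu / L, 0.0)
        x = z
    return x

# toy usage
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[10, 20, 30]] = [1.0, -1.5, 2.0]
b = A @ x_true
print(np.round(dca_qlasso_lr(A, b, eps=0.05, mu=0.1, dc_eps=0.5)[[10, 20, 30]], 2))
```

Any other solver for the convex subproblem (e.g., FISTA or an ADMM variant) could replace the inner loop; the outer DCA structure is unchanged.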

5. Concluding remarks

The focus of this paper is on Q-Lasso, relying on two new DC-penalty methods instead of conventional methods such as $\ell_1$ or $\ell_1-\ell_2$ minimization developed in [13,17] and [21]. Two iterative minimization methods, based on the proximal gradient algorithm and on the majorized penalty algorithm, are designed and their convergence to a stationary point is proved. Furthermore, by means of the DC (difference of convex functions) Algorithm, two other algorithms are devised and their convergence results are also stated.

References

[1] M.A. Alghamdi, M. Ali Alghamdi, N. Shahzad, H.-K. Xu, Properties and iterative methods for the Q-Lasso, Abstr. Appl. Anal. (2013), Article ID 250943, 8 pages.

[2] H. Attouch, J. Bolte, B.F. Svaiter, Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods, Math. Program., Ser. A 137 (2013) 91–129.

[3] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.

[4] A.M. Bruckstein, D.L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Rev. 51 (2009) 34–81.

[5] Y. Censor, T. Elfving, A multiprojection algorithm using Bregman projections in a product space, Numer. Algorithms 8 (1994) 221–239.

[6] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. 14 (2007) 707–710.

[7] S.S. Chen, D.L. Donoho, M.A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. 20 (1998) 33–61.

[8] D. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52 (2006) 1289–1306.

[9] G. Gasso, A. Rakotomamonjy, S. Canu, Recovering sparse signals with a certain family of nonconvex penalties and DC programming, IEEE Trans. Signal Process. 57 (12) (2009) 4686–4698.

[10] J. Gotoh, A. Takeda, K. Tono, DC formulations and algorithms for sparse optimization problems, Math. Program. (2017) 1–36.

[11] R. Horst, N.V. Thoai, DC programming: overview, J. Optim. Theory Appl. 103 (1999) 1–43.

[12] S. Ji, K.-F. Sze, Z. Zhou, A.M.-C. So, Y. Ye, Beyond convex relaxation: A polynomial-time non-convex optimization approach to network localization, in: Proceedings of the 32nd IEEE International Conference on Computer Communications (INFOCOM 2013), Torino, 2013.

[13] Y. Lou, M. Yan, Fast l1-l2 minimization via a proximal operator, J. Sci. Comput. (2017) 1–19.

[14] A. Moudafi, A. Gibali, l1-l2 regularization of split feasibility problems, Numer. Algorithms (2017) 1–19, http://dx.doi.org/10.1007/s11075-017-0398-6.

[15] T. Pham Dinh, H.A. Le Thi, Convex analysis approach to D.C. programming: Theory, algorithms and applications, Acta Math. Vietnamica 22 (1) (1997) 289–355.

[16] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc., Ser. B 58 (1996) 267–288.

[17] P. Yin, Y. Lou, Q. He, J. Xin, Minimization of l1-2 for compressed sensing, SIAM J. Sci. Comput. 37 (2015) 536–563.

[18] Y. Wang, New improved penalty methods for sparse reconstruction based on difference of two norms, Technical Report (2013) 1–11.

[19] B. Wu, C. Ding, D.F. Sun, K.C. Toh, On the Moreau-Yosida regularization of the vector k-norm related functions, SIAM J. Optim. 24 (2014) 766–794.

[20] H.-K. Xu, M.A. Alghamdi, N. Shahzad, Regularization for the split feasibility problem, J. Nonlinear Convex Anal. 17 (3) (2015) 513–525.

[21] Z. Xu, X. Chang, F. Xu, H. Zhang, L1/2 regularization: a thresholding representation theory and a fast solver, IEEE Trans. Neural Networks Learn. Syst. 23 (2012) 1013–1027.

Acknowledgements

Publisher's note: The publisher wishes to inform readers that the article "Difference of two norms-regularizations for Q-Lasso" was originally published by the previous publisher of Applied Computing and Informatics and the pagination of this article has been subsequently changed. There has been no change to the content of the article. This change was necessary for the journal to transition from the previous publisher to the new one. The publisher sincerely apologises for any inconvenience caused. To access and cite this article, please use Moudafi, A. (2020), "Difference of two norms-regularizations for Q-Lasso", Applied Computing and Informatics, Vol. 17 No. 1, pp. 79-89. The original publication date for this paper was 19/17/2018.

Corresponding author

Abdellatif Moudafi can be contacted at: abdellatif.moudafi@univ-amu.fr
