

Smoothed L0 (SL0) Algorithm for Sparse Decomposition

SL0 (Smoothed L0) is an algorithm for finding the sparsest solutions of an underdetermined system of linear equations As = x. One of its main applications is in Compressive Sensing (CS). SL0 is also very fast: for example, it is about 2 to 3 orders of magnitude faster than L1-magic. SL0 tries to directly minimize the L0 norm. This is contrary to most other existing algorithms (e.g. Basis Pursuit), which replace the L0 norm by other cost functions (like L1). Note also that the equivalence of minimizing L1 and L0 is only asymptotic, and does not always hold (for a counter-example, see here).
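As a concrete illustration of the problem setting (not part of SL0 itself), the short Python sketch below builds a small underdetermined system As = x whose generating vector s is sparse; the dimensions and sparsity level are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 40, 100, 5                 # n equations, m unknowns (n < m), k non-zeros
A = rng.standard_normal((n, m))      # random underdetermined system matrix

s_true = np.zeros(m)                 # a k-sparse vector to be recovered
support = rng.choice(m, size=k, replace=False)
s_true[support] = rng.standard_normal(k)

x = A @ s_true                       # observations; As = x has infinitely many solutions

SL0's task is then to pick, among the infinitely many solutions of As = x, one with (approximately) the fewest non-zero entries.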

The main idea of SL0 is to use a "smooth" measure of the L0 norm. More precisely, it approximates the L0 norm of a vector s by a smooth function Fσ(s), where σ determines the quality of the approximation: the larger σ, the smoother Fσ(.) but the worse the approximation to the L0 norm; the smaller σ, the better the approximation to the L0 norm but the less smooth Fσ(.). The approximation tends to equality for σ → 0. Moreover, for σ → ∞, Fσ(.) tends to a very smooth and convex function (contrary to the L0 norm itself). The goal is then to minimize Fσ(s) for a very small value of σ. However, this is difficult, because for small values of σ, Fσ(.) is highly non-smooth with a lot of local minima. The idea of SL0 for escaping these local minima is then to use a "Graduated Non-Convexity (GNC)" procedure: it starts with a very large σ (for which there are no local minima) and gradually decreases σ to zero. For each σ, the found minimum of Fσ(.) is used as the starting point for locating the minimum of Fσ(.) for the next (smaller) σ, using a steepest descent approach.
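The text above does not spell out Fσ, but the SL0 paper uses the Gaussian kernel Fσ(s) = Σᵢ exp(−sᵢ²/(2σ²)), so that m − Fσ(s) approximates the L0 norm of s ∈ ℝᵐ and maximizing Fσ corresponds to minimizing the smoothed L0 measure. The Python sketch below follows that published scheme; the step size mu_0, the inner iteration count, and the σ schedule are illustrative defaults, not prescribed values.

import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu_0=2.0, inner_iters=3):
    # Sketch of SL0: graduated non-convexity over a decreasing sigma
    # sequence, with steepest descent on the smoothed L0 measure and
    # projection back onto the feasible set {s : As = x}.
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                    # minimum-L2-norm solution (the large-sigma minimizer)
    sigma = 2.0 * np.max(np.abs(s))   # start with a large sigma: no local minima

    while sigma > sigma_min:
        for _ in range(inner_iters):
            # delta = -sigma^2 * grad F_sigma(s) for the Gaussian kernel,
            # so s - mu_0 * delta is an ascent step on F_sigma, i.e. a
            # descent step on the smoothed L0 measure m - F_sigma.
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu_0 * delta
            # Project back onto the solution set of As = x.
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decrease       # graduated non-convexity: shrink sigma
    return s

Applied to the example system above, s_hat = sl0(A, x) should closely match s_true, provided the true solution is sparse enough relative to the number of equations.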
