4 - Sparse optimization algorithms
from Part I - Compressive Sensing Technique
Published online by Cambridge University Press: 05 June 2013
Summary
This chapter reviews a collection of sparse optimization models and algorithms. The review focuses on introducing these algorithms along with their motivations and basic properties.
This chapter is organized as follows. Section 4.1 gives a short overview of convex optimization, including the definitions of convex sets and functions, the concepts of local and global optimality, and optimality conditions. A list of sparse optimization models is presented in Section 4.2; these models handle different types of signals and different kinds of noise, and can also incorporate additional features as objectives and constraints that arise in practical problems. Section 4.3 demonstrates that convex sparse optimization problems can be transformed into equivalent cone programs and solved by off-the-shelf algorithms, yet it argues that these algorithms are usually inefficient or even infeasible for the large-scale problems typical of CS applications. Sections 4.4-4.13 cover a large (yet hardly complete) variety of algorithms for sparse optimization. The list is long because sparse optimization is a common ground on which many optimization techniques, old and new, prove useful in different senses; they have different strengths and fit different applications. One common reason many of these algorithms are efficient is their use of shrinkage-like proximal operations, which can be computed very cheaply, so we start with shrinkage in Section 4.4. Section 4.5 then presents a prox-linear framework and gives several algorithms under this framework that are based on gradient descent and take advantage of shrinkage-like operations.
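To make the two ingredients named above concrete, the following is a minimal sketch (not from the chapter itself) of the soft-thresholding (shrinkage) operator and of one prox-linear iteration built on it, ISTA, for the LASSO-type problem min_x ½‖Ax − b‖² + λ‖x‖₁. The function names `shrink` and `ista` and all parameter values are illustrative choices, not notation from the text.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (shrinkage): the proximal map of tau * ||.||_1.
    Componentwise, shrink(x, tau) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, num_iters=500):
    """One example of a prox-linear (gradient + shrinkage) scheme for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)          # gradient step on the smooth least-squares term
        x = shrink(x - t * grad, t * lam) # cheap shrinkage-like proximal step
    return x
```

The per-iteration cost is dominated by the two matrix-vector products; the proximal step itself is a closed-form componentwise operation, which is what makes shrinkage-based methods attractive at large scale.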
- Type: Chapter
- Information: Compressive Sensing for Wireless Networks, pp. 69-117. Publisher: Cambridge University Press. Print publication year: 2013.