Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- 1 The Modern Mathematics of Deep Learning
- 2 Generalization in Deep Learning
- 3 Expressivity of Deep Neural Networks
- 4 Optimization Landscape of Neural Networks
- 5 Explaining the Decisions of Convolutional and Recurrent Neural Networks
- 6 Stochastic Feedforward Neural Networks: Universal Approximation
- 7 Deep Learning as Sparsity-Enforcing Algorithms
- 8 The Scattering Transform
- 9 Deep Generative Models and Inverse Problems
- 10 Dynamical Systems and Optimal Control Approach to Deep Learning
- 11 Bridging Many-Body Quantum Physics and Deep Learning via Tensor Networks
7 - Deep Learning as Sparsity-Enforcing Algorithms
Published online by Cambridge University Press: 29 November 2022
Summary
Over the last few decades, sparsity has become a driving force in the development of new and better algorithms in signal and image processing. In the context of the recent rise of deep learning, a pivotal work by Papyan et al. showed that deep neural networks can be interpreted and analyzed as pursuit algorithms seeking sparse representations of signals belonging to a multilayer synthesis sparse model. In this chapter we review recent contributions showing that this observation is correct but incomplete, in the sense that such a model provides a symbiotic mixture of coupled synthesis and analysis sparse priors. We make this observation precise and use it to expand on uniqueness guarantees and stability bounds for the pursuit of multilayer sparse representations. We then explore a convex relaxation of the resulting pursuit and derive efficient optimization algorithms to approximate its solution. Importantly, we deploy these algorithms in a supervised learning formulation that generalizes feedforward convolutional neural networks into recurrent ones, improving their performance without increasing the number of parameters of the model.
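To make the pursuit interpretation concrete, the following minimal numpy sketch (an illustration, not the chapter's exact algorithms) contrasts the two ingredients the summary alludes to: a single layered soft-thresholding sweep, which corresponds to the feedforward pass of a neural network, and an iterated ISTA on the effective multilayer dictionary, whose repeated reuse of the same weights is what turns the feedforward architecture into a recurrent one. All dictionary shapes, thresholds, and helper names here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (element-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def layered_pursuit(x, dictionaries, thresholds):
    # One sweep of layered soft-thresholding: gamma_i = S_tau(D_i^T gamma_{i-1}).
    # With a nonnegativity constraint this coincides with a feedforward
    # network whose biases play the role of the thresholds.
    gamma = x
    for D, tau in zip(dictionaries, thresholds):
        gamma = soft_threshold(D.T @ gamma, tau)
    return gamma

def ista_pursuit(x, dictionaries, lam, n_iter=50):
    # ISTA for basis pursuit on the effective (product) dictionary
    # D = D_1 D_2 ... D_L. Each extra iteration reuses the same
    # dictionaries, so unrolling this loop yields a recurrent network
    # with no additional parameters.
    D = dictionaries[0] if len(dictionaries) == 1 else np.linalg.multi_dot(dictionaries)
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    gamma = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ gamma - x)
        gamma = soft_threshold(gamma - step * grad, lam * step)
    return gamma

# Toy usage: a two-layer model with random dictionaries.
rng = np.random.default_rng(0)
D1, D2 = rng.standard_normal((64, 128)), rng.standard_normal((128, 256))
x = rng.standard_normal(64)
gamma_ff = layered_pursuit(x, [D1, D2], thresholds=[0.1, 0.1])  # feedforward pass
gamma_rec = ista_pursuit(x, [D1, D2], lam=0.1, n_iter=20)       # recurrent refinement
```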
- Type: Chapter
- Information: Mathematical Aspects of Deep Learning, pp. 314–337
- Publisher: Cambridge University Press
- Print publication year: 2022