Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Mathematical models and practical solvers for uniform motion deblurring
- 2 Spatially-varying image deblurring
- 3 Hybrid-imaging for motion deblurring
- 4 Efficient, blind, spatially-variant deblurring for shaken images
- 5 Removing camera shake in smartphones without hardware stabilization
- 6 Multi-sensor fusion for motion deblurring
- 7 Motion deblurring using fluttered shutter
- 8 Richardson–Lucy deblurring for scenes under a projective motion path
- 9 HDR imaging in the presence of motion blur
- 10 Compressive video sensing to tackle motion blur
- 11 Coded exposure motion deblurring for recognition
- 12 Direct recognition of motion-blurred faces
- 13 Performance limits for motion deblurring cameras
- Index
- References
9 - HDR imaging in the presence of motion blur
Published online by Cambridge University Press: 05 June 2014
Introduction
Digital cameras convert incident light energy into electrical signals and present them as an image after passing the signals through a number of processes, including sensor correction, noise reduction, scaling, gamma correction, image enhancement, color space conversion, frame-rate change, compression, and storage/transmission (Nakamura 2005). Although today's camera sensors have high quantum efficiency and high signal-to-noise ratios, they have an inherent upper limit (the full well capacity) on the amount of light energy they can accumulate; likewise, the smallest signal a sensor can register depends on its pre-set sensitivity. The total variation in the magnitude of irradiance incident on the camera is called the dynamic range (DR) and is defined as DR = (maximum signal value)/(minimum signal value). Most digital cameras on the market today are unable to capture this entire range owing to these hardware limitations: in scenes with high dynamic range (HDR), parts of the image either appear dark or become saturated. The process of overcoming this limitation and estimating the original scene data is referred to as high dynamic range imaging (HDRI) (Debevec & Malik 1997, Mertens, Kautz & Van Reeth 2007, Nayar & Mitsunaga 2000).
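As a rough illustration of this definition, the sketch below converts a hypothetical sensor's full well capacity and noise floor into a DR ratio and expresses it in stops and decibels. The figures (30,000 e- full well, 5 e- noise floor) are assumed purely for illustration and do not come from the chapter.

```python
import math

def dynamic_range(max_signal, min_signal):
    """Return the dynamic range as a ratio, in stops (log2) and in decibels."""
    ratio = max_signal / min_signal
    stops = math.log2(ratio)
    decibels = 20.0 * math.log10(ratio)
    return ratio, stops, decibels

# Hypothetical sensor: 30,000 e- full well capacity, 5 e- noise floor.
ratio, stops, db = dynamic_range(30_000, 5)
print(f"DR = {ratio:.0f}:1 ({stops:.1f} stops, {db:.1f} dB)")
```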
Over the years, several algorithmic approaches have been investigated for estimating scene irradiance (see, for example, Debevec & Malik (1997), Mann & Picard (1995), Mitsunaga & Nayar (1999)). The basic idea in these approaches is to capture multiple images of a scene with different exposure settings and to extract the HDR information from these observations algorithmically. By varying the exposure settings, one can control the amount of energy received by the sensor and thereby work around its upper and lower limits, as in the sketch below.
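The multi-exposure idea can be made concrete with a minimal merge sketch. It assumes a linear camera response and a perfectly static scene; the function name merge_exposures and the hat-shaped weighting are illustrative choices in the spirit of Debevec & Malik (1997), not the implementation used in this chapter. Each exposure provides a per-pixel irradiance estimate Z/t, and the estimates are averaged with weights that downweight under- and over-exposed pixels.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Combine several exposures of a static scene into a relative irradiance map.

    images         : list of float arrays in [0, 1], all of the same shape
    exposure_times : matching list of exposure times in seconds
    Assumes a linear camera response; real pipelines must also recover or
    calibrate the nonlinear response curve.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tone pixels, downweight pixels that are
        # nearly under- or over-exposed in this particular frame.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)   # per-frame irradiance estimate Z / t
        den += w
    return num / np.maximum(den, 1e-8)

# Usage with synthetic data: three exposures of the same (static) scene.
rng = np.random.default_rng(0)
irradiance_true = rng.uniform(0.01, 10.0, size=(4, 4))
times = [1 / 250, 1 / 30, 1 / 4]
shots = [np.clip(irradiance_true * t, 0.0, 1.0) for t in times]
irradiance_est = merge_exposures(shots, times)
```

In practice the camera response is nonlinear and the scene or camera may move between shots, which is why the cited methods recover the response curve and why this chapter studies HDR imaging in the presence of motion blur.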
- Type: Chapter
- Information: Motion Deblurring: Algorithms and Systems, pp. 184–206
- Publisher: Cambridge University Press
- Print publication year: 2014