
Toward a signal-processing foundation for computational sensing and imaging: electro-optical basis and merit functions

Published online by Cambridge University Press:  01 August 2017

David G. Stork*
Affiliation:
Rambus Labs, 1050 Enterprise Way, Suite 700, Sunnyvale, CA 94089, USA
*
Corresponding author: D.G. Stork, Email: [email protected]

Abstract

We highlight the need for – and describe initial strategies to find – new digital-optical basis functions and performance merit functions to serve as a foundation for designing, analyzing, characterizing, testing, and comparing a range of computational imaging systems. Such functions will provide a firm theoretical foundation for computational sensing and imaging and enhanced design software, thereby broadly speeding the development of computational imaging systems.

Type
Industrial Technology Advances
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
Copyright © The Authors, 2017

I. INTRODUCTION

Progress in both imaging optics and digital image processing has depended upon the choice of basis and merit functions. In imaging optics, basis functions (such as Zernike polynomials) are used to describe the wavefront error; in image processing, basis functions (such as two-dimensional discrete cosine or wavelet functions) are used to represent the digital image. In imaging optics, merit functions (such as the root-mean-square size of the point-spread function or the value of the modulation transfer function) are used to judge the quality of an optical image and thus of the imaging system that produced it. In digital image processing, merit functions (such as peak signal-to-noise ratio or root-mean-squared error (RMSE)) are used to judge the quality of a digital image, including the quality of processed or compressed images. Such functions have been derived separately by the optical and the image-processing communities, are expressed in different units, and are rarely integrated or used in conjunction. In the recent era of computational imaging, however – in which optics and digital image processing are co-designed for a desired end-to-end function – these prior functions are no longer appropriate.

In this paper, we highlight the need for joint electro-optical merit and basis functions and suggest a strategy for deriving them. We begin in Section II with a short history of basis functions and their importance. Then we turn in Section III to conceptual foundations for such functions for computational imaging, including their desirable and useful properties. We outline one approach to finding such new functions in Section IV and conclude in Section V by speculating on the path forward.

II. BRIEF HISTORY OF MERIT AND BASIS FUNCTIONS IN OPTICS AND IMAGE PROCESSING

For the first several centuries of optical imaging, system designers were guided by trial and error, ad hoc rules of thumb, and subjective visual assessment of the images produced by their optical systems, such as telescopes, microscopes, spectacles (eyeglasses), camera obscuras, and so on [Reference Darrigol1]. In the 19th century Petzval and Seidel, and in the 20th century Nijboer and Zernike, provided a conceptual foundation and ultimately a representation of wavefront error that was mathematically complete and whose terms could be interpreted optically. For instance, defocus, coma, and other aberrations each corresponded directly to individual terms in an expansion of the wavefront error in circular Zernike polynomials. Furthermore, the development of principled performance metrics such as the RMS aberration error, Strehl ratio, Struve ratio, Rayleigh criterion, and measures of the Optical Transfer Function all greatly speeded the understanding and development of optical imaging technology. A rigorous theory of wave propagation related the basis functions, point-spread functions, and optical imaging performance merit functions. Such bases and metrics have been incorporated into all optical design software, thus reducing design time and improving the performance of microscopes, telescopes, cameras, CMOS photolithography systems, and indeed all imaging devices.

Analogously, in the last half of the 20th century, there has been a great deal of work on basis functions and merit functions in digital image processing, for use in image analysis, image compression, and related tasks. Leading area-based raster digital image representations include the Discrete Cosine Transform or DCT (the foundation for the JPEG image compression standard), numerous two-dimensional wavelets (the foundation for JPEG 2000, ICER, etc.), and others [Reference Gonzalez and Woods2]. Likewise, image quality metrics such as mean-squared error (MSE) and its many variants (weighted MSE, predictive MSE, etc.) have provided a foundation for the design and comparison of different image-processing and compression algorithms. Nearly all these functions are supported in commercial and research image-processing software. (Here we will not consider color representation or image quality metrics that depend upon human visual perception – a very difficult application-dependent problem [Reference Pedersen and Hardenberg3].)

To a large extent, the disciplines of optics and signal processing developed separately, continue to be taught separately, address different classes of problems, rely on different theory and different performance metrics, and exploit different software tools. It is common for professionals in optics to have no knowledge of image processing and for professionals in image processing to have no knowledge of optics. As an illustration, note that Born and Wolf's updated classic Principles of Optics (7th ed.) [Reference Born and Wolf6] has no index entry for wavelets, and Pratt's classic Digital Image Processing (2nd ed.) [Reference Pratt7] devotes a total of just four pages to models of optics (which are not used elsewhere in the book).

III. FOUNDATIONS FOR COMPUTATIONAL IMAGING

Over the past few years, the hybrid discipline of computational imaging has produced powerful new imaging architectures (e.g., plenoptic imaging) and new theory (compressive sensing) [Reference Brady8]. In this new era of computational imaging – in which the digital–optical system is viewed as an information channel and the optics and signal processing are co-designed for optimal end-to-end performance (Fig. 1) – the traditional basis functions and merit functions mentioned above are inadequate, and may even serve as an impediment to progress. For instance, traditional metrics for optical metrology such as RMS spot size are not appropriate in computational imaging, because a small spot size may nevertheless correspond to an optical transfer function with zeros, and thus to the loss of image information. Such information cannot be recovered through signal processing later in the overall imaging channel. Instead, a somewhat larger point-spread function whose optical transfer function lacks zeros may be preferred, because such a point-spread function can preserve image information that can be amplified and thus recovered digitally. Thus as far as the final digital image is concerned – the only image by which the computational imager should be judged – some particular large point-spread function should be preferred over a smaller one. This is precisely the opposite of what current point-spread-function-based optical merit functions would recommend.

Fig. 1. In traditional or sequential optical design, one first designs the optics to yield a high-quality optical image on the image sensor, and then designs the digital signal processing so as to yield a high-quality final digital image. For the first design stage, traditional optical representations and merit functions such as Zernike polynomials and RMS point-spread-function size are used; for the second design stage, traditional image-processing representations and merit functions such as Daubechies wavelets and RMS error are used. In true joint design in computational imaging for end-to-end performance, new (digital-optical) basis functions and merit functions are needed [Reference Robinson, Stork, Howard and Koshel4, Reference Stork and Robinson5].

Consider an illustration of the inadequacy of current metric functions for describing and analyzing computational imaging systems: here, a traditional rotationally symmetric lens, a linear photodetector array, and linear convolution-based signal processing. Suppose the lens is imperfect or aberrated (i.e., does not yield an ideal, traditional optical image), yet we (somehow) design the overall signal processing such that the overall imaging system takes a point in the scene and produces a corresponding point in the final digital image; thus this computational imager produces a faithful digital representation of the scene. Suppose that to describe our system we use the traditional optical basis set ${\cal J}^{o}$ comprising the circular Zernike polynomials as well as a signal-processing basis set ${\cal J}^{s}$ comprising two-dimensional Gabor wavelet functions [Reference Lee9]. Suppose the sole optical aberration of the lens is horizontal coma. The aberration function then has a particularly simple representation: $\Phi(\rho, \theta) \propto Z_{3}^{1}(\rho, \theta)$. The point-spread function, and thus the pattern of light on the image sensor, would be somewhat complex, and the required image processing would be extremely complicated – surely an infinite sum of Gabor wavelets, $\sum\nolimits_{r, s = 1}^{\infty} G_{r,s}(x, y)$. Conversely, we might have a very complex lens whose aberration function is an extremely complex infinite sum of Zernike polynomials but whose projected image would be corrected by very simple image processing, for instance convolution with a small number of Gabor wavelets.

These examples show that a system representation based jointly on Zernike polynomials and two-dimensional Gabor wavelets is not compact (simple systems require an infinite number of terms in the optical domain, the digital domain, or both), is quite unbalanced (the representation of typical computational imagers might be simple in one domain while very complex in the other), and provides little or no insight into the overall function of the computational imaging system. Instead, we prefer a representation that is compact, at least for typical systems of interest. Such a representation could be expressed in the general form

(1) $${\cal J}(\rho, \theta; x, y) = \sum\limits_{j \in {\cal O}} {\cal J}^o_j(\rho, \theta) \cdot \sum\limits_{k \in {\cal S}} {\cal J}^s_k (x,y),$$

where ${\cal J}^{o}$ represents basis components (indexed by j in a set of optical values ${\cal O}$) in the purely optical domain and ${\cal J}^{s}$ represents basis components (indexed by k in a set of digital signal-processing values ${\cal S}$) in the purely two-dimensional digital signal domain.

A) Desiderata for basis and merit functions

There are a number of desirable properties or desiderata we seek for the full digital–optical representation implied by equation (1). (Of course it is unlikely that all can be fulfilled simultaneously; there will likely be tradeoffs.) These properties include:

  • Mathematical completeness: The representation should be able to describe any optical system and any linear digital processing operation, and thus describe any linear transformation from external (optical) scene to final digital image.

  • Ortho-normality: Elements in the basis function set should be linearly independent and therefore non-redundant. This property is necessary for a representation of a computational imaging system to be unique, and for transformations to be invertible – at least when the system is non-degenerate. (An overcomplete basis may be desirable, however, because it can enable the representation of typical computational systems to be sparse.) The basis elements should be normalized for ease of calculation and for direct interpretation and comparison.

  • Interpretability: To the extent possible, the basis functions should be interpretable or at least easily visualized to facilitate understanding and scholarly presentation.

  • Computational efficiency: The representation should admit efficient computation, just as the coefficients in a DCT expansion admit very fast computation through the fast Fourier transform (FFT).

  • Balanced complexity: The representation of “typical” or “representative” computational imaging systems should require a small number of terms (basis elements) in each domain, optical and digital. In an (undesirable) unbalanced joint representation, one might find that a single basis element in one domain (e.g., optical) requires an infinite number of basis functions in the other domain (e.g., digital).

  • Low condition number for digital signal processing: All computational imaging systems possess noise, such as Poisson photon shot noise, sensor read noise, analog-to-digital (A/D) conversion noise, and others. The condition number of the system matrix, $\kappa({\bf R}) = \vert \lambda_{max}/\lambda_{min}\vert \ge 1$ – the ratio of the magnitudes of the matrix's largest and smallest eigenvalues – should be small in order to compute the final image accurately.
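As a toy illustration of this last desideratum, the sketch below (our own construction, not from the paper) compares the condition numbers of two one-dimensional circulant blur matrices; a blur whose transfer function approaches zero yields a large condition number and hence poor robustness to noise:

```python
import numpy as np

def circulant_condition_number(kernel, n):
    """Condition number of the n x n circulant matrix built from `kernel`."""
    first_row = np.zeros(n)
    first_row[:len(kernel)] = kernel
    # Eigenvalues of a circulant matrix are the DFT of its first row.
    mags = np.abs(np.fft.fft(first_row))
    return mags.max() / mags.min()

# Mild blur: its transfer function stays well away from zero.
sharp = np.array([0.8, 0.1, 0.1])
# Boxcar blur: its transfer function has near-zeros.
boxcar = np.ones(5) / 5.0

print(circulant_condition_number(sharp, 64))   # small: easy to invert digitally
print(circulant_condition_number(boxcar, 64))  # much larger: noise is amplified
```

The same computation extends directly to two-dimensional blurs via the two-dimensional DFT.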

There are a number of properties or desiderata we seek for end-to-end merit function. (Of course it is unlikely that all can be fulfilled simultaneously; there will likely be tradeoffs.) These properties include:

  • Scalar: The merit function should be a scalar (as are RMS wavefront error, RMS point-size, digital image RMS error, etc.), so that system designers can rank order candidate systems, and so that architects of computational imaging design software can craft gradient-descent iterative or other optimization routines.

  • Non-negative: The merit function should be non-negative so that when the merit function value is zero, the ideal or “perfect” digital–optical performance is obtained, just as in the limit a vanishing RMS point-spread-function spot size corresponds to a “perfect” optical image and a vanishing RMS pixel error corresponds to a “perfect” digital image.

  • Scale and unit independent: The merit function should not depend upon overall physical scale of the optical component or number of pixels in the final digital representation, so as to facilitate comparisons of designs and imaging architectures independent of scale.

  • Extendable to alternate image estimation tasks: The merit function should be naturally extendable/adaptable to image-based tasks other than faithfully reproducing a scene, for instance estimating visual motion, creating scene depth maps (appropriate for certain imaging architectures), and others.

We consider the current state of basis and merit functions first in the optical domain (Section III.B), then in the two-dimensional digital signal-processing domain (Section III.C), and finally in the joint digital–optical domain (Section III.D).

B) Optical basis and merit functions

Current optical design (for axially symmetric systems) relies heavily on the circular Zernike polynomials, $Z_{n}^{m}(\rho, \theta)$, defined as

(2) $$\eqalign{ Z_n^m(\rho, \theta) &= R_n^m(\rho)\cos(m\theta) \quad ({\rm even\ polynomials}) \cr Z_n^{-m}(\rho, \theta) &= R_n^m(\rho)\sin(m\theta) \quad ({\rm odd\ polynomials}) }$$

where ρ is the normalized radial distance from the optical axis, θ the azimuthal angle, and n the order of the radial polynomial

(3) $$R_n^m(\rho) = \sum_{k=0}^{(n-m)/2} {(-1)^k (n-k)! \over k!\, \left({n+m \over 2} - k\right)!\, \left({n-m \over 2} - k\right)!}\, \rho^{n-2k},$$

for n − m even, and 0 for n − m odd [Reference Born and Wolf6, p. 523–525]. These circular polynomials obey the ortho-normalization relation

(4) $$\int_{\theta = 0}^{2\pi} \int_{\rho = 0}^{1} Z_n^m(\rho, \theta)\, Z_{n'}^{m'}(\rho, \theta)\, \rho\, d\rho\, d\theta = {\epsilon_m \pi \over 2(n+1)}\, \delta_{n, n'}\, \delta_{m, m'},$$

where the Neumann factor $\epsilon_m$ is 2 if m = 0 and 1 otherwise, and $\delta_{\cdot, \cdot}$ is the Kronecker delta (Fig. 2). The difference between any wavefront on a disk (e.g., the exit pupil) and a spherical reference wavefront can be represented as a linear sum of these basis functions (Fig. 3). The coefficients in such a representation reveal the classes of aberration present and hence what optical corrections are needed in order to reduce the RMS wavefront error or the RMS point-spread-function spot size.
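As a numerical sanity check of equations (2)–(4), the following sketch (function names are our own) evaluates the radial polynomials by direct summation and verifies the ortho-normalization relation by quadrature on the unit disk:

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """R_n^m(rho) per equation (3); zero when n - m is odd."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return np.zeros_like(rho)
    return sum((-1)**k * factorial(n - k)
               / (factorial(k) * factorial((n + m)//2 - k) * factorial((n - m)//2 - k))
               * rho**(n - 2*k)
               for k in range((n - m)//2 + 1))

def zernike(n, m, rho, theta):
    """Z_n^m per equation (2): cosine for m >= 0, sine for m < 0."""
    ang = np.cos(m*theta) if m >= 0 else np.sin(-m*theta)
    return zernike_radial(n, m, rho) * ang

def disk_inner_product(f, g, num=400):
    """Integrate f*g*rho over the unit disk by the midpoint rule."""
    rho = (np.arange(num) + 0.5) / num
    theta = 2*np.pi*(np.arange(num) + 0.5) / num
    R, T = np.meshgrid(rho, theta)
    dA = (1.0/num) * (2*np.pi/num)
    return np.sum(f(R, T) * g(R, T) * R) * dA

Z20 = lambda r, t: zernike(2, 0, r, t)  # defocus
Z31 = lambda r, t: zernike(3, 1, r, t)  # horizontal coma

# Equation (4) predicts eps_m * pi / (2(n+1)) = pi/3 for defocus with itself.
print(disk_inner_product(Z20, Z20))  # approximately pi/3
print(disk_inner_product(Z20, Z31))  # approximately 0 (orthogonality)
```

The quadrature errors shrink as the grid is refined, since the integrands are polynomial in the radius.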

Fig. 2. The 21 lowest-order circular Zernike polynomials, $Z_{n}^{m}(\rho, \theta)$ (cf. equations (2)–(4), below): odd functions, even functions, and circularly symmetric functions.

Fig. 3. Every complex wavefront (on a disk) can be represented as the sum of circular Zernike basis functions. Typically one represents the wavefront error – the difference between the actual wavefront and an ideal, spherical wavefront – as a weighted sum of circular Zernike polynomials.

A number of alternative basis sets to the circular Zernike polynomials have been proposed, such as the Stephenson polynomials [Reference Stephenson10], shown in Fig. 4. One benefit of this complete representation is that the mean-power error is naturally expressed as a term in the wavefront expansion. Moreover, if the wavefronts produced by two optical systems are very similar, their corresponding Stephenson coefficients will generally be more similar than their corresponding Zernike coefficients. For this reason, Stephenson polynomials have found increasing use in searching lens-design and optical-patent databases, in analyzing optical manufacturing constraints, and so on. Finally, the coefficients in an expansion of the wavefront error in the Stephenson basis are independent of pupil size, a property that facilitates comparisons of optical systems at different scales. One drawback is that these basis functions do not relate to traditional aberrations as naturally as the Zernike polynomials do.

Fig. 4. The 15 lowest-order basis functions in the Stephenson basis set. Some of the lowest-order basis functions are equivalent to individual circular Zernike polynomials but all others (an infinite number) differ from Zernike polynomials.

For the last several centuries, the vast majority of optical systems – especially optical imaging systems – have employed circular apertures, in large part because circular lens and mirror elements are easier to manufacture than those of other shapes. As such, the circular Zernike polynomials have served the optics community well and their use is justifiably widespread. In the current era of computational imaging, however – where there is greater freedom in the design and manufacture of optical components and where digital signal processing is an essential step in the data pipeline – the circular Zernike basis will be less appropriate and will likely present an impediment to progress. A basis set for square or rectangular optical systems, analogous to the circular Zernike polynomials, differs rather markedly from the circular Zernike polynomials (Fig. 5) [Reference Al-Hamdani and Hasan11]. It is unlikely that even this basis will be appropriate for many computational imaging systems. (Image-processing bases on a square or rectangular support are likely to be simple, however, because they may be separable into a product of one-dimensional bases, i.e., $G(x, y) = g(x)h(y)$.)

Fig. 5. The first 20 basis functions in an extension of Zernike polynomials to imaging systems with a square aperture. These functions are derived from circular Zernike polynomials by orthonormality defined over a square aperture support.

There are a number of traditional optical merit functions, such as the RMS wavefront error, Strehl ratio, Rayleigh criterion, the size of the point-spread function, or a number of measures of the Modulation Transfer Function or Optical Transfer Function (Fig. 6). As mentioned above, these measures are inappropriate for computational imaging systems for a number of reasons. A small point-spread function may have zeros in its modulation transfer function, thereby losing spatial information that cannot be recovered through digital signal processing. More generally and importantly, such merit functions do not describe the full end-to-end performance of a digital–optical computational imager.

Fig. 6. (Left) An Airy disk with a red circle indicating the spot size at half maximum. Traditional optical merit functions favor small spot sizes. (Right) A one-dimensional Modulation Transfer Function (MTF) for a diffraction-limited imaging system, represented as a function of the normalized spatial frequency ν. Aberrated systems have MTFs that lie below this physical limit.

C) Signal-processing basis and merit functions

There are a number of basis function sets that have been used in image processing and image compression. The simplest mathematically and computationally – and the most widely used – is based on the DCT, a separable product of cosine functions (equation (5)). One can index pixels by their discrete coordinates in orthogonal directions, $n_1$ and $n_2$, and represent the basis functions as:

(5) $$G_{k_1, k_2}(n_1, n_2) = \cos\left[{\pi \over N_1}\left(n_1 + {1 \over 2}\right) k_1\right] \cos\left[{\pi \over N_2}\left(n_2 + {1 \over 2}\right) k_2\right],$$

where $k_1$ and $k_2$ serve as components of a two-dimensional discrete wave vector, and $N_1$ and $N_2$ denote the integer size of the image block being transformed. Figure 7 shows the 64 basis functions for a DCT appropriate for representing an 8-by-8-pixel image block. The harmonic nature and separability of the DCT permit this transform to be computed easily using the FFT, either on uni-processors or on digital signal-processing hardware.

Fig. 7. The 64 8-by-8-pixel basis functions of the DCT for the case $N_1 = N_2 = 8$ and $\{k_1, k_2\} \in [0, 7] \times [0, 7]$ in equation (5).
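A minimal sketch of equation (5): generate the separable DCT basis images of Fig. 7 and confirm the orthogonality of distinct basis functions over an 8-by-8 pixel block (the function name is our own):

```python
import numpy as np

def dct_basis(k1, k2, N1=8, N2=8):
    """The two-dimensional DCT basis image G_{k1,k2}(n1, n2) of equation (5)."""
    n1 = np.arange(N1)
    n2 = np.arange(N2)
    c1 = np.cos(np.pi/N1 * (n1 + 0.5) * k1)
    c2 = np.cos(np.pi/N2 * (n2 + 0.5) * k2)
    return np.outer(c1, c2)  # separable product, indexed [n1, n2]

B00 = dct_basis(0, 0)  # the "DC" block: constant over the 8-by-8 support
B12 = dct_basis(1, 2)
B21 = dct_basis(2, 1)

# Distinct basis images are orthogonal over the pixel block.
print(np.sum(B12 * B21))  # ~0
```

Because the basis is separable, the orthogonality of the two-dimensional blocks follows directly from the orthogonality of the one-dimensional cosine factors.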

A widely recognized limitation of the DCT (and its JPEG instantiation) is the appearance of “blockies,” or discontinuities at the boundaries of the component image blocks [Reference Shen and Kuo12]. A limitation of particular relevance to computational imaging is that the DCT incorporates no priors or other information about the properties of aberrated and non-standard optics. For instance, the fact that an optical subsystem may commonly be out of optical focus (and thus produce optical images with certain statistical characteristics) is not reflected in the image-processing basis set. Moreover, the DCT is not radially symmetric, a symmetry that is likely appropriate when processing images produced by radially symmetric optical subsystems. These limitations make the DCT a very poor candidate component of representations for computational imaging systems.

A number of image representations based on wavelets have been used to overcome the image-processing and compression limitations inherent in the DCT; many have been based on Daubechies filters [Reference Prasad and Iyengar13] and exploited in compression methods such as the JPEG 2000 standard. This standard has a number of both lossless and lossy modes, is reversible, and admits a number of efficient computational implementations based on the hierarchical discrete wavelet transform (DWT). Another class of image-processing basis functions comprises Gabor wavelets of the form

(6) $$G({\bf x}; {\bf k}, \phi_0, {\bf \Sigma}) = e^{-2\pi i({\bf k}^t {\bf x} + \phi_0)}\, {1 \over 2\pi \vert {\bf \Sigma} \vert^{1/2}}\, e^{-{1 \over 2}({\bf x} - {\bf x}_0)^t {\bf \Sigma}^{-1} ({\bf x} - {\bf x}_0)},$$

where ${\bf x} = (x, y)$ is a two-dimensional position in the sensed image, ${\bf k} = (k_x, k_y)$ is a two-dimensional vector describing the sinusoidal carrier and $\phi_0$ its phase, ${\bf \Sigma}$ is a positive-definite matrix, $t$ denotes transpose, and ${\bf x}_0 = (x_0, y_0)$ is the geometrical center of the basis function. A full basis set consists of different orientations and phases, typically in quadrature phase for mathematical compactness (Fig. 8).

Fig. 8. A subset of the Gabor wavelet basis functions for Σ proportional to the identity matrix (circularly symmetric), and different orientations described by the vector k and phases described by ϕ0 = 0 or π/2 in equation (6). Here red and blue denote positive and negative response values. Gabor wavelets have been used in radially symmetric image representations, and thus may serve as guidance for a new class of orientation-selective image-processing bases for joint digital-optical system representations.
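The following sketch samples a quadrature pair of Gabor wavelets on a pixel grid, with Σ proportional to the identity as in Fig. 8; the real-valued form and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def gabor(shape, k, phi0, sigma, center=None):
    """Real Gabor wavelet: cosine carrier times an isotropic Gaussian envelope
    (Sigma = sigma^2 I), in the spirit of equation (6)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    if center is None:
        center = ((w - 1)/2.0, (h - 1)/2.0)
    x0, y0 = center
    carrier = np.cos(2*np.pi*(k[0]*(x - x0) + k[1]*(y - y0)) + phi0)
    envelope = np.exp(-((x - x0)**2 + (y - y0)**2) / (2*sigma**2)) / (2*np.pi*sigma**2)
    return carrier * envelope

even = gabor((64, 64), k=(0.1, 0.0), phi0=0.0, sigma=6.0)       # cosine phase
odd  = gabor((64, 64), k=(0.1, 0.0), phi0=np.pi/2, sigma=6.0)   # quadrature phase

# A quadrature pair is orthogonal on a symmetric grid.
print(np.sum(even * odd))  # ~0
```

Sweeping `k` over orientations and magnitudes yields the family of oriented, localized filters sketched in Fig. 8.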

Figure 9 shows a hierarchical wavelet representation and synthesis of an image based on Symlets – Daubechies wavelets modified to have spatial symmetry [Reference Swee and Elangovan14]. The display at the left is a scale-space display, in which the upper-left corner shows the small number of coefficients of large basis functions and the squares toward the lower right show the large number of coefficients of small basis functions. Squares that lie off the diagonal represent coefficients for spatially oriented basis functions: vertical bases at the upper right and horizontal bases at the lower left.

Fig. 9. A wavelet synthesis of an image based on Symlets – a slight modification of Daubechies wavelets to ensure spatial symmetry. The left figure shows the wavelet coefficients in a scale-space display, where the few coefficients for spatially large basis functions are displayed in the small square at the upper left, and the more numerous coefficients for successively smaller basis functions are displayed in the other squares. Vertical bases are to the right, horizontal bases to the left, and joint orientations lie along the diagonal of the figure.

Another general class of computational optical imagers exploits the technique of compressive sensing [Reference He and Carin15, Reference Takhar, Miller and Pollak16]. Such imagers are typically based on an optical subsystem that produces a high-quality image on a sensor array (even if that “image” is a mere point on a single photodetector), filtered by a spatial light modulator (SLM). Each component image corresponds to a projection onto a subspace of the high-dimensional space of all possible images. If the patterns presented on the SLM have certain desirable properties of statistical independence, then a high-resolution image can be computed from these projections, for instance by Tikhonov regularization or related methods. In practice, the SLM patterns have been based on random partial Fourier matrices, Hadamard matrices, Gaussian, Bernoulli, multi-wavelet, and chirp matrices, all of which have provably good measurement properties. More complex matrices that also have good statistical-independence properties have been generated by bipartite expander graphs [Reference Berinde, Gilbert, Indyk, Karloff and Strauss17]. Representation in such a basis can be viewed as spatial filtering. What if the optical image produced by the lens system is degraded or aberrated? For example, what set of spatial filters, when paired with a lens system with severe coma aberration, can lead to a high-quality digital image? This class of problems at the foundation of computational imaging has not yet been addressed.
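A toy sketch of this measurement pipeline (all sizes, the smooth basis, and the regularization weight are our own illustrative choices): random Bernoulli SLM patterns provide far fewer projections than pixels, and Tikhonov regularization recovers a scene assumed to lie in a low-dimensional smooth basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 256, 64, 12                 # pixels, measurements (m << n), basis size

# Smooth prior basis: the first p discrete-cosine atoms (columns of B).
t = np.arange(n)
B = np.stack([np.cos(np.pi*(t + 0.5)*k/n) for k in range(p)], axis=1)

c_true = np.zeros(p)
c_true[[1, 3, 5]] = [1.0, -0.6, 0.3]  # the "scene": a few smooth components
scene = B @ c_true

A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Bernoulli SLM patterns
y = A @ scene + 1e-3*rng.standard_normal(m)            # noisy projections

# Tikhonov-regularized recovery of the basis coefficients:
M = A @ B
alpha = 1e-3
c_hat = np.linalg.solve(M.T @ M + alpha*np.eye(p), M.T @ y)
x_hat = B @ c_hat

rel_err = np.linalg.norm(x_hat - scene) / np.linalg.norm(scene)
print(rel_err)  # small: the scene is recovered from far fewer measurements than pixels
```

The key assumption, as in the text, is that the random patterns are sufficiently independent that the m projections pin down the few significant coefficients.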

D) Joint electro-optical basis and merit functions

As mentioned above, current optical design methodology and software (such as Zemax, Code V, and OSLO) are dominated by function representations based on Zernike and Seidel polynomials and merit functions based on RMS wavefront error, RMS spot size, Strehl ratio, and measures of the modulation transfer function. Current image-processing methodology and software (such as Matlab, OpenCV, etc.) are dominated by function representations based on cosines and wavelets and merit functions based on root-mean-squared error (RMSE) and its variants, such as peak signal-to-noise ratio. The discipline of computational imaging, and the software tools that serve it, lack an appropriate end-to-end merit function.

Robinson and Stork [Reference Robinson, Stork, Howard and Koshel4] proposed an end-to-end merit function based on the predicted MSE and used it for iterative joint design of the optics and signal processing in simple computational imagers:

(7) $$MSE_{P} = \displaystyle{1 \over n}{\rm Tr }\left[ {({\bf RH}(\Theta ) - {\bf I}){\bf C}_s{({\bf RH}(\Theta ) - {\bf I})}^t + {\bf R}{\bf C}_n{\bf R}^t} \right],$$

where n is the number of pixels in the final digital image, R is the digital image-processing filter, ${\bf H}(\Theta)$ is the optical system matrix determined by the values of the optical variables Θ (focal lengths, lens separations, etc.), I is the n × n identity matrix, $C_s$ is the point-to-point source correlation matrix, $C_n$ is the noise correlation matrix, and Tr denotes the trace operation. This merit function was sufficient for gradient-descent design over optical parameters such as lens curvatures and spacings and digital-processing parameters such as the values of filter taps. The value of $MSE_P$ given in equation (7) is a limit that, in practice, is well approximated by the actual MSE. It is not known which end-to-end merit functions might have properties superior to that in equation (7), for instance in satisfying the desiderata and constraints listed in Section III.A.
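Equation (7) can be evaluated directly for a toy one-dimensional system; in the sketch below (all matrices and noise levels are illustrative stand-ins) the Wiener filter, which minimizes the predicted MSE for the assumed statistics, is compared against the naive inverse filter:

```python
import numpy as np

def predicted_mse(R, H, C_s, C_n):
    """MSE_P = (1/n) Tr[(RH - I) C_s (RH - I)^t + R C_n R^t], equation (7)."""
    n = H.shape[0]
    E = R @ H - np.eye(n)
    return np.trace(E @ C_s @ E.T + R @ C_n @ R.T) / n

n = 32
kernel = np.zeros(n)
kernel[[0, 1, n - 1]] = [0.6, 0.2, 0.2]                   # a mild, invertible blur
H = np.stack([np.roll(kernel, i) for i in range(n)])      # circulant "optics" matrix

C_s = 1.0**2 * np.eye(n)   # white source statistics
C_n = 0.1**2 * np.eye(n)   # white sensor noise

# Wiener (LMMSE) filter: the R that minimizes MSE_P for these statistics.
R_wiener = C_s @ H.T @ np.linalg.inv(H @ C_s @ H.T + C_n)

print(predicted_mse(R_wiener, H, C_s, C_n))          # Wiener restoration
print(predicted_mse(np.linalg.inv(H), H, C_s, C_n))  # naive inverse: amplifies noise
```

The first term of equation (7) measures residual blur, the second measures amplified noise; the Wiener filter balances the two, whereas the inverse filter removes blur exactly at the cost of the noise term.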

In summary, despite some preliminary explorations and acceptable candidates for end-to-end merit functions, there remains no true joint basis function set or widely accepted end-to-end merit function derived especially for the needs of computational imaging.

IV. MATHEMATICAL AND COMPUTATIONAL APPROACH

In seeking basis functions for computational imaging systems, we view the optical and the digital subsystems as stages in a visual information channel, which takes visual information from the scene and represents it in the final digital image [Reference Stork and Robinson5]. This perspective conceptually unifies, to the extent possible, the digital and optical domains. For this reason, many of the concepts and much of the mathematical foundation of channel theory will guide our work [Reference Cover and Thomas18]. For instance, by analogy to classic signal channels, initial candidate merit functions will be based on visual-information channel capacity, predicted MSE, rate distortion, noise models, or measures based on the mutual entropy between the scene and the final image (measured in bits). Moreover, we shall employ techniques from Fourier optics, as appropriate, because this framework supports a natural expression of optical channel properties, such as information bandwidth [Reference Goodman19]. Another class of merit functions is based on the condition number of the end-to-end system matrix, which unifies the optical and digital domains and expresses, implicitly, the overall system's robustness to noise. We are well aware of the fundamental differences between the two domains: for instance, while image-processing filters can have both positive and negative values (“filter taps”), light carries only non-negative values; and while the noise in the optical portion is Poisson photon noise, the noise in the sensor is read noise, A/D noise, and so forth.

Figure 10 illustrates the simulation process flow for finding joint electro-optical merit and basis functions. First, we propose a set of candidate optical basis functions most appropriate to the general class of computational imager under consideration (e.g., circular Zernike or Stephenson polynomials for axially symmetric systems, Zernike extensions for square-aperture systems, etc.) and select one for simulation. We do likewise for candidate image-processing basis functions and for a candidate end-to-end merit function. Then we take a representative set of images and design an electro-optical computational imager to reproduce those images accurately, as judged by the candidate merit function. These design simulations will be performed on a large cluster of computers, each running Zemax with a user-defined optimization operand based on the candidate merit function [20]. (The first simulations should be noise-free; later simulations should incorporate realistic photon and sensor noise models.) In this way, a set of computational imagers with different optical and image-processing parameters will be created.
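In place of a Zemax cluster, the design loop can be sketched in miniature: a one-parameter "optical" model is paired with its MSE-optimal filter and scored by a candidate merit function. Every model choice below (the circulant Gaussian blur, the grid of parameter values, the diagonal scene statistics) is a hypothetical stand-in for the real ray-trace simulations:

```python
import numpy as np

def system_matrix(theta, n=16):
    """Toy optical model H(theta): circulant Gaussian blur whose width grows with
    theta. A hypothetical stand-in for ray-tracing lens curvatures and spacings."""
    taps = np.exp(-0.5 * (np.arange(-2, 3) / max(theta, 1e-3)) ** 2)
    taps /= taps.sum()
    H = np.zeros((n, n))
    for i in range(n):
        for k, t in zip(range(-2, 3), taps):
            H[i, (i + k) % n] += t
    return H

def design_imager(thetas, C_s, C_n):
    """Grid-search the optical parameter; pair each candidate optic with its
    MSE-optimal (Wiener) filter and score the pair by the candidate merit."""
    best = None
    for theta in thetas:
        H = system_matrix(theta, C_s.shape[0])
        R = C_s @ H.T @ np.linalg.inv(H @ C_s @ H.T + C_n)      # Wiener filter
        E = R @ H - np.eye(H.shape[0])
        mse = np.trace(E @ C_s @ E.T + R @ C_n @ R.T) / H.shape[0]  # merit
        if best is None or mse < best[0]:
            best = (mse, theta)
    return best

best_mse, best_theta = design_imager([0.3, 1.0, 3.0],
                                     2.0 * np.eye(16), 0.01 * np.eye(16))
print(best_theta)   # the least-blurred optic wins under this merit and noise level
```

The outer loop over candidate basis sets and merit functions of Fig. 10 would wrap this inner design step, with Zemax supplying $H(\Theta)$ in place of the toy model.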

Fig. 10. The computational and mathematical process flow. First select a candidate optical basis set ${\cal J}^{o}$, a candidate image-processing basis set ${\cal J}^{s}$, and a scalar merit function. Then, using a database of representative images, a networked cluster of computers running Zemax finds optical and image-processing parameters for the image database. Finally, a variety of statistical estimation, clustering, and data-mining methods, with penalties corresponding to the desiderata, select or derive electro-optical basis functions.

The next step is to statistically analyze or “mine” the resulting optical and digital parameter sets to extract a low-dimensional set of such parameters, for instance by using standard statistical methods such as principal component analysis, nonlinear principal component analysis, independent component analysis, machine learning, clustering, singular value decomposition, modified Gram–Schmidt orthogonalization, harmonic analysis [21], spectral clustering [22], and so on [23]. A number of special feature-extraction [24] and representation-extraction methods will be explored as well [25, 26].
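A minimal sketch of this mining step, assuming the optimized designs are collected as rows of a parameter matrix (the synthetic data below merely stand in for real design parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows: optical + filter parameters of many optimized imagers. Synthetic stand-in:
# 200 designs of 12 parameters that secretly vary along only 3 latent directions.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 12))
designs = latent @ mixing + 0.01 * rng.normal(size=(200, 12))

# PCA via SVD of the centered design matrix: leading right-singular vectors are
# candidate low-dimensional "electro-optical" directions in parameter space.
X = designs - designs.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:4])   # variance concentrates in the first three components
```

The leading rows of `Vt` are then the raw material for the functional fits described next; nonlinear structure would call for the other listed methods instead of plain PCA.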

The regularities in the resulting statistical dataset are unlikely to be ideal or mathematically simple, and they should be fit or projected onto bases in simpler function spaces (e.g., $\rho^{2.1}$ will be simplified to $\rho^{2}$, and $\sin(\theta + 0.001\,\theta^{2})$ will be simplified to $\sin(\theta)$). The next step is to impose the basis-function desiderata described above – e.g., chosen tradeoffs between optical and digital – by means of mathematical penalties on such functional fits.
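One way to realize such penalized simplification is to score least-squares fits over a small dictionary of candidate simple functions, charging a fixed cost per retained term; the dictionary and cost below are illustrative choices, not values from the text:

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 400)
measured = np.sin(theta + 0.001 * theta ** 2)   # a "messy" empirical regularity

# Candidate simple functions to project onto; the penalty charges a fixed cost per
# retained term, so the fit prefers fewer, simpler basis functions.
dictionary = {"sin(t)": np.sin(theta),
              "sin(2t)": np.sin(2 * theta),
              "t**2": theta ** 2}

def penalized_score(keys, cost_per_term=0.05):
    A = np.stack([dictionary[k] for k in keys], axis=1)
    coef = np.linalg.lstsq(A, measured, rcond=None)[0]
    rss = np.sum((A @ coef - measured) ** 2)
    return rss + cost_per_term * len(keys)

candidates = [("sin(t)",), ("sin(t)", "sin(2t)"), ("sin(t)", "t**2")]
best = min(candidates, key=penalized_score)
print(best)   # the penalty selects the single simple term sin(t)
```

Here the residual left by dropping the small $0.001\,\theta^{2}$ perturbation is cheaper than the per-term penalty, so the simpler model wins; the desiderata of Section III.A would enter as further penalty terms.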

Likewise, this general approach can be applied to compressive imagers that rely on non-ideal optics. In this case, the role of the image-processing basis functions is played by the bases for spatial filters (generally implemented by spatial light modulators, SLMs). While there have been extensive theoretical analyses and simulation studies of spatial filters with suitable statistical properties, prior studies assume that the optics is (in essence) “perfect” and that the computational task at hand is to compress a sensed image that is otherwise of high quality. Such studies do not address the problems described above: finding filter basis sets to be paired with “imperfect” optical systems in order to yield high-quality digital images and, furthermore, finding electro-optical basis sets possessing the above desiderata appropriate for compressive imagers.
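As a baseline illustration of the "perfect-optics" setting those prior studies assume, the following sketch recovers a sparse scene from random spatial-filter measurements via orthogonal matching pursuit; the matrix dimensions and the greedy solver are illustrative choices, not methods from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 100, 3        # measurements, scene dimension, sparsity

# Random measurement matrix: a toy stand-in for an SLM spatial-filter basis,
# under the "perfect optics" assumption; columns normalized.
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)

x = np.zeros(n)             # k-sparse "scene" in the filter basis
x[rng.choice(n, size=k, replace=False)] = [3.0, -2.0, 1.5]
y = A @ x                   # compressive sensor readings

# Orthogonal matching pursuit: greedily pick the filter most correlated with the
# residual, then refit all chosen coefficients by least squares.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # ~0 when support is found
```

The open problem posed in the text would replace the ideal measurement matrix `A` with the product of a filter basis and an aberrated optical system matrix, and ask which filter bases then still permit high-quality recovery.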

The above simulation methods can be applied to new apertures (e.g., square), to non-standard optics (e.g., diffraction gratings), and to new architectures (e.g., full light-field plenoptic). The power of the extracted representations can be illustrated by describing representative computational imaging systems in both traditional bases and the new, extracted bases, and by using these representations to study fundamental performance bounds of computational imaging systems [27–29].

V. CONCLUSIONS

Should the new basis and merit functions be found by the above approach, they will improve and unify the work of a wide range of practitioners: researchers and developers in optics, optical design software creators, image-processing algorithm developers, image-processing hardware designers, and customers of computational imaging systems.

  • Optical system designers: Optical designers will understand and naturally incorporate digital processing into their system design flows. Designers will broaden their conceptual understanding of imaging to view optics as performing just one stage in an overall optical–digital information channel. Much of optical design rests on informal heuristics and rules of thumb (such as which sequence of lens types is likely to best serve a particular application); designers will develop new heuristics and rules of thumb, thereby speeding the development of improved computational imaging systems.

  • Optical design software tool creators: Academic and commercial optical design software tool creators will be able to unify the optical and digital domains and thus speed and improve their overall designs. Digital–optical design tools will be able to incorporate true joint digital–optical optimization routines in a principled way, thereby improving upon the current approaches of sequential and of iterative optical-then-digital optimization.

  • Image-processing experts: The statistics of sensed (intermediate) images in computational imaging systems differ profoundly from those in traditional optical systems based on nearly aberration-free optics [30]. As such, the optimal algorithms, including compression algorithms for the intermediate images at the sensor, will differ as well.

  • Image-processing hardware developers: Just as DSP and image-processing hardware developers designed their processors to perform operations such as the FFT efficiently, so too will developers design processors to compute efficiently the basis functions and core operations needed in computational imaging identified by our project.

  • Customers: Currently, customers of imaging devices give design specifications in purely optical terms, such as line pairs per millimeter, properties of the MTF, and so on; such specifications are not appropriate for computational imaging systems. Instead, specifications for end-to-end performance are appropriate – specifications that can be expressed using the foundations provided by the proposed work.

ACKNOWLEDGEMENTS

The author profited from discussions with Patrick R. Gill of Rambus Labs.

REFERENCES

[1] Darrigol, O.: History of Optics: From Greek Antiquity to the Nineteenth Century. Oxford University Press, Oxford, UK, 2012.
[2] Gonzalez, R.C.; Woods, R.E.: Digital Image Processing, 3rd ed. Prentice-Hall, New York, NY, 2007.
[3] Pedersen, M.; Hardeberg, J.Y.: Full-reference Image Quality Metrics: Classification and Evaluation. Now Publishers, Boston, MA, 2012.
[4] Robinson, M.D.; Stork, D.G.: Joint design of lens systems and digital image processing, in Gregory, G.G.; Howard, J.M.; Koshel, R.J. (eds.), SPIE Int. Optical Design Conf., Vancouver, BC, 2006, vol. 6342, 63421G1–63421G10.
[5] Stork, D.G.; Robinson, M.D.: Theoretical foundations of joint design of electro-optical imaging systems. Appl. Opt., 47 (10) (2008), B64–B75.
[6] Born, M.; Wolf, E.: Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. Cambridge University Press, Cambridge, UK, 1999.
[7] Pratt, W.K.: Digital Image Processing, 2nd ed. Wiley Interscience, New York, NY, 1991.
[8] Brady, D.J.: Optical Imaging and Spectroscopy. Wiley and Optical Society of America, New York, NY, 2009.
[9] Lee, T.S.: Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Anal. Mach. Intell., 18 (10) (2002), 959–971.
[10] Stephenson, P.C.L.: Optical aberrations described by an alternative series expansion. J. Opt. Soc. Am. A, 26 (2) (2009), 265–273.
[11] Al-Hamdani, A.H.; Hasan, S.Y.: Zernike polynomials for optical systems with rectangular and square apertures of area equal to π. Indian J. Pure Appl. Phys., 51 (2011), 837–843.
[12] Shen, M.-Y.; Kuo, C.-C.J.: Review of postprocessing techniques for compression artifact removal. J. Vis. Commun. Image Represent., 9 (1) (1998), 2–14.
[13] Prasad, L.; Iyengar, S.S.: Wavelet Analysis with Applications to Image Processing. CRC Press, Boca Raton, FL, 1997.
[14] Swee, E.G.T.; Elangovan, S.: Applications of symlets for denoising and load forecasting, in Proc. IEEE Signal Processing Workshop on Higher-Order Statistics, 1999, 165–169.
[15] He, L.; Carin, L.: Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans. Signal Process., 47 (9) (2009), 3488–3497.
[16] Takhar, D. et al.: A new compressive imaging camera architecture using optical-domain compression, in Bouman, C.A.; Miller, E.L.; Pollak, I. (eds.), Electronic Imaging: Computational Imaging IV, 2006, vol. 6065, 606509.
[17] Berinde, R.; Gilbert, A.C.; Indyk, P.; Karloff, H.; Strauss, M.J.: Combining geometry and combinatorics: a unified approach to sparse signal recovery, in Annu. Allerton Conf. on Communication, Control and Computing, 2008, 798–805.
[18] Cover, T.M.; Thomas, J.A.: Elements of Information Theory. Wiley, New York, NY, 1991.
[19] Goodman, J.W.: Introduction to Fourier Optics, 3rd ed. Roberts and Company, Englewood, CO, 2005.
[21] Mahadevan, S.: Representation Discovery Using Harmonic Analysis. Morgan and Claypool Publishers, CA, 2008.
[22] Comanici, G.; Precup, D.: Basis function discovery using spectral clustering and bisimulation metrics, in Vrancx, P.; Knudson, M.; Grześ, M. (eds.), Adaptive and Learning Agents, Springer-Verlag, Berlin, 2012, vol. LNCS 7113, 85–99.
[23] Duda, R.O.; Hart, P.E.; Stork, D.G.: Pattern Classification, 2nd ed. Wiley, New York, NY, 2001.
[24] Bethge, M.; Gerwinn, S.; Macke, J.H.: Unsupervised learning of a steerable basis for invariant image representations, in Rogowitz, B.E.; Pappas, T.N.; Daly, S.J. (eds.), Electronic Imaging: Human Vision and Electronic Imaging XII, 2007, vol. 6492, 64920C.
[25] Bengio, Y.; Courville, A.; Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35 (8) (2013), 1798–1828.
[26] Mahajan, V.N.: Aberration Theory Made Simple, vol. TT93 of Tutorial Texts in Optical Engineering, 2nd ed. SPIE Press, Bellingham, WA, 2011.
[27] Cossairt, O.; Gupta, M.; Mitra, K.; Veeraraghavan, A.: Performance bounds for computational imaging, in Imaging and Applied Optics, 2013.
[28] Cossairt, O.; Gupta, M.; Nayar, S.: When does computational imaging improve performance? IEEE Trans. Image Process., 22 (2) (2013), 447–458.
[29] Spinoulas, L.; Cossairt, O.; Gill, P.R.; Stork, D.G.; Katsaggelos, A.K.: Performance comparison of ultra-miniature diffraction gratings with lenses and zone plates, in Computational Optical Sensing and Imaging (COSI), Alexandria, VA, 2015.
[30] Robinson, M.D.; Stork, D.G.: New image processing challenges for jointly designed electro-optical imaging systems, in Int. Conf. on Image Processing (ICIP), Cairo, Egypt, 2009, 3789–3792.

Fig. 1. In traditional or sequential optical design, one first designs the optics to yield a high-quality optical image on the image sensor, and then designs the digital signal processing so as to yield a high-quality final digital image. For the first design stage, traditional optical representations and merit functions such as Zernike polynomials and RMS point-spread-function size are used; for the second design stage, traditional image-processing representations and merit functions such as Daubechies wavelets and RMS error are used. In true joint design in computational imaging for end-to-end performance, new (digital–optical) basis functions and merit functions are needed [4, 5].


Fig. 2. The 21 lowest-order circular Zernike polynomials, $Z_{n}^{m}(\rho, \theta)$ (cf. equations (2)–(4), below): odd functions, even functions, and circularly symmetric functions.
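For reference, the circular Zernike polynomials of Fig. 2 can be evaluated from the standard radial-polynomial formula; this sketch follows the common $(n, m)$ indexing convention (cosine branch for $m \ge 0$, sine branch for $m < 0$):

```python
from math import factorial
import numpy as np

def zernike_radial(n, m, rho):
    """Radial part R_n^m of the circular Zernike polynomial (n - m even, m >= 0)."""
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

def zernike(n, m, rho, theta):
    """Z_n^m(rho, theta): cosine branch for m >= 0, sine branch for m < 0."""
    if m >= 0:
        return zernike_radial(n, m, rho) * np.cos(m * theta)
    return zernike_radial(n, -m, rho) * np.sin(-m * theta)

print(zernike_radial(2, 0, 0.5))   # defocus term 2*rho**2 - 1 -> -0.5
```

For example, $R_2^0(\rho) = 2\rho^2 - 1$ (defocus) and $R_4^0(\rho) = 6\rho^4 - 6\rho^2 + 1$ (spherical aberration) fall out of the formula directly.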


Fig. 3. Every complex wavefront (on a disk) can be represented as the sum of circular Zernike basis functions. Typically one represents the wavefront error – the difference between the actual wavefront and an ideal, spherical wavefront – as a weighted sum of circular Zernike polynomials.


Fig. 4. The 15 lowest-order basis functions in the Stephenson basis set. Some of the lowest-order basis functions are equivalent to individual circular Zernike polynomials but all others (an infinite number) differ from Zernike polynomials.


Fig. 5. The first 20 basis functions in an extension of Zernike polynomials to imaging systems with a square aperture. These functions are derived from circular Zernike polynomials by orthonormality defined over a square aperture support.


Fig. 6. (Left) An Airy disk with a red circle indicating the spot size at half maximum. Traditional optical merit functions favor small spot sizes. (Right) A one-dimensional modulation transfer function (MTF) for a diffraction-limited imaging system, represented as a function of the normalized spatial frequency ν. Aberrated systems have MTFs lower than this physical limit.


Fig. 7. The 64 8 × 8-pixel basis functions of the DCT for the case $N_1 = N_2 = 8$ and $(k_1, k_2) \in [0, 7] \times [0, 7]$ in equation (5).
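The DCT basis images of Fig. 7 can be generated as outer products of the rows of the orthonormal one-dimensional DCT-II matrix; a sketch (the normalization below follows the standard DCT-II convention, which may differ in detail from equation (5)):

```python
import numpy as np

def dct_basis(N=8):
    """Orthonormal 1-D DCT-II basis matrix; row k is the k-th cosine basis vector."""
    n = np.arange(N)
    B = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    B[0] *= np.sqrt(1 / N)
    B[1:] *= np.sqrt(2 / N)
    return B

# The 64 two-dimensional 8 x 8 basis images are outer products of pairs of rows.
B = dct_basis(8)
basis_images = np.einsum('ki,lj->klij', B, B)   # basis_images[k1, k2] is one patch
print(np.allclose(B @ B.T, np.eye(8)))          # rows are orthonormal
```

Orthonormality of the rows is what makes the DCT an invertible, energy-preserving image representation, the property any replacement electro-optical basis would need to match or trade off deliberately.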


Fig. 8. A subset of the Gabor wavelet basis functions for Σ proportional to the identity matrix (circularly symmetric), and different orientations described by the vector k and phases described by ϕ0 = 0 or π/2 in equation (6). Here red and blue denote positive and negative response values. Gabor wavelets have been used in radially symmetric image representations, and thus may serve as guidance for a new class of orientation-selective image-processing bases for joint digital-optical system representations.


Fig. 9. A wavelet synthesis of an image based on Symlets – a slight modification of Daubechies wavelets to ensure spatial symmetry. The left figure shows the wavelet coefficients in a scale-space display, where the few coefficients for spatially large basis functions are displayed in the small square at the upper left, and the more numerous coefficients for successively smaller basis functions are displayed in the other squares. Vertical bases to the right, horizontal to the left, joint along the diagonal of the figure.
