
Automotive SAR imaging: potentials, challenges, and performances

Published online by Cambridge University Press:  21 April 2023

Marco Manzoni*
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Stefano Tebaldini
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Andrea Virgilio Monti-Guarnieri
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Claudio Maria Prati
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Dario Tagliaferri
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Monica Nicoli
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Umberto Spagnolini
Affiliation:
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Giuseppe Ponzio 34, 20133 Milano, Italy
Ivan Russo
Affiliation:
Huawei Technologies Italia S.r.l., Segrate, Italy
Christian Mazzucco
Affiliation:
Huawei Technologies Italia S.r.l., Segrate, Italy
Corresponding author: Marco Manzoni; Email: [email protected]

Abstract

The main interest in using synthetic aperture radar (SAR) technology in automotive scenarios is that arbitrarily long arrays can be synthesized by exploiting the natural motion of the ego vehicle, enabling finer azimuth resolution and improved detection. All of this is achieved without increasing the hardware complexity in terms of the number of physical antennas. In this paper, we start by discussing the application of SAR imaging in the automotive environment from both theoretical and experimental perspectives. We then describe an efficient processing workflow and derive the rough number of operations required to focus an image, proving the real-time imaging capability of the system. The experimental results are based on open-road data acquired using an eight-channel radar at 77 GHz, considering both side-looking and forward-looking SAR. The results confirm the idea that SAR imaging can be successfully and routinely used for high-resolution mapping of urban environments in the near future.

Type: MMS 2022 Special Issue

Copyright: © The Author(s), 2023. Published by Cambridge University Press in association with the European Microwave Association

Introduction

Radar imaging systems are increasingly used in automotive scenarios, and the trend toward autonomous driving places growing demands on detection, localization, and classification. In this context, radar technology is known to have important advantages over sensors based on LiDAR, ultrasonic, or optical technology. Radar can work in a variety of weather conditions, including rain, snow, hail, and fog, and can perform target detection at ranges up to hundreds of meters while using inexpensive, lightweight, and low-power devices. Importantly for the automotive sector, recent technological advances in array design, analog-to-digital conversion, and increased on-board computing resources enable the real-time implementation of advanced signal processing techniques at low energy costs.

Current automotive imaging radars are mostly multiple-input multiple-output (MIMO) devices, in which multiple transmitting and receiving antennas are used pairwise to form the so-called virtual array. Sensor performance in terms of spatial resolution and signal-to-noise ratio (SNR) can vary depending on data processing, but ultimately depends largely on the number of channels that make up the virtual array. A different approach is provided by synthetic aperture radar (SAR) imaging, which is mainly used for Earth observation and high-resolution mapping. Essentially, the SAR concept involves exploiting the natural motion of a platform to synthesize arrays (apertures) of arbitrary length. In the case of automotive mm-Wave radars, SAR processing can be used to obtain synthetic apertures of tens of centimeters or even meters, yielding finer spatial resolution than physical arrays and improved SNR thanks to longer observation times. The appeal of this concept in the automotive scenario is reflected in the research carried out by many groups in the last few years, see for example [Reference Azouz and Li1–Reference Manzoni, Tebaldini, Monti-Guarnieri, Prati and Russo6].

In this paper, we consider the potential of SAR imagery in the context of urban mobility by evaluating the theoretical resolution and SNR and by showing SAR images of urban scenarios obtained from experimental data acquired on open roads. An earlier version of this paper was presented at the 2022 Mediterranean Microwave Symposium and was published in its proceedings [Reference Tebaldini, Rizzi, Manzoni, Guarnieri, Prati, Tagliaferri, Nicoli, Spagnolini, Russo and Mazzucco7].

The question that is left open, however, concerns the capability of SAR imaging to produce maps of the surrounding environment in real time, in a complementary way to cameras and LiDARs. This feature is mandatory if SAR is to play a significant role in the automotive world, where timeliness is an essential enabler for autonomous driving systems. Real-time imaging inherently depends on the algorithm used to focus the image. This paper assesses the rough performance of a focusing algorithm, showing how it can reach real-time imaging capabilities even with an inexpensive hardware implementation.

Potentials of SAR imaging

The parameters most often used to characterize the performance of an imaging radar system are the spatial resolution and the SNR, which ultimately represent the system's capability to detect and localize a target in a noisy environment and/or in the presence of other nearby targets. For a typical MIMO imaging radar, both resolution and SNR are largely determined by the number of transmitting and receiving antenna pairs, or channels.

Resolution

In most cases, transmitting and receiving antennas are deployed so as to form a virtual uniform linear array. In this case, the angular resolution for a forward-looking radar is:

(1)$$\Delta \psi_{MIMO} = {\lambda\over 2 \cos{( \psi) }} {1\over N_{ch}d_x}$$

where $\lambda$ is the carrier wavelength, $N_{ch} = N_{Tx} \times N_{Rx}$ is the number of available channels, $d_x$ is the spacing between nearby elements in the virtual array (assuming the virtual array is represented as a collection of monostatic elements), and $\psi$ is the off-boresight angle, as shown in Fig. 1. It is worth noting that in most cases the antenna layout is such that $d_x = \lambda/4$ to ensure non-ambiguous imaging, in which case the angular resolution is fully determined by the geometrical factor $\cos\psi$ and the number of elements $N_{ch}$. For any value of $d_x$, the product $N_{ch} d_x$ represents the length of the virtual array.

Figure 1. Angular resolution. Black dashed curves: angular resolution for a front-looking MIMO radar employing 32, 64, and 128 channels. Continuous curves: angular resolution for a SAR employing a synthetic aperture of 0.3, 0.7, and 1.2 m.

In the case of SAR, instead, the resolution is achieved by processing a set of data acquired as the vehicle travels a certain distance $A_s$, called the synthetic aperture, which can be thought of as the length of an equivalent (monostatic) virtual array deployed longitudinally (i.e. in the direction of motion, orthogonal to the MIMO array). Accordingly, the angular resolution for SAR imaging is obtained as:

(2)$$\Delta \psi_{SAR} = {\lambda\over 2 \sin{( \psi) }} {1\over A_s}$$

where the geometrical factor is now $\sin(\psi)$ due to the longitudinal deployment. Comparing (1) with (2), and inspecting Fig. 1, it is immediately clear that SAR imaging can serve as a natural complement to conventional radar in the automotive context. SAR imaging does not bring any improvement for targets right at boresight, yet it provides far superior resolution for targets even slightly off-boresight. For this reason, in the automotive context SAR imaging has primarily been considered for high-resolution mapping of the environment to the sides of the road, suggesting that SAR techniques could be most suited to urban scenarios [Reference Stanko, Palm, Sommer, Kloppel, Caris and Pohl8–Reference Feger, Haderer and Stelzer10].
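As a quick numerical illustration of (1) and (2), the following minimal Python sketch compares the two angular resolutions as a function of the off-boresight angle. The channel count, virtual-array spacing, and synthetic aperture length are illustrative assumptions, not values prescribed in this paper.

```python
# A minimal sketch comparing the angular resolutions in (1) and (2).
# All parameter values below are illustrative assumptions.
import numpy as np

lam = 3e8 / 77e9            # carrier wavelength at 77 GHz [m]
N_ch, d_x = 64, lam / 4     # assumed number of channels and virtual-array spacing
A_s = 0.7                   # assumed synthetic aperture length [m]

psi = np.deg2rad(np.linspace(1, 60, 60))            # off-boresight angles (psi = 0 excluded)

dpsi_mimo = lam / (2 * np.cos(psi) * N_ch * d_x)    # equation (1), [rad]
dpsi_sar = lam / (2 * np.sin(psi) * A_s)            # equation (2), [rad]

for p, m, s in zip(np.rad2deg(psi[::15]), dpsi_mimo[::15], dpsi_sar[::15]):
    print(f"psi = {p:4.1f} deg | MIMO: {np.rad2deg(m):6.3f} deg | SAR: {np.rad2deg(s):6.3f} deg")
```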

SNR

The SNR computation for radar and SAR imaging can be approached by assuming a processing scheme in which consecutive pulses are coherently integrated over a coherent integration time $T_c$; the SNR improvement factor over single-pulse data, per processed channel, is proportional to $T_c$. The effective coherent integration time depends largely on which processing scheme is adopted. An accurate SAR imaging algorithm accounts for every effect related to the variation of the sensor-to-target distance across different pulses. This includes the cases where the range shift between signals acquired at different times exceeds one range bin, referred to as range migration, and where the phase variation across different pulses cannot be correctly modeled by a linear law. Under the assumption that range migration and phase variations are correctly matched, the coherent integration time is bounded only by the physical antenna beam width, as is the case for space-borne and air-borne SARs. On the other hand, radar imaging is (typically) implemented under the assumptions that range migration can be neglected and that phase variations are linear. Such assumptions allow for a significant simplification of the signal model and all related processing algorithms, yet they set an upper bound on the coherent integration time. As for range migration, the requirement is that the range shift over time does not exceed the range resolution:

(3)$$\Delta R = v_{ego} \cos{( \psi) } T_c < 2\delta_r$$

where $v_{ego}$ is the vehicle forward velocity, $\delta_r$ is the range resolution, and the factor 2 is used to allow for some smoothness of the waveform spectrum. Notice that this bound is more stringent at the boresight of the MIMO radar ($\psi \approx 0$).

The non-linear nature of the phase history can be accounted for by bounding the residual parabolic component. Let us suppose the vehicle moves at a velocity $v_{ego}$ along a rectilinear path. It is straightforward to expand the hyperbolic law of distances between the sensor and a generic target in a Taylor series around a point $\tau_0$:

(4)$$R( \tau) = \sqrt{R_0^2 + ( v_{ego} \tau) ^2} \approx \underbrace{R}_{\text{constant}} + \underbrace{\cos( \psi)\, v_{ego}\, \Delta \tau}_{\text{linear}} + \underbrace{{v_{ego}^2 \sin^2( \psi) \over 2R}\Delta \tau^2}_{\text{parabolic}}$$

where $R$ is the distance between the radar and the target at $\tau = \tau_0$, $\tau$ is the slow time, and $\Delta\tau = \tau - \tau_0$. All the quantities just described are depicted in Fig. 2. We now upper bound the parabolic phase component at $\pi/2$, obtaining:

(5)$$\Delta \varphi = {4\pi\over \lambda}{v_{ego}^2 \sin^2( \psi) \over 2R}\left({T_c\over 2}\right)^2 < {\pi\over 2}$$

To quantify the limits on $T_c$, we assume here a set of parameters representative of high-resolution urban mapping: $f_0 = 77$ GHz, $v_{ego} = 10$ m/s, $\delta_r = 0.15$ m, and $R_0 = 10$ m. Plugging those values into (3) and (5), one finds that the coherent integration time is on the order of 30 ms for all targets within ±60° off-boresight. This value sets a physical limit to the SNR improvement factor achieved by integrating over time, unless SAR processing is adopted.
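The two bounds can be checked with a short script. The sketch below, a minimal example using the parameters quoted above, solves (3) and (5) for $T_c$ (the closed-form rearrangements are our own) and reports the tighter of the two bounds over the ±60° field of view.

```python
# A small check of the coherent-integration-time bounds implied by (3) (range migration)
# and (5) (residual parabolic phase), using the parameters quoted in the text.
import numpy as np

f0, v_ego, delta_r, R0 = 77e9, 10.0, 0.15, 10.0   # 77 GHz, 10 m/s, 0.15 m, 10 m
lam = 3e8 / f0

psi = np.deg2rad(np.linspace(1, 60, 60))          # off-boresight angles up to 60 deg

Tc_migration = 2 * delta_r / (v_ego * np.cos(psi))        # (3): v*cos(psi)*Tc < 2*delta_r
Tc_phase = np.sqrt(lam * R0) / (v_ego * np.sin(psi))      # (5) solved for Tc

Tc = np.minimum(Tc_migration, Tc_phase)           # tighter of the two bounds at each angle
print(f"min T_c over +/-60 deg: {1e3 * Tc.min():.0f} ms")  # of the order of the ~30 ms quoted above
```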

Figure 2. Geometry of the problem.

Of course, in conventional radar imaging both SNR and angular resolution can always be improved by increasing the number of channels $N_{ch}$, that is, by moving toward more sophisticated and expensive devices. On the other hand, the present analysis has just shown that SAR processing allows for large performance improvements by exploiting the natural motion of the ego-vehicle, therefore opening the way to the concept of high-resolution mapping using low-cost devices. The price to pay is obviously an increased complexity and computational burden, as we discuss in the next section.

Real-time image formation

Real-time imaging of the surrounding environment is a mandatory feature for any technology aspiring to support the transition to autonomous driving. The capability of an automotive radar to image the environment in real time is inherently related to the efficiency of the algorithm used to process the raw data. In this paper we consider a MIMO-SAR system in which the radar is forward looking, i.e., the antennas of the MIMO radar are deployed orthogonally to the motion of the vehicle. In this condition, the MIMO array suppresses the left/right ambiguity typical of any SAR system. Moreover, it allows for a fast and easy implementation of an autofocusing algorithm that relies on residual Doppler estimation in a stack of low-resolution MIMO images [Reference Manzoni, Tagliaferri, Rizzi, Tebaldini, Guarnieri, Prati, Nicoli, Russo, Duque, Mazzucco and Spagnolini11, Reference Manzoni, Rizzi, Tebaldini, Monti–Guarnieri, Prati, Tagliaferri, Nicoli, Russo, Mazzucco, Duque and Spagnolini12]. For this reason, any direct focusing of the raw data in the frequency domain should be avoided, since it would not allow for any residual motion compensation.

The question that arises is how to combine these low-resolution MIMO images to obtain the final high-resolution SAR image. One possible approach is fast factorized back projection (FFBP) [Reference Ulander, Hellsten and Stenstrom13]. This processing scheme starts with a simple time-domain back projection (TDBP) for each slow-time instant to generate a stack of coregistered low-resolution MIMO images. At this stage, the autofocus procedure can take place, providing the residual (uncompensated) velocities of the vehicle. Then, the entire synthetic aperture is divided into a set of sub-apertures. For each sub-aperture, the MIMO images are demodulated, interpolated on a finer spatial grid, modulated again, and summed coherently. In this way, we obtain a set of mid-resolution SAR images, one per sub-aperture. Notice that the interpolation is necessary to accommodate the bandwidth expansion produced by the coherent summation.

The procedure is repeated until just one image is left in the stack, which is the final high-resolution SAR image. This scheme is well known for providing a massive improvement in computational speed compared with classical TDBP algorithms. While FFBP is suitable for real-time imaging, a better algorithm can be devised to exploit hardware/software technologies already present in the automotive industry and achieve high-quality, fast SAR imaging.

The 3D2D processing scheme

In this section, we describe another possible approach to achieve real-time SAR imaging. This approach will be referred to as 3D2D; the reason for the name will become clear by the end of the section. While both algorithms (FFBP and 3D2D) are suited for a future real-world implementation, we deem that the 3D2D scheme is possibly simpler to implement, provides high-quality images, and is generally faster. The block diagram of the 3D2D processor is depicted in Fig. 3.

Figure 3. 3D2D block diagram.

Like the FFBP, the 3D2D starts from a set of coregistered, bandpass low-resolution images. These images are obtained with a simple TDBP over all the virtual channels and for each slow-time instant. We remark that, by focusing each image on the same common grid, we compensate for range migration and also take phase curvature into account. This is significantly different from so-called Doppler beam sharpening, which accounts for neither of these aspects. A further input is the vehicle's trajectory from the navigation unit (NU); the way these data are used is described later in this section.

The 3D2D workflow proceeds as follows:

  (i) Each low-resolution image is demodulated to baseband using a linear law of distances:

    (6)$$I_{bb}( r,\; \psi,\; \tau_n) \approx I_{MIMO}( r,\; \psi,\; \tau_n) e^{-j\,{4\pi\over \lambda}R( r, \psi, \tau_n) }$$
    The approximation sign is used in place of an equality as a reminder that the data are brought to baseband using an approximate law of distances, given by:
    (7)$$R( r,\; \psi,\; \tau_n) = R_0( r,\; \psi) + v_r( \psi) ( \tau_n - \tau_0) $$
    where $R_0(r, \psi)$ represents the distance from the center of the aperture to each pixel, $\tau_0$ is the slow time at the center of the aperture, and $v_r(\psi)$ is the radial velocity of the car with respect to a fixed target in $(r, \psi)$. This velocity is calculated by exploiting the nominal trajectory provided by the NU at the center of the aperture.

    The linear approximation of the true hyperbolic range equation holds for short apertures and/or far-range targets, as already expressed by (5).

  (ii) The Fourier transform (FT) of the baseband data is taken along the slow-time dimension:

    (8)$$I\left(r,\; \psi,\; f_d = -{2\over \lambda}v_r \right) = {\cal F}\{ I_{bb}( r,\; \psi,\; \tau_n) \} $$
    where $v_r$ is the radial velocity and ${\cal F}$ is the Fourier transform along the slow-time direction. The transformed dataset is now in the range, angle, and Doppler frequency domain, which can easily be converted into range, angle, and radial velocity. The output is the 3D range–angle–velocity (RAV) data cube. The number of frequency points over which the FT is computed is an arbitrary parameter and can be chosen to provide very fine sampling in the radial velocity domain. We remark that the FT simply adds a linear phase term to the data and sums the result; this coherent summation is what generates the SAR image, which now just needs to be extracted from the RAV cube.
  (iii) The extraction of the SAR image is performed by interpolating the 3D cube over a 2D surface. The $( r_{fine},\; \, \psi _{fine},\; \, v_r^{NAV})$ coordinates of the 2D surface are calculated considering the high-resolution nature of the final SAR image and the nominal radial velocity provided by the vehicle's navigation unit (corrected, when available, with the residual velocities estimated by the autofocus procedure):

    (9)$$\matrix{& I_{SAR}( r_{\,fine},\; \psi_{\,fine}) \cr & \quad = I( r \rightarrow r_{\,fine},\; \psi \rightarrow \psi_{\,fine} ,\; v_r \rightarrow v_r^{NAV}) }$$
    where $\rightarrow$ denotes the interpolation operation, $r_{fine}$ and $\psi_{fine}$ are the range and angle coordinates of the final SAR image, and $v_r^{NAV}$ is the radial velocity of each pixel in the fine-resolution grid, calculated from the navigation data and corrected by the autofocus procedure.

The interpolation in (9) has a notable geometrical meaning: it describes the formation of the SAR image as the intersection of the curved surface $v_r^{NAV}$ with the 3D RAV data cube. In Fig. 4, some Doppler (or velocity) layers of the RAV cube are represented as horizontal light blue panels. The maximum of the absolute value is usually taken in the Doppler domain to detect targets; the detected maximum is shown at the bottom of the layers in Fig. 4.

Figure 4. A visual interpretation of the 3D2D algorithm. In light blue, the velocity (or Doppler) layers of the Range–Angle–Velocity (RAV) data cube. The maximum of the absolute value in the velocity domain is usually taken and it is displayed at the base of the RAV cube. The SAR image is embedded in the RAV cube and can be extracted with a simple interpolation. The curved surface represents the SAR image.

The curved surface extracted from the RAV cube and representing the SAR image is also depicted in Fig. 4. We remark that the curvature of that surface depends on the vehicle's motion and is provided by the navigation unit (and refined by the autofocus procedure). A final note concerns autofocus. High frequencies call for high accuracy in the knowledge of the vehicle's motion during the synthetic aperture time. At the wavelengths used in the automotive industry, however, no navigation unit can provide sufficient accuracy; therefore, a data-driven residual motion compensation must be employed. In our case, we used the technique already detailed in [Reference Manzoni, Tagliaferri, Rizzi, Tebaldini, Guarnieri, Prati, Nicoli, Russo, Duque, Mazzucco and Spagnolini11], which is based on the estimation of the residual Doppler on a set of detected ground control points spread across the imaged scene.
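To make the steps above concrete, the following is a minimal numpy/scipy sketch of the 3D2D core: demodulation with the linear law (6)–(7), the slow-time FFT that builds the RAV cube (8), and the extraction of the SAR image by interpolation (9). The function name, array shapes, and input conventions are our own illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the 3D2D core, under assumed array shapes and input conventions.
import numpy as np
from scipy.interpolate import interpn

def focus_3d2d(I_mimo, R0, v_r, tau, lam, r_axis, psi_axis, r_fine, psi_fine, v_r_nav, N_v):
    """I_mimo:  (N_r, N_a, N_tau) coregistered low-resolution MIMO images (complex).
    R0:      (N_r, N_a) distance from the aperture centre to each pixel.
    v_r:     (N_a,) radial velocity per angle bin at the aperture centre (from the NU).
    tau:     (N_tau,) slow-time axis.
    r_fine, psi_fine: axes of the fine output grid; v_r_nav: (N_r_fine, N_a_fine) radial
    velocity of each output pixel predicted by navigation data (plus autofocus)."""
    tau0 = tau[len(tau) // 2]

    # (6)-(7): demodulate each low-resolution image with the linear law of distances
    R_lin = R0[..., None] + v_r[None, :, None] * (tau - tau0)          # (N_r, N_a, N_tau)
    I_bb = I_mimo * np.exp(-1j * 4 * np.pi / lam * R_lin)

    # (8): slow-time FFT -> Range-Angle-Velocity (RAV) cube, zero-padded to N_v bins
    RAV = np.fft.fftshift(np.fft.fft(I_bb, n=N_v, axis=-1), axes=-1)
    prf = 1.0 / (tau[1] - tau[0])
    fd = np.fft.fftshift(np.fft.fftfreq(N_v, d=1.0 / prf))             # Doppler axis [Hz]
    v_axis = -lam / 2.0 * fd                                           # radial velocity axis
    if v_axis[0] > v_axis[-1]:                                         # interpn needs ascending axes
        v_axis, RAV = v_axis[::-1], RAV[..., ::-1]

    # (9): extract the SAR image by interpolating the cube on the (r, psi, v_r_nav) surface
    rr, pp = np.meshgrid(r_fine, psi_fine, indexing="ij")
    pts = np.stack([rr, pp, v_r_nav], axis=-1)
    interp = lambda vol: interpn((r_axis, psi_axis, v_axis), vol, pts,
                                 bounds_error=False, fill_value=0.0)
    return interp(RAV.real) + 1j * interp(RAV.imag)
```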

Computational cost

To prove the real-time capabilities of the proposed algorithm, we now derive the rough number of operations (RNO) required to focus an SAR image. This number provides an estimate of the computational burden needed to focus a single image using the 3D2D algorithm. We remark once again that this is just a rough order of magnitude and by no means a precise operation count, which inherently depends on the hardware and software implementation.

The processing starts with the so-called range compression of the raw radar data. In any frequency-modulated continuous-wave radar, this can be done with a simple fast Fourier transform (FFT) [Reference Meta, Hoogeboom and Ligthart14]. This step requires a total number of operations equal to:

(10)$$N_f \log_2N_f\times N_{ch} \times N_{\tau}$$

where the first term represents the number of operations required to perform an FFT over $N_f$ frequency samples [Reference Cooley and Tukey15], while the last two terms highlight that these operations must be repeated for each of the $N_{ch}$ channels and all the $N_\tau$ slow-time samples.

The next step involves the TDBP of the data to form the stack of low-resolution MIMO images. For each pixel in the BP grid, we need to:

  (i) Calculate the bistatic distances from each real antenna phase center to the considered pixel. This operation is composed of six squares and two square roots (Pythagorean theorem), for a total of eight operations per channel.

  (ii) Interpolate the range-compressed data using a linear or nearest-neighbor interpolator, accounting for two operations per channel.

  (iii) Modulate the signal, which is just a single complex multiplication per channel.

  (iv) Accumulate the back-projected signal for each channel with a single complex summation.

We remark that these operations must be computed for each pixel in the BP grid and for each slow-time; thus, the final number of operations is equal to:

(11)$$N_r \times N_a \times N_\tau \times 12N_{ch}$$

where $N_r$ and $N_a$ are the sizes of the back-projection grid in the range and angular directions, respectively.

The core of the 3D2D algorithm then starts with a demodulation of each of the $N_\tau$ low-resolution images. For each pixel, one multiplication is performed to compute the distance in (7) and one to demodulate the data, leading to a number of operations equal to:

(12)$$N_\tau \times N_a \times N_r \times 2$$

The next step is a slow-time FFT to be computed over each pixel, leading to:

(13)$$N_a \times N_r \times N_v\log_2 N_v$$

where $N_v$ is the number of Doppler frequency (or velocity) points computed by the FFT; a typical value is $N_v = 8N_\tau$.

The final step is the 3D interpolation. For each pixel of the output fine-resolution polar grid, we have to compute the radial velocity (see (9)). This step requires a number of operations equal to the size of the output grid, $N_r \times N_{ah}$, where $N_{ah}$ is the size of the fine-resolution polar grid in the angular direction.

The interpolation then takes place with a number of operations on the order of $27 \times N_r \times N_{ah}$. The number 27 derives from the fact that each pixel in the output grid is obtained by combining the 27 closest neighbors, i.e. the 3 × 3 × 3 cube centered on the pixel.

These considerations lead to a total number of operations equal to:

(14)$$\matrix{& \underbrace{N_\tau \times N_a \times N_r \times 2}_{\text{demodulation}} + \underbrace{N_a \times N_r \times N_v\log_2 N_v}_{\text{FFT}}\cr & \quad + \underbrace{28 \times N_r \times N_{ah}}_{\text{interpolation}} }$$

Typical values of all the parameters involved in the computation of the RNO are listed in Table 1. Plugging these values into (10), (11), and (14), we reach an RNO of about 329 million. For comparison, a 9-year-old iPhone 5S has a raw processing power of 76 GigaFlops [Reference Victor16]; therefore, it could, in principle, focus an SAR image in 6 ms. We deem that, with dedicated hardware implementations such as GPUs or FPGAs, real-time imaging of the surroundings is easily achievable.
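For completeness, the bookkeeping in (10), (11), and (14) can be wrapped in a small function, as in the sketch below. The parameter values in the example call are placeholders chosen only for illustration; they are not the values listed in Table 1.

```python
# A sketch of the rough-number-of-operations (RNO) bookkeeping in (10), (11), and (14).
import numpy as np

def rno(N_f, N_ch, N_tau, N_r, N_a, N_v, N_ah):
    range_compression = N_f * np.log2(N_f) * N_ch * N_tau     # equation (10)
    backprojection = N_r * N_a * N_tau * 12 * N_ch            # equation (11)
    demodulation = N_tau * N_a * N_r * 2                      # equation (14), first term
    slow_time_fft = N_a * N_r * N_v * np.log2(N_v)            # equation (14), second term
    interpolation = 28 * N_r * N_ah                           # equation (14), third term
    return range_compression + backprojection + demodulation + slow_time_fft + interpolation

# Hypothetical example values (placeholders, not Table 1):
total = rno(N_f=512, N_ch=8, N_tau=64, N_r=256, N_a=128, N_v=8 * 64, N_ah=512)
print(f"RNO ~ {total / 1e6:.0f} million operations")
```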

Table 1. Typical values of all the parameters involved in the computation of the rough number of operations (RNO)

Results

In this section, we provide some results from real open-road experiments. The setup consists of a 77 GHz MIMO radar mounted in forward-looking mode on an Alfa Romeo Giulia. The radar is based on a Texas Instruments AWR1243 chipset with 3 TX and 4 RX antennas; only two of the three TX antennas are exploited in this experiment. The data are acquired by transmitting chirp pulses with a bandwidth of 1 GHz, and the PRF is about 7 kHz. The car is equipped with a three-degree-of-freedom IMU measuring lateral and longitudinal acceleration along with heading rate. The locally acquired measurements are available at a 500 Hz sampling frequency and are referred to the center of gravity of the car. Moreover, we used four wheel encoders, measuring the odometric velocity of each wheel, and a steering angle sensor at the front wheels.

We experimented with both a side-looking geometry, in which the boresight of the MIMO radar points to the right of the car, and a front-looking geometry, with the radar looking in the direction of motion of the vehicle.

An example of a side-looking SAR image is depicted in Fig. 5. This campaign was carried out on the closed road in front of the department building (building 20 of Politecnico di Milano) in order to prove the capability of the system to image the environment. Some bright targets are immediately distinguishable, such as parked cars (green box), sidewalks and fences (orange box), and even a pedestrian (red box). Notice that the resolution is so high that it is even possible to distinguish the two legs of the walking pedestrian. On the right of Fig. 5, the optical images gathered by a camera mounted on the roof of the vehicle are shown.

Figure 5. Side-looking SAR imaging of an urban scenario in Milan.

In the following tests, the MIMO radar was mounted in the forward-looking configuration. As already said, this installation geometry suppresses the left/right ambiguities typical of any SAR system. Moreover, it opens the possibility of a complete mapping of the scenario in front of the car, which is usually much more useful than knowledge of the scenario at the side of the car. This time, the tests were carried out on open roads to emulate a real urban scenario. In Fig. 6, the optical scene is depicted alongside the SAR image.

Figure 6. Optical and SAR image acquired during a forward looking campaign. Targets such as parked cars, sidewalks, and scooters are easily recognizable.

From the camera photo, we can see the variety of targets that compose the urban landscape: buildings, fences, parked cars, moving vehicles, bikes, electric scooters, and many more. By inspecting the SAR image, some targets are clearly distinguishable, such as the two parked cars (yellow and red circles), the sidewalk (green circle), and even a group of parked electric scooters (purple circle). It is interesting to notice that the buildings near the road are also mapped in the same image. The last detail to mention is the bright area immediately ahead of the car in the very near range. This is a common feature of all the images generated during the campaigns, and it is easy to recognize that this bright response is generated by the return from the asphalt just below the radar.

Similar details can be recognized in Fig. 7. At the top, an optical image is shown. At the bottom (center), the full SAR image of the field of view is shown, with two zoomed details at the left and right. In the red rectangle, the parked cars at the left of the vehicle are highlighted, while on the right, the same green car visible in the optical image is also recognizable in the SAR image. The fine resolution of the SAR image allows the recognition of even the smallest details, such as the green car's alloy wheels. As a final note, we would like to give a qualitative comparison of the results just shown with state-of-the-art techniques in automotive radar imaging. The resolution is unmatched by any other technique exploiting MIMO radars in front- or side-looking geometry. The system has proven able to detect even faint targets thanks to its improved SNR. The processing workflow is fast and can be implemented to work in real time, unlike many other complicated and time-consuming processors. The real difference between this paper and the others in the literature (already mentioned in the introduction) is that we performed a real open-road campaign in an uncontrolled environment with a single radar and automotive-grade navigation equipment. This means that this contribution truly describes the capability of SAR in a realistic environment.

Figure 7. Another SAR image from a forward looking campaign. Also in this case it is straightforward to recognize bright targets such as the parked cars at the side of the road.

Conclusion

In this paper, we have discussed the application of SAR imaging in the automotive context. The potential of SAR imaging was analyzed by evaluating the implications of coherent data processing over a significantly longer integration time than allowed by conventional radar imaging. We also described an efficient algorithm that allows real-time SAR imaging of the scenario. Furthermore, we assessed the performance by providing a rough order of magnitude of the number of operations required to focus the high-resolution SAR image, showing that real-time imaging is within reach of technologies already used in the automotive industry. An experimental demonstration was provided by showing SAR images of urban scenarios derived from campaign data. These campaigns were carried out with an eight-channel MIMO device. Whereas it is clear that MIMO radar remains the best technology for targets directly ahead of the ego-vehicle, theoretical analysis and experimental results confirm that SAR processing can outperform traditional radar imaging for targets even slightly off-boresight. In particular, it was shown that SAR processing enables high-resolution imaging of various targets, including parked cars, fences, parking poles, building facades, and pedestrians. Based on this work, we deem that SAR imaging constitutes a promising new complement to conventional automotive radar imaging, aimed explicitly at gaining awareness of the external environment in urban or suburban scenarios.

Acknowledgements

This work has been carried out in the context of the activities of the Joint Lab by Huawei Technologies Italia and Politecnico di Milano. The authors heartily acknowledge the collaboration with the Huawei Munich Research Centre, as part of the Joint Lab. The authors wish to thank Dr. Paolo Falcone at Aresys for his collaboration and support and Professor Sergio Savaresi at PoliMi for the use of the Alfa Giulia during the acquisition campaigns.

Competing interest

The author(s) declare that Dr. Ivan Russo and Dr. Christian Mazzucco are currently Huawei Technologies employees. This research was funded within the program of the Joint Research Lab between Politecnico di Milano and Huawei Technologies Italia.

Marco Manzoni (member, IEEE) was born in Lecco, Italy, in 1994. He received the M.Sc. degree in telecommunications engineering (cum laude) and the Ph.D. degree in information technology (cum laude) from Politecnico di Milano in 2018 and 2022, respectively. His research is focused on signal processing techniques for radar remote sensing, including space-borne, drone-borne, and car-borne synthetic aperture radar. He is also interested in water vapor estimation from space-borne interferometric SAR measurements, change detection, and structural monitoring with ground-based SAR. He is the co-recipient of the best paper award at the Mediterranean Microwave Symposium 2022.

Stefano Tebaldini (senior member, IEEE) received the M.S. degree in telecommunication engineering and the Ph.D. degree from the Politecnico di Milano, in 2005 and 2009, respectively. Since 2005, he has been with the Digital Signal Processing Research Group, Politecnico di Milano, where he currently holds the position of associate professor. His research activities mostly focus on Earth observation with synthetic aperture radar (SAR) and radar design and processing. He is one of the inventors of a new technology patented by T.R.E. for the exploitation of multiple interferograms in the presence of distributed scattering. He teaches courses on signal theory and remote sensing at the Politecnico di Milano. He has been involved as a key scientist in several studies by the European Space Agency (ESA) concerning the tomographic phase of BIOMASS. He was a member of the SAOCOM-CS ESA Expert Group and is currently a member of the BIOMASS MAG at ESA.

Andrea Virgilio Monti-Guarnieri (senior member, IEEE) received the M.Sc. degree (cum laude) in electronic engineering in 1988. He has been a full professor with the Dipartimento di Elettronica, Informazione e Bioingegneria since 2017. He is the founder of the PoliMi spin-off Aresys (2003), targeting SAR, radar, and geophysics applications. He has an H index (Google) of 33 and 5400 citations, has received four conference awards, and holds applications for five patents. His current research interests focus on radar-based system design, calibration, MIMO, and geosynchronous SAR. He has been a reviewer for several scientific journals, a guest editor for MDPI Remote Sensing, and a member of scientific-technical committees of international workshops and symposia on radar and Earth observation (EO).

Claudio Maria Prati is currently a full professor of telecommunications with the Electronic Department, Politecnico of Milano (POLIMI). He has chaired the Telecommunications Study Council at POLIMI. He holds five patents in the field of SAR and SAS data processing. He has been awarded three prizes from the IEEE Geoscience and Remote Sensing Society (IGARSS ’89 and IGARSS ’99 and best TGARSS paper 2016). He has published more than 150 papers on SAR and SAS data processing and interferometry. He has been involved as the key scientist in several studies by the European Space Agency (ESA), the European Union (EU), the Italian National Research Council (CNR), the Italian Space Agency (ASI), and ENI-AGIP. He is the co-founder of Tele-Rilevamento Europa (T.R.E), a spin-off company of POLIMI that has recently become T.R.E Altamira, a CLS French group company.

Dario Tagliaferri (member, IEEE) received the B.Sc. degree (2012), the M.Sc. degree (2015), and the PhD (2019) in telecommunication engineering from Politecnico di Milano, Italy. He is currently an assistant professor at Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Italy, in the framework of Huawei-Polimi Joint Research Lab. His research interests comprise signal processing techniques for wireless communication and sensing systems, with an emphasis on vehicular scenarios. Specific research topics include integrated sensing and communication systems, intelligent reflecting surfaces, and V2X communication systems. He was the co-recipient of the Best Paper Awards from the 1st IEEE International Online Symposium on JC&S 2021 and from the EuMA Mediterranean Microwave Symposium 2022.

Monica Nicoli (senior member, IEEE) received the M.Sc. degree (cum laude) in telecommunication engineering and the Ph.D. degree in electronic and communication engineering from the Politecnico di Milano, Milan, Italy, in 1998 and 2002, respectively. She was a visiting researcher with ENIAgip, Italy, from 1998 to 1999, and also with Uppsala University, Sweden, in 2001. In 2002, she joined the Politecnico di Milano as a faculty member. She is currently an associate professor of telecommunications with the Department of Management, Economics and Industrial Engineering, Politecnico di Milano. Her research interests cover wireless communications and signal processing for ITS, with an emphasis on V2X communications, localization and navigation, cooperative and distributed systems for the Internet of Vehicles. She has coauthored over 150 scientific publications (journals, conferences, and patents). She is a recipient of the Marisa Bellisario Award (1999), and a co-recipient of the Best Paper Awards of the IEEE Joint Communication and Sensing (2021), the IEEE Statistical Signal Processing Workshop (2018), and the IET Intelligent Transport Systems Journal (2014). She is an associate editor for the IEEE Transactions on Intelligent Transportation Systems. She also served as an associate editor of the EURASIP Journal on Wireless Communications and Networking from 2010 to 2017, and as a lead guest editor for the Special Issue on Localization in Mobile Wireless and Sensor Networks in 2011.

Umberto Spagnolini (senior member, IEEE) is currently a professor of statistical signal processing, the director of Joint Lab Huawei-Politecnico di Milano, and Huawei Industry Chair. His research in statistical signal processing covers remote sensing and communication systems with more than 300 papers on peer-reviewed journals/conferences and patents. He is the author of the book Statistical Signal Processing in Engineering (J. Wiley, 2017). His specific areas of interest include mmW channel estimation and space-time processing for single/multi-user wireless communication systems, cooperative and distributed inference methods including V2X systems, mmWave communication systems, parameter estimation/tracking, focusing, and wave-field interpolation for remote sensing (UWB radar and oil exploration). He was a recipient/co-recipient of best Paper awards on geophysical signal processing methods (from EAGE), array processing (ICASSP 2006), and distributed synchronization for wireless sensor networks (SPAWC 2007, WRECOM 2007). He is a technical expert of standard-essential patents and IP. He has served as a part of the IEEE Editorial boards as well as a member in technical program committees of several conferences for all the areas of interests.

Ivan Russo was born in Vibo Valentia, Italy, in 1982. He received the B.Sc. degree in electronics engineering and the M.Sc. degree in telecommunications engineering from the University of Calabria, Rende, Italy, in 2003 and 2007, respectively, and the Ph.D. degree in electronics engineering from the Mediterranean University of Reggio Calabria, Reggio Calabria, Italy, in 2011, with a focus on quasi-optical (QO) amplifiers, active FSSs, and efficient array beamforming networks. From 2010 to 2011, he was with the Department of Microwave Technology, University of Ulm, Ulm, Germany, where he was involved in high-resolution near-field probes and characterization of overmoded waveguides. From 2011 to 2013, he was a University Assistant with the Institute for Microwave and Photonics Engineering, TU Graz, Graz, Austria, where he was involved in spherical near/far-field transformations, RFID antennas, and circularly and dual-polarized UWB antennas. From 2013 to 2014, he was an EMC/antenna engineer with Thales Alenia Space, Turin, Italy, where he was involved in installed antenna performance on satellites. From 2014 to 2018, he was an antenna engineer with Elettronica S.p.A., Rome, Italy, where he focused on the development of UWB antennas and phased arrays for electronic warfare applications. Since 2018, he has been with the Huawei Research Center, Milan, Italy, as an antenna and phased array engineer, where he is currently focusing on innovative automotive radar antennas and systems and advanced solution for phased arrays and high-speed interconnects.

Christian Mazzucco received the Laurea degree in telecommunications engineering from the University of Padova, Italy, in 2003, and the master's degree in information technology from the Politecnico di Milano, in 2004. In 2004, he joined Nokia Siemens Networks, Milan, where he was involved in research on UWB localization and tracking techniques. From 2005 to 2009, he was involved in several projects mainly researching and developing Wimax systems and high-speed LDPC decoders. In 2009, he joined Huawei Technologies, Italy, studying algorithms for high-power amplifiers digital predistortion, phase noise suppression, and MIMO for point-to-point microwave links. He is currently involved in researching phased array processing and the development of mmWave 5G BTS systems.

References

Azouz, A and Li, Z (2014) Motion compensation for high-resolution automobile-SAR. 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP). Xi'an, China: IEEE, pp. 203–207. [Online]. Available: https://ieeexplore.ieee.org/document/6889232. doi: 10.1109/ChinaSIP.2014.6889232.
Iqbal, H, Schartel, M, Roos, F, Urban, J and Waldschmidt, C (2018) Implementation of a SAR demonstrator for automotive imaging. 2018 18th Mediterranean Microwave Symposium (MMS). Istanbul: IEEE, pp. 240–243. [Online]. Available: https://ieeexplore.ieee.org/document/8611814/. doi: 10.1109/MMS.2018.8611814.
Wu, H and Zwick, T (2009) Automotive SAR for parking lot detection. 2009 German Microwave Conference. Munich, Germany: IEEE, pp. 1–8. [Online]. Available: http://ieeexplore.ieee.org/document/4815910/.
Iqbal, H, Loffler, A, Mejdoub, MN and Gruson, F (2021) Realistic SAR implementation for automotive applications. Proceedings of the 2020 17th European Radar Conference (EuRAD). IEEE, pp. 306–309. doi: 10.1109/EuRAD48048.2021.00085.
Gishkori, S, Daniel, L, Gashinova, M and Mulgrew, B (2019) Imaging for a forward scanning automotive synthetic aperture radar. IEEE Transactions on Aerospace and Electronic Systems 55, 1420–1434. doi: 10.1109/TAES.2018.2871436.
Manzoni, M, Tebaldini, S, Monti-Guarnieri, AV, Prati, CM and Russo, I (2022) A comparison of processing schemes for automotive MIMO SAR imaging. Remote Sensing 14, 4696. [Online]. Available: https://www.mdpi.com/2072-4292/14/19/4696.
Tebaldini, S, Rizzi, M, Manzoni, M, Guarnieri, AM, Prati, C, Tagliaferri, D, Nicoli, M, Spagnolini, U, Russo, I and Mazzucco, C (2022) SAR imaging in automotive scenarios. 2022 Microwave Mediterranean Symposium (MMS). Pizzo Calabro, Italy: IEEE, pp. 1–5. [Online]. Available: https://ieeexplore.ieee.org/document/9825599/.
Stanko, S, Palm, S, Sommer, R, Kloppel, F, Caris, M and Pohl, N (2016) Millimeter resolution SAR imaging of infrastructure in the lower THz region using MIRANDA-300. Proceedings of the 2016 European Radar Conference (EuRAD). IEEE, pp. 358–361. doi: 10.1109/EuMC.2016.7824641.
Gao, X, Roy, S and Xing, G (2021) MIMO-SAR: a hierarchical high-resolution imaging algorithm for mmWave FMCW radar in autonomous driving. IEEE Transactions on Vehicular Technology 70, 7322–7334. doi: 10.1109/TVT.2021.3092355.
Feger, R, Haderer, A and Stelzer, A (2017) Experimental verification of a 77-GHz synthetic aperture radar system for automotive applications. Proceedings of the 2017 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM). IEEE, pp. 111–114. doi: 10.1109/ICMIM.2017.7918869.
Manzoni, M, Tagliaferri, D, Rizzi, M, Tebaldini, S, Guarnieri, AVM, Prati, CM, Nicoli, M, Russo, I, Duque, S, Mazzucco, C and Spagnolini, U (2022) Motion estimation and compensation in automotive MIMO SAR. IEEE Transactions on Intelligent Transportation Systems, pp. 1–17. [Online]. Available: https://ieeexplore.ieee.org/document/9945666/.
Manzoni, M, Rizzi, M, Tebaldini, S, Monti-Guarnieri, AV, Prati, CM, Tagliaferri, D, Nicoli, M, Russo, I, Mazzucco, C, Duque, S and Spagnolini, U (2022) Residual motion compensation in automotive MIMO SAR imaging. Proceedings of the 2022 IEEE Radar Conference (RadarConf22). IEEE, pp. 1–7. doi: 10.1109/RadarConf2248738.2022.9764310.
Ulander, L, Hellsten, H and Stenstrom, G (2003) Synthetic-aperture radar processing using fast factorized back-projection. IEEE Transactions on Aerospace and Electronic Systems 39, 760–776.
Meta, A, Hoogeboom, P and Ligthart, LP (2007) Signal processing for FMCW SAR. IEEE Transactions on Geoscience and Remote Sensing 45, 3519–3532. [Online]. Available: http://ieeexplore.ieee.org/document/4373378/. doi: 10.1109/TGRS.2007.906140.
Cooley, JW and Tukey, JW (1965) An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation 19, 297–301. [Online]. Available: https://www.ams.org/mcom/1965-19-090/S0025-5718-1965-0178586-1/.
Victor, H (2013) Apple iPhone 5S performance review: CPU and GPU speed compared to top Android phones (benchmarks). [Online]. Available: shorturl.at/jowCF.