Deep Learning for Radars

Radars have a long history. They used to be large, expensive, and available only to the military. Nowadays, many cars carry several single-chip radars embedded in their bumpers to enable autonomous emergency braking, cross-traffic alerts, or lane change assist.

What’s exciting about the present time is that automobile manufacturers are moving from 24 GHz to 77 GHz radars. They are doing that for a number of reasons:

  • The antennas take 10x less space, which allows more transmitters and receivers per radar while shrinking the module. For example, the TI IWR1443 evaluation board has 3 transmitters and 4 receivers, which provides enough data to map objects in 3D space.
  • The wavelength at 77 GHz is under 4 mm (versus ~12.5 mm at 24 GHz), so objects and their speeds can be detected with roughly 3x higher resolution.
  • The unlicensed bandwidth available at these frequencies is much larger (up to 4 GHz around 77 GHz, versus 250 MHz in the 24 GHz ISM band), which directly improves range resolution; see the quick calculation after this list.
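
Concretely, here is a back-of-the-envelope comparison in Python, using the standard FMCW formulas delta_R = c / (2B) for range resolution and delta_v = lambda / (2T) for velocity resolution. The bandwidth figures and the 50 ms frame time are illustrative assumptions, not IWR1443 settings.

```python
# Back-of-the-envelope comparison of 24 GHz vs 77 GHz FMCW radar.
# Bandwidths are the commonly cited allocations; the 50 ms frame
# time is an illustrative assumption.
C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

def velocity_resolution(carrier_hz, frame_time_s):
    """Doppler velocity resolution: delta_v = lambda / (2 * T_frame)."""
    wavelength = C / carrier_hz
    return wavelength / (2 * frame_time_s)

for name, carrier_hz, bandwidth_hz in [("24 GHz", 24e9, 250e6),
                                       ("77 GHz", 77e9, 4e9)]:
    print(f"{name}: range resolution = {range_resolution(bandwidth_hz) * 100:.1f} cm, "
          f"velocity resolution = {velocity_resolution(carrier_hz, 0.05) * 100:.1f} cm/s")
```

Running this prints 60.0 cm range resolution for 24 GHz versus 3.8 cm for 77 GHz, which tells the whole story in two numbers.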

As these sensors are built for mass-produced cars, they are cheap. The Texas Instruments IWR1443 chip costs just ~$25 (at a quantity of 250), which puts it in the same pricing category as camera sensors. The chip includes a radar hardware accelerator and a capable ARM Cortex-R4F microcontroller. Since users are allowed to run their own programs on this core, things start to get interesting.

If one looks at the classic radar perception stack, it appears suspiciously similar to the classic computer vision approach: it starts with raw ADC signals, runs an FFT a couple of times, and then builds a hierarchy of hand-crafted features with a lot of parameters that one is expected to fine-tune for a specific application.

[Figure: datapath_overall]
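
To make the parallel concrete, here is a minimal numpy sketch of the first two stages of that pipeline, assuming `adc` holds complex baseband samples from a single receive antenna, shaped (chirps, samples per chirp). The array sizes and the Hann window are illustrative assumptions.

```python
import numpy as np

def range_doppler_map(adc: np.ndarray) -> np.ndarray:
    """First FFT stages of the classic pipeline for one receive antenna.

    `adc` is complex baseband data shaped (num_chirps, samples_per_chirp);
    shapes and windowing here are illustrative, not IWR1443 specifics.
    """
    # 1D "range" FFT along fast time: each chirp becomes a range profile.
    range_fft = np.fft.fft(adc * np.hanning(adc.shape[1]), axis=1)
    # 2D step: "Doppler" FFT along slow time (across chirps) per range bin.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    # The classic stack continues with CFAR detection, angle estimation,
    # clustering and tracking; this is where we stop.
    return np.abs(doppler_fft)

adc = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)  # fake data
print(range_doppler_map(adc).shape)  # (128, 256): Doppler bins x range bins
```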

What if we throw away all this cruft and let the machine learn these features? In my first experiment, I extracted the so-called radar data cube. Essentially, it’s the raw signal lightly grilled with a 1D and then a 2D FFT. To feed the data into TensorFlow / PyTorch, I had to convert it to an image.
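
Here is roughly what such a conversion can look like: a hypothetical helper that log-scales the magnitude of one 2D slice of the cube and writes it out as an 8-bit grayscale PNG. The dB scaling and min-max normalization are illustrative choices, not necessarily the exact ones used for the images below.

```python
import numpy as np
from PIL import Image

def cube_slice_to_image(cube_slice: np.ndarray, path: str) -> None:
    """Save one complex 2D slice of the radar data cube as a grayscale PNG.

    Hypothetical helper: the dB scaling and min-max normalization are
    illustrative choices.
    """
    mag_db = 20 * np.log10(np.abs(cube_slice) + 1e-6)  # magnitude in dB
    # Min-max normalize to [0, 1], then quantize to 8-bit grayscale.
    norm = (mag_db - mag_db.min()) / (mag_db.max() - mag_db.min() + 1e-12)
    Image.fromarray((norm * 255).astype(np.uint8), mode="L").save(path)

cube_slice_to_image(np.random.randn(128, 256) + 1j * np.random.randn(128, 256),
                    "radar-cube-0000.png")
```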

This is a banana:

[Figure: radar-cube-0002]

And this is an apple:

[Figure: radar-cube-0008]

To test my hypothesis, I collected a dataset with 4 types of observations: nothing, a hand, a banana, and an apple, with ~200 images per category. Then I created a very simple classifier based on the TensorFlow Image Retraining example. That gave an accuracy of 90% on the validation/test splits (and one can train their own classifier on this dataset to verify the claim).
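
For reference, here is a sketch of an equivalent transfer-learning classifier in modern tf.keras: reuse a pretrained backbone and train only a small classification head on top. The MobileNetV2 backbone and the `radar_cubes/{train,val}` directory layout are assumptions for illustration, not the setup of the retraining example itself.

```python
import tensorflow as tf

# Transfer-learning sketch: frozen pretrained backbone + small trainable head.
# MobileNetV2 and the "radar_cubes/{train,val}" layout are assumptions.
IMG_SIZE = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "radar_cubes/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "radar_cubes/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained ImageNet features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),  # nothing / hand / banana / apple
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```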

That’s about the same accuracy one would get by doing the same with a small custom dataset of photos. This supports the point that we can get useful data out of the radar without the classical radar stack, and without manually tuning parameters like the Constant False Alarm Rate, the desired Radar Cross-Section, and tens of others.

I am thinking about which experiments to run next, and I will work on open-sourcing the code so that others can play with radars and machine learning too. If you have a question or a suggestion, write me a note at imkrasin@gmail.com.