Radar

LeoLabs raises $65M Series B for its satellite monitoring and collision detection service

Low Earth orbit is full of stuff: not only bits of debris and junk, but also satellites — the number of which is growing rapidly alongside the decreasing cost of launch. This can occasionally pose a problem for satellite providers, whose valuable spacecraft run the risk of colliding with other satellites, or with the many thousands of other objects in orbit.

For most of the space age, debris tracking was performed by a smattering of military outfits and other governmental organizations, but that hardly paints a complete and broadly accessible picture. LeoLabs has been aiming to fill what it calls this “data deficit” in orbital object tracking since the company’s founding in 2016. Now it will be scaling its operations with a $65 million Series B financing round, jointly led by Insight Partners and Velvet Sea Ventures. This latest round brings the company’s total funding to over $100 million.

LeoLabs uses ground-based phased array radars – one in Alaska, one in Texas, two in New Zealand and two in Costa Rica – to monitor low Earth orbit, and to track and measure any object that flies through its observational area. One main advantage of LeoLabs’ tracking system is the size of the objects it can detect: as small as 2 centimeters across, as opposed to the much larger 10 centimeter objects tracked by legacy detection systems.

The difference in scale is huge: there are around 17,000 objects in orbit 10 centimeters or larger, but that number jumps to 250,000 when monitoring from 2 centimeters. That’s a lot of opportunity for collision, and though 2 centimeters sounds small (that’s less than an inch), they can do catastrophic damage traveling at orbital velocity. Customers can access this information using a subscription service, which will automatically alert them about collision risks.
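To put a number on “catastrophic damage,” here is a back-of-envelope sketch (the material, size and closing speed are illustrative assumptions, not figures from LeoLabs): a 2 centimeter aluminium sphere at a typical LEO closing speed carries roughly the kinetic energy of a car at highway speed.

```python
import math

# Illustrative assumptions: a solid aluminium sphere 2 cm in diameter,
# hitting at a 7.5 km/s relative (closing) velocity in low Earth orbit.
DIAMETER_M = 0.02
DENSITY_ALUMINIUM = 2700.0   # kg/m^3
REL_VELOCITY = 7500.0        # m/s

radius = DIAMETER_M / 2
mass = DENSITY_ALUMINIUM * (4 / 3) * math.pi * radius ** 3   # ~11 grams
energy_joules = 0.5 * mass * REL_VELOCITY ** 2               # ~318 kJ

# For scale: a 1,500 kg car at 90 km/h carries about 469 kJ.
car_energy = 0.5 * 1500 * (90 / 3.6) ** 2

print(f"debris mass: {mass * 1000:.1f} g")
print(f"debris kinetic energy: {energy_joules / 1000:.0f} kJ")
print(f"car at 90 km/h: {car_energy / 1000:.0f} kJ")
```

An 11-gram fragment hitting with the energy of a moving car is why even centimeter-scale objects are worth tracking.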

“There just isn’t much information about what’s going on,” LeoLabs co-founder and CEO Dan Ceperley told TechCrunch. “So we’re rolling out this global radar network to generate a lot of data, and then all that software infrastructure to make it useful.”

LeoLabs sees around three to five close approaches per year involving larger objects, Ceperley said. Those are noteworthy because a collision could potentially produce thousands of smaller fragments – even more space junk. When tracking smaller objects, the company sees up to 20 times more collision risks. Fortunately, many satellites have electric thrusters that can be activated to avoid collisions or maintain orbit. With sufficient advance warning, companies can maneuver a few days prior to the anticipated collision.

With this new injection of funds, Ceperley said the company is looking to expand the number of radar sites around the world and scale its software-as-a-service business. While LeoLabs already has complete orbital coverage, more radars will increase the frequency with which objects are tracked, he explained. LeoLabs will also be scaling its software and data science teams (already the largest in the company), setting up locations outside the U.S., and adding new products and services.

“There’s a once in a lifetime revolution going on in the space industry, all this new investment has driven down the costs of launching satellites, building satellites and operating satellites, so there’s a lot of satellites going into low Earth orbit,” Ceperley said. “There’s a need for a new generation of services to actually track all these things [. . .] And so we’re building out that next generation tracking service, mapping service, for that new era.”

CMU researchers show potential of privacy-preserving activity tracking using radar

Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room last got cleaned or whether the bins already got taken out?

Or — for an altogether healthier use-case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin cycles on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras installed inside your home.

Another piece of fascinating research from Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is of course a horrible privacy risk. Which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras”, as they put it, the data needed to train AI models to recognize different human activities from RF signals is not plentiful.

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.
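As a rough illustration of what such a synthesis step might look like (a hypothetical sketch, not the CMU pipeline; the function and its parameters are invented for illustration), one can take per-joint 3D positions estimated from video, project the joint velocities onto a virtual radar’s line of sight, and histogram them into Doppler bins:

```python
import numpy as np

def synthetic_doppler(joints, fps=30.0, n_bins=32, v_max=4.0):
    """joints: array (frames, n_joints, 3), positions in metres.

    Returns a (frames-1, n_bins) per-frame histogram of radial velocities,
    i.e. a crude synthetic Doppler signature for a radar at the origin."""
    vel = (joints[1:] - joints[:-1]) * fps              # joint velocities, m/s
    los = joints[:-1].copy()                            # radar sits at the origin
    los /= np.linalg.norm(los, axis=-1, keepdims=True)  # line-of-sight unit vectors
    v_radial = np.sum(vel * los, axis=-1)               # Doppler-visible component
    bins = np.linspace(-v_max, v_max, n_bins + 1)
    return np.stack([np.histogram(f, bins=bins)[0] for f in v_radial])

# Toy usage: a single "joint" moving straight away from the radar at 1 m/s.
t = np.arange(60) / 30.0
one_joint = np.stack([np.zeros_like(t), np.zeros_like(t), 1.0 + t], axis=-1)
spec = synthetic_doppler(one_joint[:, None, :])         # (59 frames, 32 bins)
```

The actual research pipeline presumably models far more radar physics (noise, signal falloff, occlusion); this only illustrates the core idea that a radar sees motion as radial velocity along its line of sight.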

The results can be seen in this video — where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats, purely from its ability to interpret the mmWave signal the movements generate, and purely having been trained on public video data.

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course. But things are already moving on that front: Google has been dipping its toe in already, via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sense to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use-cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in building to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worrisome than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the B2B market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance that could be a better fit for privacy-risking consumer connected devices such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. That said, it’s hard to argue that the data radar generates would be any more sensitive than equivalent visual data, were it to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well… ”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like YouTube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”
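The throughput arithmetic in that quote works out as follows (the worker count is the illustrative figure from the quote, not a stated deployment):

```python
# The quote's numbers: 2 hours of processing per 1 hour of video on a
# single desktop, versus strictly real-time (1:1) lab data collection.
HOURS_TO_PROCESS_ONE_VIDEO_HOUR = 2.0
PARALLEL_WORKERS = 100   # "process 100 videos at once" on cloud instances

single_machine_rate = 1.0 / HOURS_TO_PROCESS_ONE_VIDEO_HOUR   # video-hours per wall hour
cluster_rate = single_machine_rate * PARALLEL_WORKERS         # video-hours per wall hour

# Lab capture yields 1 data-hour per hour; the synthetic route yields 50.
speedup_over_lab = cluster_rate / 1.0
print(cluster_rate, speedup_over_lab)
```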

And while RF signal does reflect, and do so to different degrees off of different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low end CPUs (no deep learning or anything).”

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.


Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55M round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.

Radar’s advantages lie in its superior range, and in the fact that its radio frequency beams can pass through things like raindrops, snow, and fog — making it crucial for perceiving the environment during inclement weather. Lidar and ordinary visible light cameras can be totally flummoxed by these common events, so it’s necessary to have a backup.

But radar’s major disadvantage is that, due to the wavelengths and how the antennas work, it can’t image things in detail the way lidar can. You tend to get very precisely located blobs rather than detailed shapes. It still provides invaluable capabilities in a suite of sensors, but if anyone could add a bit of extra fidelity to its scans, it would be that much better.

That’s exactly what Oculii does — take an ordinary radar and supercharge it. The company claims a 100x improvement to spatial resolution accomplished by handing over control of the system to its software. Co-founder and CEO Steven Hong explained in an email that a standard radar might have, for a 120 degree field of view, a 10 degree spatial resolution, so it can tell where something is with a precision of a few degrees on either side, and little or no ability to tell the object’s elevation.

Some are better, some worse, but for the purposes of this example that amounts to an effectively 12×1 resolution. Not great!

Handing over control to the Oculii system, however, which intelligently adjusts the transmissions based on what it’s already perceiving, could raise that to a 0.5° horizontal x 1° vertical resolution, giving it an effective resolution of perhaps 240×10. (Again, these numbers are purely for explanatory purposes and aren’t inherent to the system.)
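The cell counts in these illustrative examples come from dividing field of view by angular resolution; the 10° vertical field of view below is an assumed placeholder, since the article gives no vertical figure:

```python
import math

# Purely explanatory figures, as the article notes -- not device specs.
AZ_FOV = 120.0   # degrees, horizontal field of view
EL_FOV = 10.0    # degrees, ASSUMED vertical field of view (placeholder)

def effective_grid(az_res_deg, el_res_deg):
    """Angular resolution (deg) -> effective (horizontal x vertical) cell counts."""
    return (math.floor(AZ_FOV / az_res_deg),
            max(1, math.floor(EL_FOV / el_res_deg)))

legacy = effective_grid(10.0, EL_FOV)    # no usable elevation resolution
enhanced = effective_grid(0.5, 1.0)      # the software-enhanced example

print(legacy)     # (12, 1)
print(enhanced)   # (240, 10)
```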

That’s a huge improvement and results in the ability to see that something is, for example, two objects near each other and not one large one, or that an object is smaller than another near it, or — with additional computation — that it is moving one way or the other at such and such a speed relative to the radar unit.

Here’s a video demonstration of one of their own devices, showing considerably more detail than one would expect:

Exactly how this is done is part of Oculii’s proprietary magic, and Hong did not elaborate much on how exactly the system works. “Oculii’s sensor uses AI to adaptively generate an ‘intelligent’ waveform that adapts to the environment and embed information across time that can be leveraged to improve the resolution significantly,” he said. (Integrating information over time is what gives it the “4D” moniker, by the way.)

Here’s a little sizzle reel that gives a very general idea:

Autonomous vehicle manufacturers have not yet hit on any canonical set of sensors that AVs should have, but something like Oculii could give radar a more prominent place — its limitations sometimes mean it is relegated to emergency braking detection at the front or some such situation. With more detail and more data, radar could play a larger role in AV decision-making systems.

The company is definitely making deals — it’s working with Tier-1s and OEMs, one of which (Hella) is an investor, which gives a sense of confidence in Oculii’s approach. It’s also working with radar makers and has some commercial contracts looking at a 2024-2025 timeline.

CG render of Oculii's two radar units.

Image Credits: Oculii

It’s also getting into making its own all-in-one radar units, doing the hardware-software synergy thing. It claims these are the world’s highest resolution radars, and I don’t see any competitors out there contradicting this — the simple fact is radars don’t compete much on “resolution,” but more on the precision of their rangefinding and speed detection.

One exception might be Echodyne, which uses a metamaterial radar surface to direct a customizable radar beam anywhere in its field of view, examining objects in detail or scanning the whole area quickly. But even then its “resolution” isn’t so easy to estimate.

At any rate the company’s new Eagle and Falcon radars might be tempting to manufacturers working on putting together cutting-edge sensing suites for their autonomous experiments or production driver-assist systems.

It’s clear that with radar tipped as a major component of autonomous vehicles, robots, aircraft and other devices, it’s worth investing seriously in the space. The $55M B round certainly demonstrates that well enough. It was, as Oculii’s press release lists it, “co-led by Catapult Ventures and Conductive Ventures, with participation from Taiwania Capital, Susquehanna Investment Group (SIG), HELLA Ventures, PHI-Zoyi Capital, R7 Partners, VectoIQ, ACVC Partners, Mesh Ventures, Schox Ventures, and Signature Bank.”

The money will allow for the expected scaling and hiring, and as Hong added, “continued investment of the technology to deliver higher resolution, longer range, more compact and cheaper sensors that will accelerate an autonomous future.”