Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80M round the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11M to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform certain complex calculations fundamental to machine learning in a flash — literally. Instead of using charge, logic gates, and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. It’s been possible for years, but only recently has anyone managed to make it work at scale, and for a practical, indeed highly valuable, purpose.

Prototype to product

It wasn’t entirely clear in 2018, when Lightmatter was getting off the ground, whether this tech would be something the company could sell to replace more traditional compute clusters, like the thousands of custom units that Google, Amazon, and other companies use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships and delaying projects, the pandemic put Lightmatter months behind schedule, but the company came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

A rack of Lightmatter servers.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it…. we’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, it has now taken solid form in the Envise, which the company calls a “general-purpose photonic AI accelerator.” It’s a server unit designed to fit into normal datacenter racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (It’s limited to certain types of calculations for now, namely linear algebra rather than complex logic, but this type of math happens to be a major component of machine learning processes.)
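To make the linear algebra point concrete: the bulk of the work in running a trained neural network is matrix multiplication, which is exactly the kind of operation a photonic accelerator takes over. A toy sketch in Python, with plain NumPy standing in for the accelerator (the layer sizes are illustrative, and nothing here is Lightmatter’s actual API):

```python
import numpy as np

# One dense layer of a trained network: output = activation(W @ x + b).
# The W @ x matrix-vector product is the linear algebra an accelerator
# like Envise is built to perform photonically; the rest stays conventional.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 768))  # learned weights (illustrative sizes)
b = rng.standard_normal(512)         # learned biases
x = rng.standard_normal(768)         # one input vector, e.g. a token embedding

y = np.maximum(W @ x + b, 0.0)       # ReLU(W @ x + b): one inference step
print(y.shape)                       # (512,)
```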

Harris was reluctant to provide exact numbers on performance improvements, but more because those numbers are still improving than because they aren’t impressive. The website suggests it’s 5x faster than an NVIDIA A100 unit on a large transformer model like BERT, while using about 15 percent of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy required to use it. Either better performance or lower energy cost would be great; both together is irresistible.
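Taken at face value, those two claims compound. A quick back-of-envelope check (the 5x and 15 percent figures come from Lightmatter’s site; the rest is arithmetic):

```python
speedup = 5.0        # claimed: 5x faster than an NVIDIA A100 on BERT-class models
energy_ratio = 0.15  # claimed: ~15 percent of the A100's energy use

# Combined, the two claims imply a perf-per-watt multiple of roughly 33x.
perf_per_watt = speedup / energy_ratio
print(f"~{perf_per_watt:.0f}x performance per watt")  # ~33x
```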

Lightmatter’s initial plan is to test these units with its most likely customers by the end of 2021, refining them and bringing them up to production levels so the product can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in datacenters, where as much as 10 percent of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

A Lightmatter chip with its logo on the side.

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane-sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And because light waves of different wavelengths don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, and adding a third makes it three million, with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple, or 20x the performance makes for a nice roadmap.
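The scaling Harris describes is linear in the number of usable wavelengths. A minimal sketch of that roadmap math (the one-million-ops figure is the article’s hypothetical, not a measured spec):

```python
# Hypothetical per-wavelength throughput from the example above.
ops_per_second_per_color = 1_000_000

# Each added laser color runs the same calculation in the same hardware,
# in parallel, so throughput scales linearly with the number of colors.
for colors in (1, 2, 3, 8, 20):
    print(f"{colors:>2} colors -> {colors * ops_per_second_per_color:>11,} ops/sec")
```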

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many small individual computers, thousands and thousands of them, working in perfect synchrony. For them to do so, they need to communicate constantly to make sure each core knows what the other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem in building an exascale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are fairly bulky at the scales at which chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)
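The claim here is about lane density rather than per-lane speed: packing 40 waveguides into the footprint of one fiber multiplies the number of parallel lanes, and aggregate bandwidth scales with lane count. A rough sketch (only the 40x density figure comes from Harris; the per-lane rate is an assumption for illustration):

```python
waveguides_per_fiber_footprint = 40  # from Harris's quote
per_lane_gbps = 25.0                 # assumed per-lane rate, purely illustrative

# Aggregate bandwidth scales with the number of parallel lanes.
fiber_bandwidth = 1 * per_lane_gbps
passage_bandwidth = waveguides_per_fiber_footprint * per_lane_gbps
print(f"one fiber:       {fiber_bandwidth:,.0f} Gb/s")
print(f"same footprint:  {passage_bandwidth:,.0f} Gb/s across waveguides")
```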

The optical interconnect board is called Passage, and it will be part of a later generation of Envise products. But as with the multi-color calculation, it’s further down the roadmap; 5-10x performance at a fraction of the power will have to satisfy potential customers for the present.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own datacenters and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support, and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners, and Spark Capital. It brings the company’s total raised to about $113 million: the initial $11M A round, then GV hopping on with a $22M A-1, and now this $80M.

Although there are other companies pursuing photonic computing, and its potential applications in neural networks especially, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that demand for AI-related computing is increasing far faster than existing technology can supply it, short of building ever larger datacenters.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris suggested earlier, what his company has made is potentially transformative for the industry, and if so, there’s no hurry: if there’s a gold rush, they’ve already staked their claim.

Figure raises $7.5M to help startup employees better understand their compensation

The topic of compensation has historically been a delicate one that has left many people — especially startup employees — wondering just what drives what can feel like random decisions around pay and equity.

Last June, software engineers (and housemates) Miles Hobby and Geoffrey Tisserand set about trying to solve the problem by developing a data-driven platform that helps companies structure their compensation plans and transparently communicate them to candidates.

Today, the startup behind that platform, Figure, announced it has raised $7.5 million in seed funding led by CRV. Bling Capital, Better Tomorrow Ventures and Garage Capital also participated in the financing, along with angel investors such as AngelList co-founder Naval Ravikant, Jason Calacanis, Reddit CEO Steve Huffman and other Silicon Valley executives.

The startup has amassed a client list that includes other startups such as fintechs Brex and NerdWallet and AI-powered fitness company Tempo. 

Put simply, Hobby and Tisserand’s mission is to improve workflows and transparency around pay, particularly equity. The pair had both worked at startups themselves (Uber and Instacart, respectively) and ended up leaving money on the table when they left those companies because no one had properly explained to them what their equity, which changed at every valuation, meant.  

Figure co-founders and co-CEOs Miles Hobby and Geoffrey Tisserand. Image Credits: Figure

So, one of their goals was to create a solution that would provide a user-friendly explanation of what a person’s equity stake really means, from tax implications to whether or not they have to buy the stock and/or hold onto it.

“I’ve gone through the job search process many times before and there’s all these complex legal documents to understand why you’re getting 10,000 stock options, but obviously we knew the vast majority of people have no idea how that works,” Tisserand told TechCrunch. “We saw an opportunity there to help companies actually convey the value to their candidates while also making them aware of the potential risks of owning something that’s so illiquid.”
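The confusion Tisserand describes is concrete: what a grant is worth on paper depends on the strike price, the company’s latest per-share valuation, and how much has vested, none of which the legal documents spell out plainly. A hypothetical illustration of the kind of math involved (all numbers invented; this is not Figure’s methodology):

```python
# Hypothetical grant: 10,000 options, as in Tisserand's example.
options = 10_000
strike_price = 0.50        # assumed exercise price per share
latest_share_value = 4.00  # assumed per-share value at the latest valuation
vested_fraction = 0.25     # e.g. after a one-year cliff

vested = options * vested_fraction
paper_value = vested * (latest_share_value - strike_price)
exercise_cost = vested * strike_price
print(f"vested paper value: ${paper_value:,.0f}")    # $8,750
print(f"cash to exercise:   ${exercise_cost:,.0f}")  # $1,250
```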

Image Credits: Figure

Another of Figure’s goals is to help create a fairer, more balanced process for decisions around pay and equity, so that there’s less inequality out there. Pointedly, it aims to remove some of the biases that exist around those decisions by systematizing the process.

“We saw a void in this kind of context around equity…and knew that there had to be a better way for companies to structure, manage and explain their compensation plans,” Hobby said.

To Hobby and Tisserand, Figure is designed to help stop instances of implicit bias.

“Compensation should be based on the work that you’re doing, and not gender or ethnic background,” Tisserand told TechCrunch. “We’re trying to give that context and remove biases. So, we’re trying to help at two different stages: to surface inequities that already exist and make sure there are no anomalies, and then to help stop them before they can exist.”

Figure also aims to give companies the tools to educate candidates and employees on their total compensation — including equity, salary, benefits and bonuses — in a “straightforward and user-friendly” way. For example, it can create custom offer letters that interactively detail a candidate’s compensation.

“Our goal is for Figure to become an operating system for compensation, where a company can encode their compensation philosophy into our system, and we help them determine their job architecture, compensation bands and offer numbers while monitoring their compensation health to provide adjustment suggestions when needed,” Hobby said.
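A sense of what “encoding a compensation philosophy” might look like in practice: a band per role and level, with offers generated from a position within the band. A purely hypothetical sketch (roles, levels, and numbers are invented, and this is not Figure’s actual system):

```python
# Hypothetical compensation bands keyed by (role, level): (min, max) base salary.
BANDS = {
    ("engineer", 1): (110_000, 140_000),
    ("engineer", 2): (135_000, 170_000),
    ("designer", 1): (100_000, 130_000),
}

def offer(role: str, level: int, position: float) -> int:
    """Generate a base-salary offer at a given position (0.0-1.0) in the band."""
    low, high = BANDS[(role, level)]
    return round(low + position * (high - low))

print(offer("engineer", 2, 0.5))  # midpoint offer: 152500
```

Systematizing offers this way is what makes the bias argument tractable: two candidates hired into the same role, level, and band position get the same number by construction.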

Post-hire, Figure’s compensation management system “helps keep everything running smoothly.”

Anna Khan, general partner of enterprise software at CRV, is joining Figure’s board as part of the funding. The decision to back the startup was in part personal, she said.

“I’d been investing in software for eight years and was alarmed that no one was building anything around pay equity when it comes to how we’re paid, why we’re paid what we’re paid and on how to build equity long term,” Khan told TechCrunch. “Unfortunately, discussions around compensation and equity still happen behind closed doors and this extends into workflow around compensation — equally broken — with manual leveling, old data and large pay inequities.”

The company plans to use its new capital to expand its product offerings and scale its organization.

Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55M round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.

Radar’s advantages lie in its superior range, and in the fact that its radio frequency beams can pass through things like raindrops, snow, and fog — making it crucial for perceiving the environment during inclement weather. Lidar and ordinary visible light cameras can be totally flummoxed by these common events, so it’s necessary to have a backup.

But radar’s major disadvantage is that, due to the wavelengths and how the antennas work, it can’t image things in detail the way lidar can. You tend to get very precisely located blobs rather than detailed shapes. It still provides invaluable capabilities in a suite of sensors, but if anyone could add a bit of extra fidelity to its scans, it would be that much better.

That’s exactly what Oculii does — take an ordinary radar and supercharge it. The company claims a 100x improvement to spatial resolution accomplished by handing over control of the system to its software. Co-founder and CEO Steven Hong explained in an email that a standard radar might have, for a 120 degree field of view, a 10 degree spatial resolution, so it can tell where something is with a precision of a few degrees on either side, and little or no ability to tell the object’s elevation.

Some are better, some worse, but for the purposes of this example that amounts to an effective 12×1 resolution. Not great!

Handing over control to the Oculii system, however, which intelligently adjusts the transmissions based on what it’s already perceiving, could raise that to a 0.5° horizontal x 1° vertical resolution, giving it an effective resolution of perhaps 240×10. (Again, these numbers are purely for explanatory purposes and aren’t inherent to the system.)
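Translating those angular figures into an “effective resolution” is just dividing the field of view by the angular resolution on each axis. A quick sketch using the article’s illustrative numbers (the 10° vertical field of view is an assumption, added so the vertical math works out):

```python
def cells(fov_deg: float, resolution_deg: float) -> int:
    """Number of distinguishable angular bins across a field of view."""
    return int(fov_deg / resolution_deg)

# Standard radar from Hong's example: 120 deg FOV at 10 deg resolution,
# with little or no elevation information.
print(cells(120, 10), "x 1")               # 12 x 1

# With Oculii's software: 0.5 deg horizontal x 1 deg vertical resolution.
# (The 10 deg vertical FOV here is an assumption for illustration.)
print(cells(120, 0.5), "x", cells(10, 1))  # 240 x 10
```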

That’s a huge improvement and results in the ability to see that something is, for example, two objects near each other and not one large one, or that an object is smaller than another near it, or — with additional computation — that it is moving one way or the other at such and such a speed relative to the radar unit.

Here’s a video demonstration of one of their own devices, showing considerably more detail than one would expect:

Exactly how this is done is part of Oculii’s proprietary magic, and Hong did not elaborate much on the system’s inner workings. “Oculii’s sensor uses AI to adaptively generate an ‘intelligent’ waveform that adapts to the environment and embed information across time that can be leveraged to improve the resolution significantly,” he said. (Integrating information over time is what gives it the “4D” moniker, by the way.)

Here’s a little sizzle reel that gives a very general idea:

Autonomous vehicle manufacturers have not yet settled on any canonical set of sensors that AVs should have, but something like Oculii could give radar a more prominent place — its limitations sometimes mean it is relegated to situations like emergency braking detection at the front. With more detail and more data, radar could play a larger role in AV decision-making systems.

The company is definitely making deals — it’s working with Tier-1 suppliers and OEMs, one of which (Hella) is an investor, which suggests confidence in Oculii’s approach. It’s also working with radar makers and has commercial contracts targeting a 2024-2025 timeline.

CG render of Oculii's two radar units.

Image Credits: Oculii

It’s also getting into making its own all-in-one radar units, doing the hardware-software synergy thing. It claims these are the world’s highest-resolution radars, and I don’t see any competitors out there contradicting this — the simple fact is radars don’t compete much on “resolution,” but more on the precision of their rangefinding and speed detection.

One exception might be Echodyne, which uses a metamaterial radar surface to direct a customizable radar beam anywhere in its field of view, examining objects in detail or scanning the whole area quickly. But even then its “resolution” isn’t so easy to estimate.

At any rate the company’s new Eagle and Falcon radars might be tempting to manufacturers working on putting together cutting-edge sensing suites for their autonomous experiments or production driver-assist systems.

It’s clear that with radar tipped as a major component of autonomous vehicles, robots, aircraft and other devices, it’s worth investing seriously in the space. The $55M B round certainly demonstrates that well enough. It was, as Oculii’s press release lists it, “co-led by Catapult Ventures and Conductive Ventures, with participation from Taiwania Capital, Susquehanna Investment Group (SIG), HELLA Ventures, PHI-Zoyi Capital, R7 Partners, VectoIQ, ACVC Partners, Mesh Ventures, Schox Ventures, and Signature Bank.”

The money will allow for the expected scaling and hiring, and as Hong added, “continued investment of the technology to deliver higher resolution, longer range, more compact and cheaper sensors that will accelerate an autonomous future.”