Written by Devin Coldewey

80% of the 22 million comments on net neutrality rollback were fake, investigation finds

Of the 22 million comments submitted to the FCC regarding 2017’s controversial rollback of net neutrality, some 18 million were fake, an investigation by the New York Attorney General’s office has found. The broadband industry funded the fraudulent creation of about 8.5 million of those, while a 19-year-old college student submitted 7.7 million, and the remainder came from unknown but spurious sources.

The damning report, issued today, is the result of years of work; the AG’s office set up a tip line early on so people could report fraudulent comments, and no doubt received plenty, as people were already independently finding their own names, dead relatives, and other obviously fake submissions in the record.

It turns out that a huge number of these comments were paid for by a consortium of broadband companies called Broadband for America, which laid out about $4.2 million for the purpose. They contracted with several “lead generator” companies, the kind of shady operations that offer you free trials of “male enhancement pills” or the like if you fill out a form — in this case, asking the person to write an anti-net-neutrality comment.

As if that wasn’t bad enough, the lead generation companies didn’t even bother plying their shady trade in what passes for an honest way; instead they fabricated the lists and comments with years-old data and in one case with identities stolen in a major data breach. The practice was near universal:

In all, six lead generators funded by the broadband industry engaged in fraud. As a result, nearly every comment and message the broadband industry submitted to the FCC and Congress was fake, signed using the names and addresses of millions of individuals without their knowledge or consent.

The broadband companies are off the hook on a technicality, since they were careful to firewall themselves from the practices of those they were contracting with, even though the record shows it was plain that the information being collected and used was fraudulent. But because the actions were, ostensibly, independently taken by the enterprising lead generators, the buck stops there.

Notably, these scams were also involved in more than a hundred other advocacy campaigns, including submitting over a million fake comments for an EPA proceeding and millions of other letters and digital comments.

The wholesale undermining of the processes of government earned fines of $3.7M, $550K, and $150K for Fluent Inc, React2Media, and Opt-Intelligence respectively. There are also “comprehensive reforms” imposed on them, though it may be best not to expect much from those.

Internet rights advocacy organization Fight for the Future issued a king-size “I told you so” noting that they had flagged this process at the time and helped bring it to the attention of both government officials and ordinary folks.

Another 7.7 million fake comments were submitted by a single person, a California college student who simply combined a fake name generation site with a disposable email service to produce plausible identities. The student automated the individual comment submission process, and somehow the FCC’s systems didn’t flag it. Another unknown person used similar means to submit a further 1.6 million fake comments.

Acting FCC Chairwoman Jessica Rosenworcel said in a statement that “Today’s report demonstrates how the record informing the FCC’s net neutrality repeal was flooded with fraud. This was troubling at the time because even then the widespread problems with the record were apparent. We have to learn from these lessons and improve because the public deserves an open and fair opportunity to tell Washington what they think about the policies that affect their lives.”

Indeed at the time Rosenworcel suggested delaying the vote, joining many in the country who felt the scale of the shenanigans warranted further investigation — but then-Chairman Ajit Pai brushed aside their concerns, one of many decisions that have considerably tarnished his legacy.

Altogether it’s a pretty sad situation, and the broadband companies and their lobbyists get off without so much as a slap on the wrist. The NY AG report has a variety of recommendations, some of which no doubt have already been implemented or suggested as the FCC’s comment debacle became clear, but the bad guys definitely won this time.

Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80M round the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11M to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform certain complex calculations fundamental to machine learning in a flash — literally. Instead of using charge, logic gates, and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. The approach has been possible for years, but until recently it wasn’t practical to do at scale, let alone for a purpose as valuable as this one.

Prototype to product

It wasn’t entirely clear in 2018 when Lightmatter was getting off the ground whether this tech would be something they could sell to replace more traditional compute clusters like the thousands of custom units companies like Google and Amazon use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships and delaying projects, the pandemic put Lightmatter months behind schedule, but the company came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

A rack of Lightmatter servers.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it… we’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, now it has taken a more solid form in the Envise, which the company calls a “general purpose photonic AI accelerator.” It’s a server unit designed to fit into normal datacenter racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (For now it’s limited to certain types of calculations, namely linear algebra rather than complex logic, but this type of math happens to be a major component of machine learning processes.)
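To see why a linear-algebra accelerator matters for AI, consider that a neural network’s inference pass is dominated by matrix multiplications. Here’s a minimal, purely illustrative sketch in NumPy (the shapes and layers are invented, not Lightmatter’s): the two matmul lines are the kind of work a photonic accelerator would offload, while the nonlinearity stays in conventional silicon.

```python
import numpy as np

# Illustrative sketch only: a tiny two-layer inference pass. The matrix
# multiplications (the @ operations) are the linear-algebra workload a
# photonic accelerator targets; shapes here are invented for the example.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 512))      # input activations
W1 = rng.standard_normal((512, 2048))  # first layer weights
W2 = rng.standard_normal((2048, 10))   # second layer weights

h = np.maximum(x @ W1, 0.0)  # matmul (offloadable) + ReLU (electronic)
y = h @ W2                   # another matmul (offloadable)
print(y.shape)               # (1, 10)
```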

Harris was reluctant to provide exact numbers on performance improvements, more because those figures are still climbing than because they aren’t impressive. The company’s website suggests the Envise is 5x faster than an NVIDIA A100 unit on a large transformer model like BERT, while using about 15 percent of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy to run it. Either better performance or lower energy cost would be great; both together is irresistible.

Lightmatter’s initial plan is to test these units with its most likely customers by the end of 2021, refining them and bringing them up to production levels so they can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in datacenters, where as much as 10 percent of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

A Lightmatter chip with its logo on the side.

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane-sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And because light waves of different wavelengths don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, adding another makes three — with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple, or 20x the performance makes for a nice roadmap.
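That scaling roadmap can be put in back-of-envelope form. The sketch below assumes ideal, linear scaling across non-interfering wavelength channels; the base rate is the article’s hypothetical figure, not a Lightmatter spec.

```python
# Back-of-envelope wavelength multiplexing: each added laser color runs the
# same photonic circuit independently, so aggregate throughput scales roughly
# linearly with the number of wavelengths (assuming ideal, non-interfering
# channels). The base rate is the article's hypothetical figure, not a spec.
BASE_RATE = 1_000_000  # calculations/sec with a single red laser (hypothetical)

def throughput(n_wavelengths: int, per_channel_rate: int = BASE_RATE) -> int:
    """Aggregate calculation rate under ideal linear scaling."""
    return n_wavelengths * per_channel_rate

for n in (1, 2, 3, 20):
    print(f"{n} wavelength(s): {throughput(n):,} calc/sec")
```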

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many small individual computers, thousands and thousands of them, working in perfect synchrony. In order for them to do so, they need to communicate constantly to make sure each core knows what other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem building an exa-scale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are fairly bulky at the scales at which chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)

The optical interconnect board is called Passage, and like the multicolor calculation, it’s slated for a future generation of Envise products. For the present, 5-10x performance at a fraction of the power will have to satisfy potential customers.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own datacenters and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support, and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners, and Spark Capital. It brings the company’s total raised to about $113 million: the initial $11M A round, then GV hopping on with a $22M A-1, then this $80M B.

Although there are other companies pursuing photonic computing and its potential applications, especially in neural networks, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that the demand for AI-related computing is increasing far faster than existing technology can supply it, short of building ever larger datacenters.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris suggested earlier, what his company has made is potentially transformative for the industry, and if so there’s no hurry; if there’s a gold rush, they’ve already staked their claim.

Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55M round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.

Radar’s advantages lie in its superior range, and in the fact that its radio frequency beams can pass through things like raindrops, snow, and fog — making it crucial for perceiving the environment during inclement weather. Lidar and ordinary visible light cameras can be totally flummoxed by these common events, so it’s necessary to have a backup.

But radar’s major disadvantage is that, due to the wavelengths and how the antennas work, it can’t image things in detail the way lidar can. You tend to get very precisely located blobs rather than detailed shapes. It still provides invaluable capabilities in a suite of sensors, but if anyone could add a bit of extra fidelity to its scans, it would be that much better.

That’s exactly what Oculii does — take an ordinary radar and supercharge it. The company claims a 100x improvement to spatial resolution accomplished by handing over control of the system to its software. Co-founder and CEO Steven Hong explained in an email that a standard radar might have, for a 120 degree field of view, a 10 degree spatial resolution, so it can tell where something is with a precision of a few degrees on either side, and little or no ability to tell the object’s elevation.

Some are better, some worse, but for the purposes of this example that amounts to an effectively 12×1 resolution. Not great!

Handing over control to the Oculii system, however, which intelligently adjusts the transmissions based on what it’s already perceiving, could raise that to a 0.5° horizontal x 1° vertical resolution, giving it an effective resolution of perhaps 240×10. (Again, these numbers are purely for explanatory purposes and aren’t inherent to the system.)

That’s a huge improvement and results in the ability to see that something is, for example, two objects near each other and not one large one, or that an object is smaller than another near it, or — with additional computation — that it is moving one way or the other at such and such a speed relative to the radar unit.
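The resolution arithmetic above boils down to dividing field of view by angular resolution to count distinguishable cells. Here’s a quick sketch using the article’s explanatory figures (not Oculii specs), and assuming a roughly 10° vertical field of view:

```python
def effective_cells(fov_deg: float, res_deg: float) -> int:
    """Number of distinguishable angular cells across a field of view."""
    return int(fov_deg / res_deg)

# Conventional radar: 120° FOV at 10° resolution, essentially no elevation data.
print(effective_cells(120, 10))  # 12 horizontal cells

# Software-enhanced: 0.5° horizontal, 1° vertical (assuming a ~10° vertical FOV).
print(effective_cells(120, 0.5), "x", effective_cells(10, 1))  # 240 x 10
```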

Here’s a video demonstration of one of their own devices, showing considerably more detail than one would expect:

Exactly how this is done is part of Oculii’s proprietary magic, and Hong did not elaborate much on how exactly the system works. “Oculii’s sensor uses AI to adaptively generate an ‘intelligent’ waveform that adapts to the environment and embed information across time that can be leveraged to improve the resolution significantly,” he said. (Integrating information over time is what gives it the “4D” moniker, by the way.)

Here’s a little sizzle reel that gives a very general idea:

Autonomous vehicle manufacturers have not yet hit on any canonical set of sensors that AVs should have, but something like Oculii could give radar a more prominent place — its limitations sometimes mean it is relegated to emergency braking detection at the front or some such situation. With more detail and more data, radar could play a larger role in AV decision-making systems.

The company is definitely making deals — it’s working with Tier-1s and OEMs, one of which (Hella) is an investor, which gives a sense of confidence in Oculii’s approach. It’s also working with radar makers and has some commercial contracts looking at a 2024-2025 timeline.

CG render of Oculii's two radar units.

Image Credits: Oculii

It’s also getting into making its own all-in-one radar units, doing the hardware-software synergy thing. It claims these are the world’s highest resolution radars, and I don’t see any competitors out there contradicting this — the simple fact is radars don’t compete much on “resolution,” but more on the precision of their rangefinding and speed detection.

One exception might be Echodyne, which uses a metamaterial radar surface to direct a customizable radar beam anywhere in its field of view, examining objects in detail or scanning the whole area quickly. But even then its “resolution” isn’t so easy to estimate.

At any rate the company’s new Eagle and Falcon radars might be tempting to manufacturers working on putting together cutting-edge sensing suites for their autonomous experiments or production driver-assist systems.

It’s clear that with radar tipped as a major component of autonomous vehicles, robots, aircraft and other devices, it’s worth investing seriously in the space. The $55M B round certainly demonstrates that well enough. It was, as Oculii’s press release lists it, “co-led by Catapult Ventures and Conductive Ventures, with participation from Taiwania Capital, Susquehanna Investment Group (SIG), HELLA Ventures, PHI-Zoyi Capital, R7 Partners, VectoIQ, ACVC Partners, Mesh Ventures, Schox Ventures, and Signature Bank.”

The money will allow for the expected scaling and hiring, and as Hong added, “continued investment of the technology to deliver higher resolution, longer range, more compact and cheaper sensors that will accelerate an autonomous future.”