OctoML raises $15M to make optimizing ML models easier

OctoML, a startup founded by the team behind the Apache TVM machine learning compiler stack project, today announced that it has raised a $15 million Series A round led by Amplify Partners, with participation from Madrona Venture Group, which led its $3.9 million seed round. The core idea behind OctoML and TVM is to use machine learning to optimize machine learning models so they can run more efficiently on different types of hardware.

“There’s been quite a bit of progress in creating machine learning models,” OctoML CEO and University of Washington professor Luis Ceze told me. “But a lot of the pain has moved to, once you have a model, how do you actually make good use of it in the edge and in the clouds?”

That’s where the TVM project comes in. It was launched by Ceze and his collaborators at the University of Washington’s Paul G. Allen School of Computer Science & Engineering and is now an Apache incubating project. Because it has seen quite a bit of usage and support from major companies like AWS, ARM, Facebook, Google, Intel, Microsoft, Nvidia and Xilinx, the team decided to form a commercial venture around it, which became OctoML. Today, even Amazon Alexa’s wake word detection is powered by TVM.

Ceze described TVM as a modern operating system for machine learning models. “A machine learning model is not code, it doesn’t have instructions, it has numbers that describe its statistical modeling,” he said. “There’s quite a few challenges in making it run efficiently on a given hardware platform because there’s literally billions and billions of ways in which you can map a model to specific hardware targets. Picking the right one that performs well is a significant task that typically requires human intuition.”
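
Ceze’s description maps fairly directly onto the open-source TVM workflow. As a rough illustration, here is a minimal sketch of compiling a model for one hardware target with TVM’s Python (Relay) API; the ONNX file name, input shape and target string are assumptions, and the heavy lifting of picking good mappings happens in TVM’s tuning layers on top of this.

```python
# A minimal sketch of compiling a model for a specific hardware target with
# Apache TVM's Python API (the Relay interface of that era). The model file,
# input name/shape and target string are illustrative assumptions.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet50.onnx")              # hypothetical trained model
shape_dict = {"data": (1, 3, 224, 224)}              # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# The target string is where the "billions of mappings" decision starts:
# e.g. "llvm -mcpu=skylake-avx512" for a server CPU, "cuda" for an Nvidia GPU,
# or "llvm -mtriple=aarch64-linux-gnu" for an Arm edge device.
target = "llvm -mcpu=skylake-avx512"

with tvm.transform.PassContext(opt_level=3):         # apply graph- and operator-level optimizations
    lib = relay.build(mod, target=target, params=params)

lib.export_library("resnet50_optimized.so")          # hardware-specific deployable artifact
```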

And that’s where OctoML and its “Octomizer” SaaS product, which it also announced today, come in. Users can upload their model to the service, and it will automatically optimize, benchmark and package it for the hardware they specify and in the format they want. More advanced users also have the option to add the service’s API to their CI/CD pipelines. These optimized models run significantly faster because they can fully leverage the hardware they run on, but what many businesses may care about even more is that these more efficient models also cost less to run in the cloud, or let them get the same results from cheaper, less powerful hardware. For some use cases, TVM already results in 80x performance gains.
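
The article doesn’t document the Octomizer API itself, but the workflow it describes (upload a model, have it optimized, benchmarked and packaged for a chosen target, then pull the result from a CI/CD job) could look roughly like the sketch below. Every URL, endpoint, parameter and response field here is a hypothetical placeholder.

```python
# Purely illustrative sketch of wiring a hosted model-optimization service into a CI job.
# The base URL, endpoints, parameters and response fields are hypothetical placeholders,
# not OctoML's documented Octomizer API.
import os
import time
import requests

API = "https://octomizer.example.com/v1"             # hypothetical base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OCTOMIZER_TOKEN']}"}

# 1. Upload the trained model produced earlier in the pipeline.
with open("model.onnx", "rb") as f:
    model = requests.post(f"{API}/models", headers=HEADERS, files={"model": f}).json()

# 2. Ask the service to optimize, benchmark and package it for a chosen target and format.
job = requests.post(
    f"{API}/models/{model['id']}/optimize",
    headers=HEADERS,
    json={"target": "aws-c5.xlarge", "package_format": "python-wheel"},
).json()

# 3. Poll until the job finishes, then download the packaged, optimized model.
while requests.get(f"{API}/jobs/{job['id']}", headers=HEADERS).json()["status"] != "done":
    time.sleep(30)

artifact = requests.get(f"{API}/jobs/{job['id']}/artifact", headers=HEADERS)
with open("model_optimized.whl", "wb") as out:
    out.write(artifact.content)
```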

Currently, the OctoML team consists of about 20 engineers. With this new funding, the company plans to expand its team. Those hires will mostly be engineers, but Ceze also stressed that he wants to hire an evangelist, which makes sense, given the company’s open-source heritage. He also noted that while the Octomizer is a good start, the real goal here is to build a more fully featured MLOps platform. “OctoML’s mission is to build the world’s best platform that automates MLOps,” he said.

Activity-monitoring startup Zensors repurposes its tech to help coronavirus response

Computer vision techniques used for commercial purposes are turning out to be valuable tools for monitoring people’s behavior during the present pandemic. Zensors, a startup that uses machine learning to track things like restaurant occupancy, lines, and so on, is making its platform available for free to airports and other places desperate to take systematic measures against infection.

The company, founded two years ago but covered by TechCrunch in 2016, was among the early adopters of computer vision as a means to extract value from things like security camera feeds. It may seem obvious now that cameras covering a restaurant can and should count open tables and track that data over time, but a few years ago it wasn’t so easy to come up with or accomplish that.

Since then Zensors has built a suite of tools tailored to specific businesses and spaces, like airports, offices, and retail environments. They can count open and occupied seats, spot trash, estimate lines, and all that kind of thing. Coincidentally, this is exactly the kind of data that managers of these spaces are now very interested in watching closely given the present social distancing measures.

Zensors co-founder Anuraag Jain told Carnegie Mellon University, which the company was spun out of, that it had received a number of inquiries from the likes of airports about applying the technology to public health considerations.

Software that counts how many people are in line can be easily adapted to, for example, estimate how close people are standing and send an alert if too many people are congregating or passing through a small space.
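
As a rough illustration of that adaptation, the sketch below takes per-frame person detections, assumed to already be mapped to ground-plane coordinates in meters, and emits alerts for close contacts and overcrowding; the thresholds and logic are assumptions for illustration, not Zensors’ actual pipeline.

```python
# Illustrative sketch: turning per-frame person detections into distancing alerts.
# Assumes detections have already been mapped to ground-plane coordinates in meters;
# the thresholds and alerting logic are assumptions, not Zensors' implementation.
from itertools import combinations
import math

MIN_DISTANCE_M = 2.0   # alert if two people are closer than this
MAX_OCCUPANCY = 10     # alert if more people than this share the monitored zone

def distancing_alerts(people):
    """people: list of (x, y) ground-plane positions in meters for one camera frame."""
    alerts = []
    if len(people) > MAX_OCCUPANCY:
        alerts.append(f"overcrowding: {len(people)} people in zone (limit {MAX_OCCUPANCY})")
    for (x1, y1), (x2, y2) in combinations(people, 2):
        if math.hypot(x1 - x2, y1 - y2) < MIN_DISTANCE_M:
            alerts.append(f"close contact: ({x1:.1f},{y1:.1f}) and ({x2:.1f},{y2:.1f})")
    return alerts

# Example frame: five detections, two of them standing close together.
print(distancing_alerts([(0, 0), (0.8, 0.4), (5, 5), (8, 2), (12, 7)]))
```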

“Rather than profiting off them, we thought we would give our help for free,” said Jain. And so, for the next two months at least, Zensors is providing its platform for free to “selected entities who are on the forefront of responding to this crisis, including our airport clients.”

The system has already been augmented to answer COVID-19-specific questions like whether there are too many people in a given area, when a surface was last cleaned and whether cleaning should be expedited, and how many of a given group are wearing face masks.

Airports surely track some of this information already, but perhaps in a much less structured way. Using a system like this could be helpful for maintaining cleanliness and reducing risk, and no doubt Zensors hopes that having had a taste via what amounts to a free trial, some of these users will become paying clients. Interested parties should get in touch with Zensors via its usual contact page.

Google and UCSF collaborate on machine learning tool to help prevent harmful prescription errors

Machine learning experts working at Google Health have published a new study in tandem with the University of California San Francisco (UCSF)’s computational health sciences department that describes a machine learning model the researchers built that can anticipate normal physician drug prescribing patterns, using a patient’s electronic health records (EHR) as input. That’s useful because around 2 percent of patients who end up hospitalized are affected by preventable mistakes in medication prescriptions, some instances of which can even lead to death.

The researchers describe the system as working in a similar manner to the automated, machine learning-based fraud detection tools commonly used by credit card companies to alert customers to possibly fraudulent transactions: they essentially build a baseline of normal consumer behavior based on past transactions, and then alert the bank’s fraud department or freeze access when they detect behavior that is not in line with an individual’s baseline.

Similarly, the model trained by Google and UCSF worked by identifying any prescriptions that “looked abnormal for the patient and their current situation.” That’s a much more challenging proposition for prescription drugs than for consumer activity, because courses of medication, their interactions with one another, and the specific needs, sensitivities and conditions of any given patient all present an incredibly complex web to untangle.

To make it possible, the researchers used electronic health records from de-identified patients that include vital signs, lab results, prior medications and medical procedures, as well as diagnoses and changes over time. They paired this historical data with current state information and built various models to attempt to output an accurate prediction of a course of prescription for a given patient.

Their best-performing model was accurate “three quarters of the time,” Google says, meaning it matched what a physician actually decided to prescribe in most cases. It was even more accurate (93%) at predicting at least one medication that would fall within the top ten list of a physician’s most likely medicine choices for a patient, even if its top choice didn’t match the doctor’s.
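
As a rough sketch of how such ranked predictions could eventually drive the kind of flagging system the researchers have in mind, the example below marks a prescription for review when it doesn’t appear in the model’s top-ten list; the model output, medication names and cutoff are illustrative assumptions, since the Google/UCSF model itself isn’t public.

```python
# Illustrative sketch of turning a ranked prescription model into a review flag.
# The medication names and the k=10 cutoff are assumptions for illustration;
# the Google/UCSF model itself is not publicly available.
from typing import List

def flag_for_review(prescribed: str, ranked_predictions: List[str], k: int = 10) -> bool:
    """Flag a prescription if it does not appear among the model's top-k likely medications."""
    return prescribed not in ranked_predictions[:k]

# Hypothetical ranked output of a model scoring one patient's current record:
ranked = ["metoprolol", "lisinopril", "furosemide", "aspirin", "atorvastatin",
          "insulin glargine", "heparin", "pantoprazole", "warfarin", "amlodipine"]

print(flag_for_review("lisinopril", ranked))    # False: in line with expected prescribing
print(flag_for_review("methotrexate", ranked))  # True: unusual for this patient, route to review
```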

The researchers are quick to note that although the model has so far been fairly accurate at predicting a normal course of prescription, that doesn’t mean it can yet detect deviations from that norm with a high degree of accuracy. Still, it’s a good first step on which to build that kind of flagging system.