Nvidia’s Ampere GPUs come to Google Cloud

Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.

The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though in most benchmarks you'll see improvements closer to 6x or 7x) and tops out at about 19.5 TFLOPS of standard single-precision (FP32) performance and 156 TFLOPS for Tensor Float 32 (TF32) workloads.
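As a quick sanity check on those vendor figures (assuming the quoted 19.5 TFLOPS FP32 and 156 TFLOPS TF32 peak numbers), the headline TF32 gain works out to 8x over plain single precision on the same chip:

```python
# Peak throughput figures quoted by Nvidia for the A100 (vendor numbers,
# not independently measured benchmarks).
fp32_tflops = 19.5   # standard single-precision (FP32) peak
tf32_tflops = 156.0  # Tensor Float 32 peak on the Tensor Cores

# TF32 trades mantissa precision for throughput; at peak that is an
# 8x improvement over plain FP32.
speedup = tf32_tflops / fp32_tflops
print(f"TF32 peak speedup over FP32: {speedup:.0f}x")
```

The larger 20x claim compares against the previous-generation V100, and mixes in precision and sparsity features, which is why real-world benchmarks land well below it.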

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market NVIDIA A100 GPUs, just as we were with NVIDIA’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
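Provisioning one of these instances should look like any other Compute Engine VM creation, just with an A2 machine type. A minimal sketch with the `gcloud` CLI; the machine-type name, zone and image family here are assumptions based on Google's A2 naming and may differ for your project:

```shell
# Sketch: create a 16-GPU A2 instance (assumes the a2-megagpu-16g
# machine type and zone availability; check
# `gcloud compute machine-types list` for what your project can use).
gcloud compute instances create my-a100-vm \
    --zone us-central1-a \
    --machine-type a2-megagpu-16g \
    --maintenance-policy TERMINATE \
    --image-family common-cu110 \
    --image-project deeplearning-platform-release
```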

Quantum Machines announces QUA, its universal language for quantum computing

It’s a busy week in the world of quantum computing. Today, Tel Aviv-based Quantum Machines, a startup building a software and hardware stack for controlling and operating quantum computers, announced the launch of QUA, a new language that it calls the first “standard universal language for quantum computers.”

Quantum Machines CEO Itamar Sivan likened QUA to developments like Intel’s x86 and Nvidia’s CUDA, both of which provide the low-level tools for developers to get the most out of their hardware.

Quantum Machines’ own control hardware is essentially agnostic with regard to the underlying quantum technology its customers want to use. The idea here is that if the company manages to make its own hardware the standard for controlling these systems, then its language will, almost by default, become the standard as well. And while it’s a ‘universal’ language in the technical sense, it is, at least for now, meant to run on Quantum Machines’ own Quantum Orchestration Platform, which it announced earlier this year.

“QUA is basically the language of the Quantum Orchestration Platform,” Sivan told me. “But beyond that, QUA is what we believe the first candidate to become what we define as the ‘quantum computing software abstraction layer.’”

He argued that we are now at the right stage for the development of this layer because the underlying hardware has matured and because these systems are now fully programmable.

In his view, this is akin to what happened in classical computing, too. “The transition from having just specific circuits — physical circuits for specific algorithms — to the stage at which the system is programmable is the dramatic point. Basically, you have a software abstraction layer and then, you get to the era of software and everything accelerated.”

Image Credits: Quantum Machines

Sivan actually believes that for the time being, developers will want languages that give them a lot of direct control over the hardware, because for the foreseeable future, that’s what’s necessary to harness the advantages of quantum computing. “If you want to squeeze out everything quantum computers can give you, you better use low-level languages in the first place,” he argued.

For low-level developers, Sivan argues, QUA will represent a paradigm shift. “They shift from having to develop many, many things in an iterative way to actually having a language that can support even their wildest dreams — their wildest quantum algorithm dreams,” he said. “This is a real paradigm shift and these guys are experiencing it in its full capacity — and it’s not only the accelerated process of programming and working, but also the capabilities themselves. Once everything is programmed in QUA and then compiled to the Quantum Orchestration Platform, you also get the full benefit of the underlying hardware.”


The company argues that QUA is the first language to combine quantum operations at the pulse level with universal classical operations. Quantum Machines also built a compiler, XQP, which optimizes programs for the specific underlying hardware, in this case compiling down to Quantum Machines’ Pulse Processor assembly language.

The company obviously needs to do all of this in order to create an ecosystem and a community around its language. Of course, if its Quantum Orchestration Platform becomes widely used — and it already has an impressive list of users today — then QUA will also see wide adoption.

“It’s one thing to build a beautiful language,” said Sivan. “But it’s another thing to develop it to be both beautiful and supported by an underlying hardware that is then adopted by itself. And then, the adoption of QUA is also led by the adoption of the Quantum Orchestration Platform, which is itself driven by the capabilities, nothing else.”

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
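The GPU integration follows the standard Kubernetes device-plugin model: once the plugin is installed on a node, workloads request GPUs through an extended resource name rather than talking to the driver directly. A minimal sketch of such a pod spec; the pod name and container image are illustrative placeholders:

```yaml
# Minimal pod requesting one Nvidia GPU via the device plugin's
# extended resource (names and image are illustrative placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.0-base
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

The scheduler then places the pod only on nodes that advertise a free `nvidia.com/gpu` resource, which is what makes a pre-installed device plugin useful out of the box.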

In addition to the product updates, Mirantis is also launching three new support options that give customers 24×7 coverage for all support cases, for example, as well as enhanced SLAs for remote managed operations, designated customer success managers, and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from high-flying OpenStack platform vendor to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90 percent of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and become profitable in the last quarter, despite the COVID-19 pandemic.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for features like infrastructure independence, its developer focus, security and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”