Archives


Google Cloud opens its Seoul region

Google Cloud today announced that its new Seoul region, its first in Korea, is now open for business. The region, which it first talked about last April, will feature three availability zones and support for virtually all of Google Cloud’s standard services, ranging from Compute Engine to BigQuery, Bigtable and Cloud Spanner.

With this, Google Cloud now has a presence in 16 countries and offers 21 regions with a total of 64 zones. The Seoul region (with the memorable name of asia-northeast3) will complement Google’s other regions in the area, including two in Japan, as well as regions in Hong Kong and Taiwan, but the obvious focus here is on serving Korean companies with low-latency access to its cloud services.
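For companies that want that low-latency access, using the new region is just a matter of specifying one of its zones when launching resources. As a sketch (the instance name and machine type here are illustrative, not from the announcement):

```shell
# Launch a Compute Engine VM in zone "a" of the new Seoul region.
gcloud compute instances create demo-vm \
    --zone=asia-northeast3-a \
    --machine-type=n1-standard-1
```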

“As South Korea’s largest gaming company, we’re partnering with Google Cloud for game development, infrastructure management, and to infuse our operations with business intelligence,” said Chang-Whan Sul, the CTO of Netmarble. “Google Cloud’s region in Seoul reinforces its commitment to the region and we welcome the opportunities this initiative offers our business.”

Over the course of this year, Google Cloud also plans to open more zones and regions in Salt Lake City, Las Vegas and Jakarta, Indonesia.

Google Cloud gets a new family of cheaper general-purpose compute instances

Google Cloud today announced the launch of its new E2 family of compute instances. These new instances, which are meant for general-purpose workloads, offer a significant cost benefit, with savings of around 31 percent compared to the current N1 general-purpose instances.

The E2 family runs on standard Intel and AMD chips, but as Google notes, it pairs them with a custom-built CPU scheduler “that dynamically maps virtual CPU and memory to physical CPU and memory to maximize utilization.” In addition, the new system is smarter about where it places VMs, with the added flexibility to move them to other hosts as necessary. Google says this scheduler offers “significantly better latency guarantees and co-scheduling behavior than Linux’s default scheduler,” promising sub-microsecond wake-up latencies and faster context switching.

That gives Google efficiency gains that it then passes on to users in the form of these savings. Chances are, we will see similar updates to Google’s other instance families over time.

It’s interesting to note that Google is clearly willing to pit this offering against those of its competitors. “Unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing,” the company writes in today’s announcement. “This performance is the result of years of investment in the Compute Engine virtualization stack and dynamic resource management capabilities.” It’ll be interesting to see some benchmarks that pit the E2 family against similar offerings from AWS and Azure.

As usual, Google offers a set of predefined instance configurations, ranging from 2 vCPUs with 8 GB of memory to 16 vCPUs and 128 GB of memory. For very small workloads, Google Cloud is also launching a set of E2-based instances that are similar to the existing f1-micro and g1-small machine types. These feature 2 vCPUs, 1 to 4 GB of RAM and a baseline CPU performance that ranges from the equivalent of 0.125 vCPUs to 0.5 vCPUs.
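A quick back-of-the-envelope on the configurations above (the vCPU, memory and baseline figures come from the announcement; the ratios are derived here, not quoted by Google):

```python
# Endpoints of the predefined E2 range, as stated in the announcement.
small = {"vcpus": 2, "mem_gb": 8}
large = {"vcpus": 16, "mem_gb": 128}

for cfg in (small, large):
    cfg["gb_per_vcpu"] = cfg["mem_gb"] / cfg["vcpus"]

print(small["gb_per_vcpu"])  # 4.0 GB of memory per vCPU at the low end
print(large["gb_per_vcpu"])  # 8.0 GB of memory per vCPU at the high end

# Shared-core E2 instances: 2 vCPUs with a fractional CPU baseline.
baseline_share_low = 0.125 / 2   # each burstable vCPU guaranteed ~6.25%...
baseline_share_high = 0.5 / 2    # ...to 25% of a physical core, on average
print(baseline_share_low, baseline_share_high)  # 0.0625 0.25
```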

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for storage and compute, than regular Fargate pricing. In return, though, you have to accept that your tasks may get terminated when AWS needs the additional capacity. While that means Fargate Spot may not be right for every workload, there are plenty of applications that can easily handle an interruption.
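Handling that interruption typically comes down to catching the stop signal the orchestrator sends before it reclaims capacity. A minimal sketch of a graceful-shutdown hook in Python (the simulated signal at the end stands in for the orchestrator; real workloads would drain in-flight work in the handler):

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Flag a graceful shutdown when the orchestrator reclaims capacity."""
    global shutting_down
    shutting_down = True
    # A real service would stop accepting new work and drain
    # in-flight requests here before the container is stopped.

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the stop signal the orchestrator sends before termination.
os.kill(os.getpid(), signal.SIGTERM)
print("draining:", shutting_down)  # draining: True
```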

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here: an hour of vCPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price.
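Working from those two per-vCPU-hour figures, the spot discount comes out to roughly 69 percent:

```python
# Per-vCPU-hour prices quoted above.
spot_price = 0.01245364    # Fargate Spot, USD per vCPU-hour
on_demand_price = 0.04048  # Fargate on-demand, USD per vCPU-hour

# Fractional discount of spot relative to on-demand.
discount = 1 - spot_price / on_demand_price
print(f"Fargate Spot is ~{discount:.0%} cheaper than on-demand")
```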

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
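That 70/30 split maps onto the relative weights in an ECS capacity provider strategy. A sketch using the built-in Fargate providers (cluster, service and task-definition names are illustrative; an EC2 split would instead reference custom-named Auto Scaling group capacity providers):

```shell
# Run roughly 70% of tasks on on-demand Fargate, the rest on Fargate Spot.
aws ecs create-service \
    --cluster demo-cluster \
    --service-name demo-service \
    --task-definition demo-task \
    --desired-count 10 \
    --capacity-provider-strategy \
        capacityProvider=FARGATE,weight=7 \
        capacityProvider=FARGATE_SPOT,weight=3
```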

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outpost, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.