Microsoft launches Open Service Mesh

Microsoft today announced the launch of a new open-source service mesh based on the Envoy proxy. The Open Service Mesh is meant to be a reference implementation of the Service Mesh Interface (SMI) spec, a standard interface for service meshes on Kubernetes that has the backing of most of the players in this ecosystem.

The company plans to donate Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure that it is community-led and has open governance.

“SMI is really resonating with folks and so we really thought that there was room in the ecosystem for a reference implementation of SMI where the mesh technology was first and foremost implementing those SMI APIs and making it the best possible SMI experience for customers,” Microsoft partner program manager (and CNCF board member) Gabe Monroy told me.

He also added that, because SMI provides the lowest common denominator API design, Open Service Mesh gives users the ability to “bail out” to raw Envoy if they need some more advanced features. This “no cliffs” design, Monroy noted, is core to the philosophy behind Open Service Mesh.

As for its feature set, SMI covers the standard service mesh capabilities you’d expect, including securing communication between services with mTLS, managing access control policies, monitoring services and more.
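As a rough illustration of what those access control policies look like, here is a sketch of an SMI TrafficTarget resource. The API group and version follow the SMI spec as it stood around mid-2020 and may differ in later releases, and the service account and route names are invented for this example:

```yaml
# Illustrative SMI TrafficTarget: grants workloads running as the "frontend"
# service account permission to call workloads running as "backend",
# restricted to the named HTTP route group.
# API version is from the SMI spec circa mid-2020; check the current spec.
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: backend-access
  namespace: default
spec:
  destination:
    kind: ServiceAccount
    name: backend
    namespace: default
  sources:
    - kind: ServiceAccount
      name: frontend
      namespace: default
  rules:
    - kind: HTTPRouteGroup
      name: backend-routes
      matches:
        - api
```

Because any SMI-conformant mesh consumes the same resources, a policy like this is, in principle, portable between Open Service Mesh and other SMI implementations.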

There are plenty of other service mesh technologies in the market today, though. So why would Microsoft launch this?

“What our customers have been telling us is that solutions that are out there today, Istio being a good example, are extremely complex,” he said. “It’s not just me saying this. We see the data in the AKS support queue of customers who are trying to use this stuff — and they’re struggling right here. This is just hard technology to use, hard technology to build at scale. And so the solutions that were out there all had something that wasn’t quite right and we really felt like something lighter weight and something with more of an SMI focus was what was going to hit the sweet spot for the customers that are dabbling in this technology today.”

Monroy also noted that Open Service Mesh can sit alongside other solutions like Linkerd, for example.

A lot of pundits expected Google to also donate its Istio service mesh to the CNCF. That move didn’t materialize. “It’s funny. A lot of people are very focused on the governance aspect of this,” he said. “I think when people over-focus on that, you lose sight of how are customers doing with this technology. And the truth is that customers are not having a great time with Istio in the wild today. I think even folks who are deep in that community will acknowledge that and that’s really the reason why we’re not interested in contributing to that ecosystem at the moment.”

Cloudflare launches Workers Unbound, the next evolution of its serverless platform

Cloudflare today announced the private beta launch of Workers Unbound, the latest step in its efforts to offer a serverless platform that can compete with the likes of AWS Lambda.

The company first launched its Workers edge computing platform in late 2017. Today it has “hundreds of thousands of developers” who use it and in the last quarter alone, more than 20,000 developers built applications based on the service, according to the company. Cloudflare also uses Workers to power many of its own services, but the first iteration of the platform had quite a few limitations. The idea behind Workers Unbound is to do away with most of those and turn it into a platform that can compete with the likes of AWS, Microsoft and Google.

“The original motivation for us building Cloudflare Workers was not to sell it as a product but because we were using it as our own internal platform to build applications,” Cloudflare co-founder and CEO Matthew Prince told me ahead of today’s announcement. “Today, Cloudflare Teams, which is our fastest-growing product line, is all running on top of Cloudflare workers and it’s allowed us to innovate as fast as we have and stay nimble and stay agile and all those things that get harder as you become a larger and larger company.”

Prince noted that Cloudflare aims to expose all of the services it builds for its internal consumption to third-party developers as well. “The fact that we’ve been able to roll out a whole Zscaler competitor in almost no time is because of the fact that we had this platform and we could build on it ourselves,” he said.

The original Workers service will continue to operate (but under the Workers Bundled moniker) and essentially become Cloudflare’s serverless platform for basic workloads that only run for a very short time. Workers Unbound — as the name implies — is meant for more complex and longer-running processes.

When it first launched Workers, the company said that its killer feature was speed. Today, Prince argues that speed obviously remains an important feature — and Workers Unbound promises to essentially do away with cold start latency. But developers also adopted the platform because of its ability to scale and its price.

Indeed, Workers Unbound, Cloudflare argues, is now significantly more affordable than similar offerings. “For the same workload, Cloudflare Workers Unbound can be 75 percent less expensive than AWS Lambda, 24 percent less expensive than Microsoft Azure Functions, and 52 percent less expensive than Google Cloud Functions,” the company says in today’s press release.

As it turned out, the fact that Workers was also an edge computing platform was basically a bonus but not necessarily why developers adopted it.

Another feature Prince highlighted is regulatory compliance. “I think the thing we’re realizing as we talk to our largest enterprise customers is that for real companies — not just the individual developer hacking away at home — but for real businesses in financial services or anyone who has to deal with a regulated industry, the only thing that trumps ease of use is regulatory compliance, which is not sexy or interesting or anything else but like if your GC says you can’t use XYZ platform, then you don’t use XYZ platform and that’s the end of the story,” Prince noted.

Speed, though, is of course something developers will always care about. Prince stressed that the team was quite happy with the 5ms cold start times of the original Workers platform. “But we wanted to be better,” he said. “We wanted to be the clearly fastest serverless platform forever — and the only number that we know no one else can beat is zero — unless they invent a time machine.”

The way the team engineered this is by spinning up the Worker process while the client and server are still negotiating their TLS handshake. “We’re excited to be the first cloud computing platform that [offers], for no additional costs, out of the box, zero millisecond cold start times which then also means less variability in the performance.”

Cloudflare also argues that developers can update their code and have it go live globally within 15 seconds.

Another area the team worked on was making it easier to use the service in general. Among the key new features here is support for languages like Python and a new SDK that will allow developers to add support for their favorite languages, too.
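At its core, a Workers function is a handler that maps a Fetch API Request to a Response. As a rough, hypothetical sketch in TypeScript (not Cloudflare’s official example — the exact wiring into the runtime depends on the Workers runtime version):

```typescript
// Hypothetical, minimal edge request handler in the style of a Cloudflare
// Worker. Request, Response and URL are standard Fetch API types that the
// Workers runtime implements.
export async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  // Respond directly at the edge, without a round trip to an origin server.
  return new Response(`Hello from the edge: ${url.pathname}`, {
    status: 200,
    headers: { "content-type": "text/plain" },
  });
}
```

In a deployed Worker, a handler like this would be registered with the runtime’s fetch event, e.g. `addEventListener("fetch", (e) => e.respondWith(handleRequest(e.request)))` in the service-worker style API.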

Prince credits Cloudflare’s ability to roll out this platform, which is obviously heavy on compute resources — and to keep it affordable — to the fact that it always thought of itself as a security platform first (the team has often said that the CDN functionality was more or less incidental). Because it performed deep packet inspection, for example, the company’s servers always featured relatively high-powered CPUs. “Our network has been optimized for CPU usage from the beginning and as a result, it’s actually made it much more natural for us to extend our network that way,” he explained. “To this day, the same machines that are running our firewall products are the same machines that are running our edge computing platform.”

Looking ahead, Prince noted that while Workers and Workers Unbound feature a distributed key-value store, the team is looking at adding a more robust database infrastructure and distributed storage.

The team is also looking at how to decompose applications to put them closest to where they will be running. “You could imagine that in the future, it might be that you write an application and we say, ‘listen, the parts of the application that are sensitive to the user of the database might run in Portland, where you are — but if the database is in Ashburn, Virginia, then the parts that are sensitive to latency in the database might run there,’” he said.

 

Effx raises $3.9M for its DevOps monitoring platform

Effx, a startup that aims to give developers better insights into their microservice architectures, today announced that it has raised a $3.9 million funding round led by Kleiner Perkins and Cowboy Ventures. Other investors and angels in this round include Tokyo Black, Essence VC Fund, Jason Warner, Michael Stoppelman, Vijay Pandurangan and Miles Grimshaw.

The company’s founder and CEO, Joey Parsons, was an early employee at Rackspace and later worked at Flipboard before joining Airbnb a few years ago, where he built out the company’s site reliability team.

“When I first joined Airbnb, it’s the middle of 2015, it’s already a unicorn, already a well-known entity in the industry, but they had nobody there that was really looking after cloud infrastructure and reliability there […],” he told me. The original Airbnb platform was built on Ruby on Rails and wasn’t able to scale to the demands of the growing platform anymore. “Myself and a lot of people that were really smarter than me from the team there got together and we decided at that point, ‘okay, let’s break apart this monolith, or monorail as we call it, and break it up into microservices.’”

But microservices obviously come with their own challenges — they constantly change, after all, and those changes are reflected in different UIs — and that’s essentially where the idea for Effx came from. The idea behind the product is to give engineers a single pane of glass to get all of the information they need about the microservices that have been deployed across their organization.

At Airbnb, Parsons’ team built out a small metastore to track what each service did, who owned it, what language it was written in and whether it was in scope for PCI or GDPR, for example. After leaving Airbnb, Parsons went to Kleiner as an entrepreneur in residence and started working on bringing some of what the team had built at Airbnb to more companies. He raised a small amount of money from Kleiner to hire the initial engineering team in 2019 and then started testing the product with a first set of pilot customers earlier this year.

In its early iterations, the product relied on engineers writing YAML files, which the product could then consume, but few engineers love writing YAML files and the value in a tool like this comes from being able to automate a lot of this work. So the team built out integrations with common service orchestration platforms, including Kubernetes, but also AWS Lambda and ECS.
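To make the YAML-based approach concrete, a service-metadata file of the kind described above might look something like this. To be clear, every field name here is invented for illustration — Effx’s actual schema isn’t described in this article:

```yaml
# Hypothetical service-metadata file — field names are made up for
# illustration and are not Effx's actual format.
name: payments-api
description: Handles charge and refund requests
owner: payments-team
language: go
tier: 1            # criticality tier
compliance:
  pci: true        # in scope for PCI
  gdpr: false
links:
  runbook: https://wiki.example.com/payments/runbook
  repo: https://git.example.com/payments-api
```

The appeal of the platform integrations is precisely that most of these fields can be discovered automatically from Kubernetes, Lambda or ECS instead of being hand-maintained.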

“What we’ve found is that most companies that have been moving towards microservices are using some combination of those platforms — maybe one, maybe two, maybe all three — to orchestrate things,” Parsons explained. “So we built really heavy integrations into those platforms to where in Kubernetes we can drop a client in there, it automatically discovers all your services, populates as much as it can into the catalog from that and then does the same thing for an AWS Lambda or ECS perspective where we consume data from those platforms and pull data in.”

As Parsons noted, the value here isn’t just in getting that single pane of glass. Once you have all of this information and the services’ dependencies and combine it with your CI/CD data, the product also becomes a troubleshooting tool, since it helps you see which services changed before something broke. To better enable this, teams can also add links to their runbooks, documentation and version control tools.

Parsons tells me that the team is currently in the process of closing more pilots and hiring more engineers as it works to build out its service, add more integrations and find new ways to help its customers make use of all the data it gathers.

“As the future of what we’re building comes more into fruition, the most important thing for us right now is to really deliver on the value that our existing product delivers to our end users as a platform to build more business,” Parsons explained. “I think that in the long run, the power of this feed and getting the data that’s behind it ends up being a really interesting moat for us simply because there’s a lot of great insights that you can build for organizations based on like the patterns and the cadence of information that shows up in this feed, to help teams really understand why there’s that incident that happens every Tuesday at midnight UTC.”