Three Myths about Service Mesh: Service Mesh Day Remarks from Tetrate CEO Varun Talwar
Tetrate CEO Varun Talwar kicked off Service Mesh Day, the first ever industry conference on service mesh, with a few words about what had brought the standing-room-only crowd, from a variety of organizations and industries, together.
From the 10,000-foot view, compute density is growing. Users need more compute, network and storage capacity. The shift to microservices and containers has enabled organizations to keep growing with necessary speed, but has opened the door to the networking problems encapsulated by the well-known eight fallacies of distributed computing.
Enter service mesh.
“The way we think of service mesh is an application-aware networking layer,” said Talwar. “And when I say applications, I mean everything. I don’t just mean containers. I mean, brownfield, greenfield applications on containers, virtual machines, bare metal and serverless functions.”
Talwar welcomed an amazing lineup of conference speakers. They included service mesh stalwarts like Envoy creator Matt Klein; Eric Brewer, VP of infrastructure at Google Cloud; and Larry Peterson, CTO of the Open Networking Foundation, who would talk about how modern networking is moving to the application layer. They also included end users from organizations like Yelp, Square, Salesforce, ING and more who are deploying Envoy and thinking about app security rather than perimeter security, and about services rather than servers. Speakers from the cloud providers, Nick Coult (AWS) and Prajakta Joshi (Google Cloud), would describe how they’re building policy-based mesh into public cloud environments to control traffic. Check out the full playlist.
But before kicking off the agenda, Talwar sought to set straight a few myths about service mesh:
Myth #1: You do service mesh after Kubernetes.
In fact, you can adopt mesh before Kubernetes, using it to containerize and move from VMs to containers, as Tetrate engineer Dhi Aurrahman and Google Cloud’s Prajakta Joshi would later describe.
Myth #2: Service mesh only works in containers.
Service mesh can work equally well on VMs and containers. This would be the topic of a session on Istio and Envoy for VM and Kubernetes workloads presented by Tetrate’s Shriram Rajagopalan.
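As a rough illustration of mesh beyond containers (not from the talk): recent Istio versions let a VM join the mesh by registering it as a `WorkloadEntry`. The names and address below are hypothetical, and the exact VM onboarding steps vary by Istio version:

```yaml
# Hypothetical sketch: register a billing service running on a VM so
# that mesh sidecars can route to it like any Kubernetes workload.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: billing-vm
spec:
  address: 10.0.0.12        # the VM's IP reachable from the mesh
  labels:
    app: billing            # matched by a Service/ServiceEntry selector
  serviceAccount: billing-sa
```

Once registered, the VM gets an identity and receives traffic under the same routing and security policies as container workloads.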
Myth #3: Service mesh is hard to adopt.
Adopters tend to begin using service mesh in a three-step journey. Most start at ingress, because it’s less complex than taking the mesh all the way into individual services. Second, users take requests from ingress all the way to an actual sidecar on a running workload, in what’s often called east-west traffic management. And in step three, they introduce security from ingress to the running workload.
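Step one of that journey can be sketched as a minimal Istio ingress configuration. This is an illustrative example, not from the talk; the hostname and service name (`shop.example.com`, `shop`) are hypothetical:

```yaml
# Hypothetical ingress-only starting point: expose one service through
# the mesh's ingress gateway before adopting sidecars anywhere else.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: shop-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shop-routes
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - shop-gateway
  http:
  - route:
    - destination:
        host: shop          # in-cluster service name
        port:
          number: 8080
```

Steps two and three extend the same routing and security policies from the gateway down to sidecars on every workload.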
Tetrate’s offerings tame the complexities of service mesh adoption. GetEnvoy provides organizations with certified, compliant builds of Envoy. Without peace of mind about security compliance and the ability to upgrade, companies won’t get close to putting Envoy into production. Apache SkyWalking, an APM and observability tool founded by Tetrate engineer Sheng Wu and widely adopted in China, integrates with service mesh and answers operators’ need for a unified, meaningful map of their entire network’s performance. And the newly announced Tetrate Q adopts Next Generation Access Control (NGAC) for the multi-cloud world, to be described in an NGAC session with David Ferraiolo of NIST and Tetrate engineer Ignasi Barrera.
Service Mesh Day was organized by Tetrate and sponsored by Google Cloud, Juniper Networks, Capital One, Cloud Foundry, AWS, the Cloud Native Computing Foundation, the Open Networking Foundation (ONF) and the OpenStack Foundation.
All right. Welcome, everyone, to the first Service Mesh Day. My name is Varun. I am the founder and CEO of Tetrate, and thank you all for coming. This is the first industry conference focused on service mesh. I would like to thank all of you, our speakers, and most importantly all of our sponsors: CNCF, ONF, and the OpenStack Foundation; our diamond sponsor Google Cloud; gold sponsor Juniper Networks; silver sponsor Capital One; and bronze sponsors Cloud Foundry Foundation and AWS. Look at who is here: within this community there are end users, startups, and cloud providers. More than 200 organizations registered, and it looks like quite a few showed up. That’s great. I’m super excited about it; there is clearly interest in this space.
But before we get into the amazing lineup of speakers, I want to spend just five minutes setting context for the day: what are we hearing about service mesh, and why is it even an interesting area? If you step back from all the noise, what is happening in the industry is that compute is growing and going everywhere; both the amount of compute and its density are growing. These numbers come from one of the largest networking companies in the world, from research covering 2016 to 2021: it’s not just network storage that is growing at 27 percent; compute is growing correspondingly, and compute density is growing both in the data center and in the cloud. So what does it mean when you have lots of distributed compute? The networking to connect it, which is an n-by-n problem, is what becomes extremely complex. And that’s the space we are here to talk about: networking.
The other thing that’s happening, because of containers and microservices (one of the factors behind growing compute density), is this: the promise of microservices is great. You move fast; every team runs at its own speed. But we all know that when you break one process into many processes, you run into all of the networking problems commonly known as the eight fallacies of distributed computing. We’re all familiar with those: we can’t assume a whole set of characteristics about the network.
So how do we think about service mesh? Today you’ll hear from stalwarts like Eric Brewer, the VP of infrastructure at Google Cloud, and Larry Peterson, the CTO of the Open Networking Foundation, on how networking is moving to the application layer. The way we think of service mesh is as an application-aware networking layer. And when I say applications, I mean everything. I don’t just mean containers; I mean brownfield and greenfield applications, VMs, serverless functions, and the next thing that’s yet to be discovered. That’s how we think about the space. As complexity moves into the network, there are a few concepts we have to rethink. You’ll hear from Matt Klein, who will go over Envoy and its roadmap, and how Envoy was one of the first proxies that thought about proxying to services, not IP addresses.
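That idea can be sketched in Envoy configuration (an illustrative snippet, not from the talk; the cluster and host names are hypothetical). A route targets a named cluster, and the cluster resolves endpoints via service discovery rather than a hard-coded IP:

```yaml
# Hypothetical Envoy v3 fragment: traffic is addressed to a logical
# service ("billing"); endpoint resolution is delegated to discovery
# (DNS here, or EDS in a full mesh), not pinned to an IP address.
static_resources:
  clusters:
  - name: billing
    type: STRICT_DNS        # re-resolve endpoints as they change
    load_assignment:
      cluster_name: billing
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: billing.internal   # service name, not an IP
                port_value: 9000
```

Because the proxy holds the service abstraction, endpoints can come and go without clients ever knowing an address.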
You’ll hear from end users of Envoy like Yelp, Square, and Salesforce; there are plenty of people in this room who are deploying Envoy and thinking in terms of services, not servers. You’ll hear from cloud providers, Nick from AWS and Prajakta from Google Cloud, on how they are putting policy-based mesh into their public cloud environments to manage traffic policies. You’ll hear from end users like ING who are putting security into the application and thinking about app security rather than perimeter security. As you go through the day, I think you’ll see how this rethink is happening.
But before we kick off the great agenda, I want to debunk a few myths about service mesh. I call them myths because I think they’re not real, but there’s quite a bit of confusion around them.
Myth number one: you do service mesh after Kubernetes. There’s a talk today from Dhi, one of the engineers at Tetrate, together with Prajakta, and they’ll talk about how you can do mesh before you adopt Kubernetes, and why you would use mesh to containerize and go from VMs to containers.
Myth number two: service mesh only works in containers. There was a reason that, when I was at Google, I and a bunch of others fought for Istio to be a separate project, and I still believe that was the right call. Service mesh can work equally well on VMs and containers, and there’s a talk today from Shriram, one of the engineers at Tetrate, on how you run Istio natively on both VMs and containers.
Myth number three: service mesh is hard to adopt. On my next slide I’ll cover how we are seeing adoption happen in steps, and I think that is how mesh is going to be adopted. What we are seeing is somewhat of a three-step journey. Most people are starting from ingress, which is step one. Why is that? Primarily, it’s an easier concept: it’s less intrusive, and it’s a pattern people already know. You get to learn Envoy before you take it all the way into individual services, where you get into protocols, performance characteristics, and all of the complexity. Step two is when you take requests from ingress all the way to an actual sidecar on a running workload, which is often called east-west traffic management. And step three is when you start to introduce security all the way from ingress to the running workload. That’s the adoption path we are seeing for service mesh.
So with that, I’ll take thirty seconds for a preview of what Tetrate is up to. A few weeks ago we launched GetEnvoy, which is a way for you to get certified, compliant builds of Envoy. For any end user to actually put Envoy in production, you need that peace of mind: you need to know it’s secure and compliant, and that you can upgrade it; without that, you’re not getting it anywhere close to production. One other project is SkyWalking. Wu Sheng, who is here somewhere, is its author. It’s an APM project done right for services, and I’m happy to note that it’s graduating soon as a top-level Apache project. He has worked hard to integrate it with service mesh and Istio, and it’s widely adopted in China.
Something that I’m really, really excited about, which we announced an hour ago, is Tetrate Q. This is a fresh look at access control that we are doing in collaboration with NIST. You’ll hear from David, who is here somewhere in the back of the room; David and Ignasi will touch on what this is, but it’s about access control for modern infrastructure. So that’s what we are up to. With that, I would like to invite our first speaker, Larry Peterson. Larry is the CTO of the Open Networking Foundation, a director of Stanford Platform Labs, and a professor at Princeton University. I think of him as the father of SDN, the guy who created SDN and NFV. Today we’re super excited to have him here to talk about how he thinks about the future of networking in a service mesh and multicloud world. Please welcome Larry.