Build an enterprise-grade service mesh with Traffic Director

Kelsey Hightower
Principal Engineer at Google
Stewart Reichling
Product Manager, Networking at Google
Google Cloud Next 2020
July 14, 2020, Online, San Francisco, CA, USA

About speakers

Kelsey Hightower
Principal Engineer at Google
Stewart Reichling
Product Manager, Networking at Google

Kelsey is a Principal Engineer at Google, and a huge open source contributor, working on projects that aid software developers and operations professionals in building and shipping cloud native applications. He is also an accomplished author and keynote speaker with a knack for demystifying complex topics and enabling others to succeed.

About the talk

Does your architecture rely on more than one service? Do these services need to talk to each other? Do you worry about what will happen when a service becomes unreachable? If you answered "yes" to any of these questions, learn how Traffic Director can help you build out and seamlessly scale your architecture to thousands of backends.

This session provides an overview of Traffic Director, Google’s managed control plane for open service mesh, including edge proxies, middle proxies, sidecar proxies, and more. See how real customers are solving architecture challenges at enterprise scale and preview new features that solve common pain points in large-scale service deployments.

Speakers: Kelsey Hightower, Stewart Reichling

Watch more:

Google Cloud Next ’20: OnAir → https://goo.gle/next2020

Subscribe to the GCP Channel → https://goo.gle/GCP

#GoogleCloudNext

NET206

My name is Stewart Reichling from Google Cloud. I'm the product manager working on Traffic Director, and with me is Kelsey Hightower, who is going to be doing a demo later today. We're here to talk about building an enterprise-grade service mesh with Traffic Director. I'm going to go through a little bit of pretext and prelude for why a product like Traffic Director even exists and why you should consider it. Then I'll talk about what Traffic Director is, and Kelsey will do a demo that gives you a good sense of how you interact with this product and how it actually works. I'll talk through some of the problems that customers are solving with Traffic Director, and then we'll go through some of the announcements.

So what is Traffic Director? Let's think it through with a very simple model. I talk to customers all the time with problems like this: a retailer wants to make their application production ready. As a retailer you might be doing very normal retail things: you have a shopping cart service that helps your customers add items to their carts, and then a payment service that allows your customers to pay for the stuff they want to buy. Really straightforward stuff, the types of problems you might want to solve as a retailer.

Let's look at what that looks like once you start trying to make it production ready. Here I have some code. It's not great, but it's the kind of thing I might write: a checkout function, a charge-customer function, and then a function that calls both of those. Very straightforward, the types of things you might want to do as a retailer. And you'll notice here that there is this line of code that is the payment endpoint. This is the address that the shopping cart service is going to call. In this simple model you've got some business logic, which is the shopping cart, plus that payments endpoint, and not much more. Pretty straightforward: you're just calling this thing from your shopping cart.
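
To make the shape of that code concrete, here is a minimal sketch of the kind of thing Stewart is describing. The endpoint address, function names, and request format are hypothetical stand-ins rather than the actual slide code; the point is simply that the payments endpoint is hardwired into the business logic.

```go
package main

import (
	"fmt"
	"net/http"
)

// paymentEndpoint is the single, hardcoded address of the payments backend.
// The address is a placeholder; what matters is that it lives directly
// inside the shopping cart's business logic.
const paymentEndpoint = "http://10.0.0.5:8080/charge"

// chargeCustomer calls the payments backend for the given amount.
func chargeCustomer(amountCents int) error {
	resp, err := http.Post(fmt.Sprintf("%s?amount=%d", paymentEndpoint, amountCents), "text/plain", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("payment failed: %s", resp.Status)
	}
	return nil
}

// checkout is the business logic: total up the cart, then charge the customer.
func checkout(cartTotalCents int) error {
	return chargeCustomer(cartTotalCents)
}

func main() {
	if err := checkout(2599); err != nil {
		fmt.Println("checkout error:", err)
		return
	}
	fmt.Println("checkout complete")
}
```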

Now you get to a production scenario where you need to make this work for the holiday season, and a lot of customers means that payments backend gets overloaded. So when you need more capacity, you create another payments backend, and your shopping cart now needs to be able to call another one. You have two instances of your payments backend, and your shopping cart needs to figure out how to send traffic between them. Back to the code: here is the exact same thing I showed you before (I hid some of the checkout stuff, because you're going to see this starts to become pretty clunky, pretty fast). Rather than calling the one payment endpoint, you need to be able to call two. One way you might do this is to write a function that wraps 10.0.0.5, and this 10.0.0.6 as well, and randomly chooses between them. This is a pretty straightforward, not too complicated way of implementing round-robin-style load balancing, so that I occasionally send traffic to each of those payments backends. But you'll notice that whereas before I only had that one endpoint, I now have two endpoints, and I've also had to think about how I choose between them.
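
A sketch of that "wrap both endpoints and pick one" step might look like the following; the two addresses are hypothetical stand-ins for the two payments backends on the slide.

```go
package main

import (
	"fmt"
	"math/rand"
)

// paymentEndpoints now lists every payments backend the shopping cart knows about.
var paymentEndpoints = []string{
	"http://10.0.0.5:8080/charge",
	"http://10.0.0.6:8080/charge",
}

// pickPaymentEndpoint chooses a backend at random, a crude form of
// client-side load balancing.
func pickPaymentEndpoint() string {
	return paymentEndpoints[rand.Intn(len(paymentEndpoints))]
}

func main() {
	// The shopping cart now carries load-balancing logic that has nothing
	// to do with selling goods.
	for i := 0; i < 4; i++ {
		fmt.Println("sending charge to", pickPaymentEndpoint())
	}
}
```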

Put yourself in the shoes of a retailer: what does this stuff have to do with selling goods, services, and products to your customers? It was pretty straightforward; we just figured out how we could add another payments backend and balance between them. But you'll also notice that, as you go through your shopping cart code, you're now spending more of your time on this non-business logic. What I mean by non-business logic is basically code that doesn't have very much to do with delivering your business at all; it has to do with things like resiliency and uptime, dealing with problems you might face that are more infrastructure level than actually delivering goods to your end customer.

So, OK, you've got a little bit more capacity. Now for a new requirement: I want to make sure that the shopping cart only sends traffic to payments backends that are actually up. Rather than just randomly choosing one, which is what we did here, we're going to actually test those endpoints and make sure they really are healthy before we send traffic to them. And so, rather than a plain get-load-balanced-endpoint function, we update that function with an is-healthy check. I won't show how to write that here, but basically what it would do is figure out, for that IP address, whether the endpoint is actually healthy: go and check the endpoints and see whether they are healthy or not. These are more and more infrastructure-level problems, networking problems, and that's where you're dedicating more and more of your time. And so, like we saw before, your code that is not business logic continues to grow.
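
The next step described here, only sending traffic to endpoints that pass a health check, might be sketched like this; the /healthz path and the timeout are assumptions, not details from the talk.

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

var paymentEndpoints = []string{"http://10.0.0.5:8080", "http://10.0.0.6:8080"}

// isHealthy probes a backend's (assumed) /healthz path with a short timeout.
func isHealthy(endpoint string) bool {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// pickHealthyEndpoint filters out unhealthy backends before choosing one:
// yet more infrastructure code living inside the shopping cart.
func pickHealthyEndpoint() (string, error) {
	var healthy []string
	for _, ep := range paymentEndpoints {
		if isHealthy(ep) {
			healthy = append(healthy, ep)
		}
	}
	if len(healthy) == 0 {
		return "", fmt.Errorf("no healthy payments backends")
	}
	return healthy[rand.Intn(len(healthy))], nil
}

func main() {
	ep, err := pickHealthyEndpoint()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("sending charge to", ep)
}
```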

And I ask you again: what does that have to do with selling services to your customers? The requirements I just showed you are pretty straightforward ones, but a lot of customers have requirements like: I have payments backends and other backends, I need to gracefully handle timeouts, traffic needs to be encrypted, I need tracing, I need to dynamically scale different backends up and down. All of these things, I would posit, are non-business logic, where engineering effort goes to more and more infrastructure concerns, which takes away time and resources from delivering your payments, from delivering your shopping cart, from delivering your actual business.

The good news is that what I just showed you is a little bit contrived; customers have found ways to solve this for a long time, by doing things like running a load balancer in between the shopping cart and the payments backends. Here that load balancer is basically a proxy: the shopping cart sends traffic to it, the load balancer figures out where that traffic should go, and all your non-business logic lives in that load balancer. But even this pattern has its flaws. The load balancer sits in between, and as you have more and more client instances sending traffic through it, it starts to become a bottleneck. You now have to figure out: how do I scale this thing up, and how do I make sure it doesn't go down? Other problems emerge as well. Maybe you also have a support service, which is important but less critical, and I see plenty of customers who send traffic from both critical services like the shopping cart and less critical services like support through the same proxy. Issues around scaling it, or keeping it up, mean an issue caused by one service becomes an issue for your other services, and so now you're dealing with isolation problems on top of everything else.

Over the past couple of years we've seen a pattern that helps solve this: you have a dedicated client-side load balancer, often a sidecar proxy, that runs right next to your shopping cart. On the left-hand side you see an instance of that shopping cart.

It has its own sidecar proxy handling the routing and load-balancing logic for the shopping cart. If you want to scale up and you add another shopping cart instance, that has its own sidecar as well, so both of those operate independently: if one shopping cart instance, or one shopping cart instance's sidecar proxy, is misbehaving, the other one is unaffected. It's a nice, simple way to get rid of those isolation concerns, and it means the configuration is highly tailored just to that shopping cart, not shared across multiple services. That's the sidecar proxy model we've seen emerge over the last couple of years.

So far so good: you've basically solved a lot of the problems I described earlier, but now you're getting into a new game, which is that you need to figure out how you're going to configure those proxies. One way you might do that (and I just made this up, it's a proxy.conf file; imagine a YAML file) is a config file that contains the configuration for your sidecar: the endpoints for the payments backend, maybe how you're going to do health checks for those. It's got the information that the proxy needs in order to support that shopping cart. So, OK, fine, writing a proxy.conf, no big deal, right? But what happens when you have a second service? Well, you write yourself another proxy config. I think you can already see where I'm going with this: you've now effectively taken on an additional responsibility, which is, how do I write, distribute, and store proxy configuration? In this case it's still pretty trivial, but when I talk to customers who are doing really enterprise-grade stuff, they have many more services, each running their own sidecar proxies or middle proxies, with things like encryption turned on or different load balancing policies. And so now you've got yourself into the game of, as you expand the number of services and the number of regions, putting checks and validation in place for proxy config files and making sure there are no issues in those config files. And I'll go back to the same question: if you are a retailer, what does that have to do with selling goods and services to customers?

To solve this, there's a solution called a control plane. A control plane is a service that is basically responsible for distributing configuration, making sure the proxy config gets to the right sidecar proxies. It might have an API, and it might have different checks to make sure the sidecar proxies are healthy. You've probably seen things like this before; Istio is a great example. Istio has a control plane component that runs in your cluster, and it has APIs to tell it "this is what the world should look like, these are the policies that I want enforced," and then it will take that and distribute it to all of your sidecar proxies.

So I talked about Istio as an example of a control plane. Another example of a control plane is Traffic Director. Traffic Director is a control plane as a service: it's a GCP-managed service, and I'll talk a little bit more about what makes Traffic Director special. At its core, there's nothing for you to install or update or any of that stuff. It runs within GCP, and it knows about the world because you tell it what the world should look like; it also collects signals, so it's really intelligent, and it generates that configuration for your proxies.

So what is Traffic Director? Well, there are four things that I want you to keep in mind. First, Traffic Director is a universal managed control plane. What do I mean by universal? One part of that is that it's global: your network is a global thing. And plenty of customers are running on virtual machines and on containers; Traffic Director doesn't make a distinction about which services it's supporting. If you've got, for example, an Envoy proxy on a virtual machine or in a container, it connects to Traffic Director using a set of APIs known as the xDS APIs, and Traffic Director sends configuration back. The other thing that's really cool is that it's not just GCP. We're starting to see, and you'll see more about this later, customers who have deployments in non-GCP environments and other clouds.

For us, being able to support those other environments is really important, and you'll see in a second what that means in practice. Traffic Director is global; it's something bigger than just GCP and something bigger than just VMs. To reinforce that further: Traffic Director is about more than just a service mesh. A service mesh is one type of deployment that Traffic Director can support, and it works really well with service mesh, but we're also starting to see customers working with more than just the sidecar-proxy service mesh. We're seeing customers who are using application libraries without proxies (we'll talk about what that means in a second), and also customers who just deploy a proxy on a virtual machine that acts as a load balancer, similar to the model I showed earlier, the one before the sidecar-based model. You don't necessarily have to be ready to adopt sidecar proxies everywhere to get value from Traffic Director. Our goal is to support a lot more than just the traditional model where you have sidecar proxies next to application workloads; we want to follow you, wherever your deployment is and whatever your topology looks like.

The third thing to keep in mind is that Traffic Director is programmable. We've got a rich set of policies: traffic management policies, layer-7-based policies, and a lot of other interesting policies coming up as well. This gives you a centralized way to manage all of your configuration and your policies in a single place. You don't need to worry about distributing hand-crafted policies; it's a programmable interface that you can use to configure your application network.

The last thing about the solution is that it's a control plane as a service. What I mean here is that it's a managed service, managed by GCP. It's got an SLA, and we have SREs making sure that it doesn't go down; if there is an issue, we're taking care of that for you. It gets rid of a lot of the things that I would argue are not really related to your business: how do I install a control plane, how do I keep it up, how do I make sure it is replicated across multiple regions? These are really fundamentally difficult problems.

If you're a retailer, a bank, or a pharmaceutical company, that's not really your business, and you're not really deriving much value from taking on that responsibility. With that, I'm going to pass it over to Kelsey, who is going to make it real; he's going to show a demo. Kelsey, off to you.

All right, thank you, Stewart. We're going to jump right into it. As we just talked about, Traffic Director is kind of the center for all of our service configuration and the heart of our service mesh in GCP, and it's often paired with a thing called the Envoy sidecar. Some people look at it as just a proxy, but the nice thing about it is that Envoy, of course, supports the standard xDS protocol for its configuration. That means I can plug it into Traffic Director to grab all of its configuration; a lot of you will be familiar with that process if you've seen something like this before. But what happens if the overhead of having an additional sidecar like Envoy becomes too high, maybe in a high-performance scenario? Or what about the management overhead of having to manage another binary and its configuration? The interesting thing to talk about today is proxyless service mesh.

If you think about it, there are lots of high-performance frameworks out there, and gRPC is one of my favorites. Inside of gRPC we now have native xDS integration. What does that mean? Well, it means that we can actually get our configuration from Traffic Director just like Envoy does, and once we have our configuration, we can load balance across backends and do some of the traffic management features that you normally find in a broader service mesh or a proxy like Envoy.

Now, to set this up we need some services. I can run my services on VMs, but Kubernetes is one of my favorite platforms, so what we're looking at here is just a basic Kubernetes cluster called "next". If I drill into this Kubernetes cluster, I have a deployment of my calculator app, and I'll show you how that calculator app works in a moment; just know I have a set of containers deployed to Kubernetes. I have three of these, and they're set up with an autoscaler to go up to 10 if I need it. Typically, when you have multiple instances of something, you have a load balancer, so let's click into that. Here's my service configuration. You'll notice this is just a standard Kubernetes Service object; there is no load balancer at all.

All I have is a collection of pods and their IP addresses, so we're going to need something else to handle the load balancing aspects for us. The nice thing about the proxyless gRPC integration is that we can actually do this on the client side by getting the configuration from Traffic Director. Before we do that, we need to make sure we understand how the calculator app actually works. The thing to keep in mind is that we're not going to change the way we write code here. For example, I can write a standard gRPC application and test it on my laptop, so let's do that now.

I'm going to cd into this calculator directory, which is where the source code lives, and here's the server. I'll just go ahead and build that really quickly so we can take a detailed look at how it works. The server side of the calculator app, all it does is expose an endpoint where I can call an Add method, give it an array of numbers, and have it give us back a result. Once that's done compiling, we'll run it right here on my laptop. So here we go, we run it, and you can see that it starts up listening on port 50051, typically the default gRPC port, with my health checks on port 8080.
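
For readers who want to picture the server side, here is a minimal sketch of what such a gRPC service can look like. It assumes protoc-generated stubs in a hypothetical calculatorpb package with an Add RPC (the actual proto and module path from the demo aren't shown in the talk), so treat the names as illustrative.

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/calculator/calculatorpb" // hypothetical generated package
)

// server implements the calculator service: one Add RPC that sums numbers.
type server struct {
	pb.UnimplementedCalculatorServer
}

func (s *server) Add(ctx context.Context, req *pb.AddRequest) (*pb.AddResponse, error) {
	var sum int64
	for _, n := range req.Numbers {
		sum += n
	}
	return &pb.AddResponse{Result: sum}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051") // conventional default gRPC port
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterCalculatorServer(s, &server{})
	log.Println("calculator server listening on :50051")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```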

So that's running locally here. The next thing we'll do is take a look at the client. The client-side code is pretty straightforward: I'll cd into that directory and just compile the client, and this is going to be a very standard request-response gRPC application. As we're building the client, we'll take a look at its command-line flags to see what options we have. All right, now that the client has built, let's look at those flags. You'll see that we can give it a calculator flag to tell it where the calculator service is running; by default it's going to look at localhost on port 50051. Then we can give it a list of things to calculate. Pretty straightforward. So let's just run that now: we'll run the client and give it a couple of numbers, say 10 and 20, and you don't have to be great at math to get the answer for this one: we can see that the result is 30. Pretty straightforward. Now, what if we want to run the server somewhere else?
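
And here is a sketch of the plain client, again using the hypothetical calculatorpb stubs: a -calculator flag picks the target, defaulting to localhost, and the default DNS resolver does the lookup.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/calculator/calculatorpb" // hypothetical generated package
)

func main() {
	target := flag.String("calculator", "localhost:50051", "address of the calculator service")
	flag.Parse()

	// Plain gRPC: the target is resolved with the default (DNS) resolver.
	conn, err := grpc.Dial(*target, grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := pb.NewCalculatorClient(conn).Add(ctx, &pb.AddRequest{Numbers: []int64{10, 20}})
	if err != nil {
		log.Fatalf("add: %v", err)
	}
	fmt.Println("result:", resp.Result) // prints 30 against the local server
}
```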

Remember, I packaged every component of this calculator app and it's running inside of GKE, or Kubernetes. You'll see here that we have multiple instances running behind a service, and we also have it configured to listen on that same port, 50051. Now, what if I wanted to call this from a virtual machine instead of my laptop? Now we have to deal with the complexities of service discovery and making sure that we can load balance across all of those pods. So the first thing we've got to do, before we can do anything else, is configure Traffic Director.

I'm going to pop over to Traffic Director. What you'll see here is that I have one service healthy, but if I scroll down you'll see something interesting: I have one network endpoint group. When I created my service inside of Kubernetes, I told the GKE cluster to actually manage the network endpoint group that has all of those pods behind it. So whether we have three or ten, all of them will be grouped behind this network endpoint group. You'll also see the green checkmark here; what this is saying is that Traffic Director is doing its own set of health checks to make sure that all of those pods are actually healthy before we send the configuration, with those backends in it, to any client-side gRPC application looking to leverage them.

The last thing we need here is a way to communicate which service we want, and this is where the routing rules come into play. If you have any experience with Google Cloud load balancer products, this is going to look very familiar. We start with the gRPC protocol and we associate the set of services that we created earlier on the services tab. If you click on calculator here, what you'll see is that we give it a couple of name matches, so the hostname "calculator" plus that port combination will route us to this gRPC service called calculator. So now all the configuration is set on the server side.

The next thing we have to do, on the client side, is tell gRPC how to do this. You're probably thinking: do I have to rewrite my entire application to make this work? Well, the good news is that with one import statement we can make this work automatically. Let's pop over to GitHub really quickly so we can see what the code looks like.

Here we're going to click on our main code and zoom in a little bit so we can see what's going on. This is the only import that we have to add; it doesn't change the business logic, it just loads the client-side load balancing and service discovery components that make gRPC compatible with Traffic Director. So this gives us an xDS client with client-side load balancing. There's one more piece that we need: we have to tell that bit of code where Traffic Director is. So here we have a server URI; this is going to be our xDS control plane, in this case Traffic Director, and we're going to use our service account credentials so we can authenticate to it. When we register with Traffic Director, we also have to pass a little bit of information about ourselves, mainly our project number and which network we care about. One thing to keep in mind here is that my Kubernetes cluster is running in a network in my VPC that I've named after Google Kubernetes Engine, and that's super important: we have to let Traffic Director know that's the network in which we would like to find services.
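
To give a sense of what that bootstrap information looks like, here is a rough sketch that writes out a minimal Traffic Director bootstrap file. The field names follow the gRPC xDS bootstrap format as documented for Traffic Director, but the project number, network name, zone, and node ID are placeholders, and in practice Google provides tooling to generate this file, so treat the sketch as illustrative rather than authoritative.

```go
package main

import (
	"log"
	"os"
)

// A minimal xDS bootstrap pointing gRPC at Traffic Director. Values are placeholders.
const bootstrap = `{
  "xds_servers": [
    {
      "server_uri": "trafficdirector.googleapis.com:443",
      "channel_creds": [{ "type": "google_default" }]
    }
  ],
  "node": {
    "id": "projects/123456789/networks/google-kubernetes-engine/nodes/demo-client",
    "metadata": {
      "TRAFFICDIRECTOR_GCP_PROJECT_NUMBER": "123456789",
      "TRAFFICDIRECTOR_NETWORK_NAME": "google-kubernetes-engine"
    },
    "locality": { "zone": "us-central1-a" }
  }
}`

func main() {
	if err := os.WriteFile("td-bootstrap.json", []byte(bootstrap), 0o644); err != nil {
		log.Fatal(err)
	}
	// The client is then pointed at it, for example:
	//   export GRPC_XDS_BOOTSTRAP=$PWD/td-bootstrap.json
	log.Println("wrote td-bootstrap.json")
}
```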

The last thing is that Traffic Director is really intelligent about routing our traffic to, say, the nearest zone. So if I'm in one US region and I have multiple Kubernetes clusters with the same application deployed, it's nice to have Traffic Director use our locality information to give me an optimal set of backends, maybe for things like latency, or, if a zone is down, redirect me to another zone that's available.

All right, now that we have all of our configuration, let's copy this client binary to a VM. What we're going to do here is just copy the pre-built binary to one of the VMs that I have in my infrastructure. We'll take a look at that calculator client binary and make sure that we have it, you see it here, and then I'm just going to SCP this binary to a little VM of mine called proxyless-grpc; we'll just run that command really quickly. So what we're doing here is copying this binary, the one I just ran on my laptop with unmodified code, to that virtual machine. Now we're going to SSH into that virtual machine, and from there we'll have the ability to run the same code again, but this time we're going to try to hit the containers running in Kubernetes.

We need a couple of things. First, we need to make sure that our environment is set up correctly: we have to tell the xDS integration where our bootstrap file is. You'll see this environment variable being set to where, on my local filesystem, the xDS bootstrap configuration file is; we'll just source that to make sure our environment is ready. And this time we're going to do something slightly different with our client connection, because we're not dealing with localhost anymore. We're going to do something slightly different with the calculator flag: we're going to give it the xds scheme, and what this will do is tell the xDS library and all our client-side load balancing to look up the service inside of Traffic Director. That's going to be just enough so that it bypasses the normal lookup, in my case DNS by default. Let's give it a couple of numbers; we have 1 and 56, we run it now, and we see that we get the result of 57. Just to make sure that this is a live demo, let's pick two random numbers, say 999 and 1000, and run it again.
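
Putting those pieces together, a proxyless client differs from the plain client above in only two visible ways: a blank import of the gRPC xDS package and the xds scheme in the target. This sketch still relies on the hypothetical calculatorpb stubs, and the service name in the target is whatever your Traffic Director routing rule matches on, not necessarily the name used in the demo.

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	_ "google.golang.org/grpc/xds" // registers the xds resolver and balancers

	pb "example.com/calculator/calculatorpb" // hypothetical generated package
)

func main() {
	// The xds scheme tells gRPC to resolve this name through the control plane
	// named in the file referenced by GRPC_XDS_BOOTSTRAP (Traffic Director here),
	// bypassing DNS entirely.
	target := flag.String("calculator", "xds:///calculator-service:50051", "calculator target")
	flag.Parse()

	conn, err := grpc.Dial(*target, grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := pb.NewCalculatorClient(conn).Add(ctx, &pb.AddRequest{Numbers: []int64{1, 56}})
	if err != nil {
		log.Fatalf("add: %v", err)
	}
	// The client itself load balances across the backends Traffic Director reported.
	fmt.Println("result:", resp.Result)
}
```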

Everything is working great. Now, I know you might be thinking: this is a virtual machine, how hard would it be to install Envoy on the virtual machine and just use Envoy to keep everything standard? You'd make a good point, although it doesn't necessarily address the need for high performance if you want to avoid the proxy, but you're right: on a virtual machine we do have the ability to just run Envoy as a sidecar right on the machine and bypass doing any of this.

So there's another thing that I want to try. This might get me in trouble with the PM team, because it's not something that we advertise as working, but what's a live demo without a little bit of creativity? One environment that I like a lot, and that really makes sense for a proxyless integration with Traffic Director, is Cloud Run. You might remember Cloud Run: it's our serverless offering where I can run my containers. Earlier we showed this running on the laptop and running on a virtual machine. If you think about Cloud Run, I don't have the option of running any additional sidecars; I can only provide my application and that's it, so this particular environment is pretty constrained in terms of what I can do. The nice thing is that I now have the ability to do the exact same thing that I was doing on that virtual machine on a serverless offering like Cloud Run. You'll even notice here that I'm passing in a command-line flag that matches the exact same one that I was using earlier. We did do something a little bit different here: I took all of that client code that you saw earlier and moved it into an HTTP handler. So if this is all working, I should be able to call curl, hit the Cloud Run URL, and then the client code will basically use the exact same xds address that we were using before, call out to Traffic Director, and get all of the backends that it needs to route traffic to.
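
A sketch of that wrapper follows: the same xDS-backed gRPC client, just invoked from an HTTP handler so the Cloud Run container can be triggered with curl. The query parameter format and names are assumptions, not taken from the demo code.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"time"

	"google.golang.org/grpc"
	_ "google.golang.org/grpc/xds" // xds resolver, configured via GRPC_XDS_BOOTSTRAP

	pb "example.com/calculator/calculatorpb" // hypothetical generated package
)

func main() {
	// One shared connection; the xds target is resolved through Traffic Director.
	conn, err := grpc.Dial("xds:///calculator-service:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	client := pb.NewCalculatorClient(conn)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// e.g. curl "https://<cloud-run-url>/?numbers=1,3,5"
		var nums []int64
		for _, s := range strings.Split(r.URL.Query().Get("numbers"), ",") {
			if n, err := strconv.ParseInt(strings.TrimSpace(s), 10, 64); err == nil {
				nums = append(nums, n)
			}
		}
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()
		resp, err := client.Add(ctx, &pb.AddRequest{Numbers: nums})
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		fmt.Fprintf(w, "result: %d\n", resp.Result)
	})

	port := os.Getenv("PORT") // Cloud Run tells the container which port to listen on
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```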

Now, there's one more gotcha here: Cloud Run is using a slightly different network than my VPC, so I had to set up one more thing before I could run my application: a VPC connector. In the serverless world we can set up these connectors to be very specific to a particular network, in this case that Google Kubernetes Engine network I set up earlier, which is where my Kubernetes cluster and all of those containers are running, and which also matches the configuration that Traffic Director has. All I'm doing here is setting up this serverless VPC connector, called calculator, so that when I deploy my Cloud Run application I can also give it that VPC connector name, ensuring that when its container starts up it will also have a leg into that VPC. And the last thing: we can also replicate the fact that we have this xDS bootstrap config referenced in an environment variable, letting the library know where it is, just like on the VM.

Let's take one more peek at something else just to make sure we're all clear on how this actually works. If I go over here to the calculator repo, I want to show you the Dockerfile that I used to create that container, because I want to show you that we don't have to change the code much. There's a lot of noise here, so I'm just going to break it down: I'm building the calculator app just like I built it earlier on my laptop; the bit that's different here is that I'm taking the binary from that build and I'm also copying the xDS bootstrap config into the container image. The way to think about that is that the image has everything necessary for Traffic Director to work. Then I run one deployment script, and if everything is working, I should be able to call that particular endpoint.

So I have my curl command, and this is the URL from Cloud Run. What we'll do is pass in a few integers here to see if we can get this calculation, and you can see it comes back pretty fast. Those are pretty big numbers, and some of you may be struggling with the math, so let's make it slightly easier for the viewers: we'll give it 1, 3, and 5, and see that our answer is 9.

And with that, I'd like to end this part of the presentation. As you can see, Traffic Director is super flexible in terms of deployment, with a standard xDS protocol.

Combine that with the proxyless gRPC integration, and we can now take a standard xDS client and run it on my laptop, on a virtual machine, or even in a serverless environment like Cloud Run, and we don't have to manage any other binaries to get what feels like a native service mesh integration. With that, back to you, Stewart.

Very nice; thank you very much, Kelsey. So far we've gone through context on why something like Traffic Director exists and what it is, and Kelsey brought it to life by showing how someone might interact with it and some of the really cool things you can do with it. Another thing I wanted to get into, now that you've seen some hands-on work, is the real problems that customers are solving when they use Traffic Director. Let's go back to the case of a retailer, and I know I keep harping on retail, but as a large global retailer you really can't afford any downtime, especially with something like Black Friday coming up, where every single second you're down is money lost.

In front of you right now is an example of the type of deployment that you might see with Traffic Director. We've got Traffic Director running separately from your regions, separately from your deployments; it's a managed service that you don't have to manage yourself. In this example deployment we're running the same services in multiple regions. You might have a retail front end hosted in one US region, but you have customers elsewhere, and you want to make sure that when they send you a request, it goes to the nearest service, which also reduces your network traversal cost. You also want to make sure that you replicate services, so that if one goes down, traffic fails over automatically. Your shopping cart and each of these services are hosted in multiple regions so that you can meet those goals around latency, around cost, and around high availability. And so customers are using Traffic Director in combination with other GCP services: here on the left-hand side is the global HTTP(S) load balancer, which takes clients on the public internet and sends their traffic to your retail front end.

It's smart enough to send each request to the closest retail front end. Now say the shopping cart in one region is down for some reason, maybe a new version of the service that didn't work. Rather than having your retail front end send traffic to that shopping cart and return errors to the end user, traffic just goes to your shopping cart in the other region; it fails over seamlessly. To set this up in Traffic Director, you just add backends in multiple regions as part of your service, and Traffic Director will make sure that your retail front end knows how to reach the ones that are healthy, so the customer still gets served despite the fact that your shopping cart in one region is down. That's automatic, and it's super critical for the type of business where you have to make sure that every request gets served, because every dropped request has a cost for you as a business.

Another use case I see comes from a customer that is one of the world's leading logistics providers. They want to make sure that when they deploy a new version of a service, they don't have any downtime. Just for a little bit of context here, this customer has a prediction service running, and what that service does is try to figure out when a package is going to be delivered, using a machine learning model trained as part of another workload. They need to make sure that that model actually works, and anyone who's done deployments knows that a deployment is inherently risky. You might make sure that you have a development environment and a staging environment that qualify the new version as it goes through, but no two environments are identical. And so even if you do everything in staging and validate that it works there, there's still some risk that once you get to production, it's going to go down.

So this is a great example of how we see customers using Traffic Director. In this example, the front end is sending traffic to the prediction service. There's prediction v1, which is the old version, and prediction v2, which is the new version of the model. The customer has set up a traffic-splitting policy, which causes the front end to send a percentage of traffic to the old version and a percentage of traffic to the new version.

And so your deployments are a lot safer, because you've only subjected a small percentage of your traffic to the new version; you check that it works, and if it didn't actually work, you can put that back to 100% on the old version. This gives you a lot of safety around your deployments, and it also unlocks all kinds of really interesting DevOps patterns. Those are the kinds of use cases where organizations are using Traffic Director to deliver value.

A really interesting one is this idea of networking services for non-GCP environments, and I'll talk a little bit more about what I mean by that. Customers run services in both environments: large banks, for example, have services that run in GCP and services that run in their own on-premises data centers. That might be a temporary state as they're migrating everything to the cloud, or there might be good reasons why they want to keep some things on premises; it makes a lot of sense. One of the things that customers come to GCP for is our global load balancer, which I mentioned earlier, and it's the thing that delivers this group of services that I'll call networking services. What do I mean by that? There are all kinds of different policies and services that you can apply there. One example is that you want DDoS protection in place so that clients on the public internet aren't overloading your backends; this checks to make sure that the traffic coming in is safe. You can save a bunch of cost by caching different static assets delivered by the load balancer. There's also a built-in global anycast VIP, so that you get a single IP address around the world; your clients can just address that IP address, traffic will go to the closest instance, and that's how you minimize latency and minimize cost.

I mention these things because customers say: great, I can use all of this with my cloud services, I can make sure that my services are protected and I'm caching things the right way, but for my on-premises services I don't really have that same global footprint. I don't have points of presence all across the world where I can deliver caching, or where I can apply the different security mechanisms that are, for technical or cost reasons, infeasible for me to deliver myself.

Well, you can actually have that for your on-premises services as well. One of the things that we see as very critical is that you can use Traffic Director today to bring that set of networking services to your on-prem services. So what does that look like? It looks like a pool of proxies deployed in a Compute Engine managed instance group; this group is just part of your infrastructure, and it scales up and down dynamically. Each of those instances is running a proxy that acts as a middle proxy, and Traffic Director delivers its configuration. An example of the type of configuration would be: there are these on-premises endpoints, each of them belonging to the on-premises service that you see on the right-hand side. Traffic Director knows about them, and Traffic Director lets the proxies know about them. And so now, the next time a client sends traffic to the global load balancer, that traffic gets forwarded on toward the on-premises service. This is a really powerful use case, where you can get things like Cloud Armor DDoS protection, the global anycast VIP, and these networking services for both GCP services and services running in non-GCP environments.

We also hear this theme, which is: this infrastructure is really interesting, but how do you go from zero to deployed? What you see here is the current way of setting up the service mesh, and it does require some thought: where do I get this proxy, what version do I want to use, how do I make sure the version is secure and keep it up to date, how do I configure iptables? Before you've even started, this thing can feel really scary. So we've built a really easy way to do this. The way to think about it is that it's basically a virtual machine template: a virtual machine template that you control with some flags, with some parameters, like --service-proxy enabled. What that will do, automatically, on the virtual machines created from that template (which could be running a client application or a server application), is install Envoy, a recent version that we qualify, and it will connect to Traffic Director.

It's also a really easy way to do updates: it's as easy as issuing a couple of commands to initiate a rolling update. You don't need to think about picking an Envoy version or compiling it; that's all handled for you.

Another thing that we're announcing, which you saw demoed before, is this idea of using an application library instead of an Envoy proxy: Traffic Director support for proxyless gRPC services. What do I mean by that? Well, the most recent versions of gRPC support a protocol called xDS. xDS is a group of APIs used by Traffic Director to exchange information about the world, about the network. In much the same way as with Envoy, if you're using a recent version of gRPC you can provide some configuration when you start up, use the xDS resolver, and it connects to Traffic Director; Traffic Director then tells that gRPC client what the different endpoints associated with your services are. If you've used gRPC before, you know it's a super-powerful application framework, but it doesn't come with everything that you need to run a distributed set of services. I see a lot of customers doing things like adding a sidecar proxy, so your application talks to that Envoy proxy, which then sends on the traffic, and that's totally fine; it works really well for a lot of customers. But if you're finding that you can't do that for some reason, say you can't really add a sidecar or mess with iptables, well, with the new gRPC you can very easily get the benefits of a service mesh straight out of the box: service discovery, knowing what the endpoints are that I'm trying to reach, and, over time, more traffic management. So stay tuned for that; the goal is that we bring all of the capabilities that are relevant to gRPC, the ones you can get with something like Envoy, to gRPC itself as well. It's a simpler, much easier way to adopt a service mesh. There's also performance: we found that in some cases not having to go through an Envoy proxy first helps, so it's also a very interesting thing to explore, especially if you're already choosing gRPC for its performance benefits; this can get you even further.

With that, I want to wrap up and say thank you very much for listening today. If you've got questions, there's a Dory where you can ask them, and we'll be looking at the questions and trying to answer them. Before you go, check out the docs; there's a setup guide that walks you through, end to end, how this stuff works. Also, some of the things that I showed you today may not be released yet at the time this comes out, so if you have questions, or you'd like to try them, I created a form: you can go to the form, check it out, and request access, and we'll try to get back to you promptly. So with that, I want to say thank you very much; a shout-out to the Traffic Director team that's been pushing this product forward, thanks to Kelsey for helping us with the demo today, and that's it for me.
