
Cloud Networking for the Hybrid Enterprise (Cloud Next '19)

Matt Nowina
Manager, Cloud Customer Engineering at Google
Google Cloud Next 2019
April 9, 2019, San Francisco, United States

About speakers

Matt Nowina
Manager, Cloud Customer Engineering at Google
Zach Seils
Networking Specialist at Google

Matt Nowina is a Customer Engineering (CE) Networking Specialist helping drive innovation and adoption with customers. Prior to joining Google, Matt spent 18 years working with a variety of service providers, architecting platforms for web hosting, data center managed services, media streaming, and marketing rewards programs. He loves to talk to customers about what's possible in the cloud, with a focus on regulatory, security, and container-based workloads.


Zach Seils is a Networking Specialist with Google Cloud, where he works with customers to accelerate their adoption of cloud networking. He has a broad range of experience spanning core routing and switching, application acceleration, virtualization, network management, and software-defined networking. He is a frequent presenter at industry conferences, is a published author, and holds multiple patents in networking technology. Before joining Google, Zach was a Principal Engineer at Cisco. Zach lives and works in New York City.


About the talk

Enterprise networking and security teams manage extensive on-premises environments and are accustomed to design and implementation tactics that take advantage of traditional networking constructs (subnets, IP ranges, VLANs, ports, protocols, etc.). Moving to the cloud offers numerous opportunities to leverage modern networking and security abstractions but also presents challenges integrating design and configuration approaches across environments. This session goes beyond the knowledge required to interconnect on-premises and cloud and addresses how to map design and implementation specifics between the two environments. We'll then present an incremental path forward to start leveraging useful networking and security configuration abstractions that allow you to simplify and scale networking in the cloud.

Transcript

Welcome, everyone. Good afternoon, and thanks for joining our session today. This session is Cloud Networking for the Hybrid Enterprise. My name is Zach Seils, and I'm here with my colleague Matt. Matt and I are both networking specialists in the customer engineering organization within Google Cloud, so we work on a daily basis with customers such as yourselves to help onboard them: understand networking within Google Cloud, how to best take advantage of our networking features, and, more specific to this session, how to best integrate with your existing on-premises networking environments.

We do have the Dory Q&A enabled for this session, so if you have the Next app you can click on Dory Q&A and drop questions into it. We want to make sure we have enough time for all the content in the presentation, so we're not going to do live Q&A today, but the Dory is open through April 25th. So if you have questions during the session, or after the session when you get back home, put them into the app. Matt and I look forward to interfacing with you online, and we may even blog about a few of the questions. We'll see what happens.

So, a quick summary of what we're going to cover. I'm going to start out by talking about networking concepts that everyone is familiar with from your on-premises networking environment and how those map into Google Cloud. Where there's a direct correlation between things, I'll make the connections for you, and I'll also highlight how some of those things are different in the cloud versus what you may be familiar with in your on-premises network. Next I'll hand it over to Matt. Matt's going to talk about some common enterprise designs and some recommendations for how you can approach them, walking in a little more detail through the features I've talked about and how you actually leverage them based on our recommendations.

And then finally we're going to come back together and talk about some best practices: what we typically recommend to customers as they start their journey toward Google Cloud. All right, let's jump right in. I'll start with some familiar on-premises networking concepts and then map those into their equivalents inside the Google Cloud environment.

This is the current state of Google's global backbone network. We have our own private backbone; it's one of the largest private backbone networks in the world. One of the unique benefits of cloud networking is that, within a few minutes, through configuration that you define, that backbone becomes your infrastructure. You can extend your on-premises network environment literally across the globe based on which cloud regions you deploy compute and other workloads into. And that's really the premise of this presentation: how do you extend your on-premises networking environment with things that you're familiar with, but do so in a way that's scalable, manageable, and hopefully with as little design and operational burden as possible?

So again, this essentially becomes your backbone network, completely under your control. Let's start with some very familiar constructs, though. Your on-premises data center environment is undoubtedly made up of physical network devices. These devices are commonly purpose-specific, whether we're talking about a core switch or an access switch or maybe a firewall, whether it's physical or virtual.

These devices on your network are often put together in a very specific topology based on their capabilities, and where those devices are placed and how you connect them is very relevant to how the network performs and what capabilities it has. Now, data center topologies have evolved over time. You had your traditional three-tier core, distribution, and access networks, and you've seen a recent resurgence of Clos-based fabrics, where you have spine-and-leaf networks. But generally speaking, in your on-premises network the topology, at least the physical topology, is fairly static; it doesn't change that much.

So we have the concept here of the device, and we have the concept of placement being very relevant and important in terms of what the capabilities are and how they apply to the things you're actually connecting to the network. Next, we have different capabilities for how you virtualize that physical infrastructure. For example, you have virtual LANs, or VLANs, that allow you to provide virtual segmentation within your physical topology. You can even have isolation from an IP forwarding perspective; for example, you can have virtual routing and forwarding instances, or VRFs.

And historically there have been some provider-specific implementations of these capabilities, for example virtual data centers, or VDCs, and various different types of overlay and underlay technologies. The premise is really the same, right? How do I virtualize and isolate different capabilities on top of the same physical infrastructure? So here again we have the virtualization concepts of VLANs, IP forwarding domains like VRFs, and sometimes virtual data centers, or VDCs. Moving on, we have what I refer to as network identity.

When you talk about how you identify the systems you deploy from a network perspective, it's no surprise that we start talking about IP addressing and subnetting. These exist pervasively everywhere, right? They're in the public internet, they're in your private network, and they obviously exist in the cloud. But what I'm specifically talking about in this context is the importance of what an IP address actually means about the identity of a system. Like, I know this range of IP addresses is my database servers, or this range of IP addresses is my DMZ, right?

And I've applied certain types of traffic controls accordingly. Placement is also important here, right? Oftentimes where something connects to the network says a lot about what it is and implies what types of access it has or will allow. And much like there's a low rate of change with physical networks, there's typically a pretty low rate of change with the number of subnets and VLANs and the IP addressing scheme that you configure in your on-premises network. Certainly they change and expand over time, but they're generally not changing at a high volume on a daily or weekly basis.

Now, if you've gotten this far and you're thinking, "Great, Zach, IP addresses and subnets, I'm so happy I came to your session," just be patient, please. I promise we're actually going to help you think about these constructs in a different way in the cloud, one that hopefully makes your cloud journey a little simpler from a networking perspective. Next, we have various different types of traffic controls from a data center perspective. You have firewalls, right? These again can be physical or virtual devices. You have network-based access control lists that you may apply at various points in the network.

These are very network-centric: you create them based on IP addresses, subnets, interfaces, et cetera. Sometimes you have host-based filtering through agents, things you actually install on the endpoints. And finally, more recently, you've seen the emergence of what I call drop-by-default fabrics or networks, where multiple hosts are connected to the same network, and whereas they may historically have been able to automatically talk to each other if they could discover each other, that's no longer the case. These networks or fabrics actually drop most unicast traffic by default, which requires that you explicitly configure what communication is allowed across that network.

So I've got a whole set of controls for how I actually identify and filter traffic. That's where we came from, right? We have the devices, we have the importance of placement of those devices based on their functionality, we have the network-based virtualization technologies (we talked about VLANs, we talked about VRFs), and we talked about access control capabilities. And so now let's start to map these into their equivalents inside of Google Cloud.

The first one is really the device. What does a device mean? What is the equivalent of your on-premises network switch in the Google Cloud environment? And there really isn't one. It's funny that for the very first item there's no direct equivalent, but that's a byproduct of the fact that most everything inside the Google Cloud environment is completely software-driven and globally distributed. So there's really no equivalence there: I have this network switching device, what is its counterpart in the cloud? It just doesn't exist. Now, that being said, we do surface some capabilities as devices, primarily from a configuration and management perspective.

For example, when you're setting up routing between your network and the Google Cloud network, you configure that routing using something called Cloud Router. Now, Cloud Router is presented to you as a thing you configure, but it's not actually a device. Behind the scenes it's a distributed set of software processes that live within a particular cloud region. So you're not actually configuring a device; you're providing a configuration that we then program these software processes with and distribute across the infrastructure.
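
As a rough illustration of that "device that isn't a device," here is a minimal Python sketch of declaring a Cloud Router through the Compute Engine REST API (using the google-api-python-client library). The project, region, network, and ASN values are placeholders, not values from the talk.

    from googleapiclient import discovery

    # Build a Compute Engine API client using Application Default Credentials.
    compute = discovery.build("compute", "v1")

    # A Cloud Router is just a regional configuration object: a name, the VPC
    # network it belongs to, and a private BGP ASN. Google then programs the
    # distributed software processes that actually speak BGP for you.
    router_body = {
        "name": "onprem-router",                              # hypothetical name
        "network": "projects/my-project/global/networks/my-vpc",
        "bgp": {"asn": 64512},                                # private ASN for the cloud side
    }

    request = compute.routers().insert(
        project="my-project", region="us-central1", body=router_body
    )
    response = request.execute()   # returns a long-running operation to poll
    print(response["name"])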

In a similar fashion, you have the VPC firewall, the native firewall built into the Virtual Private Cloud in Google Cloud. We present this to you, from a configuration perspective, as a single global list of rules, so it's very simple to configure; you can see everything in one list on one page. But in reality, what we're doing is taking those rules that you configure and actually pushing them down and enforcing them at the host level, on the individual machines where your virtual machines are scheduled. So, really no equivalent from a device perspective, but we do manifest these things in a way that's convenient for you for both configuration and management.

Next is the mapping for the network itself. We have our physical network and we have virtual networks in our on-premises environment. What's the equivalent in Google Cloud? That's the VPC, the Virtual Private Cloud. This is our global VPC, and it has a fairly direct correlation with your IP forwarding domain on-prem. A lot of customers have just a single IP forwarding domain, so you have one routing space, one IP space, and there are a lot of similarities between that and the VPC in the cloud.

For example, the IP addressing inside of a VPC, like inside one of your forwarding domains on-prem, has to be unique, right? You generally avoid duplicate IP addressing. Likewise, you don't generally connect networks together automatically without some explicit configuration. In your on-premises environment you may establish connections between networks, say via BGP routing, to explicitly make communication happen. It's very similar in the VPC environment: VPCs are isolated in that they do not communicate with other VPCs inside the cloud unless you do something explicit to make that happen.

One important thing about the VPC is that it's a global construct in Google Cloud, and what that means is that the scope of the VPC, how big a geographic area it actually covers, is completely dependent on how you deploy workloads in the cloud. For example, if you start out by deploying your workloads in one cloud region, that's really the scope of your VPC, but as soon as you start to deploy workloads in other cloud regions across the globe, the network scope immediately expands to include those regions.

This is actually pretty unique to Google Cloud, and it's a really nice implementation, because it allows you to basically instantly grow your network globally, going back to that global backbone slide I presented at the beginning, with a single routing and firewall policy. So your networks, your VRFs on-prem, map into VPCs in the cloud environment. Next we have the concept of VLANs. We only use VLANs in a very limited fashion in the cloud. We don't use them within the data center switching fabric, meaning you don't put your virtual machines inside of a specific VLAN.

We only use VLANs to virtualize the connectivity back to your on-premises environment. This is specifically with a product we have called Cloud Interconnect, which allows you to have high-speed, low-latency access from your network to Google's network. You can create multiple virtual links across those physical circuits and terminate them in different VPCs in different locations across the world. That's it; there are no VLANs anywhere else in our environment.

There's no VLAN numbering you need to think about, no making sure that you put access lists on the interfaces that map to a certain VLAN. It just doesn't exist in the cloud environment today. Next we have subnets. So, show of hands, who in the room remembers deploying workloads in Google Cloud before we had VPCs and subnets? Anyone? A few people. It used to be one big flat IP space across that entire global backbone I mentioned, but as customers started connecting to our environment in more and more geographic locations, we needed the ability to provide them with optimized access, from a routing perspective, to the closest cloud region where they were deploying workloads. And that's really the purpose of the subnet inside of Google Cloud.

I would advise you to think about it less as an isolation mechanism, less as a segmentation mechanism, and more as a regional identity for instances that you deploy in certain cloud regions. For example, you can deploy resources in a cloud region on a single subnet, so long as that subnet is big enough from an IP addressing perspective to handle all of your workloads: all of your Kubernetes clusters, all of your virtual machines, et cetera.

That's really the genesis, the primary purpose, of subnets. They're not intended to be a first-class way to identify systems; they're about efficient routing between your environment and our environment. So your subnets map into subnets in Google Cloud, but the purpose of a Google Cloud subnet is really regional identity, or regional proximity of resources. Then we have IP addresses. The IP addresses within your VPC are regional constructs: you create a subnet within a particular region, and it has a unique set of IP addresses for that region. We also have public IP addresses that are used for global load balancers that are internet-facing.
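
A minimal Python sketch of those two constructs, assuming a hypothetical project, network name, and CIDR range (none of these values come from the talk): a custom-mode global VPC, then one regional subnet inside it, created through the Compute Engine REST API.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    project = "my-project"   # placeholder project ID

    # The VPC itself is global and carries no IP range of its own.
    # (In practice, wait for this operation to finish before adding subnets.)
    compute.networks().insert(project=project, body={
        "name": "corp-vpc",
        "autoCreateSubnetworks": False,   # custom mode: we define the subnets
    }).execute()

    # Subnets are regional and hold the IP range; one reasonably large subnet
    # per region you deploy into is usually enough to start.
    compute.subnetworks().insert(project=project, region="us-east1", body={
        "name": "corp-us-east1",
        "network": f"projects/{project}/global/networks/corp-vpc",
        "ipCidrRange": "10.128.0.0/20",
    }).execute()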

For the regional IP addresses, these are by default automatically managed by the cloud. We automatically allocate IP addresses to machines, and we automatically allocate IP addresses to Kubernetes pods. So I'm actually going to ask you to do something as you go through this session: I want you to think about what happens if you don't have the ability to explicitly define the IP address for a given virtual machine. This is already very common in Kubernetes, right? You can't specify that a Kubernetes pod has an explicit static IP address. Assume you can't do that with a virtual machine. How are you going to handle forwarding? How are you going to handle access control and micro-segmentation? We'll talk about some of these things a little bit later.

And finally, the equivalent for the firewalls in your environment, or access control in general: we have a number of products here, again globally distributed, and they span Layer 3 controls, the ability to build access controls based on IP addressing, all the way up to Layer 7 controls, like, does this user who's part of this group have the right access to actually interface with this cloud API?

So this is a complete spectrum of access controls that covers a lot of different parts of the stack, from the network all the way up to the application level. Again, this is important when we start talking about segmentation abstractions: how we identify and control a system separately from where it's actually placed in the network. And with that, I'm going to turn it over to Matt, who's going to talk about some common enterprise design scenarios and how we can approach them.

Thank you very much. I don't know how many of you have presented before, but this is normally the part where you get stuck looking into the lights and forget everything that you've prepared. So, how do we start mapping these analogues into Google Cloud solutions specifically? Here you get a snapshot of what the Google Cloud networking product portfolio looks like today. It's 20-plus products and services, all focused on enabling your journey to the cloud. We group these into different sections that represent the way customers are thinking about the cloud: connect, scale, optimize, secure, and modernize.

What we're going to do in this next section is really touch on a series of these products and answer questions that customers come to us with. In this section we're going to focus on Cloud Interconnect, Cloud VPN, and VPCs. What customers think about when they come to us is: how do I take advantage of this new network? Oftentimes when you have on-premises environments, you have fixed infrastructure in fixed locations, but you don't have a way to extend your applications to where your customers are and improve speed and latency.

And so that's where, as Zach was mentioning before, the global VPC comes into play. This is a way of simplifying the ability to connect to all of the regions where you deploy things. It makes it simple for you to enable replication across your applications, as well as to leverage Google managed services that are built from the ground up on a global or multi-region model for availability. The next thing, once we've started to map out the network, is to think about how we fit this into our operational model. So, by show of hands, who no longer has dedicated networking or security teams?

There are a few. And we know the industry is moving toward DevOps or DevSecOps models, but until we've fully moved into that, it's good to think about how we can take our workflows from our on-premises environment today and map them over to the cloud. This is where shared VPC comes into play. Shared VPC is designed around the idea that we have different teams that deploy our applications and different teams that manage our networks, and we don't just want to give free rein to the application teams to deploy networks as they see fit.

We still want to have some level of control. Shared VPC is enterprise-friendly; it's a centralized model, and it allows you to centralize your administration and auditability. Now, as mentioned before, we want to look at a hybrid enterprise: how are we going to interconnect into GCP? It's important to note that you need different models depending on what you're going to rely on that hybrid connectivity for. Are you just doing management? Do you need to do batch data loads at certain times of day, or is it going to become a mission-critical part of the application?

This is where things like Cloud VPN and Cloud Interconnect come into play, being able to map your requirements and your cost to the exact implementation. So what does it look like when you start to put these things together? This is an example of a zoomed-in, simplified network with four Dedicated Interconnect connections coming into Google Cloud. These come into two separate regions that are both accessible through the global VPC.

The VPC is housed within a shared VPC host project, and we can share individual subnets out to the service projects. So again, this is a relatively simple way of leveraging physical connectivity, ensuring a four-nines SLA, and giving you centralized administration. Going from here, one of the other common questions that we get, and I was kind of confused when I first heard this, is that some customers are surprised to hear that our managed services are typically hosted on the internet, on external IPs.

And I would probably be among the first to argue that simply putting something on a public IP is really about accessibility and not security, but for customers that have invested in these interconnects and want to leverage that private connectivity for accessing managed services, it makes sense to have an option to do so. That's where we introduce Private Google Access and private services access. These are ways of extending those managed services to your on-prem environment. I'll pause for a second so you can take pictures, but what it really looks like is this: with that same interconnect model you had before, what you're now able to do, as part of your dynamic peering with Google, is advertise a restricted VIP back to your on-premises environment.

This is a special IP range that is advertised from your VPC to your on-premises environment, and then through the use of DNS you can swing services over to that restricted VIP. There are three basic models for using this. The first is to do an enterprise-wide CNAME of *.googleapis.com to the restricted range, to apply it to all of your services. Alternatively, you may choose to implement a DNS view, so that only certain clients get that same redirection, or you can even implement it at a host level.
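
As a sketch of that DNS swing, assuming a hypothetical project and a private Cloud DNS zone for googleapis.com already attached to the VPC (names are placeholders): restricted.googleapis.com resolves to the documented 199.36.153.4/30 range, and a wildcard CNAME sends every other Google API hostname to it.

    from googleapiclient import discovery

    dns = discovery.build("dns", "v1")
    project = "my-project"            # placeholder
    zone = "googleapis-private-zone"  # placeholder private zone for googleapis.com

    change_body = {
        "additions": [
            {   # Point the restricted VIP name at its documented address range.
                "name": "restricted.googleapis.com.",
                "type": "A",
                "ttl": 300,
                "rrdatas": ["199.36.153.4", "199.36.153.5",
                            "199.36.153.6", "199.36.153.7"],
            },
            {   # Swing every other Google API hostname over to the restricted VIP.
                "name": "*.googleapis.com.",
                "type": "CNAME",
                "ttl": 300,
                "rrdatas": ["restricted.googleapis.com."],
            },
        ]
    }

    dns.changes().create(project=project, managedZone=zone, body=change_body).execute()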

We've talked about connectivity; now I want to spend just a few minutes on security, more specifically on Cloud Armor, IAP, and VPC firewall rules. No one is going to consider leveraging a cloud service provider if they don't have complete faith in the implementation of firewalling. GCP VPC firewalls provide a micro-segmentation model because they are effectively implemented at the host level. What that means is that two instances running on the same physical host cannot connect to one another without traversing that firewall, and the default stance of that firewall is to deny on ingress.

At the same time, these rules are stateful, so they're simpler to maintain than more traditional stateless ACLs, and we can see the segmentation model here. Again, this is a relatively simplified model, but as I started thinking about this, a question: how many people have had to implement 802.1X in an on-premises environment? A few people, with port-based network access control. 802.1X was introduced at a time when we no longer knew which endpoint was going to plug into which switch port, or where in the network it might be, and it let us dynamically configure that port with the associated rules for that endpoint. Well, the same thing can be done with VPC firewall rules: through the use of tags or service accounts, you can have your instances inherit the rules they need in order to access the correct endpoints, and you can do this in a way very similar to 802.1X, but without the pain of having to manage TACACS.
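
A minimal sketch of that tag-based approach, with hypothetical rule, network, tag, and port values: any instance carrying the "web" tag may reach any instance carrying the "db" tag on one TCP port, regardless of either instance's IP address or placement.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # Identity travels with the tag (or service account), not with an IP range.
    firewall_body = {
        "name": "allow-web-to-db",       # hypothetical rule name
        "network": "projects/my-project/global/networks/corp-vpc",
        "direction": "INGRESS",
        "sourceTags": ["web"],
        "targetTags": ["db"],
        "allowed": [{"IPProtocol": "tcp", "ports": ["5432"]}],
    }

    compute.firewalls().insert(project="my-project", body=firewall_body).execute()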

So with this combination we have granular rules that are applied dynamically within the VPC. Then, as we start to push out toward the edge, we have Cloud Armor and Identity-Aware Proxy. Cloud Armor extends our defense-in-depth strategy by adding Layer 7 and web application firewall protections, and IAP provides a way of exposing our applications only to specific users and contexts. We also published quite recently a blog post that talks about using IAP in place of bastion hosts.

So now you can open up port 22 to the individual hosts you want to manage, but ensure that it's only exposed to certain identities, without the need to run a separate bastion host.
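
A minimal sketch of that pattern, with placeholder rule and network names: SSH ingress is limited to 35.235.240.0/20, the range IAP's TCP forwarding uses, so port 22 is only reachable through the proxy and IAM decides which identities get through.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    firewall_body = {
        "name": "allow-ssh-from-iap-only",   # hypothetical rule name
        "network": "projects/my-project/global/networks/corp-vpc",
        "direction": "INGRESS",
        "sourceRanges": ["35.235.240.0/20"],  # IAP TCP forwarding source range
        "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}],
    }

    compute.firewalls().insert(project="my-project", body=firewall_body).execute()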

The next part is a really critical one, because we've all heard repeatedly these stories about misconfigurations of access rules on a managed service that end up exposing data to unauthorized clients. So how do we address this? We actually take a two-pronged approach. The first is a set of open-source security tools called Forseti, which gives you the ability to establish an inventory, policies, and remediation actions whenever changes are made within your environment. And the second is VPC Service Controls, which allows you to establish a trusted-perimeter model around your VPCs and projects. This is what it looks like in practice: here we have a project, its VPC, and the associated services that we want to protect. We also, through the same mechanism as before, have the ability to extend these services to our on-premises environment and say that it's part of our trusted perimeter.

When VPC Service Controls is enabled, only resources from within the perimeter can interact with those services, and they can't be used to copy data to any external, unauthorized project, which also prevents access from the internet. So this protects against that misconfiguration. Lastly, all of these security controls are great, but centralized logging and SIEM solutions are not going away. This is where VPC Flow Logs and firewall logs come into play.

VPC Flow Logs provides you with NetFlow-style data, five-tuple information without the payload, to give you visibility into the flows within your VPC environment, and firewall logs give you insight into what's being allowed and blocked by the VPC firewalls.
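
As a rough sketch, assuming an existing subnet with placeholder names (and assuming the logConfig fields of the Subnetwork resource, which should be verified against current API docs): flow logs can be switched on per subnet with a patch, which requires the subnet's current fingerprint.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    project, region, subnet = "my-project", "us-east1", "corp-us-east1"  # placeholders

    # Patch needs the subnet's current fingerprint to guard against concurrent edits.
    current = compute.subnetworks().get(
        project=project, region=region, subnetwork=subnet).execute()

    compute.subnetworks().patch(
        project=project, region=region, subnetwork=subnet,
        body={
            "fingerprint": current["fingerprint"],
            "logConfig": {"enable": True, "flowSampling": 0.5},  # sample half the flows
        },
    ).execute()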

So now that you have a sense of the various products, the analogues to your on-prem environment, and what the options are in GCP, we want to try something a little different. We get the chance to see many different customer configurations, and in this next section we wanted to verbalize the thought process we go through when considering different sets of customer requirements. Before we start that, I just want to give you a few quick points in terms of VPC design pre-work and recommendations. The first is identifying who your stakeholders are. This can vary depending on who you're trying to design this VPC for: is it an individual application, a line of business, your entire organization? It's important to understand who you're trying to address and make sure that you really understand their requirements.

The second is to start with security objectives and not security controls. Many times we see customers come to us and say, "How do I do X in GCP?" rather than thinking about why they're doing X. By starting with the security objectives, you have a very clear understanding of what you're trying to achieve and what your options are in the cloud. The next is understanding how many VPCs you're going to need, and I don't mean coming up with a static number like five, six, seven, ten. It's important to understand what you're trying to achieve, where your scale and quota limits are going to come into play, and to get an overall sense of what magnitude you're going to have to address.

And lastly, think simple. Don't design things just because you can; we all know that simplicity is directly correlated with supportability, so keeping it to exactly what you need is going to be important. If I can add on to that, the point about the number of VPCs is relevant, right? It's not a static number you're trying to get to; it's really a pattern you're trying to use so that you can grow and scale your network environment inside the cloud in the most efficient manner possible.

So let's start with a simple scenario here. This is basically our day job, right? How do we think through this out loud? Obviously here I've got a single global VPC, pretty straightforward. It looks like I have both development and production workloads in the same VPC, spread across different regions, so they're in different subnets, and a single project, which is also relevant when we look at how we'd scale out.

What's apparent here is that this looks to me like a predominantly cloud-isolated workload; I don't see any hybrid connectivity back to the on-premises environment. So that's one of the things that initially pops out at me and that we can take away from this design. Yeah, let me add a couple of things here that really resonate with me, starting with something you just mentioned, which is to start out simple. I think this is a very common approach for customers who are either new to cloud, just so they can get their bearings in the cloud environment, or even customers coming from other cloud providers who are just trying to get some familiarity and experience with how some of our networking constructs may differ, for example how the global VPC or Cloud Interconnect behaves in practice.

So simplicity is a key one here. I think the other one is just a small number of subnets with larger address ranges, right? There's really no reason to over-rotate and start creating a bunch of subnets within a region. You can start with one that has a good-sized IP address space. We have a great feature where you can actually grow the size of your existing subnets in a completely hitless manner for the VMs that are already deployed.
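
A minimal sketch of that subnet-growth operation, with placeholder project, region, subnet, and range values: the expandIpCidrRange call widens an existing subnet's primary range in place (for example from a /24 to a /20) without touching the VMs already running in it.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    compute.subnetworks().expandIpCidrRange(
        project="my-project", region="us-east1", subnetwork="corp-us-east1",
        body={"ipCidrRange": "10.128.0.0/20"},   # must contain the existing range
    ).execute()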

This doesn't preclude you from creating multiple subnets; this is just what we see and what we recommend for customers who are just getting started. One nice thing here is that things are relatively programmatic and easy to deploy and manage inside of Google Cloud. Even core infrastructure, the stuff that takes a lot of planning and a lot of effort to implement in an on-premises environment, can actually be pretty disposable and pretty easy to delete and recreate inside of a cloud environment.

So I think overall it's a good starting point. One thing I do want to ask you about, though, and I think it's a pretty common recommendation as customers start to scale in the cloud, is segregating, or putting firmer isolation between, development and production workloads. There are a couple of things that jump out at me right away. The first is exactly what you just identified: we've moved from a subnet model as the isolation boundary between our different environments to the VPC level.

At the VPC level we've now segregated firewall rule management into two separate VPCs. And then the other big one that jumps out is the hybrid connectivity. Here we can see that we've now deployed Cloud Routers, Dedicated Interconnect, and attachments into each of the individual VPCs. On the hybrid connectivity, if you notice here, we actually have separate connectivity coming from on-premises into each VPC, and this is virtual connectivity.

So if you're using a VPN, for example, these are separate VPN tunnels. If you're using Interconnect, this can be the same physical circuit that you peer with the Google network on, but different logical connections, those interconnect attachments I mentioned previously, that terminate in separate projects, right? So you've extended the isolation of the different environments all the way through the connectivity back to your on-premises environment. That's a really clean separation, which works well.

But because we now have separate, independent VPCs and there is no inherent connectivity between them, we need to start thinking about where our application deployments live, where our build servers are, where we're actually deploying from. If the deployments come from on-premises, this works; we've got connectivity to both. But if the build servers are sitting in one of those VPCs, we now have to start to look at VPC peering, or linking those VPCs together, in order to allow that build or deploy process to succeed. And when we start to think about that, we need to start thinking about what the aggregate resource requirements are when we mesh these two VPCs together.

So when you think about this design, what's the next logical extension? Where do people go from here, especially if we want to align with that workflow framework from earlier? Who doesn't have network security teams anymore? There were only a few hands, so assuming the inverse, most of you still do. And so one thing that we see, if I can go to the next slide, is this concept of shared VPC that Matt talked about. To me, shared VPC is as much an organizational construct as it is a set of technology that you leverage in the cloud.

What I mean by that is that shared VPC is designed for organizations where you want to maintain centralized administration and control of the network and security functions in the cloud. A shared VPC is still fundamentally a single VPC, but it's a single VPC that can be leveraged by multiple projects, and it has a curated set of IAM permissions. There's a specific role for network admins in the shared VPC, and they control, as you might imagine, creating subnets, establishing hybrid connectivity, and establishing routing policies.

There is an explicit role for security admins in the shared VPC model, who control firewall rules, et cetera. So if your organization is structured in that way, and that's an organizational construct you intend to carry over into the cloud, then shared VPC is a really nice model for this. You can see we've got a couple of things in play here. We've actually got multiple different VPCs, all still within a single project. In the shared VPC model we call this a host project, so your host project has all of your networking and security stuff: the VPCs, the firewall rules, the connectivity back to your on-premises environments.

And then you have one or more service projects. Service projects are separate projects, usually given to the application or development teams. In those separate projects they can spin up their own compute, they can spin up their own Kubernetes clusters; they have the autonomy to manage their workloads themselves, and the service projects attach to the shared VPC to leverage those network resources.
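
A rough sketch of that host/service project relationship, with placeholder project IDs (and assuming the Compute Engine API's XPN methods, which is the older internal name for shared VPC still used in the API):

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # Mark the project that owns the VPCs, firewall rules, and hybrid
    # connectivity as a shared VPC host project.
    compute.projects().enableXpnHost(project="net-host-project").execute()

    # Attach an application team's service project so it can deploy VMs and
    # GKE clusters onto subnets shared from the host project.
    compute.projects().enableXpnResource(
        project="net-host-project",
        body={"xpnResource": {"id": "app-team-project", "type": "PROJECT"}},
    ).execute()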

So the networking and security teams maintain control over the network and security, and the application teams manage their workloads: they manage compute, they manage Kubernetes clusters, et cetera. Thinking about this a little bit, you'll remember one thing I like is simplicity, right? So this is still a pretty simple model; it's still the single-VPC model, if you will. And it reinforces the point that we want to start to think about non-networking constructs as the primary identifier for workloads. We don't want just the fact that you happen to be connected to a VPC to say everything about what you can do, right? There's a better way to address that, which we'll talk about in a little bit.

Another nice thing about shared VPC is that you have pretty granular control over which portions of the VPC are visible to the service projects. For example, you can say this particular group of service projects, you know, service XYZ, can only see this particular subnet or set of subnets in the VPC, right? So you can help people not make mistakes in terms of deploying their workloads in the wrong location. This works really well.

It's actually very common; it's pretty popular with enterprise customers, mostly, as I mentioned, because of that organizational alignment. The question I have back for you is: what if you need to scale this out? Because fundamentally we're dealing with a single VPC. What if things go really well and you need to scale up to maybe, you know, tens of thousands of virtual machine instances? Yes, I think that's when we start to see a move toward this model, where now we are separating out the host projects and going with a single VPC per host project.

This allows us to more accurately align VPC and project quotas to an individual host project and lets them scale independently of one another. So no longer are you going to be worried about, say, a dev resource spinning up things that your prod project relies upon, because now they're managed independently. The other big thing that this design starts to introduce is segregation at the IAM level. When you were using the security admin role within that single host project, you would have had the ability to modify firewall rules across any of the VPCs.

In this model they're now independent of one another, so we can have different users mapped to each of those host projects. And we're starting to see a scale-out pattern here. What we're talking about is building that host-project segregation at the environment level, so prod, test, dev, but we could continue to make this more granular if application requirements demanded it, creating host projects per individual line of business or application. Because the permissions are scoped at the project level, you can delegate administration to different parts of the cloud environment.

In this scale-out model, what you do see is that we're now increasing the number of Cloud Routers and the number of VLAN attachments, and while it's a software-defined network on the Google Cloud side, we need to think about how we're going to manage all of those connections. So, what can we do to help optimize for that? Right, it's great that the network in the cloud is software-defined.

I can spin things up and delete things with relative ease, but when it comes to the hybrid connectivity with your on-premises environment, that may not always be the case, right? Getting the appropriate permissions to change configuration, add additional interfaces, or create new BGP peers, sometimes customers want to avoid this. What they're really after is leveraging not only the same physical connectivity with the Google Cloud network, but the same logical or virtual connections as well. And so we move on to a different scenario here.

The actual VPC structure, with dev, test, and prod in multiple projects, really stays the same. The one big difference is that we've leveraged another VPC that we're terming, here in this presentation, a connectivity VPC. Now, this is not a separate type of VPC where you check a box and say "I want this to be a connectivity VPC"; this just happens to be a normal VPC that we're using in a very specific way. What we've actually done here is move all of the hybrid connectivity out of the individual environment-based projects, and we're putting it into this connectivity VPC.

Then we peer that connectivity VPC with all of those environment-based projects. This relies on a feature that we've just recently released called VPC peering custom routes, which allows us to propagate routes that are learned dynamically from your on-premises environment across VPC peering relationships. So the routes that you advertise into the Google Cloud environment, the ones that specify which networks in your on-prem environment are reachable from the cloud, we can now propagate all the way down to the environment-based VPCs. It's a pretty powerful capability.
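
A rough sketch of one side of that peering, with placeholder project and network names (and assuming the networkPeering form of the addPeering request body): the connectivity VPC exports the dynamically learned on-prem routes so a downstream environment VPC can reach on-prem through the shared interconnect without its own attachments.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    project = "connectivity-project"   # placeholder project holding the connectivity VPC

    compute.networks().addPeering(
        project=project, network="connectivity-vpc",
        body={
            "networkPeering": {
                "name": "to-prod-vpc",
                "network": "projects/prod-project/global/networks/prod-vpc",
                "exchangeSubnetRoutes": True,
                "exportCustomRoutes": True,   # send the learned on-prem routes downstream
            }
        },
    ).execute()
    # The prod VPC needs a matching peering back, with importCustomRoutes set to True.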

It's a nice way to not only leverage the same physical connectivity but to really pull a lot of the hybrid connectivity constructs out of those environment VPCs. Another common thing I see as customers grow: a lot of customers, especially larger enterprises, may have multiple business units or business entities inside the organization, and those entities pretty much want to operate autonomously within the cloud, except when it comes to paying for hybrid connectivity, right? They almost always want to share that, to use the same high-speed connections they have to the Google Cloud environment, even though they may be entirely different business entities.

So where we have segregation here by environment type, like dev, test, and production, you could also think about those being separate logical business entities, or small groups of VPCs for a particular business entity; name them whatever you want. And inevitably, in that case, there are almost always services you want to share across those business groups, right? Whether it's Active Directory, or source code repositories, or the CI/CD pipeline, those resources are also a good fit for this connectivity VPC, because it has access to all of its downstream VPCs as well.

So again, this is really just about how you use the networking constructs in the cloud; it's not that you go and declare "this is going to be a connectivity VPC." The scope and use of the VPC really define its purpose inside of a cloud environment. I work with one customer that has three independent business entities inside their organization. Each one has their own connectivity VPC, if you will, and then each has a small group of downstream VPCs that are environment-aligned, for dev, stage, production, et cetera.

So, I think we have one more, right? One more. I think maybe we'll throw a little curveball here, just thinking about what customers commonly come to us with. I'm going to give this one to you, just because I can: what if a customer needs to bring a third-party network capability or device into the cloud as a virtual machine? Does that change the way you're doing VPC design, or some of the things that we've talked about?

Wouldn't it be nice if there were cloud solutions for everything we wanted? But the reality is that, for any number of reasons, there are going to be times when you need to bring appliances into the cloud, and you need to start to think about what those deployment models look like. What we wanted to call out here is the idea that typically these devices require multiple NICs. You can think of an NGFW that you want to do Layer 7 inspection on, or something that's going to act as a router between your various VPCs, and there are specific rules within GCP for how this works.

What you're seeing is that multi-NIC requires a different VPC for each of the interface cards, and all of those VPCs have to be in the same project. Sorry, just one second: when you say multi-NIC devices, you're talking about a virtual appliance, deployed as a VM or something, with multiple interfaces? Yes. The best practices, the considerations you need to bring into play when you're thinking about these devices, are: how are you routing traffic to those devices? Are you modifying the default route? What are the implications for accessing managed services when you're no longer using the default internet gateway and are instead going through a third-party device? As well as thinking about high availability: how are you ensuring that this device is up and forwarding packets? How do you health-check it? Each one of these is an important consideration, but there is a deployment pattern for using these devices within GCP. Good stuff.
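
A minimal sketch of such an appliance VM, with placeholder project, network, subnet, and image values: each NIC sits in a different VPC, every referenced VPC lives in the same project as the VM, and IP forwarding is enabled so the appliance can route between them.

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    project, zone = "my-project", "us-east1-b"   # placeholders

    instance_body = {
        "name": "vendor-firewall-1",
        "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                # Placeholder image; a real appliance would use the vendor's image.
                "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
            },
        }],
        "canIpForward": True,   # required if the appliance routes traffic between VPCs
        "networkInterfaces": [
            {"network": f"projects/{project}/global/networks/untrust-vpc",
             "subnetwork": f"projects/{project}/regions/us-east1/subnetworks/untrust-subnet"},
            {"network": f"projects/{project}/global/networks/trust-vpc",
             "subnetwork": f"projects/{project}/regions/us-east1/subnetworks/trust-subnet"},
        ],
    }

    compute.instances().insert(project=project, zone=zone, body=instance_body).execute()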

So I think we've covered a number of different scenarios, and this is pretty commonly what we see from customers in terms of how they start their journey, specific to networking inside the cloud. Again, the premise is: just start simple. It's easy to expand and evolve over time as your requirements and needs change in the cloud environment; there's no reason to over-engineer from a design perspective. Also, think less about the traditional networking constructs, like placement in the network, IP addresses, and subnet membership, as the primary identifier of a particular workload.

As Matt mentioned, we have capabilities within the firewall to do micro-segmentation that can follow a particular virtual machine instance or Kubernetes pod regardless of where it happens to be instantiated in the network. This makes creating and maintaining policies inside the environment much, much simpler, by leveraging these kinds of abstractions for identity. And I think we have something else, right? You have one more thing you want to talk about?

Zach and I would like to think that a 40-minute presentation on some example VPC deployment scenarios would be enough for you to rethink your designs, but we've actually gone one step further. We believe that, as a cloud service provider, the obligation is on us to provide best practices for these things. You would have seen, as we went through these deployments, that we had different sets of best practices down the side of the slides. So I am very pleased to announce, during this session, that we have just made our VPC best practices and design guide live.

The links that you see up here point to the things we think are important when first making your VPC design decisions. We want you to start by thinking about your organization, your line of business, your project, what you are trying to design for, and then use these links: net201-vpc for our VPC best practices guide, another for best practices for enterprise organizations, and one for understanding IAM policy design as it applies to enterprise customers.

You may have also noticed that in the past few months we launched a new Coursera course for networking on Google Cloud Platform. We encourage you to use it if you want to continue to get your hands dirty and understand how these different components work together. And then get started today: build something, and understand that the VPC design you implement now uses all the information you have available to you, but it may not apply in the future. There may be iteration, and that's okay.

And lastly, showcase your skills. Another thing that we worked on over the last few months is the launch of the Professional Cloud Network Engineer certification, and I've been told there's some pretty cool swag if you decide to go and write the test, either today or in the future. Additionally, we have a special guest with us today who's actually the author of the VPC best practices guide, Mike Columbus, who will be signing autographs if anyone wants to talk to him; he also worked on the certification.

Perfect. As mentioned, the Dory is open and will stay open through April 25th, so please add your questions there, now or in the future; we really look forward to engaging with you online. And take the survey, right? Let us know how we did and what things you want to hear about in the future, whether it's in print, online, or in future Next sessions. We really appreciate you taking the time to come to our session, and we hope you enjoy the rest of your week at Next. Thank you very much.
