About the talk
Does your journey to hybrid cloud include public cloud such as AWS, Azure, and Google Cloud Platform? Then join this session to hear how Citrix ADC customers are deploying their applications anywhere, to any location. You'll learn how Citrix ADC helps you deploy applications, form factors, and data requirements in public cloud environments and how it lets you scale on demand and manage all apps through one unified cloud console. Note: This session will be available for on-demand viewing post-event on Citrix Synergy TV.
Good afternoon. Thank you for being here at a 4:30 p.m. session — I know it's probably your last session before happy hour, right? My name is Marissa Schmidt. I'm Senior Director of Product Management for Citrix ADC, formerly known as NetScaler, and I do have NetScaler shirts to give away, so we can throw those out throughout the session, or you can ask questions at the end as well. I do not want to bring them home, so
we do want to give all of them away today. And with me — hi, I'm a distinguished engineer on the NetScaler sales engineering team. I've been here fifteen years now, pre-acquisition. I still say NetScaler, and I will always say NetScaler, even though the thing is called ADC now and I have to remind myself. And I'm the ADC product manager for public clouds; I've been with Citrix for ten and a half years. Like you said, I still say NetScaler too — but yes, the Citrix ADC product manager.
All right — ten years with NetScaler. Yes, I still call it NetScaler, and so do the forty thousand customers that we have, so the name will always stick with us. And that is part of the reason why we're here: I'm responsible for the cloud platforms as well as on-prem, so part of this is really providing you the best practices for public cloud deployments with Citrix ADC. We'll use the names Citrix ADC and NetScaler interchangeably.
Here's the agenda we'll go through today: an understanding of the differences between the public clouds — AWS and Azure in this case — for the ADC; then the elastic deployment approach from an ADC perspective; then sample public cloud deployments, where we'll really rely on Greg to give us the perspective from the field, because that's part of the best practice as well; and then the automation of the deployment. We
have different automation tools — GitHub templates and so on — and we'll get through all those details today. And feel free to ask questions. You do want to be at the mic at the end — I was told you have to be at the mic because we're being recorded — and then, depending on your question, you can get a shirt. I really tried to get as many shirts as I could, but we have a lot to cover. So: key cloud differences. We'll start with the system and networking parts. In Microsoft Azure, as you see, for a stand-alone
deployment from the Marketplace, that's a VPX sizing you know: the four vCPUs, for example, the RAM, the three NICs, and the throughput — in Azure it's up to 3 Gbps of throughput. Three network interfaces is what we recommend, so have this page in your back pocket, as well as the routing: by default, how it connects with the VNet resources, and external access — how you do that with the VIP and so forth. That's an important one. Then in AWS:
as you know, it runs from two up to eight vCPUs, and eight vCPUs is the recommended sizing; that can go up to 5 Gbps of throughput using the m4.2xlarge — so that's the instance type we would look at — and three NICs is the recommended setup there too. The VPC is what the ADC really cares about here in terms of routing: routes must be created manually between the subnets — that's an important piece to note — plus the internet gateway, as well as the EIP route security rule. That's also an important one.
On HA, then, the other key points are how it's actually deployed. For high availability there are different scenarios you can do. In Azure you use the ARM template, then use the functionality with three interfaces — what you need to do there on the front end for the ALB — and INC mode as well, with the same L2, and you need to separate the NetScaler subnet IP (SNIP) and the NSIP onto their own subnets. You really have to know that.
In Azure you get roughly a three-second failover, but when you look at AWS, the ADC behaves very differently in this area. In EC2 you also need the floating EIP for the HA functionality, and to do multi-availability-zone HA you need that as well, plus the separation of your subnet IP and NSIP. The failover there is the EIP being migrated based on the IP set, and it's longer — about 20 seconds versus the three-second failover. So that is an important one.
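As a rough illustration of why those failover budgets differ: total failover time is roughly detection time (missed health probes) plus the time the cloud needs to move the public IP. The probe parameters below are assumptions chosen only to reproduce the ~3 s and ~20 s figures from the talk, not documented values:

```python
def estimated_failover_seconds(probe_interval_s, failed_probes, ip_move_s):
    """Approximate time from node failure to traffic on the new primary:
    time to detect the failure (consecutive missed probes) plus the time
    for the cloud to move the public/elastic IP to the surviving node."""
    detection = probe_interval_s * failed_probes
    return detection + ip_move_s

# Assumed numbers, for illustration only:
# Azure HA: fast probe cycle, no slow IP-migration step -> ~3 s total
azure = estimated_failover_seconds(probe_interval_s=1, failed_probes=3, ip_move_s=0)
# AWS multi-AZ HA: same detection plus an elastic-IP re-association step -> ~20 s
aws = estimated_failover_seconds(probe_interval_s=1, failed_probes=3, ip_move_s=17)
print(azure, aws)  # 3 20
```

The point of the arithmetic is that the gap comes almost entirely from the EIP migration step, not from slower failure detection.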
And now we're going to go through the details. Azure is what my colleague here is responsible for, so I'll let him take it away. We'll go through the architectural pieces for Azure and AWS, then go through the deployments. The important points in Azure: our typical deployment for Azure is, for HA, you use the ALB front-ending it, which has a public IP address. You can deploy the ADC from the Azure Marketplace — commercial, and government as well — and we have both
active and passive nodes. This is very important: if you have an HA scenario — say you're a Citrix Apps and Desktops customer on-prem and you want to bring that into the public cloud — then you need to use the ALB for HA. In the case of stand-alone deployments you don't need the ALB; that's an important point to note. For both stand-alone and high availability we have templates — the Azure Resource Manager (ARM) templates — available, where you can pick either
an availability set or an availability zone. The difference between them: an availability set is in the same data center on different racks, and it gives you 99.95% availability; a multi-zone deployment is across different data centers. If you look, there are 54 regions — probably 56 by now — for Azure; you have availability sets everywhere, but availability zones are only in about eight of them. Where you can, it's good to go for availability zones. There's one more difference over here: the ALB SKUs available. The Basic LB is free, but in the case of availability zones
you need to use the Standard LB. Now, the important thing I want to talk about is elasticity in public clouds. Here is back-end autoscaling. We always talk about elasticity in the cloud, and by elasticity we mean you need to be able to scale your back-end servers behind the Citrix ADC. When I talk about back-end autoscale, it's basically an inherent capability of the cloud itself: in the case of Azure it's done through the VM scale set, and in the case of AWS it's the Auto Scaling group.
To talk through it: you set the properties on the VM scale set — how many servers to start with and how far you want to extend, based on the traffic being generated by your users. So you can say the threshold is 70% CPU on the web servers: above that, you scale out; if it comes back below the threshold, you scale in. That's the default architecture — let me talk you through it in the next slides. And one more kind of elasticity we offer on Azure and AWS is autoscale of the ADC tier itself. Autoscale
tackles both the front-end autoscaling — which is the autoscaling of your Citrix ADCs — and the back-end autoscaling, which is done by the VM scale set in Azure and the Auto Scaling group in AWS. The important components on the elasticity piece, for the front-end autoscale, are Azure Traffic Manager, the LB, and Citrix ADM. You might have heard of Citrix ADM — a single pane of glass where you can manage all your Citrix resources, in particular your Citrix ADCs and services — and as the traffic is generated, you can have SSL Insight and analytics sized as needed.
Everything is built into Citrix ADM. I talked about AWS — I want to show the AWS architecture over here, because the same thing has two components in AWS: Route 53 and the ALB, with Citrix ADM there as well. So you can see Citrix ADM is the single pane of glass, the control plane, where you go and configure all of these things on both Azure and AWS. Let me talk about how this happens — what back-end autoscaling and front-end autoscaling look like here, for AWS and Azure.
As I said, in Azure it's called a VM scale set, and in AWS it's called the Auto Scaling group. Whenever traffic comes in, first the web servers will reach their potential — you'll see the spike on the web servers, because the NetScaler, our Citrix ADC, is capable of handling that traffic. So whenever the web servers' CPU spikes, what happens is that a new web server is spun up by the VM scale set itself. Then there is polling happening from the NetScaler to the VM scale set, and it detects whether a
new server has been deployed. If a new server has been deployed, it takes the IP of that particular web server and adds it to the service group on the NetScaler. It's all automatic. Then the next step is scale-in. Let's say the threshold decreases below forty percent. What happens then is that the new web server that was created before gets killed — it's no longer needed — and there's a connection-draining property we honor: we wait for Azure or AWS to tell us that this
server is no longer needed — "I'm going to remove it from my list, please do also" — so we'll wait for AWS to do the draining first, and then we'll remove it from our service group. That's the typical workflow. If you want to set this up — and this is available on both AWS and Azure — you have certain prerequisites, from both the Citrix ADM side and the Azure or AWS side. In the case of Azure, where we start with the ALB, you have to create a VNet; in the case of AWS we have a nice template available
where you can just plug in values to create the VPC, the security group, everything. That's the primary requirement. Also, from the Citrix ADM side, there has to be connectivity from ADM to Azure or AWS — we call this a cloud access profile — so you need to create that one. Once that is created, you go on with creating the autoscale group. I have a demo for this; after finishing I'll show a demo of how this works. The basic principle is that after everything is created and the traffic flows in, first the back-end
autoscale scale-out will trigger — once there are more web servers at that point, they'll be detected and added to the service group. If the traffic keeps surpassing thresholds, the front-end autoscale will trigger, and the ADC cluster will be expanded with one more node. One more thing I want to point out here: we can go up to a maximum of 32 nodes in a cluster — the same as what we can do on-prem, 32 nodes. Coming to deployments: we've talked to a lot of customers, and a lot of people are moving to both AWS and Azure.
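The back-end autoscale loop described above boils down to two pieces: a hysteresis decision on average CPU (the 70% scale-out / 40% scale-in thresholds from the talk), and a sync step that mirrors the scale set's membership into the ADC service group. A minimal sketch — all names and IPs here are illustrative; in the real product this is done by ADM polling the VM scale set / Auto Scaling group APIs:

```python
SCALE_OUT_CPU = 70.0   # scale out above this average back-end CPU (%)
SCALE_IN_CPU = 40.0    # scale in below this

def autoscale_decision(avg_cpu):
    """Hysteresis band so the group doesn't flap between sizes."""
    if avg_cpu > SCALE_OUT_CPU:
        return "scale_out"
    if avg_cpu < SCALE_IN_CPU:
        return "scale_in"
    return "steady"

def sync_service_group(service_group, scale_set_ips):
    """Mirror the scale set's current membership into the ADC service group:
    newly launched servers are bound, drained servers are removed."""
    added = sorted(set(scale_set_ips) - service_group)
    removed = sorted(service_group - set(scale_set_ips))
    service_group |= set(added)
    service_group -= set(removed)
    return added, removed

group = {"10.0.1.4", "10.0.1.5"}
# The cloud reports one new instance, and one instance gone after draining:
added, removed = sync_service_group(group, ["10.0.1.5", "10.0.1.6"])
print(autoscale_decision(85.0), added, removed)
# scale_out ['10.0.1.6'] ['10.0.1.4']
```

The hysteresis gap between the two thresholds is what prevents a server that was just added from being immediately removed when the average dips.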
I've got sample deployments for both. If you're a Citrix Apps and Desktops customer, this is your typical deployment, where you have StoreFront in the cloud and you have offices in different regions — let's say Azure West and Azure East, to capture both the West Coast and the East Coast. Here you can use a GSLB solution. This is active-passive: you're using the Azure load balancing on the front end, and you're using the capability of the VPX — its GSLB capabilities — to load-balance this across multiple regions.
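The active-passive GSLB pattern just described can be sketched as a tiny decision function: answer DNS queries with the active region's VIP unless its health monitor says it is down, then fall through to the passive region. Region names and addresses below are made up:

```python
def gslb_answer(regions):
    """Active-passive GSLB: return the first healthy region's VIP,
    walking the list in priority order."""
    for name, vip, healthy in regions:
        if healthy:
            return name, vip
    raise RuntimeError("no healthy region")

regions = [
    ("azure-west", "52.0.0.10", True),   # active
    ("azure-east", "52.0.0.20", True),   # passive backup
]
print(gslb_answer(regions))                       # ('azure-west', '52.0.0.10')
regions[0] = ("azure-west", "52.0.0.10", False)   # active region fails
print(gslb_answer(regions))                       # ('azure-east', '52.0.0.20')
```

In the real deployment the "healthy" flag is driven by the ADC's GSLB monitors probing each region's VIP.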
And let me show the AWS deployment. In this deployment you have the same thing: inside the VPC you have Active Directory, the Delivery Controller, StoreFront — everything is there — or you can use Citrix Cloud for that. If you do that, you make sure the Citrix Cloud Connector agent is running on-prem or in the cloud, polling all those Active Directory details, and for your authentication needs you make sure there's a VPN gateway available that users can reach. Or there's also the possibility that
you can use the Citrix Gateway service, which is available in Citrix Cloud. Ultimately it depends on where your VDAs will be — they can be on-prem or in the cloud, wherever you want to do that — but this is the way you can have an ADC pair in AWS to load-balance, with the end user hitting that VIP and accessing your Apps and Desktops applications. One more deployment point I was talking about before: we have the ARM templates and the CloudFormation templates, and I just want to quickly go over that
one. Here in Azure, by the way, we recently released 13.0 — I'm still showing 12.1 over here, but two weeks ago there was a new build, 13.0, at least made available to all users. Here in Azure you log on to the portal, and then there's a "Create a resource" button; you search for Citrix ADC 12.1, and for high availability they offer these different options for you to easily create the whole environment that you need: availability
set, availability zone, the back-end autoscaling which I just explained. It'll ask you a few questions and it'll create the whole set of instances for you on Azure. Super easy — it takes about four minutes to deploy the whole infrastructure. The one thing that we do not do — and are not supposed to do — is expose a public-facing management interface. Basically, we do not do that, because generally our typical customers do not have a public-facing management IP available; they're using ExpressRoute or Direct
Connect instead. Greg will talk more about the deployment, but this template gets the whole deployment set up for you. In the case of AWS, we also have templates — the CloudFormation templates — and we make sure these templates are regularly developed and kept up to date, and they're all available in a GitHub repository. About ten days ago we updated the internal-HA one, asked for by many customers: "I understand that right now your template wants a public IP address for the ALB, right? We do that
for the HA pair setup, but my requirement is that we don't use a public IP on the LB — we have an ExpressRoute between on-prem and the cloud, and I just want to use a private IP for my ALB." So we immediately did this thing, which is the internal-HA template — a purely internal deployment where nothing uses a public IP. If you can give us feedback, or if you want to develop more, it's quite easy to put things in place in this GitHub repository, and you guys can also contribute to it. It's very easy to design templates, and it saves a lot of
time for you. I have done it myself the manual way, following Citrix docs, and doing that takes an hour to deploy all of these things; if you do it with the template, it's just four minutes. And the same thing is true for AWS — we have all the templates available over here and you can use those. And let me give this to Greg. All right, so that all sounds really cool, right? How many people are actually using any of this? Oh — you get the
shirt! Are more people going to grow into this? I mean, that's the goal. We're starting to see more and more people finding that their on-prem is doing fine, but they need to expand, and their management structure is all cloud, cloud, cloud. So they're playing with the clouds — they're starting to place things in the cloud, and it's becoming a reality. I bet if I asked this exact same group the same question next year, I'd probably get a different answer.
I'd probably get more hands raised — and some hands that raised today might actually go down. It's pretty funny how the cloud evolution works. So let's talk, from a customer standpoint, about how this is working and what it kind of looks like for people. So what does a migration from on-prem to cloud look like? "Migration" is the wrong word here — it's one slide I forgot to edit. No one is migrating their entire workload from on-prem to the cloud right now. I mean, maybe someone — AWS and Azure certainly want people to, desperately, so they're coming to us and asking us to
help them figure out how to do entire workload movement. What we actually see is people moving partial things to the cloud: the spinning up of a new website, probably by a DevOps community inside their company that they don't even know about, that's running Kubernetes in the cloud and spinning resources up out there. They find out about it later, and then they've got to catch up and figure out how to integrate with those Kubernetes environments. We've got some great sessions on that here, by the way — keep an eye out for them. So this slide is making an assumption that you've moved to pooled
capacity. Fun question: how many people in here are using pooled capacity on their NetScalers? Sweet. How many people in here actually have NetScalers? Thank you. All right — arms work, and people can raise them. Thanks. Reality check number two: over the next couple of years, I have a feeling most of you are going to be moving to a pooled capacity model. It's much more dynamic, it's actually easier to license your boxes with, and a whole bunch of other stuff. It has its challenges,
but the benefits outweigh the challenges, and once you get everything deployed and everything working, the next move like this is going to be dramatically easier. So: we started out with the zero-capacity hardware running on-prem with some capacity checked out onto it — I don't know why that just disappeared off the slide, but we'll call it good. We've got ADM running on top of all this, which is necessary as the licensing server in the first place, but it's also keeping things going on both sides of the infrastructure. In this case we've thrown a few hundred gigs on-prem and 50 gigs up to the
cloud, so that's our starting point. But then we've decided: hey, the cloud experience is actually going really well, we're having a good time, it's actually lowering some costs, we were able to reduce some server infrastructure — maybe we didn't have to refresh a bunch of server hardware we were planning to — so it's good, and we move even more up to the cloud. The implication here looks like we've gotten rid of everything on-prem, but that's not the point. The point is it was all portable. It was very easy for me to take some capacity off of on-prem and just move it to the
cloud through the licensing server: all I had to do was tell it to do it. I didn't have to go up to the Citrix licensing site, pull down a license, put in my MAC address, and all that other fun stuff that goes along with it. And it's even less fun when you're spinning up, you know, 150 VPXs in the cloud and you have to license them — not a lot of fun. Pooled licensing really is the only thing that makes this all possible. Now, the other cool thing: that last example was using one version of pooled licensing — the instance-plus-bandwidth method. So for every VPX I deploy
— every VPX, on an SDX, or even an MPX — I have to deploy an instance license, which tells it it can be an ADC, and then I have to deploy throughput to it, which tells it how much capacity that individual instance has. Now, what's the new model — the recommended model for cloud services at the end of the day? It's honestly vCPU. From 13.0, the current build, onward, you can now do vCPUs. One of the things I will say we owe you is: what does a vCPU mean to me? You
know — how much throughput am I going to get out of one vCPU? How many requests per second? How much SSL? How much bulk? All of those numbers — we need to go grab a DL580, run some tests on it, and come up with numbers, so that you guys know: if I deploy one vCPU, this is roughly my expected capacity. The nice thing is, with vCPUs you can multiply your capacity just like any other VPX by adding CPUs. What you don't have now is an artificial bandwidth limiter or anything like that — it will just do however much that virtual CPU can do.
A note on that, since BLX was announced yesterday: we would need to do something similar there in terms of sizing each of the CPUs. The nice thing is it's all still check-in/check-out. You know, right now, if you're deploying instances and bandwidth, the more bandwidth you deploy, the more vCPUs actually get deployed — but that's an algorithm being applied; it's not actually distinct. With vCPU licensing, you literally know how many cores are taken up in an environment. It's a known, and sometimes it's nice to have a known instead of something ephemeral.
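The check-in/check-out model described above can be sketched as a shared pool of entitlements that running instances draw from and return when they're torn down — which is exactly what makes moving capacity between on-prem and cloud a bookkeeping operation rather than a relicensing exercise. The class below is a toy illustration, not ADM's actual licensing API:

```python
class LicensePool:
    """Toy pooled-capacity ledger: a fixed number of vCPU entitlements
    that running ADC instances check out and check back in."""
    def __init__(self, total_vcpus):
        self.total = total_vcpus
        self.checked_out = {}          # instance name -> vCPUs held

    def check_out(self, instance, vcpus):
        if vcpus > self.available():
            raise ValueError("pool exhausted")
        self.checked_out[instance] = self.checked_out.get(instance, 0) + vcpus

    def check_in(self, instance):
        """Return an instance's entitlements to the pool."""
        return self.checked_out.pop(instance, 0)

    def available(self):
        return self.total - sum(self.checked_out.values())

pool = LicensePool(total_vcpus=32)
pool.check_out("onprem-vpx-1", 8)
pool.check_out("aws-vpx-1", 4)
print(pool.available())         # 20
pool.check_in("onprem-vpx-1")   # retire on-prem capacity...
pool.check_out("aws-vpx-2", 16) # ...and redeploy (and grow) it in the cloud
print(pool.available())         # 12
```

No per-box license files, no MAC addresses — the pool only tracks who currently holds how much.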
And yeah — you check capacity out just like you would today; it's just a slightly different licensing model. So pooled licensing evolution is really what this is, for the cloud. Right — the other cool thing. How many people out here use Terraform? That's okay, don't worry. Terraform is just a programmatic way to deploy and to configure the NetScaler. As you can see, if you go up here, we have a few Terraform templates up there. These are supported templates — they're being maintained by the NetScaler
development team. These templates can also be requested to be changed, or added to, or anything else; you've just got to make a request through your SE team to get anything added. It's all up on GitHub, available to anyone — just go to GitHub and it's there. Same thing for Ansible. Probably more people are using Ansible — much more; I kind of figured that was the case, actually. I love this group — obviously we're the supported-tools crowd. Ansible, same thing: if you've got requests for Ansible, please send them to us. Quite frankly, if you guys are
creating your own Ansible modules and want to be part of a community, supply your Ansible modules to us and let us know — we want to engage more with the dev community and with the customers to make things even better; that's kind of the goal. So how do we look when we actually take automation and move into a public cloud shift? In this case, we've got an on-premises system, and then we've deployed a cloud system. I've got end users connecting to both — kind of your typical hybrid multi-cloud-looking deployment — and I'm maintaining an umbilical between on-
prem and the VPC or the availability zone, through Direct Connect or ExpressRoute, whatever it happens to be. So here's what they decided to do — this is an actual customer deployment; I'm not just making things up, these have come out of customer deployments. They're using Ansible to talk to ADM, and what ADM is doing is taking the language Ansible has written, injecting that configuration into itself, creating StyleBooks, and sending them out to the cloud VPXs or the on-prem VPXs for config changes. The nice thing about that is your DevOps team, who doesn't
necessarily know NetScaler, can just configure things using Ansible, in a syntax more friendly to them; they send it to ADM, ADM does the translation, builds the StyleBook, sends it out. It's all beautiful. I think we do make it sound a lot easier than it sometimes is — and that's why I'm talking about engagement with the community, more and more. We want to make this easy for you all, and we want to make it understandable for everyone; that's the final goal. And again, it requires us all talking — you guys talking to your SEs and your sales managers — and coming up with suggestions.
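Under the covers, both the Ansible modules and the ADM StyleBooks ultimately drive the ADC's NITRO REST API. As a rough sketch of the kind of payload that pipeline generates — the resource and field names below are from memory and should be verified against the NITRO reference; nothing is actually sent over the network here:

```python
import json

def lbvserver_payload(name, vip, port=80, servicetype="HTTP"):
    """Build the JSON body that an 'add lb vserver' NITRO call would carry.
    In a real deployment the tooling POSTs this to something like
    https://<nsip>/nitro/v1/config/lbvserver on the target ADC."""
    return {"lbvserver": {
        "name": name,
        "servicetype": servicetype,
        "ipv46": vip,
        "port": port,
    }}

# The DevOps intent "give me an HTTP virtual server on 10.0.2.100"
# becomes a small, declarative JSON document:
body = lbvserver_payload("web-lb", "10.0.2.100")
print(json.dumps(body, sort_keys=True))
```

This is the translation step ADM is doing for you: friendly automation syntax in, NITRO-shaped configuration out.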
So, what you see here is that everything is automated: no one is logging into a NetScaler in this environment, and that is a goal of future NetScaler environments. You will never log into a NetScaler again — except maybe during an initial deployment of an MPX or something. Everything should be going through ADM; ADM is now your gateway to everything. So, the cool thing about this diagram: we had a little scenario where, in AWS and Azure, the cloud load balancers were given a domain name as a reference, but the IPs are dynamic, so they
can change. So what we had to do was come up with a solution that could dynamically handle those changes and make sure that availability was still there for the end users connecting to the site. In this case they have on-prem and two cloud deployments. And if you look, we're not actually the load balancer in this solution up in the cloud — they're using the LB. Actually, I love how we say "LB": the reality is they're using "LB" on both sides, because Microsoft and Amazon decided to use the same acronym for their load balancers — that's just the way they work.
Sometimes it'll be an ELB, an ALB, or Azure's LB, but either way that's the actual load balancer. So in this situation, what the NetScaler ADCs and the GSLB are doing in all the locations is really handling the dynamics of those IP changes and making sure there are no DNS blackouts — nothing's getting lost. So when a Citrix ADC in the cloud gets a request for a DNS resolution, it has the most up-to-date IP address that is owned by the LB. The other side of it is that the LB is auto-scaling, so as it auto-scales, new addresses are
getting added to the NetScaler's GSLB dynamically, and it's still maintaining that between the sites and maintaining all of those new A records coming in. So what this really is, is GSLB that's cloud-native, and it helps in this case to simplify hybrid multi-cloud for the customer. If they tried to do this without NetScalers up in the GSLB locations, it would just be a nightmare trying to handle all those DNS changes — how do you update your DNS infrastructure to make sure it's all current? We took care of that for them as part of the solution.
This was actually an RFE that was completed — in 13.0? Yeah, a really recent thing; I found out about it this morning. But it's there, and it's cool. If you're not super busy in the cloud right now, you don't necessarily understand why this is important: it's very expensive to have a static IP in the cloud. What this allows you to do — it's actually cheaper to run the NetScaler GSLB device and use non-static IPs than it is to have a static IP and run standard DNS. So the end use for this was really just to save money.
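The dynamic-IP problem described above boils down to this: the cloud load balancer is stable by *name*, not by address, so the GSLB tier has to re-resolve that name and rewrite its records whenever the answer changes. A sketch with a fake resolver standing in for DNS (the hostname is invented):

```python
def refresh_gslb_target(hostname, current_ip, resolve):
    """Re-resolve the cloud LB's hostname; report whether the GSLB
    service record needs to be rewritten with a new address."""
    new_ip = resolve(hostname)
    return new_ip, new_ip != current_ip

# Fake resolver: the cloud silently reassigns the LB's address on poll 3.
answers = iter(["34.1.1.10", "34.1.1.10", "34.9.9.77"])
resolve = lambda host: next(answers)

ip, changed = refresh_gslb_target("app.example-lb.amazonaws.com", None, resolve)
print(ip, changed)   # 34.1.1.10 True   (first sighting -> create record)
ip, changed = refresh_gslb_target("app.example-lb.amazonaws.com", ip, resolve)
print(ip, changed)   # 34.1.1.10 False  (no change)
ip, changed = refresh_gslb_target("app.example-lb.amazonaws.com", ip, resolve)
print(ip, changed)   # 34.9.9.77 True   (cloud moved the LB -> update record)
```

Run this loop on a timer and the GSLB answer stays correct without ever paying for a static IP — which is the cost argument from the talk.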
Now, a more interesting deployment — something that is becoming more common, and should become more common. This is going to be sort of an image of what everyone's hybrid multi-cloud deployments will likely end up looking like over the next few years: you'll have your on-prem and your clouds, and now you have to distribute the traffic. So here's the real fun question: how many people in here are familiar with ITM? And — don't look at the VP — how many people are familiar with Cedexis?
More hands came up for the old name — that's funny. It's interesting how that works, right? So we purchased a really awesome company named Cedexis and got hold of their really awesome GSLB-style solution. It's a hosted DNS service, and it's also a massive amount of real-user data that helps direct traffic properly around the internet — unlike our typical GSLB, which is kind of dumb: it knows who your LDNS server is, and that's about it. That doesn't mean we don't still use GSLB — there are
very good use cases for GSLB. GSLB will always know what's happening on the load balancer and behind it; ITM knows what's happening from the load balancer to the client. Use them together, and all of a sudden ITM becomes aware of what your server capacity is at a given site, so it can actually down-rank a site if servers start dropping like flies. At the same time, ITM is very aware of what's happening on the internet. Here's an example: I live in Renton, Washington, I'm on Comcast internet, and my neighbor and I are both
accessing some of the same stuff. ITM knows what my experience was to different places on the internet, and ITM knows what the experience was for every single person in my neighborhood already. It doesn't matter where in the world you are — all that data has been collected for the major cloud providers already, from every geolocation. Really freaking cool.
The thing is, once ITM is in place and it starts doing things, instead of GSLB having to make a fresh decision, it already knows — so users get the best experience from the start. Where we've deployed it in front of websites, we're seeing a rough worldwide reduction in latency — customers who deploy this can see up to a 50% reduction in worldwide latency for their sites — because we stop doing stupid things with DNS. Like: I live in Seattle, and my GSLB solution decides to send me to Singapore for
DNS resolution. That's a one-second DNS resolution. Not a good experience — I haven't even gotten to the website yet and I've spent a second waiting. I don't know about you guys, but I start hitting F5 at that point, and now my website is down, because we were doing stupid things with DNS resolution instead of doing everything intelligently. If you haven't gotten the idea here: go ask your sales teams about ITM. Yeah — and apparently I'm going to have a talk with your sales teams. What's
up with that? They're supposed to know this — we've done a lot of training on ITM. I mean, there's some deeper stuff that they may not understand, but the deeper stuff isn't relevant right at the start. Right from the start, for free, you guys can all go log onto the page and get a heat map of what the internet looks like around the world based on a selection of, say, Azure data centers, or AWS data centers, and you can go: "Oh, okay — going into Europe I don't have the best
experience, so I'm going to open a data center in Europe," and that's going to change the heat map to show you what the average user experience would be. This is live, real-time data that you're looking at. You can then see what it looks like with geo-based GSLB — you know, your standard everyday approach — and then you can turn on the ITM tag and watch the heat map get even cooler. And it's real data — that's the best part about it; there's nothing made up about the whole thing. Where should we go with this? Next slide.
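The ITM idea in miniature: instead of picking a site by the client's geography, pick it by measured real-user latency from the client's network. The measurement table below is entirely invented, just to show the selection logic:

```python
# Median real-user latency (ms) observed from each client network to each
# candidate site -- invented numbers for illustration only:
RUM = {
    "comcast-seattle": {"azure-west-us": 18,  "aws-us-east-1": 74, "azure-singapore": 190},
    "bt-london":       {"azure-west-us": 145, "aws-us-east-1": 88, "azure-singapore": 210},
}

def itm_pick(client_network, sites):
    """Choose the site with the lowest measured latency for this client
    network; fall back to the first listed site if we have no data."""
    scores = RUM.get(client_network)
    if not scores:
        return sites[0]
    return min(sites, key=lambda s: scores.get(s, float("inf")))

sites = ["azure-west-us", "aws-us-east-1", "azure-singapore"]
print(itm_pick("comcast-seattle", sites))   # azure-west-us
print(itm_pick("bt-london", sites))         # aws-us-east-1
```

Because the table is built from everyone's measurements, a brand-new client on a known network gets the good answer on its very first request — no learning period, and no "Seattle user sent to Singapore" mistakes.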
slide is kind of boring. I was trying to take up there to nighttime. Thank you. So you can see there's a lot of documentation is that we've done at automation is that are in get have to definitely look at all of that information and my ass for you guys to is is to provide in a positive reviews. And is your of your experience in a device and I sure cuz we could really use some of that information and knowledge in there as well as we've actually in a device alone. We have over several thousands of customers and a Niger because of our
partnership with with with with Microsoft we have over 5,000. So so any of those information that you know of anyone is share some of that positive Express Please provide the reviews which he will look at in the regular basis to help figure out how to move workloads better. So we're working with with AWS and Azure to actually create templates to take an entire Citrix infrastructure. And in relatively few clicks to play that entire infrastructure to the
cloud with the Gateway and everything else already working connected to active directory on pram. That's that's in process. That's going to be coming very soon. So people who are hesitant to move to the cloud because of how much of a pain it is to actually deploy up there right now that's going to get made easy. And I think we'll see a lot more people adopting when that happened yet Play books reading Play books with a the West at as a partner to help with hybrid multi Cloud solution some of that migration for a Zeus cases as well as for the
XenApp and XenDesktop use cases as well. So one thing I want to highlight is the recommended deployment scenario with clustering. Clustering is actually a key feature in autoscale; that's really our differentiation, and as you know, we've had autoscale for many, many years. An important piece of autoscale is the max number of nodes in a cluster, and that has to do with the routing: yep, 32 is the max number of nodes, tied to the ECMP group, and that's a differentiation as well. A lot of this I really want to highlight in the key takeaways; these are the ones, and we have plenty of documentation for you. And before you leave, make sure you do the survey. We need to be rock stars in order to come back next year, so five stars only. Okay, I like this. I know you made the comment about NetScaler. Good, five stars.
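To make the clustering discussion concrete, here's a minimal sketch of bringing up a NetScaler cluster with a cluster IP (CLIP) from the CLI. The instance ID, addresses, and backplane interface below are hypothetical placeholders, so treat this as an outline rather than a tested recipe and check the clustering documentation for your build.

```
> add cluster instance 1
> add cluster node 0 10.0.1.10 -state ACTIVE -backplane 0/1/1
> enable cluster instance 1
> add ns ip 10.0.1.100 255.255.255.0 -type CLIP
> save ns config

(then, from each additional node, up to the 32-node limit:)
> join cluster -clip 10.0.1.100 -password <nsroot-password>
```

Once nodes have joined, configuration is done against the CLIP and traffic is striped across the nodes; the 32-node ceiling lines up with the ECMP-related routing limit discussed in the session.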
Rate this session, please; I think you all have the app. And then tweet about the session if you like, as well as about Synergy in general. We saw a lot of that in the keynote, and there are all the rest of the sessions available for you. Cloud-native was a big hit yesterday as well as today, so hit as many of those as you can, and obviously there are more sessions tomorrow. I even have a session happening at the same time right now on ITM, and that's why we only get half of the people here, but it's good that you're all here today.

Good question: with GSLB, how does the NetScaler know which back-end application site is closer to you? What GSLB is going to do is look at your LDNS server, and from that LDNS server's IP address do a geo-IP lookup. So assuming you're co-located with your LDNS server, you should get a resolution to the site closest to you. The DNS resolution can happen anywhere in the world: if you have 50 NetScalers participating in the GSLB cluster, you're going to get an answer from one of those 50, but the answer that comes back, assuming geo-IP is being used, should be the NetScaler closest to your LDNS's IP. This is where the problem is: if your LDNS happens to be two thousand miles away from you, you're going to go to the site that's two thousand miles away from you. What ITM does is take all of that out of the equation. It knows the actual user's location based on the tag information that we're sending, so it will send you to the site the actual user is closest to. And it doesn't necessarily send you to the closest site; it's going to send you to the fastest site. Maybe the fastest site happens to be on the East Coast because there's congestion on the West Coast, or the fastest site is in Singapore because there's congestion in Japan or China. It's going to send you to that fastest site regardless of your location, and that's the
advantage of ITM. There's much more to it, but we condensed it into the slides so you can download them. Question: say you have a data center, you've introduced cloud, and you want to do primarily load balancing to your primary data center but slowly drive traffic to your cloud services. How would you do that? You do it through percentage-based distribution, and it could be public or private. So you would say, I want 90% of my traffic here and 10% of my traffic there, so 90 out of a hundred A records would be one site, and then you just start shifting those percentages until you get a hundred percent somewhere else. Very easy to do. One of the nice things about DNS is that it's very mutable; you can change and manipulate things very easily. Across two different cloud platforms? Exactly. And if we get a video of our canary stuff, you should go look at that, because it might actually help with that too. Thank you.
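Tying those two answers together, here's a hedged sketch of what a geo-proximity GSLB config and its percentage-based (weighted round robin) variant can look like on a NetScaler. The site names, addresses, and domain are made up for illustration; static proximity additionally needs a geo location database loaded on the box, and exact option names can vary by release.

```
(geo-based answers: pick the site closest to the LDNS resolver's IP)
> add gslb site site_dc 203.0.113.10
> add gslb site site_cloud 198.51.100.10
> add gslb service svc_dc 203.0.113.20 HTTP 80 -siteName site_dc
> add gslb service svc_cloud 198.51.100.20 HTTP 80 -siteName site_cloud
> add gslb vserver gslb_vs_app HTTP -lbMethod STATICPROXIMITY
> bind gslb vserver gslb_vs_app -serviceName svc_dc
> bind gslb vserver gslb_vs_app -serviceName svc_cloud
> bind gslb vserver gslb_vs_app -domainName app.example.com -TTL 30

(percentage-based migration instead: roughly 90 of every 100 DNS answers
go to the data center; shift the weights over time to drain to the cloud)
> set gslb vserver gslb_vs_app -lbMethod ROUNDROBIN
> set gslb vserver gslb_vs_app -serviceName svc_dc -weight 90
> set gslb vserver gslb_vs_app -serviceName svc_cloud -weight 10
```

Because the steering happens in DNS answers, the cutover is transparent to clients, which is what makes the canary-style, percentage-based migration described here so easy to adjust.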