About the talk
Amazon EC2 provides resizable compute capacity in the cloud and makes web-scale computing easier. Learn about the variety of compute instances well-suited for every imaginable use case, and discover how to scale your compute capacity while maintaining the lowest total cost of ownership (TCO) by blending Amazon EC2 Spot, On-Demand, and Reserved Instances or a Savings Plans purchase model. In this session, we'll cover the latest Amazon EC2 capabilities and how you can do more for less by pairing Amazon EC2 with other AWS services and features, such as Amazon EC2 Auto Scaling, Amazon ECS, Amazon EKS, Amazon EMR, and AWS Batch.
Learn more about AWS at - https://amzn.to/2B5k6FK
More AWS videos http://bit.ly/2O3zS75
More AWS events videos http://bit.ly/316g9t4
#AWS #AWSSummit #AWSEvents
My name is Matt Thompson, and this is the Getting the Most Out of Amazon EC2 session. I'm actually the global head of EC2 Spot, which we will be talking about in this session, so you can assume I put a little extra effort into that part. We've got a bunch of content to get through, so we're just going to start right away. First, we're going to give you a general overview of EC2 instances and how far they've come. Second, we're going to talk a little bit about how to optimize your EC2 capacity, usage, and pricing. And third, we're going to give you some real-world examples in which to practice that optimization, with some light architecture slides.

Jumping right in with a little history: Amazon EC2 launched about 14 years ago, in 2006. We offered only one instance size and type, the M1 instance, with just one vCPU and 1.7 GB of system memory. The most important thing we did at the time was allow you to pay only for what you used, which meant you could scale up and scale down as needed without incurring extra cost along the way: the On-Demand model. Fast forward 14 years to 2020, and we have the broadest and deepest platform of choice. We'll have almost three hundred different instance types by the end of 2020, 275-plus today. What that means for you as a customer is that we can support virtually any workload or business need you have, and we can deliver better performance for the specific workloads you run. We've come a long way in those 14 years.
We're going to talk a lot about instances in the next 30 minutes, so I want to spend a little time on the taxonomy of an instance name so you can familiarize yourself with it. Every instance name starts with a family, just like the M1 I mentioned at the very outset, and the family tells you the basic type; in this case the family is M, a general-purpose compute series, followed by the generation number. We also have this little thing called additional capabilities. In this case, that little "d", for example, means the instance has instance storage physically attached to the host server; there are a bunch of others, like "n", et cetera, but in this case it's a "d". Then, importantly, there's the T-shirt size, like "xlarge". In this case an xlarge has four vCPUs, or virtual CPUs, associated with it, and each T-shirt size you go up doubles the vCPUs. So, for example, a 2xlarge has eight vCPUs and a 4xlarge has sixteen vCPUs. So that's family, capabilities, and size, just in case you get lost along the way.
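The naming scheme described above can be sketched in a few lines of code. This is not an official AWS tool; the parsing rules and the 4-vCPU xlarge baseline for the M family are assumptions taken from the example in the talk:

```python
# Sketch of the EC2 instance-name taxonomy: family + generation,
# optional capability letters (like "d" for instance storage),
# then the "T-shirt" size. Assumes the family's xlarge has 4 vCPUs.

def vcpus_for_size(size: str, xlarge_vcpus: int = 4) -> int:
    """Apply the doubling rule: each size step above xlarge doubles vCPUs."""
    prefix = size.removesuffix("xlarge")   # "2xlarge" -> "2", "xlarge" -> ""
    return xlarge_vcpus * int(prefix or "1")

def parse_instance_type(name: str) -> dict:
    """Split a name like 'm5d.2xlarge' into family, capabilities, size."""
    family_part, size = name.split(".")
    return {
        "family": family_part[0],        # e.g. 'm' (general purpose)
        "generation": family_part[1],    # e.g. '5'
        "capabilities": family_part[2:], # e.g. 'd' (instance storage)
        "size": size,
        "vcpus": vcpus_for_size(size),
    }

print(parse_instance_type("m5d.xlarge")["vcpus"])   # 4
print(parse_instance_type("m5d.2xlarge")["vcpus"])  # 8
print(parse_instance_type("m5d.4xlarge")["vcpus"])  # 16
```

So `m5d.4xlarge` reads as: M family, generation 5, with instance storage, at 16 vCPUs.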
Let's go really quickly through our instance types. General purpose: these are great for things like web applications and small databases; they're the classic general-purpose types. Then we have memory-optimized instance types, and those are great for workloads that process large amounts of data in memory, like NoSQL databases such as MongoDB, for example. And then we have compute-optimized instance types. These have high-performance processors, more or less, that are great for batch-processing workloads, ad serving, video transcoding, all of that. I want to stop there on instance types (we have more) because those are underpinned by the choice of processors we have today. Alongside our Intel-based instance types, we moved into some AMD processor options as well. At AWS re:Invent 2018 we announced our first Graviton processor, which was introduced with the A1 instance type, and at re:Invent 2019 we announced the Graviton2 processor, a big performance increase over the first-generation Graviton.

So what does that mean in terms of what you're going to do with these? What it means for us today is that these Graviton2 processors will go into general-purpose, compute-optimized, and memory-optimized instance types. What you'll see from us soon in 2020 is the release of the M, C, and R series in their sixth generation, with Graviton2 underpinning them: the m6g, the c6g, et cetera. I won't go too deep into these industry benchmarks, but the orange bars are the m6g instance type, and the comparison is the M5, so we're just comparing our last-generation instance type to the newer generation. The point is that these instances perform about 40% better, and what that means for you as a customer is better price performance.

Just to complete our instance-type journey, let's talk about storage-optimized instances. These are designed for heavy operations against local storage, and large data sets, for example, are great for these. Also very popular today are our accelerated computing instances. These are usually backed by graphics processing units, GPUs. When you think about floating-point calculations in finance workloads, or high-performance computing workloads like computational fluid dynamics, these are custom built for those workloads and very popular. Something we also just announced is our Inferentia processor, which is very specifically tailored for inference, or ML prediction. The main thing you need to know about this instance type is that you get significantly lower cost per inference than any other Amazon EC2 GPU instance, because these are purpose-built for prediction.

I won't spend a lot of time on this slide, but I think it's important to know that everything we do is backed by our Nitro System, which is proprietary. We began building it around 2013 and released the first instance types on this platform in 2017, rethinking the entire virtualization architecture. What that means for you as a customer is quicker innovation from AWS, better security with the Nitro security chip, and better performance on a server-by-server basis, because we offload a lot of the management to dedicated hardware, leaving you a fully functional server. And of course, if we talk about instance types, we have to talk about EBS volumes, which is storage: block storage has always been attachable to EC2 instances, with great performance.
Customers run SAP on top of our block storage; EBS volumes are very reliable, and there are several EBS volume types that are good for different kinds of workloads. Basically, SSD is great for transactional databases or enterprise applications, whereas HDD is very good for things like log processing and big data. And I would be remiss if I didn't mention security with EBS. I'll talk about this through another re:Invent announcement from last time: default encryption for your EBS volumes. So if you're looking for a very secure storage method, EBS is very secure these days, because we added default encryption, where you can bring either an AWS-provided key or a key that you create yourself.

So, we just spent a bunch of time on how AWS thinks about accommodating customer workloads and business scenarios, but we also don't like it when customers are not very optimized in how they use their EC2. So we provide a bunch of different pricing models, capacity features if you will, optimization features, and guidance to help you along the way. What we're going to do with the remainder of this presentation is take the breadth and depth of our technology portfolio and try to help you figure out how to optimize it. Let's go into pricing models first.

There are four different pricing models overall, four ways to purchase EC2. We talked about On-Demand at the outset: On-Demand was our initial way to purchase EC2 instances, and you pay for however much you utilize. With On-Demand, you spin up and spin down with no extra commitment. Reserved Instances give you discounts off On-Demand in exchange for a one- or three-year commitment to purchasing that instance. These are great for steady-state workloads, ones with consistent usage all the time; for those there's no reason not to purchase a Reserved Instance. But I want to spend more time on the other two: Savings Plans and Spot Instances. Spot is spare capacity for EC2, and Spot is actually up to 90% off On-Demand.

So, Savings Plans. Let's move right through this slide. The most important thing to note about Savings Plans is that they're just more flexible than Reserved Instances, and we'll see that as we go a little deeper. There are two kinds: the Compute Savings Plan and the EC2 Instance Savings Plan. Let's start with the one on the right first, the EC2 Instance Savings Plan. The cool thing about it is up to a 72% discount off On-Demand. So let's say, for example, you have an m5.xlarge; if you're going to use it all the time, you should move it to an EC2 Instance Savings Plan. There is an hourly commitment in terms of spend that we ask you to make in order to get this kind of savings plan pricing. What's the difference between an Instance Savings Plan and a Reserved Instance of the same type? Let's say you start an EC2 Instance Savings Plan on that m5.xlarge, and along the way you decide that your workload would just work a little bit better with an m5.4xlarge: you can change it. You can also change operating system, and vice versa.

The Compute Savings Plan is up to a 66% discount off On-Demand, and there are two things that are really nice about it. Say you decide you need a compute-optimized instance type and you want to move to a C5: you can do that. Or move to an R5, et cetera, to any of those instance families, not just within one instance family like the EC2 Instance Savings Plan. The other cool thing about a Compute Savings Plan is that you can change Regions. So let's say there's some political event and you need to move from the EU (Ireland) Region to the EU (London) Region: you can totally do that with a Compute Savings Plan. This was something you couldn't do before. And finally, your spend on this kind of savings plan also applies to serverless architectures like Fargate, our serverless container service.
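The Savings Plans mechanics above (an hourly spend commitment in exchange for a discounted rate) can be sketched with a small calculation. The On-Demand rate below is a made-up placeholder, not a real AWS price; the 72% and 66% discount ceilings are the figures from the talk:

```python
# Illustration of Savings Plans arithmetic with hypothetical prices.
# The $0.192/hr On-Demand rate is invented for this example.

def effective_rate(on_demand_rate: float, discount_pct: float) -> float:
    """Hourly rate after applying a Savings Plan discount."""
    return on_demand_rate * (1 - discount_pct / 100)

def hourly_commit(instances: int, sp_rate: float) -> float:
    """Savings Plans are bought as an hourly spend commitment:
    commit enough to cover the discounted rate of your always-on fleet."""
    return instances * sp_rate

on_demand = 0.192                              # hypothetical $/hour
ec2_sp = effective_rate(on_demand, 72)         # EC2 Instance Savings Plan
compute_sp = effective_rate(on_demand, 66)     # Compute Savings Plan

print(f"On-Demand:       ${on_demand:.4f}/hr")
print(f"EC2 Instance SP: ${ec2_sp:.4f}/hr")
print(f"Compute SP:      ${compute_sp:.4f}/hr")
print(f"Commit for 10 always-on instances: ${hourly_commit(10, ec2_sp):.2f}/hr")
```

The trade-off is visible in the numbers: the EC2 Instance plan discounts more deeply, while the Compute plan keeps a slightly higher rate in exchange for family, Region, and Fargate flexibility.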
If you want to apply your Savings Plan spend there, you can. Spot, meanwhile, gives you some of the benefits of Savings Plans in the sense that you don't have to make any commitments at all. What you're really doing with Spot is leveraging AWS's scale, and that scale comes from events like Black Friday. Think about how much On-Demand capacity AWS has to make sure it has for something like Black Friday shopping. For the rest of the year, there's not quite that much demand, but we have to make sure we have that capacity, because that's the chief promise of elasticity.
So what do we do with the rest of that capacity for the rest of the year? We sell it as Spot. The one caveat we make with Spot is this: if for some reason we need that instance back, we'll give you a two-minute warning and then we'll reclaim it from you. That's also why you get somewhere between 70 and 90% off On-Demand. So if you're able to deal with those kinds of potential interruptions, Spot is actually a really good purchase model for you, and we'll go deeper into Spot right here.

There are some real strategies for using Spot, and I'm going to focus mostly on the one on the left, which is instance flexibility. The way we think you should be using Spot is to construct a fleet of potential instance types. If we go back to our favorite m5.xlarge, what we would say in this example is: hey, if you're willing to use an m5.xlarge on Spot, maybe you should also use an m5.2xlarge, an m5.4xlarge, and so on. What we ask you to do is construct a large fleet of resources to draw from rather than asking for just one resource, and that enables quick replacement of any lost resources. Time flexibility is also important: if you run a CI/CD workload, for example, which is very popular on Spot, it might just complete a few minutes later than normal, because you might have lost a couple of instances in the middle. And then Region flexibility: if you're willing to move to a different Region, sometimes there's more Spot capacity in other Regions. These three types of flexibility are really key to using Spot.

Let's talk a little bit about interruptions. The one question I usually get is, "I don't know if I can handle interruptions; can I adopt Spot?" The truth is, there are two answers to that. One, interruptions do not happen as often as you think: only about 5% of all Spot instances get interrupted. Two, what we like to say is that you really should prepare for interruptions anyway; it's just better architecture and better fault tolerance for your workload. And so we've actually developed a bunch of strategies. For example, checkpointing. Take the case of a CI/CD workload you're running that gets delayed because a Spot instance is reclaimed on the two-minute warning. The best practice is to checkpoint where your workload actually was, a save point if you will, so you're good to go when you resume. So checkpointing is an important piece of this. For ECS, our Docker-based container service, and for EKS, with the Kubernetes scheduler, we've actually developed interruption handlers for both container types, and what they'll do is take the node that's being terminated prematurely, drain it, spin up another node, and let the job keep going. So we've prepared you for some of these interruptions along the way.

That's basically our purchase models; hopefully you learned a little bit about Savings Plans and Spot as well as the familiar On-Demand and Reserved Instances. Now let's talk about how to optimize not just your purchase model but how you run. The first thing to note is
that we really believe you should combine these purchase options to cover your capacity. If we look at this bar chart, the first layer represents Reserved Instances or a Savings Plan; that's your always-on base, so put that underneath your baseline. Layer two is On-Demand: make up the next portion with a little On-Demand. And the third is Spot. Maybe a better way to think about this is a workload like a big data job. The master nodes at the bottom are always on, you've got to keep them going, so put them on something like an RI or a Savings Plan. Then, to meet your SLA and get that big data job done, you might use On-Demand, because it's got to be done and it can't be interrupted. But if you want to beat your SLA, really burst with Spot. So think about all of the different purchase options.

Now, what's the major mechanism to help you scale up and down? It's EC2 Auto Scaling. Auto Scaling groups are a great way to dynamically scale, and they make sure you don't leave a bunch of
instances running when they're not being utilized, incurring cost. That's one way to think about it. It also means that when you have demand, it'll push up, or bump up, your supply of instance power. I'm not going to go deep into the lowest-cost option today, which is in the middle of this slide: with Spot especially, you can choose an allocation strategy called lowest-price, and we'll basically go try to find you the cheapest possible capacity for your workload. But we are going to spend a little time on the capacity-optimized allocation strategy in a second.

Really quickly about Auto Scaling: it's been around for a while, and the interesting thing is what we learned from our customers about what they wanted to do. The smart customers were really ahead in this optimization game from the start. Before, we just had plain Auto Scaling groups, and smart optimizing customers built custom logic like in this diagram: they would have to run different groups that worked together. On the top, they had an m4.large on Spot, and then an m5.large on Spot as well. Then they had to mix in a c4.xlarge on On-Demand, so they had to create three different groups. We changed the way Auto Scaling groups work so that you can combine all of what I just said, these purchase options, RIs, On-Demand, and Spot, in a single Auto Scaling group. The main thing this did for customers, besides simplifying the ability to optimize, was to make sure they could fulfill the base capacity and then add any Spot instances for their workload.

And now we've gotten even more advanced. One thing we heard from our customers is that they really wanted to fine-tune this. Obviously we had optimized for purchasing, and we had initially optimized for making the workload work in conjunction with purchasing, and so we added weights. What do weights do? They allow our users to privilege certain instance types in a prioritized way. In this case, if we look at the diagram, we see the m5.4xlarge On-Demand has a weight of 16.
And I'll let you sit there and look at that: the other ones have weights of four and eight, so roughly, this customer is saying the m5.4xlarge should count for the most capacity in this Auto Scaling group as it comes up. It's a great way to really fine-tune your workloads.

Now let's talk a little bit about Auto Scaling with the capacity-optimized allocation strategy. This is actually very dear to people who are power users of Spot, of which there are quite a few. One of the things we heard from Spot users is: "Hey, I can deal with interruptions all day, that's great, I don't mind them. I just don't like a ton of them. Can I get something where you help me minimize them?" So we came up with this. You give us a fleet, in this case an r5.large, an m4.large, and an m5.large, and you select capacity-optimized. We will go in and find the deepest pools of Spot capacity for you that day, which we believe, based on machine learning, will minimize interruptions. That's the capacity-optimized allocation strategy.
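A toy model of what capacity-optimized does is "pick the deepest pool." The depth scores below are invented for illustration; the real strategy uses internal AWS capacity data that isn't exposed like this:

```python
# Toy model of the capacity-optimized allocation strategy:
# given candidate Spot pools, launch into the pool with the
# deepest spare capacity to minimize interruption risk.
# Depth scores here are hypothetical (higher = deeper pool).

def pick_pool(pool_depth: dict) -> str:
    """Return the instance pool with the deepest spare capacity."""
    return max(pool_depth, key=pool_depth.get)

# The fleet from the talk, with made-up depth scores for today:
pools = {"r5.large": 0.91, "m4.large": 0.42, "m5.large": 0.77}
print(pick_pool(pools))  # r5.large
```

This is also why instance flexibility matters: the more pools you're willing to use, the better the odds that one of them is deep.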
I should mention that all of this, mixed Auto Scaling and Spot, is available in a bunch of different managed services, so it's not like you have to stitch together a bunch of different services yourself. Services like ECS and EKS, which I mentioned already, as well as AWS Batch on the high-performance computing side, have Auto Scaling groups natively infused into what they do today, so those are great places to use them. Outside of AWS, popular tools on the big data side, as well as the Jenkins open-source CI/CD platform, have integrated Spot and Auto Scaling groups to make it easier to spin compute up and down using their own tooling.

The last piece is guidance. We've talked about pricing and capacity; now let's get into guidance around which instance types you should select. What we announced in 2019 was AWS Compute Optimizer, and here's what it looks like. We've trained it by looking at resource utilization, configuration, and performance data, and it comes up with recommendations: are you right-sized or not, and should you be using smaller or larger instances? On average today, we see that Compute Optimizer has reduced cost on instances by about 25%, which is pretty impressive given it's only been out for six months or so. The other thing to know about Compute Optimizer is that it's free, so there's no reason not to use it, and it's available in the AWS Management Console for all customers. So if you have one takeaway on pricing today, give Compute Optimizer a try.
With the remaining time, I just want to go through some really light architectural diagrams for workloads. Let's talk about analytics and big data first. This is a typical big data workload, and I'm going to call your attention to just a few things here that are important. You'll notice, as I said before, the master node has to be on an RI, a Reserved Instance or Savings Plan, or On-Demand: something that will not go away. That's your master node in the upper left corner. Again, we find those are better on On-Demand or RIs. We have customers that want to run those on Spot, but it's not a best practice and I don't recommend it. Most importantly, where you get the maximum savings is the lower left: the task nodes. We find it to be a best practice to put those on Spot; any big data workload can rebalance, restart, and keep going without any problem at all. So one of the things we go out and talk to our customers about, in terms of best-practice architecture, is to put task nodes on Spot.

Next, let's move on to CI/CD, or DevOps. I mentally include a few other workloads in here too, but this is more of a pure CI/CD reference. This one happens to be Jenkins, obviously a very popular CI/CD pipeline. In this case, again calling your attention to parts of the diagram: you can see everything's behind an ALB, an Application Load Balancer, and in the VPC, which is that nice box on the right, what you see is the Jenkins plugin launching Spot instances as agents. Behind the Application Load Balancer you also see the Jenkins master, again not on Spot, but all of the agents are on Spot. So what will happen here is the same idea: agents can rebalance and checkpoint. Since CI/CD is usually a non-production workload, jobs can finish a little bit later, and they're also a great place for cost savings. If you're looking to save money inside your work, CI/CD workloads on Spot are a great choice.
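The checkpoint-and-resume pattern for Spot agents can be simulated in a few lines. This is a sketch, not the real EC2 interruption mechanism (on an instance, an agent would poll the instance metadata for the two-minute notice; here the interruption is injected by the caller):

```python
# Simulation of checkpointing a Spot-friendly batch/CI job:
# process work in small steps, persist progress after each step,
# and resume from the saved checkpoint after an interruption.

def run_job(steps, checkpoint, interrupt_at=None):
    """Run steps starting from checkpoint['done']. Returns False if
    interrupted (progress saved), True when the job finishes."""
    for i in range(checkpoint["done"], len(steps)):
        if interrupt_at is not None and i == interrupt_at:
            return False              # two-minute warning: stop cleanly
        steps[i]()                    # do one unit of work
        checkpoint["done"] = i + 1    # persist progress (e.g. to S3)
    return True

completed = []
steps = [lambda n=n: completed.append(n) for n in range(5)]
ckpt = {"done": 0}

print(run_job(steps, ckpt, interrupt_at=3))  # False: interrupted mid-run
print(ckpt["done"])                          # 3 steps finished and saved
print(run_job(steps, ckpt))                  # True: resumed and finished
print(completed)                             # [0, 1, 2, 3, 4]
```

Because progress is saved after every step, the replacement agent repeats no work; the job just finishes a little later, which is exactly the trade Spot asks you to accept.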
The last one, where we see combining different purchase options being very important, is web applications, and obviously everybody has internal and external web applications out there. It looks a little like the CI/CD workload: a load balancer in front of a dynamic set of resources. The ALB in this case automatically routes incoming web traffic across a dynamic, changing number of instances, so you can imagine it flexing with traffic, et cetera. What this does is let the Auto Scaling group optimize the infrastructure behind the website. The other thing to note is that we've mixed and matched On-Demand and Spot: Spot on the left-hand side, On-Demand on the right-hand side. One of the best practices with web applications on ECS, or any web application really, is to run around 30% on Spot; that's usually about right for customers.

Three takeaways from the presentation. Number one: we have almost three hundred EC2 instance types and sizes, good for every workload out there, from general compute to GPUs and AI/ML, so we can handle pretty much any workload with our breadth. Number two: we really want you to automate and fine-tune your cost and capacity optimization, because we understand the breadth and depth can be a little complex sometimes. We don't like it when you're running at 30% utilization; we want you to be fully utilized and using the right pricing model for your workload. And the last one is just what I said: when you want to dive a little deeper into anything, get a little more technical on Nitro, whatever it may be, go visit the learning library at aws.training, and you can learn way more about this cost and capacity optimization in easy-to-digest chunks. With that, I just want to thank you for your attention. Take care.