Python Web Conf 2021
March 25, 2021, Online, USA
"Deploying a Virtual Event Platform Using Fargate and Terraform" by: Calvin Hendryx-Parker

About the talk

"Deploying a Virtual Event Platform Using Fargate and Terraform" by: Calvin Hendryx-Parker

Learn to leverage cloud-native tools and launch a scalable application into the cloud with Fargate. We'll dive into how to get up and running fast while leaving the overhead of managing virtual machines and Kubernetes behind. Create and store the application's Docker images in ECR and, without touching the AWS console, build fully automated Infrastructure-as-Code deployments via CodePipeline into Fargate containers and S3 buckets. Deliver the React application via CloudFront and S3 for full global scalability. Leave legacy deployments behind and forge bravely into the new world of cloud applications.

Recorded at the 2021 Python Web Conference (https://2021.pythonwebconf.com)

About speaker

Calvin Hendryx-Parker
Chief Technology Officer at Six Feet Up, Inc.

Calvin Hendryx-Parker is the co-founder and CTO of Six Feet Up, a Python web application development company established in 1999. At Six Feet Up, Calvin establishes the company's technical vision and leads all aspects of the company's technology development. He provides the strategic vision for enhancing the company's offerings and infrastructure, and works with the team to set company priorities and implement processes that improve product and service development. In 2019, Calvin was selected as an AWS Community Hero. He is among a select few individuals to receive this title. The AWS Hero program recognizes his community impact, industry knowledge, and excitement for sharing his AWS expertise. Moved by an ever-burning desire to learn new technology, Calvin believes that programming knowledge should be open, available, accessible, friendly and engaging. At Six Feet Up, he has established a knowledge-sharing culture to enhance the effectiveness of operations and professionals both in his company and for his clients. In 2007, Calvin founded IndyPy, the local Python user group, with the goal of finding the tech community in Indiana. More recently, Calvin has become Eleven Fifty Academy's official Python and Django teaching partner, and is an Iron Yard Indianapolis Advisory Board Member. He worked with TechPoint Foundation for Youth to help build the world's largest network of Coder Dojos and was nominated Tech Educator of the Year in 2016 at the Mira Awards.


Today we're going to get an introduction to cloud-native deployment and development, and we're going to do it through the lens of the LoudSwarm platform — which is actually where we're all sitting right now. On my screen I'm sharing an architecture diagram; we'll walk through a bit of how this all works, talk about some of the cloud-native vocabulary involved in building a cloud-native application, and then move from the developer experience into deploying the application.

Normally I wouldn't be on a Mac, but I had this all set up on a Mac for today, so that's what I'm using. First we need to lay the groundwork. In the previous talk there was a lot of great discussion about how to Dockerize an application, and that has been one of the keys for us: what enables us to move toward cloud-native deployment is being able to deploy an application repeatedly. Think about how developers think — mostly about "how do I deploy my application? How do I develop new features and increase my velocity of getting new features out the door?" They don't want to think about how the application is deployed or scaled. The operations engineers on the project, meanwhile, want to think about how to scale the application; they don't want to care about getting all the dependencies deployed and all the inner workings of building these beautiful snowflakes of application servers. They just want to be handed the application: they know what its entry point is, they know what it needs to run, and they want to be able to deploy it into the cloud and repeat that for each environment where it needs to go — our sandbox environment, our staging environment, and ultimately our production environment — on a repeated basis, tracked in some systematic way, so they can do this over and over, day in and day out, and allow the developers to increase their velocity of changes as they develop new features for the application. So that's laying the groundwork for where we're at.

The first step was taking the application and containerizing it so we can onboard new developers quicker and allow each person to have their own development environment set up however they want, using whatever IDE they want. If you're on PyCharm or VS Code, or one person wants to develop in vi — honestly, I don't care. What I do want them to have is a consistent experience, so they're not running into individual problems: one interface for developers to use, so they can be effective and efficient at coding on their local machines.

So how do we do that? We moved to using Docker Compose. Take our Django application: the previous experience would have been each developer installing Postgres, installing Redis, installing Celery and all the Celery tasks and Flower, installing MailHog, installing the Express Node.js server. Installing those pieces directly onto your local laptop might be fine for one project, but what if you're a consulting company like Six Feet Up working on ten projects? Or a product company with microservices, where those microservices have different requirements for different pieces of the application, or different application silos entirely? This is where Compose comes in: I can specify a specific version of Postgres to run in a container, I can specify a specific version of Redis to run in its own container, and with the code developers pull — from Bitbucket, in our case — they get a file that describes all the moving pieces that need to be installed for them to work on the application. They only care about Django; they don't care about how the supporting services run, they just want them to work out of the box.

So our developer workflow: you can go from pulling a clean laptop out of a box, brand new — my goal is to get developers productive in under an hour; forty minutes would be an awesome target — from zero, with nothing installed on that laptop, to producing code that can be committed and pushed toward production, in under an hour. And we can do that with Docker Compose. We use Docker Compose specifically in the developer experience — we don't use it in the deployment experience, and I'll talk about that when we get into deployment — but Docker Compose is what really accelerated our developers' ability to be productive right away. Here's what happens: we have Bitbucket for our source control management; developers pull the code; it has the Dockerfile, it has the docker-compose file, it has the Terraform bits (we'll talk about those later). It's pretty much `git clone`, `docker-compose up`: it pulls all the needed dependent containers and away you go — everything is built and set up so you're ready to work. Why do I use PyCharm? PyCharm is aware of Docker and lets me debug inside the containers, so I'm ready to start developing and being productive on my local machine, running the whole stack of software without having set up any of it. The people who add dependencies to the application are responsible for setting them up: if I'm adding Redis caching to my app, I'm responsible for adding Redis to the docker-compose file, and all the other developers don't have to think about it — when they pull and get my changes, they also get the infrastructure pieces required to support the code change in the application itself.

That works great for the developer experience. As developers push code back into Bitbucket source control, we can start talking about the code pipeline: getting that code from written-on-a-developer's-machine to released into the production environment. That's where the pipeline comes in.

You could do this with CodeCommit, or entirely within AWS, but we use Bitbucket because it's one of the tools we've been using longer than we've been doing serious AWS work — even before AWS had their CodeCommit and CodeStar suite of tools. We do use CodePipeline and CodeBuild on the AWS side. As we deploy into production, the pipeline itself is responsible for taking the code the developer wrote, building the image that will be used in production, and pushing that image into ECR — the Elastic Container Registry — on the Amazon side. So we've gone from a Bitbucket pipeline into the Elastic Container Registry; at this point AWS takes over and we go through CodePipeline. And we keep the developers in the loop: there's actually a Lambda here watching for the pipeline to finish — whether it succeeded or failed — and pushing a notification into our Slack channels. So from my desktop as a developer I can run `make sandbox-release` — that's also part of our workflow — which kicks off the code pipeline and lets me know when it finishes. The code pipeline has many stages set up that bring in all the artifacts that are needed, including the image from the image repository.

Now we'll talk a little bit about the LoudSwarm experience — I won't belabor it, because you're all using LoudSwarm right now. The front end of the platform is entirely a React application, and it is served statically off of S3 buckets from Amazon. We build the React application as part of the Bitbucket pipeline, and then CodePipeline is responsible for taking those pieces out of the image that's been built and pushing the static resources over into the S3 bucket that makes up the React application itself. There are also tasks that happen in Fargate — so we're getting closer and closer to the deployment part of this experience. For those deploy tasks: with a Django application, for example, you typically have to run collectstatic, which gathers JavaScript and CSS resources into a static folder that we also serve out of an S3 bucket. We have a task specifically set up in Fargate to handle that for us. So we're still using that one image that we built in the Bitbucket pipeline — which, if you're a developer wanting to debug a production problem, is super nice, because you can see the exact version of the image that's on production and may be exhibiting a bug you didn't catch early enough in the development process.
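For concreteness, the Compose-based developer stack described earlier (Django, Postgres, Redis, Celery workers, MailHog) might be sketched roughly like this — the service names, images, and version tags here are illustrative assumptions, not the actual LoudSwarm configuration:

```yaml
version: "3.8"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on: [db, redis]
  db:
    image: postgres:13        # pin a specific Postgres version per project
    environment:
      POSTGRES_PASSWORD: dev-only-password
  redis:
    image: redis:6            # pinned Redis version in its own container
  worker:
    build: .
    command: celery -A config worker -l info   # same image, worker entry point
    depends_on: [db, redis]
  mailhog:
    image: mailhog/mailhog    # catches outgoing dev email
    ports:
      - "8025:8025"
```

With a file like this in the repository, onboarding really is `git clone` followed by `docker-compose up`, and a developer who adds a new service dependency adds it here once for everyone.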

You can now pull that exact image out of the container repository, bring it back to your desktop, use it in a development environment, and actually try to reproduce the bug locally — instead of the old way of doing things, which would be "well, jump on the production server and see if I can debug this thing live," and no one wants to do that. That's where I feel we really moved away from those beautiful, unique snowflake servers: I'm going to package my application into an image.

Now, this is where we do need to switch over to some cloud-native vocabulary. We talk about images, we talk about containers, and we talk about Docker, and the words get a little mixed up. What we want to produce as part of our build process is an image — a container-engine-compliant artifact from which we run containers. Think of the image like a class and the container as an instance of that class: we build an image, we can run many instances of that image, and those running instances are called containers. The problem is that you'll hear about "container registries" — like Docker Hub or ECR — when they're really image repositories. So if you get confused about that, you're not the only one; the people who named these things didn't get it right either. We build images that are deployed as containers, but they get stored in a "container registry." Sorry — that's just the world we live in at the moment.

Let's back up before we go into the application deployment part of this. We've prepared our image for production, we're getting ready to deploy the application, and we've talked about the developer experience. What we haven't talked about is how all of this infrastructure got built out — and that's where Terraform comes in. For those of you who aren't familiar, Terraform is an infrastructure-as-code platform for building out infrastructure on any number of clouds; we just happen to be using it with AWS. We model all of our resources as objects in Terraform files, and Terraform maintains information about the state of what we have deployed.
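As a minimal sketch of what "resources as objects in a Terraform file" means — resource names here are illustrative, and details like IAM roles are omitted for brevity:

```hcl
# An ECR repository to hold the application images
resource "aws_ecr_repository" "app" {
  name = "loudswarm-app"   # illustrative name
}

# The ECS cluster the Fargate tasks will run in
resource "aws_ecs_cluster" "main" {
  name = "main"
}

# Terraform compares these declarations against its recorded state
# and shows a plan of exactly what would change before applying.
```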

That allows us to make changes to our infrastructure and visualize and understand what's going to change. So as part of this developer experience, we've got the code developers, and we also have our operations developers who are building out the infrastructure we'll use — and that infrastructure includes this pipeline, not just the place where we're going to host the actual LoudSwarm application. It also builds out where we build the LoudSwarm application, so there's a little bit of build-ception going on there. For the most part, here's what happens a lot of the time: we sketch something out inside the AWS console, or use the Amazon CLI to sketch out the infrastructure we may need for these pieces — that's why I've got two arrows here, one from the CLI and one from the AWS console — and once we've got a close representation of the end goal, we pull those artifacts back together and put them into Terraform files. From those Terraform files we can repeatably reproduce our infrastructure — in this case, our CodePipeline.
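Earlier I mentioned a Lambda that watches for the pipeline to finish and notifies Slack. A minimal sketch of such a handler follows — the event shape is the standard CodePipeline "Pipeline Execution State Change" EventBridge event, the Slack-posting step is stubbed out, and the names are assumptions:

```python
def format_pipeline_message(event):
    """Build a readable message from a CodePipeline
    'Pipeline Execution State Change' EventBridge event."""
    detail = event["detail"]
    pipeline = detail["pipeline"]
    state = detail["state"]          # e.g. STARTED, SUCCEEDED, FAILED
    emoji = {"SUCCEEDED": ":white_check_mark:", "FAILED": ":x:"}.get(state, ":hourglass:")
    return f"{emoji} Pipeline `{pipeline}` finished with state {state}"

def handler(event, context):
    message = format_pipeline_message(event)
    # post_to_slack(message)  # hypothetical helper, e.g. an HTTP POST
    #                         # to a Slack incoming-webhook URL
    return message
```

Subscribing this Lambda to an EventBridge rule filtered on `source = aws.codepipeline` keeps the whole team in the loop without anyone polling the console.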

We also use that same Terraform to produce our infrastructure for the application delivery, so it does double duty there. A lot of times Terraform state may start out as a quick sketch inside the Amazon console; you may use the Amazon CLI to pull down the JSON descriptions of what has been built, to help transfer that knowledge into a set of files that can be tracked inside the same code repository as our software development process. Now we have a full audit trail: we can see what's changed, when, and where. It's also nice that developers are getting more and more involved — if they need a piece of infrastructure at some point in production, they can make sure it gets defined as infrastructure-as-code earlier in the process.

The way we structure our code repository, we have the Django application code, the React application code, and the Terraform code all in the same repository. So any change we make to the application is atomic across all three aspects — front end, back end, and infrastructure. If my code depends on Redis being present in the sandbox environment, then as the developer adding the code that leverages Redis, I add it to the docker-compose file — the other developers get Redis installed whenever they pull my new feature — and I'm also responsible for adding it to the infrastructure-as-code under Terraform, so that wherever my task launches into the Fargate containers, it will have a Redis to connect to. It's kind of nice to lock those together: if you ever roll back code — releasing a previous version because you want to back out some issue — the infrastructure follows right along, because it's all part of an atomic commit. If you roll back that one feature, you roll back the pieces of infrastructure it depended on as well.

All right — so far we've talked about building the pipeline. When we want to make changes to this CodePipeline, since it's all described in infrastructure as code, it's very easy for us to go into Terraform to define new steps in the pipeline, change the CodeBuild process, or change those tasks, and then plan and apply that into our environment — as opposed to logging into the AWS console, where there isn't much auditing other than CloudTrail and some logging of who did what, and where it's really difficult to roll back to a previous state of the build pipeline. If someone does go into the console, clicks around, and changes some things, we can always run `terraform plan` to see what changed, and either roll the change back out or incorporate it into our Terraform state and push that back out as the new standard.

All right, now let's talk about the building blocks for delivery. We've talked about building the application and getting developers all on the same page by leveraging that Dockerized container; now let's talk about application delivery — where the rubber meets the road. I've slid over here to the application-delivery diagram. Everyone loves Amazon architecture charts, and I am no exception.
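The delivery model described next — one image deployed with different entry points for different workloads — might be sketched in Terraform roughly like this. Family names, sizes, and commands are illustrative assumptions, and required details like execution roles are omitted for brevity:

```hcl
# All workload types point at the same image, built once by the pipeline
locals {
  app_image = "${aws_ecr_repository.app.repository_url}:release-tag"
}

resource "aws_ecs_task_definition" "api" {
  family                   = "api"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 512
  memory                   = 1024
  container_definitions = jsonencode([{
    name    = "api"
    image   = local.app_image
    command = ["gunicorn", "config.wsgi"]          # stateless REST API
  }])
}

resource "aws_ecs_task_definition" "worker" {
  family                   = "worker"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  container_definitions = jsonencode([{
    name    = "worker"
    image   = local.app_image
    command = ["celery", "-A", "config", "worker"]  # async task runner
  }])
}
```

Because only the `command` differs, each workload can be sized and scaled independently while staying on the exact image that was tested.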

I love drawing things out and come kissing together, all the various building blocks. And that's one of the core aspects of cloud-native. Delivery for me, is the fact that I can, I can leverage the platform to run host and manage some of the exori part. Some applications that are not the core pieces of my application. I don't want to manage a poster server anymore. I want Amazon to do that. I don't want to manage a cluster of red nodes. Amazon could do that for me. I definitely don't want to run email servers. No one on this planet. Once the Run email servers. I feel sorry

for the people who do have to run a mile service cuz you already know service for ages do that. Luckily. He was on his monster mini t. L a s three letter acronym, has tons and tons of these kinds of building block services that you can piece together to build your application in a robot robot. The way that I can now just a ploy my container or my image as a container into some server someplace and I don't even care what that server is anymore because I'm trying to Leverage The serverless aspects of the Amazon Echo System infrastructure,

even exchange people don't want to me to email servers anymore. We're just going to go. But I want to leverage that those building blocks to to piece together all the bits and pieces. I I would normally would have to deploy and kick it by hand. I mean, the old way of delivering these kinds of applications, we would have a monolithic server. It would be running post, fix to send mail to be running varnish to cash or content. It would be running haproxy to do some kind of load balancing. And then I'd be responsible for running my own whiskey server with Django and all the

instances of Jenga that I would need together. Be able to do this with nail containers and Docker compose and an action movie from Docker compose into this infrastructure where I'm actually using fargate. So, what kind of start down here in the private Cemetery of the application delivery of this. These are fargate instances, all these tasks and that have been deployed as containers and we deploy different containers. For different styles of work clothes here technically and believe you're technically. These are all using the same image, but they're

being deployed with different entry point so that they can service different types of tasks. I forget able to Django task here. If you looked at the last one application, you'll notice that it's all being served out of the S3, buckets for the react application, for any static assets. There inside the application. Everything else is going to be API calls back to Django, Django does no rendering of any service side pages that are delivered to you as a user of the Lots on platform. Those are all And so, the API calls can be somewhat compute-intensive depending on what they are. And we'll talk

about how we gotten around that and how we discovered where there are problems as opposed. To if you notice, when you're using the application, there is a lockbox on the screen where you can see the new track freecloud. You can see actually watch the discussion going by in real time. Those are using websockets in websockets. A very different than the rest API calls in the rest. API calls are typically stateless, just like HTTP, but websockets, our state. Lashay, a stateful, long-running connection between your browser. So the event India Pierce and the application

itself, so that I can send new messages that come in over, slack up to the event, webhooks back out to all the attendees were connected and watching the session line. The type of work is very different than the stateless API calls. Websocket requires a certain amount of ram per attendee who's watching and I need to be able to scale that differently that I need to scale the eight guys. So I'm using, I'm using the same container image. I'm using a different ways. In this case, the websockets, any website that's coming in are being split out. Two different back end

from the AL be. So that's our load balancer and going into the websockets which canal scale separately from the Django, rest API containers. So they keep it there. As we can now use the auto stealing groups, instead of different skills and characteristics based on different parts are application, even though it's all the same application. This part is hailing traffic in a very different way than the rest apis. Same is true for the asynchronous task Runners back here. So the celery workers and flower, which is a management tool for watching. Does asynchronous tasks. There are periodic tasks

that run in the background on a time schedule or in response to something happen happening via the API applications. We don't want those to slow down the API responses, to the application that can be done in the background and then possibly sit back over the websockets, not to the end-users. We can nail scale-out task. So if we have an event, that's 500 people. Or if you have an event this like 5,000 or we can run a different number workers to make sure that the tasks don't backlog and get kind of stuck and then also frees up. Begin. The ultimate goal here is free at the number request that

the Django service itself has to deal with. So it can very quickly respond to the application and make sure that everyone has a good experience that when they're watching a lot for my demo is here. Play mentioned. Performance. And one thing I want to mention about performance is tools like Century. I think I mentioned this during the last talked track. One are awesome. Are they recently added in some performance? Tracking tools, a lie to do something about tracing there, was some discussion about open

open Telemetry in track one as well. We've been using Century for a few years now and they recently added in performance tracking to the century monitoring. So normally it's like they are tracking, I can see like when an exception happened in my back and I'll be able to go in and Chase down. See the call stack in the database queries and see what's going on. There's something swimming now for performance. So as leading up to our early events in the lot swarm lifetime, I highly recommend, folks, do a load testing against your application. We found that in other

developers thing on your own machine and basically, clicking away at the application itself. Some of those edge cases where if you had a hundred concurrent users or a thousand concurrent users in the application you get a very different load on your machine because maybe you were doing some things that were inefficient or seeing lots of cruise. But as a solo user on your own dedicated workstation, you're not going to uncover those until he goes into a bigger environment where you're using load testing to actually identify those issues. We had an issue and it's something everyone

should be aware of when they're developing application of at the top of the hour and there's our schedules, all kind of on top of the hour is when new session start. I'll react application was reloading, the player on the page and doubling up the number of requests that are coming in for the, the new player in the new session metadata and just chilling. Our performance, Thundering Herd. I, we, we inflicted this upon ourselves. So, we have to think about how you Riri, architect, your applications to handle things like that. So for example, the the bar along the top

of the last form application, where it shows all the dropdowns with the various sessions that are in them. That was many different queries, which we've now optimized to be a couple queries, but the problem was on the back side. The rest API, I was actually crying the database for that same data all the same time. So if you had if you had 200 user's all clearing the database for this very large and can't time-intensive query. The server was falling over now, another two kind of pro tip in here. I was feeling can only get you so much if you are auto-scaling and you and I was getting group

set up and you've got it set for if my container runs at 80% CPU, for some time. You'll start spawning new containers to handle the additional load with all the load come instantly into your application and you don't have enough containers already available to handle it. Those containers most likely will start falling over either. They're running a memory or they're going to get bound down and some ice. If you bound Ohio, boundary bound II of the main things to start looking for. We had that exact problem. So the top of the hour, Our existing senior housing application would fall over

the autoscaler with C that they fell over and start spitting up more, but they were starting to get bogged down the request before they can even get spun up in time to start service in her class. You have to look at how you can use the autoscaler smart and make sure you get prewarm some instances. Or if you've got to be able to anticipate what your applications traffic is going to be like, and that's going to be different for every application that gets the boy onto any service, no, matter what. Let's step back. This is where effective cashing guns in. Now, we are using cloudfront and CBN

as part of our delivery pipeline at, which is nice, to having a constant distribution Network, that has over 200 points presence across the globe. That means that everyone who uses our application gets a pretty easily, nice and snappy experience because typically the resources, they are looking for the build up, the application are going to be right next to the same thing for a video, CDN. All the Webkinz, all the video content. You see inside of the application is being streamed live from zoom and being streamed into our video, CDN. And that's also be redistributed closer to the

attendees. So we can make sure that folks who are all the way across the other side of the globe, get a relatively similar experience to someone who's sitting here in United States of America. That's great for casting of the content. But we also need to cash some of the queries. Like, some of the careers are more expensive. Like, for example, generating the whole schedule for a specific event can be a pretty expensive operation. Especially if it's being him multiple times and almost instant steam instantaneously. So we originally had set up that the schedule could be cast

into Wii U's redis cache to build up to the creative. Build up to the results at cash in the reddest, put a t, t l on that. So time to live for that data and if the application sees that there is A transfusion of the schedule data, they will serve it up. If there isn't a case that each LS, past than the query will happen again, so, that person make it a habit to lag as they wait for that. Create a complete can actually serve them, the results of that expensive query. Taking that the next step further. We Now set up some tasks workers in the back on because we know

that Corey is going to happen to the schedule doesn't change that often. So instead of having a three minute or 5 minutes, she'll work on my cat's tail, a Creator. And also I have to suffer the performance penalty of waiting for that recipe. I to regenerate the content, we can now generate that content throw it into the reddest. What's a minute with the same tgl just in case in case that generation would fail somewhere along the line, the teacher y'all can still kick in and taunting can still be generated and on the back side, even though they might be a delay, but now everyone gets the

snappy experience of getting basically most all the exit while the majority of the expense of API calls inside of our platform are going to be delivered straight from read us because we've been pre-calculated an R-rated, which greatly reduces the no load. And the number of containers you would need to actually serve this application in the real world. Now, we do want to monitor things. I do use cloudwatch monitoring inside of us. So we can see, for example, the flower process, if we want to be alerted to, the fact that maybe one of those tasks that

was generating, our schedule didn't succeed. We can now get alerted by washing flower for long-running passed or failed test and had that kickoff a Lambda which can now then go notify us. Which is also means, we don't have to run that infrastructure for logging long. As you're following some, the best practices from a 12 pack, your application. We logged the same it out and I'll need all of our containers are doing the same way of logging all that little doll. Those logs. Now, get shift, shift is over into a single spot where we can actually easily carry them using cloudwatch

We're also able to use CloudWatch metrics with the new dashboards to watch the general health of the whole application. We can put in place some critical, "looking glass" type metrics so we can watch the health of the application and hopefully anticipate any issues. We know, basically, how we need to scale the application in anticipation of large events, and for smaller events we can scale back down to almost nothing in between, when there aren't any events running. So we're not spending money running a giant server like we used to in the old days just to have that capacity ready at a moment's notice; we can leverage the elasticity of the cloud to handle that for us.

We've also incorporated some other services as part of the stack: Route 53; SES, which is the mail service; our CloudWatch dashboards; GuardDuty, which is literally a checkbox to throw into the mix; and with CloudFront we get the benefit of adding a web application firewall.

With the web application firewall in place, CloudFront evaluates the traffic as it comes in, looking for malicious requests, and that traffic never even gets into our application space; it's stopped before it gets to the load balancer or anywhere near the application, catching some well-known exploits. There are some awesome examples of using a web application firewall where you can put some smarts into it, like running Lambdas to do analysis against the traffic, trying to anticipate or forecast malicious traffic coming into the site, or things that are anomalous for your site.

One of the other pieces I haven't talked a lot about is secrets management as part of your web application: secrets like database passwords, tokens and credentials that hopefully you don't put into your infrastructure as code. What we do for our secrets management is that Terraform will deploy, but with references to the AWS SSM Parameter Store. The Parameter Store allows us to securely store our secrets for each environment (sandbox, staging and production) in a fashion that is available to the application, or to specific people, based on their IAM role in the cloud infrastructure. So developers can have access to, say, the dev or sandbox environment credentials, and we don't have to give them access to those credentials in the production environment.
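The per-environment layout might look like the following sketch. The `/app/<environment>/<name>` naming convention is an assumption for illustration; the boto3 call is the standard `get_parameter` with `WithDecryption=True`.

```python
def parameter_name(environment, name, prefix="/app"):
    """Build a per-environment SSM Parameter Store name, e.g.
    /app/production/DJANGO_SECRET_KEY. The /app prefix is a made-up
    convention for this sketch."""
    if environment not in ("sandbox", "staging", "production"):
        raise ValueError(f"unknown environment: {environment}")
    return f"{prefix}/{environment}/{name}"


def fetch_secret(environment, name):
    """Fetch and decrypt a SecureString parameter. Requires AWS
    credentials and an IAM role allowing ssm:GetParameter on this path,
    which is how per-environment access gets restricted."""
    import boto3  # imported lazily so the sketch runs without AWS set up

    ssm = boto3.client("ssm")
    resp = ssm.get_parameter(Name=parameter_name(environment, name),
                             WithDecryption=True)
    return resp["Parameter"]["Value"]
```

Scoping the IAM policy to `/app/sandbox/*` versus `/app/production/*` is what lets developers read sandbox secrets without ever seeing production ones.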

Take the Django secret key: we use encrypted fields inside the application, and that encryption key is only available in the Parameter Store for production. We can then give roles to these various containers that allow them to grab those keys as they spin up, using environment variables for passing in all the credentials, keys and secrets, you name it: the database credentials, other secrets such as the Django secret key. We don't want those in plain text anywhere. Not anywhere. The developers have dummy versions of them inside the docker-compose file while they're developing, and can change those as they see fit. That way there are never secrets checked into source control, and never secrets checked in with Terraform, yet we can still manage, view and rotate the secrets as needed using the Parameter Store. As new instances spin up, they grab the latest versions of those secrets. That's the nuts and bolts of using environment variables and keeping secrets secret.
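Inside the container, the twelve-factor side of this is a plain environment-variable read. A sketch, with hypothetical names: deployed environments get real values injected at spin-up, while docker-compose supplies dummies for local development.

```python
import os


def get_secret(name, dev_default=None):
    """Read a secret from the environment. In deployed environments the
    value is injected from the Parameter Store at container start; in
    local development docker-compose sets a dummy value. Failing loudly
    when nothing is set beats silently running with a blank secret."""
    value = os.environ.get(name)
    if value:
        return value
    if dev_default is not None and os.environ.get("APP_ENV", "dev") == "dev":
        return dev_default
    raise RuntimeError(f"secret {name} is not configured")
```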

We have a single URL that accesses the application, so everyone comes in through the front door of CloudFront. CloudFront is a really nice tool to throw in front of almost any application: you can host static websites in S3 with CloudFront in front, or you can use it in front of EC2. In our case, the containers are running on the Fargate service, which means I am not managing an EC2 instance that runs our containers. I'm also not managing a Kubernetes cluster.

As soon as you invoke that word, a giant stack of management tools and complexity appears in front of you, as if you were playing a D&D game and had gone down into a dungeon. No one wants that. I'm trying to keep this whole architecture as simple as it needs to be, and no more complex, until there's a requirement for that complexity to be introduced. Simpler, with fewer lines of code, hopefully means less surface area and fewer bugs. It also means it's approachable by us mere mortal developers: if you design a system so complex that you cannot debug it, it is obviously not a good system to maintain in the long run. I like to recommend that folks keep things as simple as possible until they just can't stay that simple.

Not having to manage EC2 instances is also a benefit, because I'm not dealing with any of that. I just specify in our Terraform what size containers I need: do I need more CPU, do I need more memory, or can I use small ones, run many at a time and scale horizontally? That's all done in Terraform. I'm not managing or running any servers, and I only pay for the time those instances are running. It's kind of the best of all worlds, in my opinion.

No one can access any of these containers. There is no shell on them, which is another big benefit: there's no way to SSH into those containers, and there's no public access to them. You have to come in through the front door; you enter the site through the load balancers. Even the database is not publicly accessible (some people expose theirs, but I don't like to do that).

That's why we have a management VPC: if I ever need to get at the database, I can't do it from the application, because there's no shell to log into. Instead, we have a bastion host in that same management VPC where we can go grab a database dump if we need one. That's also where the ECR repository lives for our containers, because each environment needs access to the container images to pull them and deploy them into the environment.

I should mention, too, that each environment has its own CodePipeline build and deployment process, so you can move changes to your pipeline along just like you move changes to your code along. If I'm changing something about the CodePipeline, I want to try it out in the sandbox environment first; no one wants a mistake accidentally slipping into production. Those pipelines are managed on a per-environment basis, but the images themselves are a fairly global artifact for the whole application, so each environment can pull a specific version of the image. Every image is exactly the same across sandbox, staging and production. The only things that change are the secrets and configuration passed into it as environment variables. The code stays the same; the containers themselves are immutable. Nothing is written inside the container: logging goes to CloudWatch, and all the parameters come in from the Parameter Store or as environment variables. That way I can ensure there are no code changes between environments: once I'm sure I've got an image that works in sandbox, I can deploy that same image to production, and if I have problems in production, I can pull the image to my local machine and debug as appropriate.

On load testing: I mentioned we caught some early issues with load testing. The tool we use, and highly recommend, is Locust, which is also written in Python. It's an asynchronous load testing tool that lets us put together a more realistic, real-world application load against our system. When developing load tests for your applications, you want to avoid false hotspots. As an example, say you want to simulate a thousand users logging into your site, simultaneously or across some time frame, and doing some kind of actions inside the site. You want to be careful that you aren't using the exact same user to log in every time, because, for example, you may write out a last-login time. If the same user is logging in simultaneously across a thousand instances, you may see strange hotspots in your database that don't actually mimic real-world performance issues, and you may miss other ones, because maybe data is getting cached for that one specific user and just being sent back out a thousand times. So with Locust you want to build up and feed it a fixture full of a thousand different random users.

We have a utility built into the application that allows us, in sandbox, to generate a number of random dummy users for our load-testing test cases. We take that same spreadsheet of users and pass it into our Locust cluster, the workers that do the load testing against the server, so we can now see real-world load.
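The dummy-user fixture might be generated along these lines. This is a stdlib-only sketch; the real platform has its own utility and user model.

```python
import csv
import io
import random
import string


def make_dummy_users(count, seed=None):
    """Generate distinct throwaway credentials so the load test doesn't
    hammer one account, which would create artificial hotspots (or hide
    real ones behind per-user caching)."""
    rng = random.Random(seed)
    users = []
    for i in range(count):
        suffix = "".join(rng.choices(string.ascii_lowercase, k=6))
        users.append({
            "email": f"loadtest-{i}-{suffix}@example.com",
            "password": "".join(rng.choices(string.ascii_letters + string.digits, k=16)),
        })
    return users


def users_to_csv(users):
    """Write the fixture as CSV, the 'spreadsheet' handed to the Locust
    workers, which each pick a different user to log in as."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["email", "password"])
    writer.writeheader()
    writer.writerows(users)
    return buf.getvalue()
```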

In your locustfile you can specify task groups: sets of actions that happen together. Say you want to simulate someone viewing the schedule, clicking on a session and clicking back to the schedule, or you want to simulate the whole login process. Another nice thing about Locust that you can't do with a lot of other tools: not only can we simulate the API calls hitting Django and requesting all the resources and data, we can also simulate websockets. We wanted to make sure we had enough resources available for the websockets, to ensure we didn't run out of memory, because the main constraint for websockets on the server side is going to be memory: each connection requires somewhere from kilobytes to megabytes of memory, and we wanted to test that. With Locust, I can have a thousand different users stand up a thousand different websocket sessions, and then I can see how the application balances those websockets across multiple containers and try to identify any bottlenecks or edge cases when there's that much contention on the websocket channels. So make sure you lay out and think through how you're going to test, and how you're going to debug, with Locust.

Debugging a Locust load test is kind of tricky, because the code runs asynchronously, but it's not impossible: Python has some tools for that, like remote-pdb. We can dig into a load test as it's running, spawn a remote pdb, say on an exception or at a breakpoint we put into the code, and then, with PyCharm or pdb, actively analyze whether a specific user is being reused or something else is going wrong. That was a huge help in making sure this all went smoothly.
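Since the server-side websocket constraint above is framed as a memory problem, the load test is really validating a back-of-the-envelope calculation like this one. The function and its parameters are illustrative; the per-connection figure is a placeholder you'd replace with a number measured under load.

```python
def containers_needed(connections, kib_per_connection, container_mib, headroom=0.2):
    """Rough capacity estimate: how many containers it takes to hold
    `connections` concurrent websocket sessions, keeping a `headroom`
    fraction of each container's memory free for everything else.
    kib_per_connection is something you measure, not a constant."""
    usable_mib = container_mib * (1 - headroom)
    per_container = int(usable_mib * 1024 // kib_per_connection)
    if per_container <= 0:
        raise ValueError("container too small for even one connection")
    return -(-connections // per_container)  # ceiling division
```

Running the Locust websocket scenario against a known container count is what tells you whether the measured per-connection cost, and therefore this estimate, holds up.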

With all that being said, I want to thank you all for hanging out with me for the last 45 minutes and chatting about Terraform and Fargate and deploying applications. I hope you enjoyed seeing a meta presentation about the platform you're actually using right now to watch this talk. Thank you all so much. I'll be wrapping up and jumping into the face-to-face meeting, so if you want to talk to me, click on the link down below and I'll be in the face-to-face for a few minutes before I need to go back over and moderate my track, Track One. Thank you all for joining. I'll be around the conference all week long, I love chatting about cloud-native deployments, and I look forward to talking to you all. Thank you for coming out to see me.
