RailsConf 2019 - Background Processing, Serverless Style by Ben Bleything

RailsConf 2019
May 2, 2019, Minneapolis, USA

About speaker

Ben Bleything
Developer Advocate at Google

About the talk

RailsConf 2019 - Background Processing, Serverless Style by Ben Bleything

Background processing is a critical component of many applications. The serverless programming model offers an alternative to traditional job systems that can reduce overhead while increasing productivity and happiness. We'll look at some typical background processing scenarios and see how to modify them to run as serverless functions. You'll see the advantages and trade-offs, as well as some situations in which you might not want to go serverless. We'll also talk about the serverless ecosystem, and you'll walk away with the knowledge and tools you need to experiment on your own.


Hi everybody. My name is Ben Bleything; my pronouns are he/him. I'm here to talk to you today about what I think is absolutely the most exciting aspect of modern application development, which is background processing. I also want to talk about serverless, and how you can put the two together and do interesting stuff. A little bit about me: I'm bleything everywhere on the internet. I've been doing Ruby for a long time, almost 15 years. I worked on some pretty interesting

applications like GitHub, LivingSocial, and WhitePages, and then some smaller, more interesting startup-type places in animation, financial services, and music licensing, and I've seen some stuff. My focus is mostly on infrastructure, operations, and architecture, and I am a Developer Advocate at Google now. My job there is to think about those things, to think about how to modernize our architectures and how to make them better

both for us as developers and for our users by adopting new technologies (and I don't necessarily mean new as in emerging, but new as in new to us), and just to think about how to make things better. That's what I want to talk about. Before I go on, though, I want to make a couple of disclaimers. First of all, this is not a sales pitch. I am not here to try to convince you to use anything. I'm not here to try to convince you that serverless is the way to go. I'm not sure it's the way to go. It might be; maybe

for you it is, maybe it's not. I don't know. I'm also not here to tell you to use GCP. You can, and if you're already using it, that's cool. If you don't have a cloud provider yet and you're looking for one, check us out, it's pretty neat, but I'm not interested in trying to convince you to come use my thing if you're already using something else. The truth is, you probably don't need this stuff. I don't think anybody needs it. I think it's cool, I think there are a lot of interesting ideas here, and I think those ideas are inspirational. I think that we may find, as we

explore this (and I don't mean during my talk, I mean as a community, over the coming years), a lot of interesting places where we can use these technologies to make our stuff better. And like I said, I'm not here to tell you that you should do things. You're the only people who know enough about your systems to know whether or not the things I'm going to talk about are the right fit for you. My hope is that I can tell you about some of the things I found out as I was researching this talk, tell you about some of the things I've

learned, maybe inspire you to go do some experimentation on your own, and maybe save you some time, because there are some caveats that I'll go over as we get into it. So with that introductory material out of the way, let's talk about background processing. I suspect that most folks are familiar with this: you have something you're trying to do and you want to get it out of the request cycle because it takes too long, and you're either going to hit a timeout or just give the user a bad experience. An example I'll use later is:

you're building the new YouTube, and you want people to upload videos, but you don't want them to sit there for however long it takes to transfer those videos. (I'm sorry, I forgot to mention: I developed a cough this morning, so I'm going to be hitting the water quite a lot and potentially turning around, turning off my mic, and coughing a little bit. I apologize; I don't like it either.) Anyway, probably most folks have either seen or worked with Sidekiq or Resque or something like that. There are others out there: DelayedJob, Sneakers,

Backburner, Sucker Punch, a bunch of others. This is a really common thing in modern application development; there are a lot of times when you want to offload some work and do it in the background. So just out of curiosity, how many folks have used something like that before? That's everybody. Who's built their own? Cool. All right, sort of. And is anybody using what they would consider serverless right now for background processing? Sweet, I would like to talk to all of you later, and you're probably going
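To make the pattern concrete, here is a tiny in-process sketch of what all of these job systems do: the request enqueues, and a worker picks the job up outside the request cycle. Sidekiq and friends put Redis and separate worker processes between the two halves; the `TinyJobQueue` class here is invented purely for illustration.

```ruby
# Minimal sketch of the background-job pattern: enqueue from the
# request cycle, process later on a worker thread. Real job systems
# (Sidekiq, Resque, ...) do the same thing with Redis and separate
# worker processes between the two halves.
class TinyJobQueue
  def initialize
    @queue   = Queue.new
    @results = Queue.new
    @worker  = Thread.new do
      # This loop is the part Sidekiq would run in a worker process.
      while (job = @queue.pop) != :stop
        @results << "processed #{job[:video]}"
      end
    end
  end

  # Called from the "request": returns immediately, work happens later.
  def enqueue(job)
    @queue << job
  end

  # Wait for the worker to finish and collect what it did.
  def shutdown
    @queue << :stop
    @worker.join
    out = []
    out << @results.pop until @results.empty?
    out
  end
end

jobs = TinyJobQueue.new
jobs.enqueue(video: "cat.mp4") # the "request" returns right away
jobs.enqueue(video: "dog.mp4")
puts jobs.shutdown.inspect     # => ["processed cat.mp4", "processed dog.mp4"]
```

The point of the sketch is only the shape: the enqueue side never waits on the processing side.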

to want to talk to me too, because I'm probably going to say some stupid stuff and I would like for you to correct me. So, what is serverless? I went into this not having a really clear idea. I've been around this world for a while, as I mentioned, but serverless is just one of those things I'd never had to talk about, and I was always kind of curious what it meant. Over years of being around technology I had built up an internal definition, but I wanted to see how it matched up with other people on the internet who think about this stuff more than I do. And so I asked

my friends: what's your internal, tweet-sized definition of serverless? And I got some interesting answers. The first one is from Justin Watkins, a longtime Rubyist out of Portland. He said "function as a service" works fine for him, and I suspect for most people that's probably the case: when most people think of serverless, they think of functions. It's, I believe, one of the oldest things that ever called itself serverless, and it's still probably one of the most used. There are other things, and I'll talk about them in a few minutes, but for the most part

I'm going to focus on functions, because it's the most accessible and, I think, the easiest to understand. Audra said: pay per unit of compute. You pay some tiny amount each time you use compute, and then you don't pay when nothing's computing. At Google we talk about this in terms of "scale to zero." That's not an industry-standard term, just a term that we like to use; you might say "pay as you go" or whatever. This is interesting stuff, because it has the potential to be a revolution in the way we think about cloud compute

resources, in the same way that EC2 was when it launched, what, 13 years ago. Before that, before you adopted the cloud, when you were still running on bare metal, you had to buy capacity months in advance. About a year and a half ago I was still a consultant, and I put in a quarter-million-dollar order for new servers for one of my clients, and we had to wait almost nine months to get them, because there was a worldwide flash shortage going on and we just couldn't get the SSDs; the vendor wouldn't sell those

SSDs to us, and so we had to wait forever. So we had to plan: at that point we were planning over a year out for additional capacity, because we knew we were going to need it in a year, it was going to take nine months to get it, and then of course another three months to get it burned in and set up. When EC2 came on the scene in 2006, it kind of changed all this, because now you can buy compute right now; you can buy it in a matter of seconds or minutes rather than a matter of weeks or months. I know this is

not new information, but I mention it because, to me, serverless is the next jump: instead of paying for capacity, now you're paying just for usage. With VMs you are still buying capacity. You're still saying I want some number of CPUs, some amount of RAM, some amount of disk, some amount of network, and you let it sit there until you need it; then when you need it you use it, and when you don't need it anymore you either stop using it or turn it off, and then you stop paying for it. But serverless is the next step. Serverless says: okay, cool,

here's some code, please run it when somebody wants to interact with it, and if no one's interacting with it, don't charge me for it. So there's a shift away from having to buy capacity months in advance. By the way, if you've never been part of a capacity-planning exercise for actual physical hardware, find someone who has and ask them for the stories; it is wild, and I think it's something that people who are not in operations don't think a lot about and maybe haven't experienced. Imagine having to predict not only the growth of your user base and the

growth of the ways your users are going to be using your system, but also new features, new developments, new technologies, over the course of 15 months, trying to plan that far in advance so that you can buy the right hardware now to have it when you need it. It's very, very difficult. Highly recommend asking some people who've been through it; it's crazy. But with serverless you can ignore all of that. It's just: hey, I've got more demand today, please execute some more times. My coworker Sandeep had another take. He said: anything that charges only by usage

and requires no manual scaling or provisioning. The first part is kind of what we were talking about before, but the second part is more interesting: no manual scaling or provisioning. So, autoscaling. We've had autoscaling for a long time at the instance level, the VM level, and we've also had it in Kubernetes for containers and things like that. So this is not necessarily new and revolutionary, but it is still interesting, because again you have this code sitting there somewhere (and I'll talk more about that), and it's just supposed to be there,

and if no one's using it, that's fine, and if a million people are using it, a million people can use it at the same time, and you didn't have to do anything; it just happens. Same with provisioning: you don't have to worry about VMs, you don't have to worry about setting up your network, you don't have to worry about configuring any of that stuff. You just say: here's some code, please run it. Obviously I'm doing some hand-waving; you do have to worry about some of those things, just not all of them. But you know, this is the sales pitch part of this whole thing. I know I said it wasn't

going to be, but yeah. My friend Jack went into a little bit more detail about this. He said serverless heavily implies a near-zero ops workload: no Chef, no Terraform, no Dockerfile, etc. And I kind of know what he's talking about. I think from the developer's perspective this is true. You get to stop thinking about a bunch of stuff that you had been thinking about; you don't have to think anymore about, you know, the whole Rails framework; now you're just focusing on a little bit of code that does the thing. But I come from the ops world and an

ops background, so I know that we're not really getting rid of ops responsibilities; we're just moving them somewhere else. If you're buying your serverless stuff from a cloud provider, your ops team doesn't have to worry about the operations of the function framework (or whatever) anymore, because you're paying your cloud provider to think about it for you. So these things get shifted around, and some of them you can stop paying attention to, and some of them you think about differently. But this is another

big point of serverless: it reduces your overhead, and overall I think it does. So, a quick word on functions as a service, which I've mentioned a couple of times. The idea is that you take a small chunk of code (a function, in the CS, programming-language sense) and you shove it up into some framework somewhere, and that framework handles getting requests to it. That can happen in several ways. One of those ways is via HTTP. One of those ways is via a message queue. Every cloud provider also offers the ability to respond to events that happen elsewhere in their infrastructure.

I'll show some fairly contrived examples in a second, but if you haven't already looked into this, I want to say just one thing real quick before we go too much farther. I just want to warn you that if you want to write functions as a service in Ruby, you have to use AWS Lambda. Google doesn't support it, Azure doesn't support Ruby; Amazon does. I don't know about IBM Cloud, I don't know about Oracle (they might), but I can tell you for sure that Azure and Google don't. So this is something you have to be aware of: this stuff all sounds super

cool, but Ruby support is not as strong as it is for other languages. With that out of the way, let's take a look at one of those really dumb contrived examples I just mentioned: a webhook handler. For some reason you've decided it's a super cool idea to write a custom continuous-integration / continuous-delivery tool. So you want to take GitHub webhooks and build artifacts when those webhooks come in. In the super basic sense, it might look like this: you have GitHub, it sends you a webhook, your Rails application catches that webhook, and then

it enqueues a job in Sidekiq. I'm using Sidekiq as a stand-in for background processing in general, just because it's what I have used almost exclusively and it's what I know, but feel free to replace the Sidekiq logo with your background job server of choice. So, if you want to serverless this up, you can use the HTTP method we talked about before. At Amazon they have a thing called API Gateway, and it is pretty much what it says: it's an API gateway. It takes HTTP

requests, translates them, and sends them over to the functions running on Lambda. I don't know, and I would love to hear later if anybody has a better answer for this, but I think we call this idea an API gateway because Amazon calls theirs API Gateway and that's just what happened, or maybe it's the other way around; I'm not sure. But this method of invocation is interesting for several reasons. One of them is what we were looking at a second ago, this webhook idea. (I didn't mean to go back there, I'm sorry.) It's the idea

that a lot of services will send you a hook, and if you can consume those hooks directly, without having to run your own Rails application or Sinatra wrapper or whatever, if you can just throw them right at a function, then all of a sudden you have a much smaller piece of code to maintain that does the same kind of work. It's also interesting because HTTP is absolutely, without question, the de facto transport of our industry. Everybody knows how to use it, everybody works with it every day; it's what we know, and it's very well understood. There's good

tooling. And so this is kind of a low-barrier-to-entry way to get into functions: you just put a gateway in front of them. Or, if you're using Google (I think we don't have an API Gateway thing), you just say "expose this to the internet" and it takes care of it automatically; I think Azure works the same way. It's effectively the same as putting something like API Gateway in front, it's just managed for you. Okay, cool. But why would you do this? What's the point? In this particular case, that's a good question, and it's hard to answer definitively. It simplifies your
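To make the HTTP-invoked shape concrete, here is a sketch in the style of AWS Lambda's Ruby runtime, which expects a handler of the form `def handler(event:, context:)`, with API Gateway delivering the HTTP body as `event["body"]`. The `start_build` helper is a made-up stand-in for real CI logic, and the payload fields are from GitHub's push webhook.

```ruby
require "json"

# Made-up stand-in for whatever your custom CI tool would actually do.
def start_build(repo:, ref:)
  "building #{repo} at #{ref}"
end

# Handler shape follows AWS Lambda's Ruby runtime convention.
# API Gateway hands us the raw HTTP body as event["body"].
def handler(event:, context:)
  payload = JSON.parse(event["body"])
  result  = start_build(
    repo: payload.dig("repository", "full_name"),
    ref:  payload["after"] # SHA of the pushed commit, per GitHub's push event
  )
  { statusCode: 200, body: JSON.generate(ok: result) }
end

# Simulating an invocation locally with a fake webhook payload:
fake_event = { "body" => JSON.generate(
  "repository" => { "full_name" => "me/app" },
  "after"      => "abc123"
) }
puts handler(event: fake_event, context: nil)[:statusCode] # => 200
```

Note that there is no Rails, no router, no middleware: the function body is effectively just the controller action, which is the "much smaller piece of code" point above.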

environment a little bit. Like I said, now you're not running a Rails application or Sinatra or whatever anymore; now you're just running what is effectively the action that does the thing. Okay, cool, that's neat, whatever. It also separates it from your production infrastructure, if that's something that's important to you, if you want it running on a completely isolated set of compute, far away from what you're doing; sometimes that's important. If it seems like I'm having a hard time coming up with reasons, it's because I am. This is kind of neat,

but I'm not sure it's super compelling unless you have a compelling use case for it, and a contrived example is not a compelling use case. So let's look at another example, another use case for background processing in Rails or a web application. Imagine you are making a new social network. You're tired of Twitter; time for something new. But you've been around for a while, so you know you've got to do some stuff. People are going to post their, whatever you're going to

call them, their status updates, and you want to index those so people are able to search them. You probably want to do some spam filtering, you probably want some analytics, you want to look for links, you want to figure out who's talking to whom; you want to do all of these things. This is just common text-processing stuff. So this is a very simple case: you have your clients, they're talking to the Rails application directly, and you're doing all that stuff right when you get the status update. You're doing it inside the request cycle and you're throwing it in the

database. This is how we did things a million years ago, and for low-usage, small, simple tasks you could probably still do it and get away with it; you'd be fine. Then you start to grow, and you want search to be better. You could search in your database, or you could use something like Elasticsearch. You decide to use Elasticsearch (because I couldn't think of another search-engine tool, so that's one reason). And so all you really do now, instead of putting it just in the database, is put it into the database and Elasticsearch. And

again, if you're still relatively small, if it's pretty simple, you might still be able to do this synchronously; you might still be able to do it as part of a request, and it might not be that bad. But there will come a point when it will be that bad, and you won't want to do that anymore, and so you're going to go to background processing. I know this is not interesting or new information: the work goes to the background processor, which can pull the information out of the database, process it, put it back in the database, put it in Elasticsearch, whatever it needs to

do. And that's cool for a lot of reasons. Mostly, you know, it speeds up your requests and all that, but it also allows you to parallelize, so you can do a lot more at the same time. (Thank you. I was not sure about that joke.) Okay, cool. So that's all fine and good; how would you do this serverlessly? Well, one way is to use the message-queue stuff I was talking about earlier, the message-queue style of function invocation. So in this case, that thing that looks like an iMac is actually a Rails application; in this

case, I'm sorry, I used the wrong icon. It's putting a message into the message queue, and that message queue is forwarding the message on to any functions that have said: hey, I want to know about this kind of message. So for the social-network thing we were talking about, here is kind of what that could look like: the message comes into the Rails app, you save it to the database, then you drop it into the message queue (at Google we call ours Pub/Sub), and then it passes that off to every function that has said: hey, I'm interested in these

messages. So you've got your spam filter and your indexer and the analytics analyzer, or whatever you want to call it. And this is a pretty simple, straightforward kind of thing. So why would you want to do this? Well, scalability is a good one. Say all of a sudden you have 10 million new users, and your search indexing went from taking a millisecond or two per post to now taking a hundred milliseconds, and you're not really sure why, and you're trying to figure it out. Well,

that's really going to slow down your queue, and you end up with this big backlog. You can fix that by scaling up (you add more workers or whatever), but in this kind of serverless world that scale-up happens for you. It just kind of is there, and all of a sudden you have more traffic and it's doing more work for you. It also lets you decouple your processing code from the rest of the application, which may or may not be of value to you. One place where it might be kind of interesting is the case of having a diverse tech stack. Say your

main application is written in Rails, but you're doing a lot of ML work with the text of the status updates. Python is the de facto language for ML, and so maybe you want to do that processing in Python. You can do that here: you just write a Python function instead of a Ruby function. (In fact, here on Google that's the only choice, because we don't support Ruby.) There are, of course, also disadvantages. It's pretty opaque: you don't necessarily have the level of insight into that message queue that you do into your own Sidekiq or Resque kind of

thing. It's also complex in the sense that there are more moving parts. Now you have these functions elsewhere that are not necessarily in your Rails app. You could store the code in the same repo, but it's still another piece moving somewhere else, managed in a way that you don't necessarily know a lot about, and you're still responsible for monitoring it and making sure it works. The other thing I wanted to mention: this is what we were looking at a second ago, and the keen-eyed viewer will notice that this is exactly

the same as doing it with Sidekiq. There is no real difference between putting a message into Pub/Sub and fanning it out to a bunch of different functions versus putting it into a background task executor multiple times to do different kinds of things. I mean, there are some small differences in the way you would actually write that code, but this is not a wildly different way of thinking about it, which could actually be good: it makes this style a little bit easier to adopt, because it's the more familiar flow of stuff through your system.
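That sameness is easy to see in code. Here is a plain-Ruby, in-memory sketch of the fan-out shape: one published status message, several subscribed handlers, each standing in for what would be a separate function. The `Topic` class and the handler bodies are invented for illustration; a real setup would use a Pub/Sub client library and deploy each block as its own function.

```ruby
# In-memory sketch of the Pub/Sub fan-out shape. Each subscriber block
# is the body of what would be a separate cloud function.
class Topic
  def initialize
    @subscribers = []
  end

  def subscribe(&handler)
    @subscribers << handler
  end

  def publish(message)
    # Like Pub/Sub, every subscription gets its own copy of the message.
    @subscribers.map { |h| h.call(message) }
  end
end

statuses = Topic.new
statuses.subscribe { |msg| "indexed: #{msg[:text]}" }                            # the indexer
statuses.subscribe { |msg| msg[:text].include?("viagra") ? "spam" : "ham" }      # the spam filter
statuses.subscribe { |msg| "links: #{msg[:text].scan(%r{https?://\S+}).size}" }  # the analytics analyzer

puts statuses.publish(text: "hello https://example.com").inspect
```

Squint and this is a Sidekiq `perform_async` called three times with the same payload; the difference is where the handlers run, not the shape of the flow.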

Yeah, but why? Maybe you don't want to run a background task executor. Maybe you have limited resources and you don't want to spend any of those resources on keeping servers running just to run background jobs whenever they show up. I know I've said that four or five times in different ways already, but that's actually a reason why you might want to do this. Another interesting reason is that on the major cloud providers there are lots of services that can consume from those pub/sub message queues.

So for instance, as I mentioned a millisecond ago, maybe you want to use one of the provider's managed ML offerings. You can just point it at the queue that already exists, the one you're already using for all of your other stuff, and it will start consuming from it too. Maybe you want to put all that stuff into BigQuery, for instance, our big-data database thing; you can just point it at that queue and it will start consuming that stuff. So it gives you a way to integrate with managed services, the services that your cloud provider offers, in

an interesting way. So that's another kind of interesting thing. Okay, let's talk about another example. I mentioned the new YouTube, so let's look at that. In the very simple case, again, you're uploading a file to the Rails application, and the application is going and putting it on an object store somewhere. Maybe that's its local disk, but much more likely it's something like S3, Azure Blob Storage, or Google Cloud Storage, and maybe it's doing a little bit of metadata extraction while it's at it. And if you're using

something like Active Storage or Shrine or Paperclip or anything like that, this is probably the default state; this is basically what's happening. Files go to your Rails application or to your web server, then they get put up wherever they're ultimately going to live, and you can attach some hooks to do other things like metadata extraction or whatever. But what do you do when you want to do more things? Say you want to transcode that video to a known-good format, a format that works better for mobile or whatever else. You still want to do the

upload, you still want to extract metadata. So again, background tasks make sense here. And in fact, depending on which file-upload management gem you're using, it might actually do this for you, or have those jobs built in. This is very typical media management (audio files, photos, movies); all of this is pretty standard stuff. And just for what it's worth, the flow of jobs here is not exactly what I'm showing: after you transcode, you do need to upload whatever the

result of the transcode is; you've got to upload that too. So that's going to fire off another job, and maybe you want to add a second format or whatever on top of the original, all that stuff. So how do you do that serverlessly? Well, one of the ways: you could just do the message-passing thing we were talking about, with a message queue. Or you can look into the serverless event invocation style. This is the one where something happened somewhere in the cloud, it tells your function that it happened, and you react to it. So in this

case, again, this is Google Cloud Storage: when you upload a file, it will tell your functions, hey, this file just got uploaded, would you like to do anything about that? And the functions that have said, hey, I'm interested in this kind of event, will get invoked and fire and do their thing when that file gets uploaded. Every time a file gets uploaded to that bucket (or whatever they're called), it'll trigger that function. There are lots of different events available; every provider has a different set of services this works with, so

look into it if this sounds interesting. On Google, on GCP, we offer things like events from storage, like I said, and also events from our Firestore database and from some of the Firebase tools like authentication and analytics. If you look at Amazon, they have exactly the same thing with storage, with S3, but they also have events from Alexa and from their chatbot-style service (I think it's called Lex), kind of a human-interaction service, and there are some others. On IBM there are Watson services available that can

trigger these functions. So, back to our thing: your Rails application is uploading that video to cloud storage, and when that upload is finished, it fires an event saying, okay, cool, this object has been finalized. (I think the name is different in all the other places.) That message gets sent over, and then every function that said, hey, I'm interested in finalized objects, gets invoked. And so we're going to transcode it, the transcoder is going to upload the result back to storage, the metadata extractor is going to do its thing, and then eventually we'll notify the user. You probably wouldn't actually

design your system this way, though, because once you transcode, you're going to want to put the result into a different bucket rather than back into that same bucket. If you put it back in the same bucket, you get another message about it, and so you can very easily get into a loop where you're just transcoding the same video forever. This is one of those things you have to be careful about with this style of invocation: you have to make sure that you're not executing jobs you're not supposed to, that you're not redoing work you've already done. So in this case, you might make

the job take a look at the metadata of the object, or at where that object sits inside the bucket, and say: oh, this looks like something I already transcoded, I'm just going to ignore it. Or the notifier might only notify when it sees a transcoded video: if it gets the original raw video, it does nothing, but when it gets the transcoded one and that object has been finalized, it says: cool, now I can tell the user. But you should definitely just use two separate buckets if you're building YouTube. Okay. Why would you do this? This
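That guard can be sketched like this. The event hash loosely follows the shape of a storage "object finalized" notification (a `"name"` key holding the object path); the prefix convention and the transcode stub are made up for illustration.

```ruby
# Sketch of guarding a storage-finalize handler against re-triggering
# itself forever. The event hash loosely follows a storage
# object-finalize notification; everything else is invented.
TRANSCODED_PREFIX = "transcoded/"

def transcode_on_finalize(event)
  name = event["name"]
  # Guard: objects we produced ourselves land under transcoded/ in the
  # same bucket, so skip them instead of transcoding in a loop.
  return :skipped if name.start_with?(TRANSCODED_PREFIX)

  # Pretend transcode: return the path where the result would be written,
  # which will itself fire a finalize event and hit the guard above.
  "#{TRANSCODED_PREFIX}#{name}"
end

puts transcode_on_finalize("bucket" => "videos", "name" => "raw.mp4")            # => transcoded/raw.mp4
puts transcode_on_finalize("bucket" => "videos", "name" => "transcoded/raw.mp4") # => skipped
```

As the talk says, separate input and output buckets are the cleaner fix; the prefix check is the in-bucket fallback.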

is kind of an interesting one. Video transcoding is a very intensive process, and because it's a very intensive process, you might not want to keep the capacity to handle that load sitting in your datacenter or your cloud account all the time. So again, we're talking about scalability. Although I should also say that because it's such an intensive process, it's not necessarily a great fit for running on cloud functions; we'll talk more about that in a second. But if you think about other things, image processing maybe, versus video processing, or audio,

or scanning PDFs or something like that, these are still good options, and they do allow you that flexibility. And again, like I said in the previous example, you get the ability to choose a different technology if something makes more sense for you: if something performs better than what you could do in Ruby, you can potentially use it, assuming your provider supports it. The downsides are pretty much the same as before. It's more opaque; in some ways it's even more opaque than before, because at least when you're using something like SQS or Pub

Sub or Azure Queues, you can interrogate that queue and ask: hey, how many messages do you have? What am I waiting for? What's going on here? When you're just waiting for something like S3 to tell you to do something, there's no way for you to find that out. So you have to pay a little more attention to your monitoring. The situation is a bit different from what it would have been otherwise, but you were going to be monitoring it one way or the other, so it's not new work you have to do; it's just different. There are more

moving pieces now, in addition to your actual job executor, which in this case is the function: you've got that queue you don't have any access to, you have whatever else might be happening that you don't really know anything about, and storage is another piece of it. There are a lot of pieces moving here. So again, this is a way that you could do it, but not necessarily a way that you should; it's really up to you whether this fits well with your application. So I'm going to leave you with a few things to know.

I guess you could sort of call these warnings. I talked a lot about automatic scaling, but it's not unbounded. All of the providers that I've seen have an upper bound to how far they're going to scale for you, and you actually want this, because what you don't want is someone maliciously uploading several million terabytes' worth of videos and all of a sudden your bill is six million dollars, because the platform just said, okay, cool, I'll do all of these at the same time. The cloud providers are going to try and save you from that a little bit by putting some limits in place. There are rate limits. There are execution limits, both simultaneous-execution limits and duration-of-execution limits. There are resource limits: you can only use so much RAM, you can only use so much disk. And you've got to know about all of these; you have to think about all of them. I did an experiment a couple of weeks ago where I put a hundred million messages into Google Pub/Sub and consumed them

with Cloud Functions as fast as I could. And it turned out that as fast as I could was around 5,000 a second, which is pretty good, but I've worked on systems that needed more throughput than that, and I would imagine there are lots of people in this room who have as well. So this is something that you need to think about; you need to understand the limitations of the system. And this is a limit with Google, anyway, that cannot be changed; it is a hard limit that you cannot go over under any circumstances. So if you need more throughput than that, you need to re-architect your system and do it a little bit differently.

You also really have to understand the systems involved, the pieces that you're using. Part of that is understanding the limits, like I was just talking about, and part of it is understanding the semantics. In that experiment I did, the hundred million messages in Pub/Sub: the thing about the Pub/Sub and message-queue products at the major cloud providers is that by default, none of them guarantee that you're only going to get a message once. They all do at-least-once delivery, and none of them

guarantee ordering. You can turn those features on in various ways: at Amazon you can just create a different kind of queue, and I think that's the same at Azure; with Google you have to pass it through our Dataflow product, through a particular workflow that orders and de-duplicates the messages. But this is something that you have to think about. Out of that hundred million messages, I got about 0.1% repeat messages over the course of 24 hours; about, I think, a hundred and ten thousand messages repeated. So if I was sending email, that would not be acceptable, because that'd be a hundred thousand people who got multiple emails. And when I actually did further analysis, a lot of those repeats were the same messages delivered tens of thousands of times. So you could end up in a situation where you see a very low repeat-message rate, say you sent a hundred million messages and you only see ten thousand repeated, but it was one message repeated ten thousand times, and now you've got a very pissed-off user. So you think about these things and be careful with them. Sometimes it doesn't matter: if you're indexing text, if you're just shoving it into Elasticsearch, you don't care if you do it twice; it's the same text both times, so the second time is fine. Email, maybe not so much; email you want to have a little bit more structure around, so you're a little bit safer.

And then, like I said, you still need to monitor everything. You still need to build tools and systems to monitor what's going on, to make sure that you're doing all the work you meant to do, that it's getting done in a reasonable amount of time, and that it's getting done correctly. And hopefully
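The usual defense against that duplicate-delivery behavior is to make the handler idempotent: record each message ID the first time you see it, and drop anything you've already handled. Here is a minimal sketch; the in-memory Set stands in for a shared store like Redis (where you'd use SET with NX and a TTL), and the message shape is invented for illustration.

```ruby
require "set"

# A sketch of an idempotent consumer for an at-least-once queue.
# The Set is a stand-in for shared state; in a real function you'd
# need an external store, since function instances don't share memory.
class IdempotentMailer
  def initialize
    @seen = Set.new
    @sent = []
  end

  # Returns true if the email was sent, false if the message was a duplicate.
  def handle(message)
    # Set#add? returns nil if the id was already present: a single
    # "check and record" step, which is the property you want.
    return false unless @seen.add?(message[:id])

    @sent << message[:to]  # stand-in for actually sending the email
    true
  end

  attr_reader :sent
end

mailer = IdempotentMailer.new
msgs = [
  { id: "m1", to: "a@example.com" },
  { id: "m2", to: "b@example.com" },
  { id: "m1", to: "a@example.com" },  # at-least-once redelivery
]
results = msgs.map { |m| mailer.handle(m) }
puts results.inspect    # prints [true, true, false]
puts mailer.sent.length # prints 2
```

Even the one-message-delivered-ten-thousand-times case above is harmless under this scheme, because every redelivery after the first hits the dedup check.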

the ecosystem will start to provide some of those tools for us. Some of them may already exist; if they do, I don't know about them, and I'd love to hear about them if you do. But it's just another thing to think about.

So if you want to go do some more digging: you could just google "AWS serverless", "Azure serverless", and "GCP serverless". These are the sort of product landing pages that describe each of the three major providers' serverless offerings in general. If you want to dig a little bit deeper into functions in general,

you want to look at these. On the left, we have the products: Amazon Lambda, Azure Functions, and Google Cloud Functions. On the right are some open-source projects that do the same thing. OpenWhisk is an Apache project; I don't know how tightly, but I think it might have been created by IBM, and it powers IBM Cloud's Cloud Functions, so it was either created in partnership with IBM or by IBM, but it's an Apache project. Fn is the same kind of thing, but from Oracle; I don't know if it powers Oracle Cloud Functions or not, and if you do know, I'd love to hear, but it's a project that they sponsor. And then Fission.io is just one of several frameworks that put functions-as-a-service on top of Kubernetes: Fission is one, there's one called OpenFaaS, and there are certainly others.

And that's all. Thank you so much for coming. Like I said, I'm bleything everywhere: you can find me on GitHub and Twitter as bleything, that's my website as well, and you can email me; I'd love to talk to you more. I'm going to be at the Google booth in the exhibition hall for the rest of the day, so please come by and say hi, especially if you're doing this and want to tell me about how I was wrong, because I honestly, sincerely would like to know that. All right, thanks so much.
