Worried About Application Performance? Cache It! (Cloud Next '19)

Gopal Ashok
Product Manager at Google Cloud Platform
Google Cloud Next 2019
April 9, 2019, San Francisco, United States

About speakers

Gopal Ashok
Product Manager at Google Cloud Platform
Karthi Thyagarajan
Principal Specialist Solutions Architect (Globals) at Amazon Web Services (AWS)

Product management leader passionate about creating impactful products. Track record of planning, delivering, and marketing enterprise products. Proven leadership skills with an emphasis on empowering people and driving data-driven decisions. Strong analytical skills with a record of developing short-term and long-term product strategy. Fast learner with the proven ability to learn new technology areas from a technical and market perspective. Specialties: product management, product strategy, program management, databases, high availability, enterprise software, messaging and positioning, customer relationships, public speaking.


Karthi is a Solutions Architect at Google Cloud focusing on Data and Analytics. In his role, he helps customers build large scale distributed applications using Google Cloud Platform.


About the talk

In-memory caching is a common architectural pattern used to speed up application performance. As easy as it sounds, storing and accessing data from in-memory stores has challenges. How do you size your cache for an optimal cache-hit ratio? What APIs should you use to solve a specific use case? How do you troubleshoot when there is a latency problem? With Cloud Memorystore, we make it easy to deploy and operate an in-memory store that is fully compatible with Redis. In this session, you will learn how to effectively use Cloud Memorystore to speed up your application.


So if you just look at the application ecosystem today, I think it's very fair to say that there are some fundamental characteristics that the applications being built today have. Regardless of whether you're building a departmental application or an internet application, the speed and the response time of applications are super important in today's world. You couple that with scale, you add high availability, and you add the need to run application infrastructure at a certain operational cost, and it becomes a very interesting problem.

So when it comes to caching, why is caching such a fundamental piece of infrastructure for applications? When we see customers use caching, it's primarily around a few high-level use cases. The obvious one is reducing latency: you want to reduce the latency of the application, so you put an in-memory caching layer between your application layer and your database layer. That's the main use case.

The other one is cost. You can argue that if you have a back-end database where you are charged for the operations per second, or for the number of ops that you are using on the backend, a caching layer in between can actually help you. That's one of the other benefits that we see customers derive from caching. That second benefit is kind of tricky, because you have to be very careful in terms of how you size your cache, so you don't get into the whole notion of a thundering herd problem if part of the cache goes down. Do you have enough capacity on the backend, etcetera? That's a case where you want to be super careful in terms of how you design your system.

Literally across all the different types of applications, caching is in use, and I don't think we need to elaborate on that. The whole idea is that you have a cache in your application infrastructure so that you can get the frequently accessed data from memory rather than from disk. But there's a whole bunch of things behind doing this optimally, starting from a caching-pattern perspective.

Cache-aside is fundamentally what everybody uses mostly. You essentially say: hey, go try to get the data from the cache; if it's not there, read it from the database, and once you've read it from the database, cache it so that subsequent queries can get the data from the cache. That's the fundamental caching pattern that most applications use.

But how many of you use write-through today, or any implementation of write-through? Write-through is an interesting case: it's supposed to solve the problems around cache consistency, but it is complex logic, and there are issues — consistency issues that can crop up, and a lot of error conditions that you need to watch out for. Write-through caching means you write to the cache first and then write to the backend database, and you only acknowledge once the write is in both places. Write-behind, which is very rarely used, means you write to the cache first and only later write to the backend database. So that's an eventual-consistency model, and whether eventual consistency works really depends on your application.
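
As a minimal sketch of the cache-aside and write-through patterns just described — using a tiny in-memory stand-in for the client so the example is self-contained; with redis-py you would pass a `redis.Redis(...)` instance instead, and the helper and key names are illustrative:

```python
class FakeRedis:
    """Minimal in-memory stand-in for a redis-py client (illustration only)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def setex(self, key, ttl_seconds, value):
        # A real Redis would expire the key after ttl_seconds; ignored here.
        self._data[key] = value


def fetch_from_db(user_id, db):
    """Stand-in for the expensive backend query."""
    return db[user_id]


def get_user_cache_aside(cache, db, user_id, ttl=300):
    """Cache-aside: try the cache first, fall back to the DB, then cache."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:                  # cache miss
        value = fetch_from_db(user_id, db)
        cache.setex(key, ttl, value)   # so subsequent queries hit the cache
    return value


def save_user_write_through(cache, db, user_id, value, ttl=300):
    """Write-through: write cache and database before acknowledging."""
    cache.setex(f"user:{user_id}", ttl, value)
    db[user_id] = value                # only 'ack' once both writes succeed
```

Note how cache-aside serves stale data until the TTL expires — that is exactly the consistency gap write-through tries to close, at the cost of the extra error handling mentioned above.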

Your application has to be able to tolerate that kind of lag before you use it. The other pattern that we also see is in-process caching. What I mean by in-process is that you have a cache running close to your application — for example, a cache process running directly on your application node, caching data locally. The reason I bring that up in this context is that even when a managed service gives you a distributed, centralized cache, an in-process cache may still be better for you: if you have, say, a latency-sensitive application, it may be better to deploy Redis directly on the node rather than use a centralized managed cache. These are some of the different variants.

So why Redis? I think everybody knows about Redis, but to quickly summarize: we all know Redis as a very versatile in-memory data store. It is categorized as a key-value store, but there are so many different uses of it that it is actually much more than just a key-value store.

If you look at the key capabilities of Redis: you can't forget about the latency part, which is pretty obvious. But the data structures are one of the most powerful aspects of Redis. There are all kinds of different structures that allow you to efficiently process data for various use cases, and I'll talk to a few of them in a little bit. The other things, like high availability and persistence, also make it a lot more usable across different use cases, and I'll talk about those kinds of use cases in just a bit. Scripting, modules, and now streams in Redis 5 are pretty powerful when it comes to stream processing.

Let me really quickly talk about the use cases. Obviously there are all kinds of different use cases; these are very common ones that we see, and what we have seen with Memorystore. The first ones, I would say, are pretty common straight-up caching use cases. Session store is very, very common with Redis: you essentially want your sessions stored in Redis.

And rate limiting: we have a lot of customers basically using Redis for API rate limiting; that's one of the common use cases we see. We also see stream processing as an interesting use case, where you can use various data structures within Redis to quickly process incoming data in your pipeline. So when you look at these use cases, like I said, there are different data structures that you can use to support them. I think most of you are probably very familiar with them if you're already using Redis, so I'll just mention a few.
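
The API rate-limiting use case is commonly built on Redis `INCR` plus `EXPIRE`. Here is a hedged sketch of a fixed-window limiter — again with a minimal in-memory stand-in for the client so it runs anywhere; the function and key names are illustrative, and with redis-py the same `incr`/`expire` calls exist on a real client:

```python
import time


class FakeRedis:
    """Just enough INCR/EXPIRE behaviour to demonstrate the pattern."""
    def __init__(self):
        self._data = {}
        self._expiry = {}

    def _purge(self, key):
        if key in self._expiry and time.time() >= self._expiry[key]:
            self._data.pop(key, None)
            self._expiry.pop(key, None)

    def incr(self, key):
        self._purge(key)
        self._data[key] = self._data.get(key, 0) + 1
        return self._data[key]

    def expire(self, key, ttl_seconds):
        self._expiry[key] = time.time() + ttl_seconds


def allow_request(client, api_key, limit=100, window_seconds=60):
    """Fixed-window limit: INCR a per-window counter, EXPIRE it on first hit."""
    window = int(time.time() // window_seconds)
    key = f"ratelimit:{api_key}:{window}"
    count = client.incr(key)
    if count == 1:
        client.expire(key, window_seconds)  # the window cleans itself up
    return count <= limit
```

A fixed window is the simplest variant; sliding-window schemes built on sorted sets are also common when burstiness at window boundaries matters.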

Hashes, for example: Redis has a very interesting data structure in hashes, which essentially allows you to take a set of key-value pairs and store them in a very memory-efficient structure. One of the things that we recommend is that if you are essentially storing plain key-value pairs, and you have a lot of key-value pairs, it's probably better to take a look and see whether you can convert them into hashes, because hashes are very memory efficient. Sorted sets are also very powerful, for example for a leaderboard: members are easily sorted by their scores, and that basically gives you "what are the top 10" — who are the top 10 players in a game, etcetera. I'm not going to go deep into those; I just want to give you a quick overview in terms of why Redis is so popular, and it looks like a lot of you are already using Redis.

The interesting thing when you move to the cloud is that one of the key benefits everybody wants is not having to manage your own application infrastructure. That is fundamentally one of the key benefits everybody is looking for.
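
The sorted-set leaderboard described above can be sketched like this — the stand-in class implements just enough `ZADD`/`ZREVRANGE` behavior to show the shape of the calls; with redis-py the same method names exist on a real client:

```python
class FakeRedisZ:
    """Just enough sorted-set behaviour (ZADD / ZREVRANGE) for a leaderboard."""
    def __init__(self):
        self._zsets = {}

    def zadd(self, name, mapping):
        # mapping: {member: score}, mirroring redis-py's zadd signature
        self._zsets.setdefault(name, {}).update(mapping)

    def zrevrange(self, name, start, end, withscores=False):
        # Highest score first; `end` is inclusive, as in Redis
        items = sorted(self._zsets.get(name, {}).items(),
                       key=lambda kv: kv[1], reverse=True)[start:end + 1]
        return items if withscores else [member for member, _ in items]


def top_players(client, n=10):
    """The 'top 10 players' query: O(log N) inserts, cheap ranked reads."""
    return client.zrevrange("leaderboard", 0, n - 1, withscores=True)
```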

From a database portfolio perspective — you've probably seen this slide in many sessions — the point is that we offer a variety of managed database services. Cloud Memorystore is our managed in-memory data store; it's essentially managed Redis, and I'll talk more about Cloud Memorystore going forward. We also have the non-relational databases — Cloud Bigtable, which is our wide-column store — and obviously Cloud Spanner, which is our horizontally scalable relational database. The other thing, if you listened to the keynotes: we are also trying to work very closely with our partners and make sure that if there are capabilities that the partner ecosystem provides, you can easily get them from Google Cloud in a very integrated manner. So the bottom line here is that there's a portfolio of services that we provide, and the key goal for us is to provide them as managed services so that you don't have to do the work of managing the infrastructure.

All right, so let's dive into the core of the talk, which is Cloud Memorystore. This is just a high-level marketing slide, but the key point here is that there are three things we optimize for.

One of them is that it's fully managed by us. What that means is that any kind of patching for security vulnerabilities that come from Redis, we automatically take care of. We have probably already done two security patches that came from Redis; we get notified even before you get to know. So we take care of all that. The other thing that we really focus on is availability. For me as a product manager, I keep telling my team that reliability is our number one feature, because that is fundamentally what you expect from a managed service. When it comes to reliability, I'll talk about the different offerings we have and the kinds of capabilities we provide, but the key things are replication and automatic failover, and we also back the standard tier with an SLA. And last but not least, from a security perspective, there are various capabilities that we provide to make sure that the system is secure, and I'll talk more about that.

Cloud Memorystore comes in two flavors. One is what we call the basic tier, and that's essentially a standalone instance of Redis — a standalone instance, no replication. The question is: what do you get with that? We still take care of all the health monitoring, and we take care of automatic recovery from failures. The other thing that we also do is IP preservation: in case the system goes down and comes back up, you are guaranteed to have the same IP, so from an application perspective you don't have to worry about IP changes.

The standard tier is the replicated Redis instance. What we provide in the standard tier is cross-zone replicated Redis. We automatically take care of the failover, and it's a single endpoint: we essentially have a virtual IP in front of the instances, and when a failover happens and the node roles switch back and forth, your clients keep connecting to that single IP. So there's no IP change when that happens. It also allows you to scale up and down seamlessly, and I'll walk through the demo and show you how this works in actuality. We also provide an SLA of availability for the standard-tier service.

The other thing that I want to quickly touch on is that the way we have exposed the service is essentially as a single Redis service. What we have essentially done behind the scenes is create tiers of the service, meaning the more memory you provision, the more network throughput you get. Behind the scenes, the number of CPUs we provision varies depending on the size of the instance that you're provisioning.

The higher the memory, the higher the throughput — that's what gets provisioned. So we went GA about six months back, and we've been doing a lot of work behind the scenes to add more features. We have Redis 4.0 currently in beta, and we are working on Redis 5.0. The other thing that we are enabling as of Next is the ability to access Memorystore from App Engine standard second-generation runtimes.

Does that matter? Is it useful? Yes. The point that I want to make here is that it's only for the second-gen runtimes. If you're using App Engine standard generation 1.x, it isn't supported; only the second-gen runtimes support arbitrary outbound protocols, so if you're using a second-gen runtime, you can essentially connect to Memorystore.

The other thing that we've been asked for quite a bit by customers is the ability to test your application before you deploy, especially when you're using standard-tier instances: you want to be able to test what the behavior would be when a failover happens. So we added a manual failover command that you can use to test your service before you deploy. We also added a bunch of new monitoring metrics. We've learned a lot since GA in terms of how customers are using the service, so we are exposing more and more metrics to help you understand how the system is behaving and how you want to react to it. In the best-practices section, we'll actually walk through each of these metrics and what are some of the things that you should be monitoring and watching out for.
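
That manual failover can be triggered from the command line. A sketch — the instance name and region are placeholders, and you should check `gcloud redis instances failover --help` for the current flags:

```shell
# Trigger a manual failover of a standard-tier instance to observe how
# your application's clients handle the disconnect and reconnect.
gcloud redis instances failover my-instance \
    --region=us-central1 \
    --data-protection-mode=limited-data-loss
```

Because Redis replication is asynchronous, a forced failover can drop recently written data; that is exactly the behavior the command lets you rehearse before production.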

We're also obviously continuously adding regions — we're pretty much in all the regions except maybe three or four, which we will probably roll out pretty quickly. And we're working on compliance support for Memorystore.

So this will probably be interesting to you if you happen to use Memorystore. I just want to walk through the product and give you an overview, but at the same time answer some questions around the behavior and certain characteristics of Memorystore. It's listed under the storage portfolio in the Google Cloud console, and it's pretty straightforward; I think we made it pretty simple to go and provision a Redis instance, and I'm just going to create one right now. From a version perspective, we recommend that you start with 4.0 if you are going to start using Memorystore now. We support both 3.2 and 4.0; 4.0 is currently in beta, but it will go GA in a few weeks, and we highly recommend that you use 4.0 going forward.

This is where I was talking about the tiers: you can select either basic or standard tier, and I'm going to select the standard tier here. The standard tier is available in pretty much most of the regions today. For the zone, you can just leave it at "any" and we will pick the zones for you — is there any reason anybody would need to pick a zone for a standard-tier Redis instance? Generally we recommend just leaving it at "any". From a capacity perspective, this is essentially how you size your Redis instance.

As I mentioned, as you provision more memory, you get more network throughput. The other important aspect to remember is the security model, at least today: we give you a private IP range, and the authorization you can apply is to restrict access to a specific VPC. Talking about the security model, we understand that there are cases where you want a much finer restriction at the instance level, so that's something we are working on enabling — instance-level authentication for Memorystore, rather than just authorizing the VPC network. Today, pretty much any VM that is connected to that VPC network in that project can access the Redis instance.

If you go into the advanced options, we have exposed a few of the Redis configurations. We have not exposed all of them — that we did intentionally — but we expose the main ones. The one that you want to watch out for is the maxmemory policy: by default, we set the maxmemory policy to volatile-lru. I'll talk more about eviction policies in a bit, but it's something you want to be super aware of, in terms of what default we set. And that's really all you have to do: you hit Create, and that's pretty much it — that's all you need to do to provision a Redis instance.

Once provisioned, you basically get the management capabilities right here on the console. We expose a list of metrics in the console itself that you can use to monitor, but our recommendation is that you use Stackdriver for monitoring Cloud Memorystore, because we expose a lot more metrics in Stackdriver compared to the console, and Stackdriver also gives you additional capabilities like alerting, etcetera. We will have a demo in just a bit to talk through the different monitoring metrics that we have and what are some of the things that you want to watch out for when you monitor Memorystore.
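
For reference, the same provisioning flow can be scripted. A sketch of creating a standard-tier instance and overriding the default maxmemory policy — name, region, and size are placeholders, and `maxmemory-policy` is among the Redis configs Memorystore lets you set via `--redis-config`:

```shell
# Provision a 5 GB standard-tier Redis 4.0 instance with an explicit
# eviction policy instead of the volatile-lru default.
gcloud redis instances create my-cache \
    --size=5 \
    --region=us-central1 \
    --tier=standard \
    --redis-version=redis_4_0 \
    --redis-config=maxmemory-policy=allkeys-lru
```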

How many of you use shared VPC in your environment? Nobody? The concept of shared VPC is something you'll probably want to look at if you start using GCP more. The idea is that you can share a network across multiple projects: there's the concept of a host project and service projects, meaning if, say from an organizational perspective, you want to give autonomy to individual application teams, you can create multiple service projects and share the network across all of them. The reason I bring it up is that deploying Cloud Memorystore into one of those service projects is currently not supported; that's one of the things we're working on, so if you have that particular use case, it's something we will enable pretty quickly. A good example I'll mention real quick: we have customers who want to set up a shared dev or test environment, where they share a Memorystore instance across multiple application projects, for multiple versions of the application, and shared VPC would be useful in that case.

One of the other important points that I want to make about where we are with Memorystore is that both basic-tier and standard-tier instances are single-master. That becomes an interesting thing to be aware of, because, Redis being single-threaded, there are certain thresholds you will hit pretty quickly on a single master, depending on your workload. The solution for that, obviously, is Redis Cluster, and we haven't enabled Redis Cluster yet in Cloud Memorystore.

That's something, again, that we are working on, so it's something you need to be aware of. I've seen a few times where customers look at the standard tier and expect it to be a scale-out model — but it is a highly available Redis instance with a single master. Has anybody deployed Redis Cluster by yourself on GCP today?

Now what I want to do is touch on some of the best practices around using Cloud Memorystore. When I look at best practices, these mostly span best practices on Redis in general, plus some specific things around Cloud Memorystore that you want to be aware of when you are using the service. When you look at best practices on Redis, I see three buckets of things. One: there are things you can do in terms of storing data, where you can do things more efficiently. The other is how you manage eviction — that ties into the memory-management aspect of Redis. And finally monitoring: that's something we want to talk about specifically for Cloud Memorystore, because there are certain things that we do from a managed-service perspective that will be useful for you to understand, and that will help you manage your instances better.

From a storing-data perspective, there are a lot of very good recommendations out there; I want to touch on three things that we find very useful. One of them is: if you can compress the data, whenever possible, do that. There are very good libraries, including ones from Google, that you can use to compress data, and that's a very good way to save memory.

Depending on the kind of data, you can sometimes get up to 50% compression. That's one of the things you should definitely look into when you are using Memorystore, because Memorystore behind the scenes uses open-source Redis — any best practice that applies to open-source Redis also applies to Cloud Memorystore. The other one that we follow quite a bit is using the right serializer. There are some very good serializers out there; Google protobuf is one way to serialize data more efficiently if you want to work in a binary format. So that's one of the things you want to take a close look at, again in terms of how you store data, because it's all about how memory-efficient you can be: memory is expensive, and the more data you store, the faster that adds up. And I mentioned this before — hashes can help reduce Redis memory usage; it again all depends on the kind of data you're storing.
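
A minimal sketch of the compress-before-storing idea, using stdlib `zlib` with JSON as a stand-in serializer — protobuf would be the more compact binary option if you maintain a schema:

```python
import json
import zlib


def serialize(obj):
    """JSON-encode then compress a value before caching it."""
    return zlib.compress(json.dumps(obj).encode("utf-8"))


def deserialize(blob):
    """Reverse of serialize: decompress, then JSON-decode."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

The win depends entirely on how repetitive the payload is — highly repetitive JSON can shrink dramatically, while already-compact binary data may not shrink at all, so measure with your own data before adopting it.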

The key point that I want to make here is that it's important to look at it from a Redis perspective and apply the best practices you would apply to Redis if you were deploying it on your own, even when you're using Cloud Memorystore. That's the main point when it comes to best practices on storing data.

Eviction policy is very, very important — it's something you have to be super aware of. Eviction policy is all about how you want to store data and how long you want to keep that data, combined with how much money you want to spend. Fundamentally, eviction policies, as we all know, are the rules you apply to evict data from Redis memory. Based on certain characteristics, there are groups of eviction policies. There's noeviction, which — since Redis has a fixed memory size — means that once you hit that memory limit, no data gets evicted and writes to Redis basically fail; it cannot store any more. Then there are the allkeys and volatile policies. volatile-lru, like I said, is actually our default, and it's what we recommend, but it's an interesting conversation: the caveat is that if you don't set a TTL, then there are no keys that can be evicted. That's an important consideration. At the same time, volatile-lru is a more efficient way to manage memory, because it essentially lets you tell Redis which keys you want it to be able to expire — the keys you give a TTL. But that requires a bit more understanding of your usage patterns: you need to know what the keys are used for, how long you want them in memory, etcetera. That lets you be a little more efficient in terms of memory, and in how large an instance you need to provision.

allkeys-lru, on the other hand, essentially says that once you hit the maxmemory limit, Redis will evict the least recently used keys regardless of TTL — it will fill up the memory and then start evicting keys. There are interesting consequences of using allkeys-lru, and we'll talk about those in our monitoring best practices. In Redis there's also the option to use the LFU policy, which is a somewhat more efficient algorithm: where LRU is about recency, LFU is all about how many times you have accessed the keys. Similar to LRU, you just need to have a very good sense of how your keys are being used, and then using LRU or LFU will actually give you better memory management in Redis.

The other key thing is memory fragmentation. Have you run into memory fragmentation issues on your side? We take care of some of that, but memory fragmentation is a real problem in Redis; if you're using open-source Redis, you will run into it. What memory fragmentation is, basically: Redis has the concept of maxmemory, which is the memory limit you set for the Redis process. But because of the way Redis allocates memory, it can use more memory than what is allocated for maxmemory — it essentially creeps into the system memory beyond what you provisioned for maxmemory, and at some point you run out of memory, and that can cause a crash.

Memory fragmentation happens for a few different reasons. If you have homogeneous keys, then the memory allocation is pretty homogeneous when the allocator assigns memory; but if you have varying-size keys, it's quite possible — there's a much higher chance — that you will run into high fragmentation. When high fragmentation happens, there's a bunch of memory that is unusable, and you can get into a situation where Redis has to use far more memory than what is allocated for it; then you get into that issue of running out of system memory, which can potentially crash Redis. So one of the key things you want to watch out for is the signals that allow you to detect memory fragmentation and take action on it.
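
One signal worth watching is the fragmentation ratio Redis itself reports in `INFO` — `used_memory_rss` over `used_memory`. A sketch of computing it; the ~1.5 threshold mentioned in the comment is a common rule of thumb, not an official limit:

```python
def fragmentation_ratio(info):
    """Redis's mem_fragmentation_ratio from the INFO memory fields:
    RSS actually held from the OS divided by what the allocator reports
    in use. Values well above ~1.5 are commonly treated as a signal that
    fragmentation is worth acting on."""
    return info["used_memory_rss"] / info["used_memory"]


# With redis-py this would be: fragmentation_ratio(client.info("memory"))
```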

In our best-practices demo, we'll walk through what you need to watch out for, because while there are things we do from a managed-Redis perspective to ensure your systems stay up and running, there are also cases where you want to be proactive about not getting into a high-memory-fragmentation situation. This problem was actually worse in 3.2, but in 4.0 they introduced active defragmentation: there's a background thread that basically defragments. This comes at the cost of higher CPU, so it's something you want to turn on and then verify what the CPU overhead is for your workload. Based on everything we have seen in the community and the testing that we have done, it seems to be a safe thing to turn on, but again, depending on your workload, you want to monitor your CPU when you do this. So monitoring fragmentation and taking action on it is one of the things that, if you start using Memorystore, you absolutely should know how to do. And even if you're not using Memorystore, from a Redis perspective it's important to understand it as one of the metrics you should be monitoring to make sure your system runs in a very stable manner.

So let's go over that. Let me quickly walk through Stackdriver monitoring. If you haven't used Stackdriver before, Stackdriver is Google Cloud's monitoring platform. You can essentially go look for metrics and build dashboards against a Redis instance.

We have exposed a lot of metrics related to Redis here, so there's a large set of metrics you can monitor. But the question is: what are the baseline metrics that you want to monitor, and what do you want to watch out for? That's what we want to talk about here. So we've created a couple of dashboards; let me just go back. What we have here is an instance where we are running a benchmark behind the scenes — a memtier benchmark. We created an instance with basically the default eviction policy, volatile-lru, and we set the time-to-live to be very short. So you can see the graphs: there's a graph with maxmemory and used memory. This instance is actually running pretty stable right now — there's enough memory available for Redis to consume — but that's one graph you want to monitor, to ensure that your memory consumption is okay.

The next graph, system memory usage ratio, is something that we just released, and it's essentially the graph that tells you how much fragmentation you have in your system. About the system memory usage ratio: when we manage the service and you provision a Redis instance, we obviously provision some overhead beyond your dataset, because Redis requires it for the whole bunch of background operations that it does. There's a general recommendation that you need to provision some system memory over maxmemory to make sure the system stays up and running in a stable fashion, and we take care of that for you today.

But even with that overhead, there will be fragmentation that happens in your system, so you should monitor the system memory usage ratio. What you're looking for is whether the graph is trending towards 100%. Based on the overhead that we provision behind the scenes, your system memory usage ratio will always start at some percentage — typically around 60%, which is fine, because that reflects the overhead we provision behind the scenes. What you want to watch out for is the trend: as your workload peaks, and over a period of time, whether it's trending towards, let's say, 70, 80, 90 percent. So that's a key graph.

Pending replication, again, is a very interesting one that you should always watch, because Redis replication is asynchronous. Depending on the write load and the kind of instance you have, you can build up a pretty big replication backlog — this is just the nature of Redis. What that graph tells you is whether your instance is able to sustain the workload running on it. The consequence of bytes pending replication going high is that it can drive higher memory consumption, but also that the replica can fall into doing a full synchronization once the pending replication goes beyond a certain limit. So that's another key graph to watch when you're managing Redis in general — and all these metrics are exposed for you, so you can just use Stackdriver to do it.

The other interesting one is the number of keys versus the number of expirable keys. It's a good thing to understand how many expirable keys you have: if that's very low, it means you haven't really been setting TTLs, and you may want to reconsider your eviction policy accordingly. Quickly scrolling down, a few more things you want to watch: CPU and network throughput are obviously good ones to keep an eye on, and evicted keys — for example, if you're running into memory pressure and you're not seeing any evictions, that's something you may want to go take a look at. Failovers are also an interesting one to monitor.

certain areas where we have to say the word in Sims without you knowing or triggering just to make sure that the system is in a stable State very good to know whether it's multiple Fela what's happened in the. Of time and that can also have consequence your application because when we do I say the word obviously be disconnect the client. So if you're running into behaviors or scenarios where you're like, why is my application, you know getting disconnected or if there is some kind of inconsistency when you're running like a job, for example, you do a fade over because right is replication is

a synchronous there could be a leader or potentially lot of things that happen. So monitoring if a dog was are happening in your environment is a good way to understand if some of the application behavior that is happening is due to the state over that's happening and assistant if you see a lot of Taylor was in the in call memory. So that means there's something going on and you know, we will know about it will take care of it, but it's important for you also to know that that's going on. The Beast I would say are the key metrics that you should be monitoring regardless and other

things are bad are happening or not venue provision memory store. You should be just monitoring these metrics in your environment. Let's take a quick look at Alder down out of their graphs st. Patrick's but interesting thing here. Is that video here on this bench mark, and he said the long TTL and that's essentially what I'm talkin about in terms of the system memory usage ratio. Keep going up and be kind of I think we saw the Benchmark at some point, but if you continue to run that Benchmark for the incident size that we had we had since they would have hit the system memory usage ratio by

100%. Points devotee to work right and beautified to keep the system up and running but in some cases it can get extreme where we are not able to order system why is not able to recover itself? So that's the reason we recommend that you monitor your system memory usage ratio to be around 80% right set an alert and stock driver and the app that you can take at that point to Jesus catering. Right, so that'll instantly give you more memory for the workload that you're running. So that's a very important graph to keep an eye on and also, I love song.
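The alerting rule described here (act when the system memory usage ratio reaches about 80% and is still climbing) can be sketched as a small decision helper. This is a minimal sketch, not code from the talk: the `should_scale` function name, its thresholds, and its trend test are illustrative assumptions.

```python
# Sketch of the ~80% alert rule described above. The helper and thresholds
# are illustrative assumptions; the Stackdriver metric this would apply to
# is (assumed) redis.googleapis.com/stats/memory/system_memory_usage_ratio.

def should_scale(ratio_samples, alert_ratio=0.80):
    """Return True if the newest system memory usage ratio sample is at or
    above the alert threshold AND the series is trending upward."""
    if len(ratio_samples) < 2:
        return False
    latest = ratio_samples[-1]
    trending_up = ratio_samples[-1] > ratio_samples[0]
    return latest >= alert_ratio and trending_up

# A healthy instance sits near the ~60% baseline mentioned in the talk:
print(should_scale([0.58, 0.60, 0.61]))   # False: below 80%
# A long-TTL benchmark pushing toward 100% should fire:
print(should_scale([0.70, 0.78, 0.83]))   # True
```

In practice you would feed this with samples read from Stackdriver on a schedule rather than hard-coded lists.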

So Gopal talked about monitoring the system memory usage ratio and figuring out the load, and the question is, when you want to do scaling, what's an efficient way to do it? As we were working through this demo, one of the things we wanted to showcase is the fact that in addition to the reliability you get with Cloud Memorystore for Redis, as opposed to running Redis on your own, there's the GCP ecosystem of services that you get to use, and this

is a good example of that. As Gopal mentioned, you look at the trends, for example the system memory usage ratio and other metrics, and as they trend in a particular direction, you want to take action. You could take action manually, or you could automate it, and this is one way to do it. The reason we want to talk about doing it this way is because it's fully serverless, and serverless is all the rage: you're not standing up separate VMs or anything like that. You can count on the fact that Stackdriver is essentially a giant time-series database, so you can

inspect the metrics that you're interested in, all the ones Gopal pointed to, and you can do that on a scheduled basis. As I'll show you, we're going to do it once every minute: sample the metrics we're interested in, look at how they're trending, and then, based on the trend, use a Cloud Function to scale up your Cloud Memorystore for Redis instance. Just like every GCP service, Cloud Memorystore has an API to initiate a scale-up, so you can automate this.
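The scheduler-to-function pipeline just described could be wired up with gcloud roughly like this. Treat it as a sketch: the topic, job, and function names are made up, and the flags reflect the CLI of that era, so check `gcloud scheduler jobs create pubsub --help` for the current syntax.

```shell
# 1. A Pub/Sub topic for the once-a-minute tick (name is illustrative).
gcloud pubsub topics create memorystore-sampler

# 2. A Cloud Scheduler job that publishes to the topic every minute.
gcloud scheduler jobs create pubsub sample-memorystore-metrics \
    --schedule="* * * * *" \
    --topic=memorystore-sampler \
    --message-body="tick"

# 3. A Cloud Function triggered by the topic; its handler samples the
#    Stackdriver metrics and decides whether to scale the instance.
gcloud functions deploy scale_memorystore \
    --runtime=python37 \
    --trigger-topic=memorystore-sampler
```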

So let's look at that real quick; we can switch back to the demo. I mentioned the Cloud Scheduler service. This is a service that we recently released: you don't want to stand up a VM and run cron on it, you want to use a reliable service for this. The way this works is you set up the service to generate a Pub/Sub message, in this case once a minute, and then, based on the message getting deposited into a topic, you run a Cloud Function. I'll quickly

talk through this function; I don't want to take up too much time, and hopefully most of the folks in this room know Python. Essentially, what we're going to do is check whether our system memory usage ratio is trending up rapidly, and the exact threshold is up to you to decide. Another thing we have to pay attention to, if you remember the metric we were tracking, is bytes pending replication. We want to make sure that this is not some crazy number, where the replica is, say, eight gigs behind. You don't want to trigger

a scale-up event at that time, because you might have data loss, so you have to balance this. What we would recommend is that you try to get ahead of the situation: when you see growth early on and you know the growth is going to continue, trigger this kind of scale-up event. At that point you decide how much to scale up by; in this case I just arbitrarily picked two gigs. Then you call the scale-up function, which uses the Cloud Memorystore for Redis Python SDK

and basically sets the memory size to the new value that you would like it to have. Of course, you want to make sure that you do this with some boundaries in mind, because we have a limit of 300 gigs, and you may have your own limits in terms of budgets and things like that. Keep that in mind when you automate it, but it is recommended that you automate this. Thank you very much, that was our talk.
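The scale-up path just described can be sketched with the google-cloud-redis client library (`google.cloud.redis_v1`). This is not the demo's actual function: the helper names, the 2 GB step, and the backlog guard threshold are illustrative assumptions, and the client import is deferred into `scale_up` so the decision helpers can be read and run without the library or credentials.

```python
MAX_MEMORY_GB = 300  # per-instance Cloud Memorystore limit mentioned in the talk

def safe_to_scale(backlog_bytes, max_backlog_bytes=1 << 30):
    """Skip scaling while the replication backlog is large (e.g. ~8 GiB
    behind): scaling then risks a full resync and potential data loss.
    The 1 GiB default here is an arbitrary illustrative cutoff."""
    return backlog_bytes < max_backlog_bytes

def next_size_gb(current_gb, step_gb=2, cap_gb=MAX_MEMORY_GB):
    """Grow by step_gb, but never past the cap (budget or service limit)."""
    return min(current_gb + step_gb, cap_gb)

def scale_up(instance_name, step_gb=2):
    """Resize a Cloud Memorystore for Redis instance via the Python SDK.
    instance_name is the full 'projects/.../locations/.../instances/...' path.
    Import is deferred so this sketch parses without the library installed."""
    from google.cloud import redis_v1  # pip install google-cloud-redis

    client = redis_v1.CloudRedisClient()
    instance = client.get_instance(name=instance_name)
    instance.memory_size_gb = next_size_gb(instance.memory_size_gb, step_gb)
    # update_instance returns a long-running operation; result() blocks on it.
    operation = client.update_instance(
        update_mask={"paths": ["memory_size_gb"]},
        instance=instance,
    )
    return operation.result()

print(next_size_gb(10))        # 12
print(next_size_gb(299))       # 300 (clamped to the cap)
print(safe_to_scale(8 << 30))  # False: ~8 GiB behind, wait for it to drain
```

A Cloud Function handler would call `safe_to_scale` with the sampled bytes-pending-replication value before invoking `scale_up`.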
