Understanding Android memory usage

Richard Uhler
Software Engineer at Google
2018 Google I/O
May 9, 2018, Mountain View, USA

About the speaker

Richard Uhler
Software Engineer at Google

Richard is an engineer on the Android runtime team who works with first-party app developers and platform developers to better test, understand, control, and improve application memory use on Android. Richard has a PhD and SM in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He earned a bachelor's degree in Electrical Engineering from the University of California at Los Angeles.


About the talk

Understanding where Android applications consume memory is important for performance and usability, especially on low-memory devices. Different types of memory, including shared memory, DEX memory, and GPU memory, all contribute to an application's overall footprint and to the user experience. This talk presents a model for understanding where applications use memory, helping developers improve their own applications.


My name is Richard Uhler. I'm a software engineer on the Android runtime team. I've spent the last three years trying to better understand memory use on Android, and more recently I've been working with first-party app developers to tackle the challenges of evaluating and improving Android memory use. It's great to see so many of you here interested in Android memory use.

To start off: why should you, as an app developer, care about memory use? For me, it's really about the Android ecosystem: the ecosystem of applications, of devices, and of the users of those devices. Where memory comes into play is not so much on the premium devices, where you have a lot of memory available, but much more on the entry-level devices, because those devices need a decent selection of low-memory apps to work well. If application memory requirements grow, the entry-level devices won't work as well. If they don't work as well, OEMs won't want to produce these devices, and if they don't produce these devices, we're excluding a bunch of users from our Android ecosystem. And that's a bad thing. So app developers have a role to play when developing their applications: do whatever you can to be efficient in your memory use, to reduce it and keep it from growing too much, so that we have a nice selection of low-memory applications, entry-level devices work well, OEMs produce those devices, and we can put them in the hands of users who will use our applications.

So I'm going to talk about three broad areas. First, I'll talk about the mechanisms that come into play on an Android device when it's running low on memory, and how that impacts the user. Second, I'll talk about how we evaluate an application's memory impact, and in particular some very important factors to be aware of when doing that. And third, I'll give you some tips for how to reduce your application's memory impact, especially given that a lot of the allocations going on in your application originate deep within the Android stack on which it's running.

Okay, let's start: what happens on the device when it's running low on memory?

Physical memory on a device is organized into pages, and each page is typically around 4 kilobytes. Different pages can be used for different things. There are used pages, which are actively being used by processes. There are cached pages, which are being used by processes but whose data also lives somewhere on the device's storage, which means we can sometimes reclaim those pages. And then there are free pages, memory sitting on the device that isn't being used.

So what I've done is take a 2 GB device and start it doing nothing, so at the very beginning the runtime isn't running, and then start using it more and more, exercising lots of different applications, which has the effect of using more and more memory on the device over time. At the beginning, the flat line is before I started the runtime. When the runtime starts up, there's plenty of free memory available on the device, and this is a happy device, because if an application needs more memory, the kernel can satisfy that request right away from the free memory.

Over time, as you use more memory, the free memory gets exhausted. To avoid very bad things from happening, the Linux kernel has a mechanism called kswapd, whose job is to find more free memory. It kicks in when free memory goes below what I'm calling here the kswapd threshold, and the main mechanism kswapd uses to find more free memory is to reclaim cached pages. Now, if an app goes to access memory in a cached page that has been reclaimed, it's going to take a little extra time to reload that data from device storage, but the user is probably not going to notice that, so that's okay.

As I exercise more and more applications that use more memory, the number of cached pages falls as kswapd reclaims them. If it gets too low, if there are too few cached pages, the device can start to thrash, and that's a very bad thing, because the device basically locks up completely. So on Android we have a mechanism called the low memory killer that kicks in when the amount of cached memory falls too low. The way it works is that the low memory killer picks a process on the device, kills it, and gets back all the memory that process was using. This is an unhappy state to be in, especially if the low memory killer kills a process that the user cares about.
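For reference, an app can ask the framework roughly where the device stands relative to that low-memory threshold. A minimal Kotlin sketch using the standard ActivityManager.MemoryInfo API; the "MemCheck" log tag is just an example:

    import android.app.ActivityManager
    import android.content.Context
    import android.util.Log

    // Query the framework's view of device memory: available memory, the
    // low-memory threshold, and whether the device currently counts as low on memory.
    fun logDeviceMemoryState(context: Context) {
        val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        val info = ActivityManager.MemoryInfo()
        am.getMemoryInfo(info)
        Log.i(
            "MemCheck",
            "availMem=${info.availMem / (1024 * 1024)} MB, " +
                "totalMem=${info.totalMem / (1024 * 1024)} MB, " +
                "threshold=${info.threshold / (1024 * 1024)} MB, " +
                "lowMemory=${info.lowMemory}"
        )
    }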

So let me tell you a little bit more about how the low memory killer decides what to kill. Android keeps track of the processes running on the device and keeps them in a priority order. The highest priority processes are native processes. These are the ones that come with Linux, things like init and kswapd, which I just told you about, daemons like netd and logd, and Android-specific daemons like adbd and installd; basically any native process running is categorized here. The next highest priority process is the system server, which is the thing maintaining this list. That's followed by what are known as persistent processes; these are core functionality, so telephony, NFC, SMS, those kinds of things. Next we have the foreground app: the application the user is directly interacting with. In this case, perhaps the user is viewing a web page, so they're interacting with the Chrome app. Next in priority are what are called perceptible or visible processes. These are processes the user isn't directly interacting with but can perceive in some way. For instance, a search process might have a little bit of UI on the screen, or if the user is listening to music in the background, they can hear that music through their headphones; they can perceive it. After the perceptible apps we have services: services started by applications for things like syncing, or uploading to and downloading from the cloud. And then we have the home app; this is what you get when you hit the home button, and it also hosts your wallpaper if you have something there.

In addition to these running processes, we also keep track of the previous application the user was using. So maybe they were using this red app, your red app, and it brings them to Chrome with a link; when they switch to Chrome, that app becomes the previous app. And we also keep in memory a bunch of other processes, which are cached applications the user used before, some of them recently, some of them not for a while. I want to point out that when I use the term "cached" for these processes, this is a different use of the term than the cached memory pages I was talking about previously.

The reason we keep around previous and cached processes is that if a user wants to switch to one of these applications, say the previous application, it's very quick to switch to it. I should say this is for a device in a normal memory state. So switching to the previous application is very quick, but switching to an application that happens to be in a cached process is also very quick, because it's already in memory.

Now let's step back and ask what happens when a device is low on memory. In that case, the memory used by the running applications is growing, the number of cached pages drops below the low memory killer threshold, and the low memory killer has to come in and kill something to free up memory. It's going to start killing from the bottom of this list. So maybe it kills this blue application; that's gone, and we get some memory back for the applications that are still running. But if the user now wants to switch back to that blue application, it's not cached any longer, which means it's going to take a noticeably long time to launch, maybe two or three seconds. This is perhaps the first place where the user starts to really feel that something is making things slower.

As the running processes continue to use more memory, we get under more memory pressure, and the low memory killer starts to kill more and more cached processes, until eventually there are only a few cached processes left. At this point we say the device memory state is critical. This is a very bad place to be. If the running processes continue to use more memory, the low memory killer has to kill more processes, and eventually it's going to end up killing the home app. At this point the user is going to ask, what just happened to my wallpaper? When they go home, it's a black screen for a few seconds before the wallpaper starts up again. If it's even worse, maybe a perceptible process is killed, and the user says, hey, what just happened to my music? I was listening and it just stopped. In the really bad case, the foreground app is killed; to the user, this looks like the app crashed. And in the most extreme case, the low memory killer basically needs to kill the system server; this looks like your phone rebooted.
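One way an application can react as it slides down this priority list is the framework's onTrimMemory callback; here's a minimal Kotlin sketch, where the in-memory image cache is a hypothetical example of state an app could drop:

    import android.app.Application
    import android.content.ComponentCallbacks2
    import android.util.LruCache

    class MyApp : Application() {
        // Hypothetical in-memory cache; stands in for any state the app can rebuild later.
        val imageCache = LruCache<String, ByteArray>(32)

        override fun onTrimMemory(level: Int) {
            super.onTrimMemory(level)
            if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
                // The UI is no longer visible: shrink caches while backgrounded.
                imageCache.trimToSize(8)
            }
            if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
                // The process is on the cached list and may be killed soon:
                // drop everything rebuildable, so it stays cheap to keep around.
                imageCache.evictAll()
            }
        }
    }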

So these are all very visible impacts of a device running low on memory, and it's not a good user experience when this is happening on your device.

I want to go back to the graph I was showing you before, about what happens to the memory pages on the device as you use more memory. That was a 2 GB device. What do you think this graph looks like for a 512 MB device? I'll give you a few seconds to think about that. Do you have an idea? So I tried it: on a 512 MB device, same thing, start the runtime, use more memory, and it looks something like this. Because there's so little memory available, at the beginning there are very few free pages to use up before kswapd has to kick in, and then there are very few cached pages to reclaim before the low memory killer is needed to start killing things. You can imagine that if you have this device and the low memory killer is always active, always killing processes and leading to this bad user experience, then maybe you're not going to be too interested in shipping this device, because it just doesn't work well. And that gets back to the ecosystem challenges I mentioned at the beginning.

So this is why we care about memory. Now, how do we figure out how much memory an application is using? How do we know your application's memory impact? I told you that memory on the device is broken down into pages. The Linux kernel keeps track, for each process running on the device, of which pages it's using. So maybe we have a system process, a Google Play services process, and a couple of apps running on the device, and we want to know each of their memory impacts. Can we just tally up the number of pages each one is using? It's a little more complicated than that, because of sharing: multiple processes on the device can be sharing memory. For instance, if you have an app that's calling into Google Play services, it's going to be sharing some memory, perhaps code memory or other kinds of memory, with the Google Play services process. So we can ask: how should we account for that shared memory? Is it part of the application's responsibility? Is that memory impact something we should care about?

There are a few different ways you can approach this. One is to use what we call resident set size, or RSS. When counting an app's RSS, we say the application is fully responsible for all the pages it's sharing with other processes. Another approach is called proportional set size, PSS. In this case we say the app is responsible for shared pages in proportion to the number of processes sharing them. So if two processes are sharing these pages, the application is responsible for half of them; if three processes are sharing the same memory, the application is responsible for a third of them; and so on. And the third approach is called unique set size, USS, where we say the application is not responsible for any of its shared pages.

In general, which approach to take really depends on the context. For instance, if those shared pages were not being used in the Google Play services process until your app called into Google Play services, then maybe it makes sense to say the app is responsible for all of those pages, and we'd want to use RSS. On the other hand, if those pages were sitting in memory in the Google Play services process before the app called into it, if they were always there and the app is not the one bringing them into memory, then we wouldn't want to count them, and USS would be more appropriate. In general, we don't have access to this high-level context, at least at the system level, so the approach we take is the most straightforward one: proportional set size, with equal sharing.
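To make the three definitions concrete, here's a tiny Kotlin sketch with made-up page counts (the numbers are purely illustrative):

    // Illustrative accounting for one process: 300 private pages, plus 120 pages
    // shared with two other processes (three sharers in total). Page counts are made up.
    fun main() {
        val privatePages = 300
        val sharedPages = 120
        val processesSharing = 3

        val rss = privatePages + sharedPages                      // charged for all shared pages
        val pss = privatePages + sharedPages / processesSharing   // charged proportionally
        val uss = privatePages                                    // charged for none of the shared pages

        println("RSS=$rss pages, PSS=$pss pages, USS=$uss pages")  // RSS=420, PSS=340, USS=300
    }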

One benefit of using PSS for evaluating an application's memory impact, especially when looking at multiple processes at the same time, is that it avoids over-counting or under-counting shared pages.

Okay, so use PSS for your application's memory impact. You can run the command adb shell dumpsys meminfo -s and give it your process name, or the process ID if you happen to know it. It will output something like this: an app summary view of the application's memory, and at the very bottom there's a total. That number is the application's PSS.
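If you want the same number from inside the app or a test, the framework exposes PSS programmatically too. A minimal Kotlin sketch using Debug.getPss() for your own process and ActivityManager.getProcessMemoryInfo() for another PID you own; the log tag is just an example:

    import android.app.ActivityManager
    import android.content.Context
    import android.os.Debug
    import android.util.Log

    // PSS of the calling process, in kilobytes.
    fun myPssKb(): Long = Debug.getPss()

    // PSS of another process you own (e.g. a service process), also in kilobytes.
    fun pssOfPidKb(context: Context, pid: Int): Int {
        val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        val info = am.getProcessMemoryInfo(intArrayOf(pid))[0]
        return info.totalPss
    }

    fun logMyPss() = Log.i("MemCheck", "PSS = ${myPssKb()} kB")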

Now, let's say you do this and figure out what your application's PSS is. There's a very interesting question to ask: how much memory should your application be using? Because, as I said earlier, if we use a lot of memory, that's bad: the low memory killer kicks in. But we're actually using memory for a reason: to provide features, to provide user value, to provide delightfulness. Everything that makes your app great is going to take up memory. So we have a trade-off between user value and memory, and that's what I'm showing in this graph. In an ideal world we'd be up and to the left, providing a lot of user value without much memory impact at all, but in practice that's probably technically infeasible, because you need memory to provide value, and there's only so much value you can provide with a limited amount of memory. The other extreme would be using a lot of memory to provide not much value, and I think it's safe to say that's not a great app: it's using too much memory for the value it gives, and it's not worth it to the user.

Next we can look at the corner of the graph where we're not providing a lot of user value, but we're also not using much memory. We can say this is a small application, maybe your desk clock app. And at the other end we can have apps that use a lot of memory to provide a lot of value; these are large applications, maybe a photo editor or something like that. So which is better, a small app or a large app? They can both be useful, except that when I say an application is using "too much" memory, that really depends on what kind of device you're running on. A premium device can support a much larger application, but on a smaller, entry-level device, maybe that large app uses too much memory to make sense there. So really I should be drawing a line and saying "too much memory" depends on the device: premium and mid-tier might support that large app, and entry-level might not.

For better or for worse, what I see happening often is that over time, as you develop your application, you tend to add more features, and it tends to take more memory, so you tend to move up and to the right in this graph. Now, this is actually good for mid-tier and premium users, because they're getting more value, more bang for their buck memory-wise. But it's a little bit unfortunate for the entry-level device user, because while they could use the older version of your app, you've now added so many features, and it's using so much memory, that it just doesn't work as well on their device.

The takeaways I want to give here: anything you can do to improve your application's memory efficiency is good, so if you can move to the left on this graph, using less memory without sacrificing user value, that's great. And just be aware that when you're adding new features, while that can be good for mid-tier and premium device users, there might be a negative consequence for entry-level devices.

Okay, there's something wrong with this graph. Does anyone know what it is? Well, the problem is that it suggests an application's memory use is one number: you give me the application and I can tell you its PSS. In practice, that's far from the case.

An application's memory impact depends on a whole bunch of different things, such as the application use case, the platform configuration, and the device memory pressure. It's important to be aware of this when you're testing your application's memory, perhaps testing for a regression or checking that an optimization is working: make sure you're testing the application use case you care about, and that you're controlling all of the other parameters, so you're doing a proper apples-to-apples comparison.

Let me go into a little more detail. How does the application use case impact memory? What I've done here is start using Gmail and switch between different use cases in the application over time, roughly every 20 seconds. I started out viewing my inbox, using just a little over a hundred megabytes of PSS. Then I switched to looking at an email that had some text, which uses a little more memory. I switched to looking at a different email, this time with pictures, which uses more memory still. Then I started to compose an email, which used a little less, and then I stopped using the app, and it uses less memory. You can see that depending on the application use case, the memory impact varies quite significantly, and it doesn't necessarily make sense to compare your application's memory at point A and point B if those are different use cases.

Okay, the application use case is a pretty straightforward factor. Something that's less obvious is that your memory impact will also change a lot depending on the platform configuration.

What I'm showing in this graph is one of the application use cases from the previous slide, Gmail looking at an email with pictures, running on a bunch of different devices: a Nexus 4, a Nexus 5X, a Nexus 6P, a Pixel XL, and also on a number of different platform versions, even within the same device. For instance, for the Nexus 5X I ran it on Android M and on O. You can see there's quite a variation in how much memory this application use case takes. This comes about because different devices have different screen resolutions and screen sizes, which means bitmaps take up different amounts of memory; you might have different platform optimizations on different devices; you might have a different zygote configuration, or a different runtime configuration that's running your code differently. There are a lot of different factors going on here, which means that when you switch to a different platform configuration, you're going to get different memory use. So when you're testing your application's memory use, try as hard as you can to use a consistent platform setup: the same kind of device, the same platform version, and the same scenario of what's running on the device.
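Since these parameters are easy to forget, it can help to record them next to every measurement. A minimal Kotlin sketch that logs the device and platform configuration alongside a PSS sample; the log tag is just an example:

    import android.content.Context
    import android.os.Build
    import android.os.Debug
    import android.util.Log

    // Log the platform configuration next to each memory sample, so two runs can be
    // checked for an apples-to-apples comparison after the fact.
    fun logMemorySampleWithConfig(context: Context) {
        val dm = context.resources.displayMetrics
        Log.i(
            "MemCheck",
            "pssKb=${Debug.getPss()} device=${Build.MODEL} sdk=${Build.VERSION.SDK_INT} " +
                "release=${Build.VERSION.RELEASE} densityDpi=${dm.densityDpi} " +
                "screen=${dm.widthPixels}x${dm.heightPixels}"
        )
    }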

There's a third factor I want to talk about, which is pretty interesting because it's a little counterintuitive: an application's memory impact depends on the memory pressure on the device. Here, what I've done is take the Chrome application and start running it on a device that had plenty of free memory, and then set up a native process in the background that slowly uses more and more memory on the device, so I can see what happens to Chrome when the device gets under medium and then high memory pressure. We can see that when there's plenty of free memory on the device, so low memory pressure, Chrome's PSS is pretty flat, except for that little spike, which is probably some variation in the use case or the platform. When the device gets under enough memory pressure that kswapd kicks in and starts to reclaim cached pages, some of the pages it reclaims are going to be from the Chrome process, and that causes Chrome's memory impact to go down; its PSS goes down. Eventually, if the device is under so much memory pressure that the low memory killer is active and decides to kill Chrome, then Chrome's PSS very quickly drops to zero.

So you can see that even for the same application use case and the same platform configuration, there's a wide range of PSS values we might get, so you have to be a little careful. Imagine I've come up with an optimized version of the Chrome APK, and it has this lighter blue line for its memory profile. I'm confident it's an optimized version of the APK from a memory standpoint, because at every level of device memory pressure it uses less memory. But if I'm doing a test and I sample the PSS of the original Chrome version at point A, and I sample the PSS of the supposedly optimized version at point B, and I compare them and say, well, A is less than B, so it uses less memory, I might falsely conclude that the original version of Chrome is better than my optimized version. So you really have to be careful when comparing PSS values to make sure the device memory pressure is the same; otherwise you can get these funny results. My advice, because it is pretty hard to control for device memory pressure, is to run your tests on a device that has plenty of free RAM, so that there's low device memory pressure; the PSS numbers will be much more stable in that regime.
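Putting that testing advice together, here's a minimal Kotlin sketch of the kind of regression check you might run from a test on an otherwise idle device with plenty of free RAM. The driveUseCase() function and the budget number are hypothetical placeholders for your own scenario:

    import android.os.Debug

    // Hypothetical scenario driver: navigate the app through the use case under test.
    fun driveUseCase() { /* e.g. open the inbox, then open an email with pictures */ }

    // Sample PSS a few times after the scenario settles and compare the median
    // against a budget. The 150_000 kB budget is a made-up example.
    fun checkPssBudget() {
        driveUseCase()
        val samples = (1..5).map {
            Thread.sleep(2_000)   // let allocations and GC settle between samples
            Debug.getPss()        // PSS of this process, in kilobytes
        }.sorted()
        val medianKb = samples[samples.size / 2]
        check(medianKb < 150_000) { "PSS regression: median $medianKb kB exceeds budget" }
    }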

Okay, so we've talked about why you don't want your application to take up too much memory, and how you can evaluate your application's memory impact. Now let me give you some tips for how to reduce your application's memory use.

The first tip is: check out Android Studio's memory profiler. Profiling your application's Java heap is going to give you a ton of useful information about the Java objects on your heap: where they were allocated, what's holding on to them, how big they are; pretty much anything you want to know about the Java heap, you can see from this. My tip for you is to focus on the app heap. If you open this up in Android Studio, you'll see three heaps: the zygote heap, the image heap, and the app heap. The things in the zygote heap and the image heap are inherited from the system when your application first launches, so there's not much you can do about those, but you can definitely do a lot about the app heap. I'm not going to go into much detail on how to use this tool, actually not very much at all, because Esteban is going to be giving a talk tomorrow at 12:30 on exactly how to use it. His team built the tool, and he's going to be talking about how to do live allocation tracking and deep heap analysis, so I highly recommend you go check out that talk tomorrow at 12:30.
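If you'd rather capture a heap dump from a running device and inspect it later, the platform can also write one out programmatically; a minimal Kotlin sketch, where the output path is just an example:

    import android.content.Context
    import android.os.Debug
    import java.io.File

    // Write an HPROF heap dump of the current process; it can then be pulled off
    // the device and opened in a heap analysis tool.
    fun dumpHeap(context: Context): File {
        val out = File(context.filesDir, "heapdump.hprof")  // example path
        Debug.dumpHprofData(out.absolutePath)
        return out
    }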

Okay, so you say: Richard, you told us we should care about PSS, that's our application's memory impact, and you just told us to use Android Studio's memory profiler to profile the Java heap. But if we look here, we see that the Java heap is actually not all that much of the overall memory impact of the application. What about all the rest of this memory? What should we do there? This is tricky, because most of these allocations originate deep within the platform stack, the Android stack. So if you want to know about them and really understand them, it helps to know a lot more about, say, how the frameworks implement the view system and resources, how native libraries like SQLite and WebView work, how the Android runtime runs your code, how the hardware abstraction layer and graphics work, all the way down to virtual memory management in the Linux kernel. (By the way, I live in the orange block in the middle, the Android runtime; that's where I am in this stack.)

You might ask: okay, so this memory is coming from within the platform; should we be using platform tools to diagnose it? For instance, if dumpsys meminfo -s, that summary view, isn't enough, you could try running dumpsys meminfo with -a to show basically everything the platform can see about your application's memory; it gives you a much more detailed breakdown. For instance, instead of just seeing that your code memory regressed, you can see whether it was your .so memory mappings that regressed, or your .apk or .dex memory mappings. It also shows you a breakdown of the different categories of memory: private clean, shared dirty, and so on. Private dirty memory is like the used memory I was talking about at the beginning; private clean memory, as "clean" suggests, is like the cached memory, data that also lives on storage. So you could use dumpsys meminfo. If that's not enough detail, maybe you see that your .apk and .dex mappings regressed: there's a tool called showmap that you can run on your application, and it gives you an even more fine-grained breakdown of your memory mappings. It will show you the specific files that are memory-mapped into your application, and it can help pinpoint which files might have led to regressions.
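The same category breakdown that dumpsys prints is also available to code; a minimal Kotlin sketch reading the summary stats for your own process (this particular API requires API 23 or later):

    import android.app.ActivityManager
    import android.content.Context
    import android.os.Process
    import android.util.Log

    // Read the dumpsys-style summary categories (java-heap, native-heap, code,
    // graphics, private-other, total-pss, ...) for this process. Values are in kB.
    fun logMemoryCategories(context: Context) {
        val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        val info = am.getProcessMemoryInfo(intArrayOf(Process.myPid()))[0]
        for ((key, value) in info.memoryStats) {   // requires API 23+
            Log.i("MemCheck", "$key = $value kB")
        }
    }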

On the platform we also have a heap dump viewer that I've developed, an experimental tool called ahat, which tries to surface more platform-specific things. You could try using that to learn more about your Java heap, though Android Studio's memory profiler will have all the same information. And then we also have, on the platform, something called debug malloc. This lets you instrument your application so that every native allocation it makes saves a backtrace, a stack trace, for that allocation. You then take what we call a native heap snapshot of your app while it's running instrumented, and if you have the symbols, you can symbolize the stack traces and get stack traces for all of your native allocations. It has quite a bit of overhead at runtime, so it can be a little tricky to work with, but it provides a lot of insight into the native heap.

So we have these platform tools; should we use them? Can we use them? Well, certainly you could; they're all available. But there are some caveats. They tend not to be well supported. They have very clumsy user interfaces, as you just witnessed from my screenshots. This approach requires quite a bit of deep platform expertise to understand, for instance, what the difference is between the various dex and mmap entries and where they come from. You might need a rooted device, as is the case for showmap and debug malloc. You might have to build the platform yourself if you want to get your hands on ahat, or on the symbols that debug malloc needs for symbolizing. The numbers tend to be pretty noisy, because you're looking at memory at a page level. And a lot of the memory use you'll see from these tools is outside of your control anyway; you might see zygote allocations or runtime allocations that aren't related to your code. So I don't think trying to use these tools is the best use of your time. By all means, go ahead and check them out, but I'm going to give a bit of a different suggestion.

The suggestion is: if you want to improve your overall memory use, do two things. One, profile your Java heap using Android Studio's memory profiler, like I showed before; and two, reduce your APK size. Let me tell you why I think this is a reasonable approach for reducing your overall memory impact.

The first reason is that many of the allocations outside the Java heap are tied to Java allocations. Your application calls into the Android framework, which calls into native libraries under the covers, which make native allocations, or even graphics allocations, whose lifetimes are tied to Java objects. Just to give you a sampling: if you see objects like SQLiteDatabase, WebView, or Pattern, those all have native allocations associated with them. A TextView object is going to have native text layout data associated with it. Thread instances on your Java heap are associated with stack memory. And if you're using Bitmaps, or sometimes SurfaceViews or TextureViews, that can lead to graphics memory use underneath. There are many others. So the idea that focusing on your Java heap won't help anywhere else is not true: optimizations on your Java heap are going to help with other memory categories as well. Part of my job is trying to better surface information about these kinds of non-Java-heap allocations, and you're starting to see that: if you look at Android Studio's memory profiler, it reports a number called "native". I just want to let you know that this is an approximation, a suggestion, of some of the non-Java memory that might be associated with a Java object, so take it with a little bit of a grain of salt, but it works really well for surfacing the memory impact of those non-Java allocations.
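To see that coupling concretely, here's a minimal Kotlin sketch comparing a Bitmap's pixel size with the process's native heap before and after decoding it. The R.drawable.big_photo resource is a hypothetical placeholder:

    import android.content.Context
    import android.graphics.BitmapFactory
    import android.os.Debug
    import android.util.Log

    // Decode a bitmap and show how a Java-level object drags non-Java memory along
    // with it. R.drawable.big_photo is a hypothetical resource in your own app.
    fun showBitmapCost(context: Context) {
        val before = Debug.getNativeHeapAllocatedSize()
        val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.big_photo)
        val after = Debug.getNativeHeapAllocatedSize()
        Log.i(
            "MemCheck",
            "pixel data = ${bitmap.allocationByteCount / 1024} kB (the Java object itself is tiny); " +
                "native heap grew by ~${(after - before) / 1024} kB on O+, where pixels live natively"
        )
        bitmap.recycle()  // releasing the Java object releases the pixel storage too
    }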

Okay, my second suggestion was to reduce your APK size. Why does that help? Because a lot of the things that take up space in your APK also take up space in memory at runtime. For instance, your classes.dex file is going to take up space on the Java heap in terms of objects; it's going to take up code memory for the dex mmap; and it's also going to take up what shows up as "private other" in the app summary view: runtime metadata representations for your fields, methods, strings, and so on. If you have bitmaps in your APK, when those are loaded at runtime the pixel data takes up space, either on the Java heap, on the native heap, or as graphics memory, depending on the platform version and how you loaded them. Resources in your APK take up space on the Java heap, say an AssetManager object, and also on the native heap, where a parsed zip file structure shows up; and you're going to have code memory for your .apk and .so mmaps. If you're shipping JNI native libraries with your application, when you access those libraries at runtime, they take up memory too. So for all of these things, if you can shrink them, if you reduce your APK size, you reduce your memory use. And I will tell you that measuring APK size reliably is much easier than measuring memory, because an APK really does have a single number for its size: if you measure the size of a single APK repeatedly, you get the same result, very much unlike memory. There was a talk at Google I/O last year called "Best practices to slim down your app size", so I recommend you check that out; it will give you more advice and more concrete action items you can take to shrink your APK.
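A common starting point for shrinking an APK is turning on code and resource shrinking in the release build. A minimal fragment of a module's build.gradle.kts as a sketch; proguard-rules.pro is the conventional default file name and stands in for whatever rules your app needs:

    android {
        buildTypes {
            release {
                // Strip unused code and resources from the release APK; smaller dex and
                // resource tables also mean less code and resource memory at runtime.
                isMinifyEnabled = true
                isShrinkResources = true
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
        }
    }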

Okay, let me do a quick recap of why we care about memory and what I suggest you do to improve your application's memory use. First, I talked about how, as we use more memory on a device, the low memory killer eventually kicks in and kills processes; if the user cares about those processes, that's bad. If the device is low on memory and the low memory killer is running too much, then OEMs won't want to produce those low-end, entry-level devices, and then we lose those devices and we lose those users. To evaluate your application's memory impact, use PSS. Anything you can do to improve your memory efficiency is good. When you're testing for memory regressions or optimizations, make sure you're targeting the application use case you care about and controlling for the platform configuration, and test on a device that has plenty of free RAM to help control for device memory pressure. And to reduce your application's memory use, try out Android Studio's memory profiler, focus on the app heap, and go to the session Esteban is giving tomorrow at 12:30 to learn more about how to do that. Do what you can to reduce your APK size, and check out the talk from last year on how to do that.

So thank you all for coming. I would love to chat with you more and hear about the memory challenges you're facing. You'll find me hanging around outside after this talk, and you can also find me at the Android runtime office hours at 5:30, so that's just a couple of hours after this talk. Thank you very much.
