Duration 32:51

Bringing AI and machine learning innovations to healthcare

Jessica Mega
Chief Medical Officer at Verily
Lily Peng
Product Manager at Google
2018 Google I/O
May 8, 2018, Mountain View, USA

About speakers

Jessica Mega
Chief Medical Officer at Verily
Lily Peng
Product Manager at Google

Jessica L. Mega, MD, MPH, is the Chief Medical Officer at Verily Life Sciences. As CMO, Dr. Mega's focus is on translating technological innovations and scientific insights into partnerships and programs that improve patient outcomes. She oversees all of Verily's clinical and science efforts, including the Baseline Study. As a faculty member at Harvard Medical School, a senior investigator with the TIMI Study Group, and a cardiologist at Brigham and Women's Hospital, she led large, international, randomized clinical trials.

View the profile

Lily is a physician-scientist whose work focuses on translating scientific and technological advances to clinical medicine. She is currently the Product Manager for the Medical Imaging team at Google Research. The team's work focuses on applying deep learning and other Google technologies and expertise to healthcare, specifically medical imaging. Some of the team's recent work includes detecting diabetic retinopathy at doctor-level accuracy, predicting cardiovascular health factors from retinal images, and detecting breast cancer metastases in lymph nodes.

View the profile

About the talk

Could machine learning give new insights into diseases, widen access to healthcare, and even lead to new scientific discoveries? Already we can see how machine learning can increase the accuracy of diagnoses from medical imaging, and it may be able to predict a patient's risk of disease. This keynote session includes short talks by Lily Peng of Google Brain and Jessica Mega of Verily (Alphabet) about how they are bringing technological innovations in AI and machine learning to healthcare.


Hi everybody. My name is Lily Peng. I'm a physician by training, and I work on the Google AI healthcare team as a product manager. Today we're going to talk to you about a couple of projects that we've been working on in our group. So first off, I think you've heard a lot of this, so I'm not going to go over it too much, but because we apply deep learning to medical information, I wanted to define a few terms that we use

quite a bit but are somewhat poorly defined. First off, artificial intelligence. This is a pretty broad term, and it encompasses the grand project to build non-human intelligence. Machine learning is a particular type of artificial intelligence that teaches machines to be smarter, and deep learning is a particular type of machine learning, which you've probably heard about quite a bit and will hear about more. So first of all, what is deep learning? It's a modern reincarnation

of artificial neural networks, which were actually invented in the 1960s. It's a collection of simple trainable units, organized in layers, that work together to solve or model complicated tasks. In general, with the smaller datasets and limited compute that we had in the 1980s and 90s, other approaches generally worked better. But with larger datasets, larger model sizes, and more compute power, we find that neural networks work much better.

So there are actually two takeaways I want you to get from this slide. One is that deep learning trains algorithms that are very accurate when given enough data, and two, deep learning can do this without feature engineering, that is, without explicitly writing rules. What do I mean by that? Well, in traditional computer vision, we spent a lot of time writing the rules a machine should follow to make a certain prediction. With convolutional neural networks, we actually spend very little time on feature engineering and writing these

rules; most of the time is spent on data preparation, numerical optimization, and model architecture. I get this question quite a bit: how much data is enough data for a deep neural network? In general, more is better, but there are diminishing returns beyond a certain point, and a general rule of thumb is that we like to have about 5,000 positives per class. But the key thing is good and relevant data, because garbage in, garbage out.
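To make the "no feature engineering" point concrete, here is a minimal, hypothetical TensorFlow/Keras sketch: raw pixels go in, and the convolutional filters are learned from data rather than hand-written. The layer sizes and the random stand-in data are assumptions for illustration, not the team's actual model.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 100 random "images" with labels 0-4 (a real dataset goes here).
train_images = np.random.rand(100, 128, 128, 3).astype("float32")
train_labels = np.random.randint(0, 5, size=100)

# A minimal convolutional network: raw pixels in, class probabilities out.
# No hand-written rules; the convolutional filters are learned during training.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. a 5-point grading scale
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Most of the real effort goes into preparing the data, not writing rules.
model.fit(train_images, train_labels, epochs=1, validation_split=0.1)
```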

The model will predict very well what you ask it to predict. So when you think about where machine learning, especially deep learning, can make the biggest impact, it's really in places where there's lots of data to look through. One of our directors, Greg Corrado, puts it best: deep learning is really good for tasks that you've done ten thousand times, and on the ten thousand and first time, you're just sick of it and you don't want to do it anymore. This is really great for healthcare in screening applications, where you

see a lot of patients that are potentially normal. It's also great where expertise is limited. Here on the right, you see a graph of the shortage of radiologists worldwide. This is also true for other medical specialties, but radiologists are shown here, and we basically see a worldwide shortage of medical expertise. So one of these screening applications that our group has worked on is diabetic retinopathy, or DR for short, because it's

easier to say than diabetic retinopathy, and it's the fastest growing cause of preventable blindness. About 415 million people with diabetes are at risk and need to be screened once a year. This is done by taking a picture of the back of the eye with a special camera, as you see here, and the picture looks a little bit like that. What a doctor does when they get an image like this is grade it on a scale of 1 to 5, from no disease, so healthy, to proliferative disease, which is the end stage. When they do grading, they look for

sometimes very subtle findings, little things called microaneurysms, which are outpouchings in the blood vessels of the eye that indicate how badly your diabetes is affecting your vision. Unfortunately, in many parts of the world, there are just not enough eye doctors to do this task. With a couple of our partners in India, we've seen there's a shortage of 127,000 eye doctors in the nation, and as a result about 45% of patients suffer

some form of vision loss before the disease is detected. As you recall, I said that the disease is completely preventable, so this is something that should not be happening. What we decided to do was partner with a couple of hospitals in India, as well as a screening provider in the US, and we got about 130,000 images for this first go-around. We hired 54 ophthalmologists, built a labeling tool, and actually graded these images on the scale from no DR to proliferative. The interesting thing was that

there was actually a little bit of variability in how doctors graded the images, so we ended up with about 880,000 diagnoses in all. With this labeled dataset, we put it through a fairly well-known convolutional neural network called Inception. I think a lot of you may be familiar with it; it's generally used to classify cats and dogs in your photo app or in search apps, and we just repurposed it to make predictions from these images. The other thing we learned while doing this work was that while it was really useful to have this five-point

diagnosis, it was also incredibly useful to give doctors feedback on housekeeping predictions, like image quality, whether this is a left or right eye, or which part of the retina this is, so we added those to the network as well.
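A rough sketch of what repurposing a pretrained network with extra "housekeeping" outputs can look like in TensorFlow/Keras follows; the head names, sizes, and losses are illustrative assumptions, not the published architecture.

```python
import tensorflow as tf

# Pretrained Inception V3 as a feature extractor (weights from generic photos).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))

features = base.output
# Main head: the 5-point diabetic retinopathy grade.
dr_grade = tf.keras.layers.Dense(5, activation="softmax", name="dr_grade")(features)
# "Housekeeping" heads (hypothetical): image quality and left/right eye.
gradable = tf.keras.layers.Dense(1, activation="sigmoid", name="gradable")(features)
right_eye = tf.keras.layers.Dense(1, activation="sigmoid", name="right_eye")(features)

model = tf.keras.Model(inputs=base.input,
                       outputs=[dr_grade, gradable, right_eye])
model.compile(optimizer="adam",
              loss={"dr_grade": "sparse_categorical_crossentropy",
                    "gradable": "binary_crossentropy",
                    "right_eye": "binary_crossentropy"})
```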

So how well does it do? This is the first version of the model that we published in JAMA in 2016, and here on the left is a chart of the model's performance, in aggregate, over about 10,000 images. Sensitivity is on the y-axis, and one minus specificity is on the x-axis. Sensitivity is the percentage of the time the model correctly calls the disease in a patient who has it, and specificity is the proportion of patients without the disease that the model, or the doctor, correctly clears. You want something with high sensitivity and high specificity, so up and to the left is good. And you can see here on the chart that the little dots are the doctors who graded the same set.
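These two definitions translate directly into code; a minimal sketch with made-up labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: lists of 0 (no disease) / 1 (disease)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # of the patients with disease, fraction caught
    specificity = tn / (tn + fp)  # of the healthy patients, fraction cleared
    return sensitivity, specificity

# Example with toy labels; sweeping the model's decision threshold traces out
# the ROC curve (sensitivity vs. 1 - specificity) shown on the slide.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```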

So we got pretty close to the doctors, board-certified US physicians, and these are general ophthalmologists by training. In fact, if you look at the F-score, which is a combined measure of both sensitivity and specificity, we're just a little better than the median ophthalmologist in this particular study. We've since improved the model. As of last year, around December 2016, we were on par with generalists. And then this year, in a new paper that we published, we actually used retina specialists

to grade the images. They're specialists, and we also had them adjudicate when they disagreed about what the diagnosis was. You can see that when we train the model using that as ground truth, the model predicts it quite well too. So this year we're on par with the specialists. This weighted kappa measure is just an agreement score on the five-class level, and you can see we're sort of in between the ophthalmologists and the retina specialists. Another thing that we've been working on, beyond improving the models, is actually trying to have the network explain how it's making a

prediction. So again, taking a play out of the playbook from the consumer world, we started using this technique called "show me where," where, given an image, we actually generate a heat map of where the relevant pixels are for a particular prediction. Here you can see a picture of a Pomeranian, and the heat map shows you that there's something in the face of the Pomeranian that makes it look Pomeranian, and right here you have an Afghan hound and the network is

highlighting the Afghan hound. So using this very similar technique, we applied it to the fundus images, and here's "show me where." This is a case of mild disease, and I can tell it's mild disease because, well, it looks completely normal to me; I can't tell that there's anything there. But a highly trained doctor would be able to pick out little things called microaneurysms, right where the green spots are. Here's a picture of moderate disease, and this is a little worse because you can see some bleeding at the edges here, and it

actually has bleeding there, and in the heat map you can see that it picks up the bleeding. But there are two other artifacts in this image: there's a little dark spot, and there's this little reflection in the middle of the image, and you can tell that the model essentially just ignores them.
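A minimal sketch of one common way to produce such a heat map, a plain gradient saliency map in TensorFlow; the published work uses related but more refined attribution techniques, so treat this as illustrative only:

```python
import numpy as np
import tensorflow as tf

def saliency_heatmap(model, image, class_index):
    """Gradient of the predicted class score w.r.t. the input pixels.

    A simple stand-in for the "show me where" idea: pixels whose small
    changes most affect the class score light up in the heat map.
    """
    x = tf.convert_to_tensor(image[None, ...])  # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[0, class_index]
    grads = tape.gradient(score, x)[0]
    # Collapse color channels; large values = pixels that matter most.
    return np.max(np.abs(grads.numpy()), axis=-1)
```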

So what's next, once you've trained a model that's somewhat explainable and that we think is doing the right thing? Well, we actually have to deploy it into the healthcare system, and we're partnering with healthcare providers and companies to bring this to fruition. Dr. Jess Mega, who is going to speak after me, will have a little more detail about this effort. So I've given a screening application, and here's an application in diagnosis that we're working on. In this particular example, we're talking about breast cancer metastasis, the spread of breast cancer into nearby lymph nodes. So when a patient is diagnosed

with breast cancer and the primary breast cancer is removed, the surgeons spend some time taking out what we call lymph nodes, so that we can examine whether or not the breast cancer has metastasized to those nodes, and that has an impact on how you treat the patient. Now, reading these lymph nodes is actually not an easy task. In fact, in about 20 to 24% of biopsies, when they went back to look at them, there was a change in status, meaning what was read positive became negative, or what was read negative became positive.

That's a really big deal. There's published work showing that a pathologist with unlimited time, not overwhelmed with data, is actually quite sensitive: 94% sensitivity in finding the tumors. When you put a time constraint on the pathologist, the sensitivity drops, and people start overlooking the very small metastatic disease. So in this picture, there's a tiny metastasis right there, and

it's usually small things like this that are missed, and this is not surprising given how much information is in each slide. One of these slides, if digitized, is about 10 gigapixels, and that's literally a needle in a haystack. The interesting thing is that pathologists can actually find 73% of the cancers if they spend all their time looking, with zero false positives per slide. So we trained a model that can help with the task, and it actually finds about 95% of the cancer lesions, but it has eight false positives per slide.
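Since a 10-gigapixel slide cannot be fed to a network whole, this kind of model is typically applied patch by patch. A hedged sketch of that idea, with `patch_model` as a hypothetical patch-level classifier:

```python
import numpy as np

def tumor_heatmap(slide, patch_model, patch=299, stride=299):
    """Score a gigapixel slide patch by patch.

    slide: H x W x 3 array (in practice read lazily from a pyramidal slide
    file, e.g. with OpenSlide, since it won't fit in memory whole).
    patch_model: hypothetical classifier returning P(tumor) for one patch,
    shaped (1, 1) for a batch of one.
    """
    h, w, _ = slide.shape
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i in range(heatmap.shape[0]):
        for j in range(heatmap.shape[1]):
            tile = slide[i * stride:i * stride + patch,
                         j * stride:j * stride + patch]
            heatmap[i, j] = patch_model(tile[None, ...])[0, 0]
    return heatmap  # suspicious regions for the pathologist to review
```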

So clearly an ideal system is one that is very sensitive, using the model, but also quite specific, relying on the pathologist to actually look over the false positives and call them false positives. This is very promising, and we're working on validation in the clinic right now in terms of reader studies; how this actually interacts with a doctor is really quite important. We're also looking at applications to other tissues. I talked about lymph nodes, but we have some early studies showing that this works for prostate cancer as well, for Gleason grading. So in the previous

examples, we talked about how deep learning can produce algorithms that are very accurate, and they tend to make calls that a doctor might already make. But what about predicting things that doctors don't currently do from images? As you recall from the beginning of the talk, one of the great things about deep learning is that you can train very accurate algorithms without explicitly writing rules. This allows us to make completely new discoveries. The picture on the left is from a paper that we published recently, where we trained deep learning models to predict a variety of

cardiovascular risk factors, and that includes age, self-reported sex, smoking status, and blood pressure, things that doctors generally consider right now to assess a patient's cardiovascular risk and make proper treatment recommendations. It turns out that we can not only predict many of these factors quite accurately, but we can actually directly predict a five-year risk of a cardiac event. This work is quite early and preliminary, and the AUC for this prediction is 0.7. What that number means is that if given two pictures, one

picture of a patient who did not have a cardiovascular event and one picture of a patient who did, the model is right about 70% of the time; most doctors are around 50% of the time, because it's kind of hard to do based on the retinal image alone. So why is this exciting? Well, normally when a doctor tries to assess your risk for cardiovascular disease, there are needles involved. I don't know if anyone has gotten blood cholesterol screening: you fast the night before, then we take some blood samples, and then we assess your risk.
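The "given two pictures, right about 70% of the time" reading of AUC can be checked directly, since AUC equals the probability of ranking a random positive case above a random negative one. A small illustration with made-up scores:

```python
import itertools

def pairwise_auc(scores_event, scores_no_event):
    """AUC = probability the model ranks a true event case above a non-event case."""
    wins = ties = 0
    for e, n in itertools.product(scores_event, scores_no_event):
        if e > n:
            wins += 1
        elif e == n:
            ties += 1
    total = len(scores_event) * len(scores_no_event)
    return (wins + 0.5 * ties) / total

# Hypothetical risk scores: an AUC of 0.7 means that, shown one patient who
# went on to have a cardiac event and one who didn't, the model picks the
# right one about 70% of the time.
print(pairwise_auc([0.8, 0.6, 0.4], [0.5, 0.3, 0.2]))
```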

So again, I want to emphasize that this is really early on, but these results support the idea that we may be able to use something like an image to make new predictions that we couldn't make before, and this might be done in a non-invasive manner. So I've given a few examples, three examples, of how deep learning can really increase both availability and accuracy in healthcare, and one of the things I wanted to close with is an acknowledgment. The reason why this has become more and more exciting is, I think,

because TensorFlow is open source, and with this kind of open standard platform, general machine learning is being applied everywhere. I've given examples of work that we've done at Google, but there's a lot of very similar work being done across the community at other medical centers. So we're really excited about what this technology can bring to the field. Now let me introduce Jess Mega. Unlike me, she is a real doctor, and she's the Chief Medical Officer at Verily.

Well, thank you all for being here, and thank you, Lily, for kicking us off. I think the excitement around AI and healthcare could not be greater. As you heard, my name is Jess Mega. I'm a cardiologist, and I'm so excited to be part of the Alphabet family. Verily grew out of Google and Google X, and we are focused solely on healthcare and life sciences. Our mission is to take the world's health information and make it useful so that patients live healthier lives. The example that I'll talk about today focuses on diabetes

and really lends itself to the conversation that Lily started. But I think it's very important to pause and think about how, broadly, right now, any individual in the audience today has generated several gigabytes of health data. And think about health in the years to come: genomics, molecular technologies, imaging, sensor data, patient-reported data, electronic health records, and claims. We're talking about huge sums of data, and at Verily and at Alphabet we're committed to staying ahead of this so that we can help

patients. The reason we're focusing some of our initial efforts on diabetes is that this is an urgent health issue. About 1 in 10 people has diabetes, and when you have diabetes, it affects how you handle sugar, glucose, in the body. Prediabetes, the condition before someone has diabetes, affects one in three people; that would be the entire center section of the audience today. And when your body handles glucose in a different way, you can have downstream effects. You heard Lily talk about diabetic

retinopathy; people can also have problems with their heart, their kidneys, and peripheral neuropathy. So this is the type of disease that we need to get ahead of, but we have two main issues that we're trying to address. The first one is an information gap. Even the most adherent patient with diabetes, and my grandfather was one of these, would check his blood sugar four times a day. I don't know if anyone today has been able to have any of the snacks; I actually had some of the caramel popcorn. Did anyone have any of that? Yeah, it's great, right? Except probably our biology

and our glucose is going up and down too, and if I didn't check my glucose in that moment, we wouldn't have captured that data. So we know biology is happening all of the time. When I see patients in the hospital as a cardiologist, I can see someone's heart rate, their blood pressure, all of these vital signs in real time, and then people go home, but biology is still happening. So there's an information gap, especially with diabetes. The second issue is a decision gap. You may see a care provider once a year, twice a year, but health decisions are happening every single day. They're

happening weekly, daily, hourly. So how do we close this gap? At Verily we're focusing on three key missions, and this is true for almost every project we take on. We're thinking about how to shift from episodic and reactive care to much more proactive care, and in order to do that, and to get to the point where we can really use the power of AI, we have to do three things. We have to think about collecting the right data, and today I'll be talking about continuous glucose monitoring. Then, how do you organize this data so that it's in a format that we can unlock and activate,

and truly help patients? Whether we do this in the field of diabetes, which you'll hear about today, or with our surgical robot, this is the general premise. The first thing to think about is the collection of data, and you heard Lily say garbage in, garbage out: we can't look for insights unless we understand what we're looking at. One thing that has been absolutely revolutionary is thinking about extremely small biocompatible electronics. We are working on next-generation sensing, and you can see a demonstration here of what this will lead to, for example, with

extremely small continuous glucose monitors, where we're partnering to create some of these tools. This will lead to more seamless integration, so you don't just have a few glucose values; we understand how your body, or someone with type 2 diabetes, is handling sugar in a more continuous fashion. It also helps us understand not only what happens at the population level, but what might happen at an individual level when you are ingesting certain foods. And the final thing is to really try to reduce the cost of devices so that we can really democratize health. The next aim is, how

do we organize all of this data? I can speak both as a patient and as a physician: the thing that people will say is, data is amazing, but please don't overwhelm us with a tsunami of data; you need to organize it. So we partnered with Sanofi on a company called Onduo, and the idea is to put the patient in the center of their care and help simplify diabetes management. This really gets to the heart of someone being happier and healthier. So what does it actually mean? What we try to do is empower people with their glucose control. We turned to the American

Diabetes Association and looked at the glucose ranges that are recommended. People then get a graph that shows what their day looks like and the percentage of time that they were in range, again giving a patient or user that data so they can be at the center of their decisions. And finally, there's tracking steps through Google Fit. The next goal is to try to understand how glucose is pairing with your activity and your diet. So here there's an app that prompts for a photo of the food, and then, using image recognition and Google TensorFlow, we can identify the food.
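The "percentage of time in range" graph boils down to a very small computation. A minimal sketch; the 70-180 mg/dL band is a commonly used consensus target, assumed here since the talk doesn't specify Onduo's exact thresholds:

```python
def time_in_range(glucose_mg_dl, low=70, high=180):
    """Fraction of CGM readings inside a target range.

    70-180 mg/dL is a commonly cited consensus target; actual product
    thresholds follow American Diabetes Association guidance and are
    not specified in the talk.
    """
    in_range = [g for g in glucose_mg_dl if low <= g <= high]
    return len(in_range) / len(glucose_mg_dl)

day = [95, 110, 150, 210, 180, 130, 88]   # hypothetical readings
print(f"{time_in_range(day):.0%} of today's readings in range")
```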

This is where the true personal insights start to become real, because if you eat a certain meal, it's helpful to understand how your body is actually responding to it. There's some really interesting preliminary data suggesting that the microbiome may change the way I respond to a banana, for example, versus the way you might respond, and that's important to know, because all of a sudden those general recommendations that we make as doctors change. If someone comes to see me in clinic and they have type 2 diabetes, I might say, okay, here are the things you need to do: you need to watch your diet, exercise, take your oral

medications, exercise, and see an endocrinologist. The idea is to integrate all of this information in a simple way with a care lead. This is a person who helps someone on their journey as this information is surfaced. And if you look in the middle of what I'm showing you here, between what the care lead and the person are saying, you'll see a number of different lines. I want us to drill down and look into that. This is showing you the difference between the data you might see in an episodic

glucose example versus what you're seeing with the continuous glucose monitor enabled by this new sensing. If we drill down into this continuous glucose monitor, we get a cluster of days. This is an example where we might start to see patterns, and as Lily mentioned, this is not the type of thing that an individual patient, care lead, or physician would end up digging through. This is where you start to unlock the power of machine learning models, because what we can start to see is a cluster of different mornings. So maybe

we'll make a positive association that everyone's eating incredibly healthy here at Google I/O, so maybe that's this cluster of mornings, and then we go back to our regular lives, we get stressed, and we're eating a different cluster of food. But instead of giving general advice, we can use different models to point out that it seems like something is going on with one patient; for example, we were seeing a cluster around Wednesdays. So what's going on? Is it that the person is stopping by a particular location, or maybe there's a lot of stress that day?
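A hedged sketch of the kind of clustering this describes, grouping mornings by their glucose traces with k-means over synthetic data; a real pipeline would obviously normalize, handle gaps, and choose the method more carefully:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: one morning's CGM trace (e.g. a reading every 15 min, 6am-noon).
# Synthetic stand-in data, not anything from the talk.
mornings = np.random.default_rng(0).normal(120, 25, size=(60, 24))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(mornings)
for day, label in enumerate(kmeans.labels_[:7]):
    print(f"day {day}: cluster {label}")
# Recurring cluster assignments (e.g. every Wednesday) are the kind of
# pattern a care lead could then follow up on with the patient.
```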

But again, instead of giving general care, we can start to target care in the most comprehensive and actionable way. So again, we're thinking about collecting data, organizing it, and then activating it and making it extremely relevant. That is the way we're thinking about diabetes care, and that is the way AI is going to work: we heard this morning, in another discussion, that we've got to think about the problems we're going to solve and use these tools to really make a difference. There are other ways that we can think about activating

information, and we heard from Lily that diabetic retinopathy is one of the leading causes of blindness. So even if we have excellent glucose care, there may be times when you start to have end-organ damage, and I had mentioned that elevated glucose levels end up affecting the fundus and the retina. Now, we know that people with diabetes should undergo screening, but earlier in the talk I gave you the laundry list of what we're asking patients with diabetes to do. So what we're trying to do with this

collaboration with Google is figure out how we actually get ahead of the problem and think about an end-to-end solution, so that we recognize and bring down the challenges that exist today. Because one issue in terms of getting screened is accessibility, and the other is having access to optometrists and ophthalmologists. This is a problem in the United States as well as in the developing world, so it's not something just local; it's something we think about very globally when we think about the solution. We looked at this data earlier, and

this idea that we can take algorithms and increase both the sensitivity and specificity of diagnosing diabetic retinopathy and macular edema, this is data that was published in JAMA, as Lily nicely outlined. The question then is, how do we think about creating this product? Because the beauty of working at places like Alphabet, and working with partners like you all here today, is that we can think about the problem to solve and create the algorithm. But we need to step back and say, what does it mean to operate in the space of healthcare and in the space of life science? We need to

think about the image acquisition, the algorithm, and then delivering that information both to physicians as well as patients. So what we're doing is taking this information and working with some of our partners. There's a promising pilot that's currently ongoing, both here as well as in India, and we're so encouraged to hear the early feedback. There are two pieces of information I wanted to share with you. One is that, in these early observations, we're seeing higher accuracy with the AI than with the manual grader. And the other thing that's important, as a physician, and I

don't know if there are any other doctors in the room, but the piece I always tell people is that there's going to be room for healthcare providers. What these tools are doing is merely helping us do our job. Sometimes people ask me, is technology and AI going to replace physicians, replace the healthcare system? The way I think about it, it just augments the work we do. Think about the stethoscope: I'm a cardiologist, and the stethoscope was invented about 200 years ago. It doesn't replace the work we do; it merely augments it. And I think you're going to

see a similar theme as we continue to think about ways of bringing care to patients in a more effective way. So the first thing here is that the AI was performing better than the manual grader. And the second thing is democratized care: the other encouraging piece from the pilot is this idea that we could start to increase the base of patients treated with the algorithm. Now, I would love to say that it's really easy to do everything in healthcare and life science, but as it turns out, it takes a huge village to do this kind of

work. So what's next? What is on the path to clinical adoption? This is what makes it incredibly exciting to be a doctor working with so many talented technologists and engineers. We partner with different clinical sites, as I've noted here. We also partner deeply with the FDA, as well as regulatory agencies in Europe and beyond, and one thing at Verily that we've decided to do is to be part of what's called the FDA precertification program. We know that bringing new technologies and new algorithms into healthcare is critical, but we now need to figure out how to

do that in a way that's both safe and effective, and I'm proud of us at Alphabet for really staying ahead of that and partnering with groups like the FDA. The second thing that's important to note is that we partner deeply at Verily with Google, as well as with other partners like Nikon and Optos. All of these pieces come together to try to transform care, but I know that if we do this correctly, there's a huge opportunity, not only in diabetes, but really in this entire world of health information. It's interesting to think about, as a physician who spends most of my time taking care of

patients in the hospital: how can we start to push more of the access to care outside of the hospital? I know that if we do this well, and if we stay ahead of it, we can close this gap. We can figure out ways to become more preventive. We can collect the right information. We can create the infrastructure to organize it, and most importantly, we will figure out how to activate it. But I want everyone here to know: this is not the type of work that we can do alone. It really takes all of us together, and we at Verily, we at Google, we at Alphabet, look forward to partnering with all of

you. So please help us on this journey. Lily and I will be here after these talks; we're happy to chat with all of you, and thank you for spending time at I/O.
