Duration 45:43

MLHC2020: Nicholson Price and Leora Horwitz Moderated Discussion/Q&A

Nicholson Price
Professor of Law at University of Michigan Law School
Machine Learning for Healthcare
August 8, 2020, Online, Los Angeles, CA, USA

About speakers

Nicholson Price
Professor of Law at University of Michigan Law School
Leora Horwitz
Director, Center for Healthcare Innovation and Delivery Science at NYU Langone Health
Michael Sjoding
Assistant Professor at University of Michigan

Nicholson Price is a professor of law. He teaches and writes in the areas of intellectual property, health law, and regulation, particularly focusing on the law surrounding innovation in the life sciences. He has an extensive collection of colorful bow ties. He is a core partner at the University of Copenhagen’s Center for Advanced Studies in Biomedical Innovation Law and co-PI of the Project on Precision Medicine, Artificial Intelligence, and the Law. He previously was an assistant professor of law at the University of New Hampshire School of Law, an academic fellow at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and a visiting scholar at the University of California, Hastings College of the Law.


Leora Horwitz, MD is a general internist who studied social science as an undergraduate and is now a clinician researcher focused on quality and safety in healthcare. In particular, she focuses on systems and practices intended to bridge gaps or discontinuities in care. She has studied shift-to-shift transfers among physicians and among nurses, transfers from the emergency department to inpatient units, and the transition from the hospital to home. She is currently adjunct faculty at Yale; her primary work is at NYU Langone Health, where she directs the Center for Healthcare Innovation and Delivery Science and the division of healthcare delivery science in the Department of Population Health. Her current work is focused primarily on developing a learning health system through innovations in clinical delivery and in data capture and analysis.


Dr. Sjoding’s primary research is in how best to measure the quality of hospital care, with a focus on intended and unintended consequences of current quality measurement programs. He is also interested in the use of high dimensional data to enhance our ability to measure and improve the care of patients with critical illness. Methodologically, this work employs the use of both clinical and large-scale administrative databases, multi-level statistical modeling, causal analysis, and simulation. His work is supported by the NIH/NHLBI T32 multidisciplinary training program in lung disease.

Transcript

Thank you very much. Because this morning's conversation worked so well with both speakers present at the same time, I thought we'd try the same approach again this afternoon. We have with us today Nicholson Price, a professor of law at the University of Michigan, and we also have Leora Horwitz, who is an internist but is also heavily involved in integrating machine learning and AI at New York University. I want to welcome everyone to this afternoon's session; I'm looking forward to an exciting conversation.

I'm going to start with you, Nicholson, and indulge myself a little by asking a question to get to know you as a speaker. Someone in law who is interested in artificial intelligence and machine learning, I think that's unique, and I'm very interested to know: how did you get into this space?

So I'm really fascinated by how the law shapes the development of science, and of biomedical technology in particular. I did a law degree at the same time as my PhD, did a lot of biomedical data work there, and then moved into legal academia. I'm really just excited by how innovation happens and what the law can and should do better, and this is a space where a ton of really awesome stuff is happening, where the law has so many profound impacts, but it's all so new and different. So I've spent the last five or six years diving into that, trying to figure out what's going on and how we can do it better.

Cool. All right, Leora, you're a general internist, and from your CV it sounds like you've been involved in various research questions for a while: health services research, health delivery questions. How did you get into the space of machine learning and AI in healthcare?

Well, I saw this emerging. I direct the Center for Healthcare Innovation and Delivery Science at NYU. We have a large group and do lots of things, but you cannot have a center for healthcare innovation if you don't talk about AI and machine learning. So I have come to oversee one of our groups, the Predictive Analytics Unit, which is led by a colleague along with a number of others; they build machine learning models for healthcare, and I help think about how to get from the model to the bedside.

Yeah, so it seems like the questions we'll be discussing together involve: you have a model, now what? Let's put it into the healthcare system and see what happens. First, let me encourage the audience to listen to both of our speakers' talks, because they're super informative and interesting. Nicholson's was really clarifying to me as someone in this space: oh, now I understand why a hospital might not immediately want to throw some new thing into the medical record, because it's not reimbursed, and so on. And Leora's talk was just really helpful: okay, you've got a great AUROC, but that's not everything when you're thinking about deploying a model.

I'm going to start with Nicholson here. One thing that was helpful and clarifying about your talk is that you laid out the standard regulatory environment for healthcare innovations: the FDA is involved, the patent office is involved, insurance is involved. Can you remind us quickly how all of those interact when it comes to a new device?

They all have big independent roles, but the three things interact. For new drugs in particular, and also for physical medical devices, FDA has a pretty extensive review process, especially for drugs: clinical trials, super expensive, though not quite as much for medical devices. FDA has the quality oversight role, and that also ends up creating a pretty substantial barrier to entry, with pretty high upfront costs. The patent system helps innovators recoup those costs by protecting the invention: if you spent five hundred million or a billion dollars getting something approved by FDA, no one else can come in for some period of time, because the patent protects you. And all of this ends up being too expensive for anybody to afford out of pocket anyway, so insurance reimbursement covers it as the social payment mechanism. That ends up serving both as an incentive mechanism on the front end and as a bit of a quality check on the back end: if FDA approved it, insurance will generally cover it, but sometimes insurers will say, actually, we want more evidence that this is going to work in this situation. So we have insurers playing a little bit of a scientist role and a little bit of an FDA-like role in terms of quality oversight.

It's much different in the context of AI, right? As we think about a new AI system that helps provide care, it just doesn't seem like the current regulatory environment is going to offer the same types of assurances, or that reimbursement will make it all work. On the one hand, it will work if you're a startup company with some new device or new software that you can ultimately get reimbursed for when you put it on the market. But for everyone else, how are we going to get assurance that these new AI software systems are being held to the same standards as a drug the FDA approved?

There are a few products that get approved; FDA keeps a list of things that have been cleared or approved, and there are some AI products on it. So some things do go through this extensive review process, but there's tons of stuff, including lots of things that folks in this virtual room have developed and then thoughtfully put into use in their hospitals or health systems, that doesn't go through anything like that process. The hope, of course, is that everybody in this room is doing really awesome prospective and retrospective validation, making sure they understand how things work, doing all the right things to ensure quality. But in terms of some third party sitting out there saying, yes, this is good, this works, you've done a good job, developer, and we're now sure that it works: that's just not happening for the vast majority of AI in medicine, at least as far as I understand it.

And even if someone externally validated the model and said, this is spectacular, it totally works, even that is no assurance that it will work once applied to your own data set and your own population. I cannot tell you how often we have acquired models that were developed and validated elsewhere and seemed pretty good, turned them on in our system, and found that they do not work for us, because of the different ways that people practice, or the different ways that data are collected, or the different populations that we have; and the more complicated the model, the worse this gets.

So, Leora, to what extent are you involved in these types of decisions, whether to turn on a new model at your health system? How does that work at your institution?

In our health system, we first try not to acquire a model unless it has some validation that we trust, a paper about it; a shocking number of these vendors don't provide any data about their performance, so we try not to even start there. But if we have one we think is promising, we will program it, turn it on in the background, and run it for a while to see what it does. We never, ever turn something live in front of the clinicians if we haven't done that. And I will say that the vast majority of models we turn on in the background don't, in fact, work as promised or as designed, and then either we need to tinker with them, or we create something internally based on our own data.

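Leora's "run it in the background" step, sometimes called a silent or shadow-mode pilot, is easy to picture in code. Here is a minimal sketch, assuming a fitted scikit-learn-style classifier and a simple in-memory log; the class name and workflow are illustrative, not NYU's actual implementation.

```python
# Minimal "silent pilot": score patients in the background, log the
# predictions without ever surfacing them to clinicians, then compare
# against locally observed outcomes before deciding whether to go live.
from dataclasses import dataclass, field
from sklearn.metrics import roc_auc_score, brier_score_loss

@dataclass
class SilentPilot:
    model: object                       # any fitted model with predict_proba
    log: list = field(default_factory=list)

    def score(self, patient_id, features):
        """Score a patient; the result is logged, NOT shown to anyone."""
        risk = self.model.predict_proba([features])[0][1]
        self.log.append({"id": patient_id, "risk": risk, "outcome": None})
        return risk

    def record_outcome(self, patient_id, outcome):
        """Attach the later-observed outcome (0/1) to the logged prediction."""
        for entry in self.log:
            if entry["id"] == patient_id:
                entry["outcome"] = outcome

    def evaluate(self):
        """Local performance on our own population: the go/no-go evidence."""
        done = [e for e in self.log if e["outcome"] is not None]
        y = [e["outcome"] for e in done]
        p = [e["risk"] for e in done]
        return {"n": len(done),
                "auroc": roc_auc_score(y, p),
                "brier": brier_score_loss(y, p)}
```

The key property is that nothing in `score()` reaches a clinician; only `evaluate()`, run on outcomes observed at your own institution, informs whether the model ever goes live.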
It seems like that kind of approach is becoming more and more standard, at least at academic centers. I think we have something similar, where there's a committee that looks at models and makes decisions about whether they're going to potentially be of benefit at the health system. I just wonder: is that going to be scalable? This is back to you, Nicholson. What about community hospitals that don't have that type of resource?

That, I think, is the big problem and the big question. If you're an academic medical center, then hopefully you either have the resources to homegrow your own stuff or, if you're going to import other stuff, to run it in the background, evaluate it, tinker with it, build your own version, or reject it: okay, that's not right for us. But if you're not somewhere like NYU or Michigan Medicine or Partners, those resources just aren't there, and you're left with a really unenviable choice. Either you say, here's a bunch of technology that could potentially be transformative and we just don't have access to it; or you're in the position of saying, okay, this stuff was developed somewhere else, they seem smart and have a lot of resources and validated it, we're just going to incorporate it, and we don't really have the tools or the expertise to make sure it works here, which, as Leora said, can be really problematic. I recognize I'm only naming problems without solutions, but I'm a law professor; that's what we do.

Think back to the HITECH Act, which motivated the acquisition and use of electronic health record systems across the country. Do we need something like that to level the playing field for the next generation, i.e., artificial intelligence to support healthcare, to make sure the same type of care can be delivered across health systems?

My dream here is that a lot of this investment goes into scalable data collection systems. I don't think it's going to happen that smaller institutions will have the capacity to do the evaluation on their own, at least not for quite a while into the future. What I think is potentially more likely is that we push harder on interoperability, on the ability to actually collect data, so that these things can be evaluated not only before they're turned on but also as they're used, and so that someone else, maybe someone who's not at that health center, can say, you've got a problem here, or, this is working out. But I'm not on the sharp technical end of the stick.

I actually think the onus here is on the vendors. You should not be allowed, in fact, to sell a tool or a model if you're not also selling the infrastructure to evaluate it, and it would not be so difficult for Epic or others to universally provide the infrastructure and tools necessary for evaluation. It takes some programming and it takes some effort, and we home-built that, but it is unreasonable to expect everybody to. In the same way that many vendors are now selling models along with the pipeline and infrastructure to turn them on, it is not so hard to also put the evaluation tools in that box, and I think that we as a profession and as a community need to start demanding that.

Yeah, that's music to my ears. I really love those ideas from both of you: first, interoperability of data, so that it's actually easy for researchers at an institution to build a new model and test it out at a variety of health systems, community, academic, or otherwise; and then building the ability to A/B test models into the infrastructure of electronic health records. That would be extremely exciting.

So we have the first question here from the audience, regarding FDA approval. It's obvious that if you're a drug company and you have a new drug, you have to get FDA approval, and it seems obvious that in most cases, if you're a commercial entity with some sort of complex software that drives patient care, you probably should get it too. But if you're an academic who has built a new algorithm that you think helps take care of patients with sepsis, and you're actually planning on open-sourcing it, should you still get FDA approval? What do you think, Nicholson?

I am a lawyer, but I am not your lawyer, and that applies to everybody listening here; I'm a law professor who thinks about this stuff. I'll make a distinction between three different levels of what we might call approval by the FDA. Approval is actually a term of art: FDA approval refers specifically to drugs or devices that go through a premarket approval process, which is the most rigorous path. Then there's cleared stuff, things that are pretty similar to what's come before; that's a less rigorous process, but it still comes with a stamp of, we guess this is okay. And then there's the really informal thing, where you talk to FDA and they say, this doesn't seem like so much of a problem. The Sepsis Watch team, for instance, said in one of their papers that they talked with FDA about how they needed to keep a human physician in the loop enough that the system wasn't running afoul of the approval requirements. That's the informal version: here's what we're doing, do you have a problem with it, is this okay? And the agency helps you out there.

That last thing, I think, is the most likely path for the people here: talk to FDA, and they'll probably say, no, you don't need to come through this agency. To the best of my understanding and experience, FDA is generally pretty willing to have that conversation, because there are a bunch of ways that the kind of medical machine learning we're mostly talking about probably doesn't need to go through FDA. It's not being marketed, and FDA tends to care about stuff that's marketed commercially. Under a 2016 law, the 21st Century Cures Act, much of it doesn't count as a medical device. And if it's developed and deployed just within one setting, it's probably the kind of in-house diagnostic that FDA says it isn't going to worry about, because it's just in one place. There's a lot of uncertainty in this area, but I think the answer is that most of the time FDA is not going to need to be involved. I'd be curious how many of you have talked to FDA; my guess is that most of you haven't, and I think the answer is that you probably don't need to, though it probably wouldn't hurt. FDA is just not as present in this space as it certainly is for physical medical devices or drugs, not by a long shot.

So, Leora, one thing you talked about: the FDA requires randomized clinical trials, whereas for the deployment and integration of AI into health systems, almost no one is doing them; I don't know that I've ever seen such a study published, though I don't read everything in the literature. You're trying to run those types of studies. Can you elaborate on why you think they're so important, and not just for AI but for operational interventions in general?

We make this kind of artificial distinction between giving someone a drug and changing the way we organize our hospitals or our practice, but both of those influence outcomes, and we organize care in all sorts of ways that we don't know work. We call patients after they go home from the hospital to find out how they're doing and to try to reduce post-discharge risk, we send messages to the patient portal, we put up posters, we build decision support, we do all this stuff, and we have no idea whether any of it is effective. That's a waste of everybody's time and energy; we could be more effective. So I think we need to do these trials not just for models but for operational interventions generally. We don't know how giving people information changes their behavior, or doesn't, and therefore it's hard to understand whether there is benefit or harm; and there can be harm from these tools, from misinterpreting the results, or from the models being wrong, or biased, or whatever. The only way to know for real whether the tools are influencing outcomes, for benefit or harm, is to test them, and so we randomize ours.

Randomized trials of medical interventions we think of as the highest standard of evidence, but one of the challenges is that a randomized trial is so expensive. So how do you get by with doing a randomized controlled trial on AI systems? I'm assuming you're not asking each patient, can we use an AI system on you?

No. We spent a lot of time setting up the infrastructure and the organizational buy-in first, and of course the first people we talked to were our IRB directors, who are fabulous; I spent four or five months really working this out with them. The way we think about these trials at NYU is as quality improvement interventions, in that we are not testing whether something is best practice; we are testing whether and how we can get what we already know to be best practice established and implemented effectively. We already know it's a good idea to talk to patients near the end of their life about goals of care and so forth; what we're randomizing is whether a tool that predicts that a patient is at high risk of dying changes people's behavior toward those best practices. The question is just: how can we get people to adopt this practice more effectively? We also distinguish this from research, and treat it as quality improvement, in that the people doing the study and the interventions are the people on the front line, the actual people delivering care, and when we find that something is a better approach, we adopt it immediately; the point is to improve the care that we deliver, as opposed to generating generalizable knowledge about whether it is an interesting approach. If a project falls into that bucket, then we do not obtain individual consent, but we nonetheless take our ethical obligations seriously. We always measure the outcome we're trying to achieve and also the potential unintended consequences, and we measure the outcome as rigorously as we can, so that we are confident in the results. And we make the fact of the trial public as best we can: we register it publicly, we put it in internal newsletters, so that people know it is happening. In lieu of individual consent there is at least knowledge; that doesn't mean every patient knows something is going on, but we at least try not to make it a secret.

This is an area that absolutely needs that kind of evidence; when you think about all the things we do that we have no evidence for, the power of these systems-level randomized trials is exciting. There seems to be a lot of audience interest in what could be done to make it easier to run similar RCTs by modifying commercial EHR systems.

Just to be clear: if you want to develop knowledge that everybody can use, because you're actually sure it's generalizable, that's much harder to do than a single health center trying to figure out the right thing to do for itself. We are not trying to develop knowledge that anybody else can use; that would be basically impossible for us. We're just trying to improve the care we deliver. It is, to be fair, a deeply bizarre system we've settled on; that's the particular path innovators are led down.

So, on making it easier to run more RCTs by modifying commercial EHRs: Leora, how did you build this up within the commercial EHR that you use?

With randomized trials in general, it depends a little on what we're doing and whom we're trying to target. We do randomize within our electronic health record, and our particular EHR vendor makes it relatively easy to randomize by patient but quite difficult to randomize by clinician or by unit. So it's much harder to say, I'd like doctor A to see this message and doctor B not to, and quite easy to say, display this information for patient A and not for patient B. For our particular project right now, a model predicting adverse events for patients with COVID, it makes sense to randomize by patient: a doctor can see the prediction for one patient and not for another. But again, as I said before, we don't keep it a secret. We display the information for half the patients, and for the other half we write that the information exists but is being hidden, so that it's clear to people that there is information there that we are withholding, rather than having them mistakenly assume, oh, there's no information, the patient must be fine.

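For readers curious about the mechanics, here is a minimal sketch of patient-level randomization of the kind Leora describes, assuming assignment by a deterministic hash of the patient identifier; the trial name and banner text are hypothetical, not NYU's actual build.

```python
# Patient-level randomization: hash the patient ID so the same patient
# always lands in the same arm, no matter which clinician opens the chart.
import hashlib

TRIAL = "covid-adverse-event-model"  # hypothetical trial identifier

def arm(patient_id: str) -> str:
    digest = hashlib.sha256(f"{TRIAL}:{patient_id}".encode()).hexdigest()
    return "shown" if int(digest, 16) % 2 == 0 else "masked"

def banner(patient_id: str, risk: float) -> str:
    if arm(patient_id) == "shown":
        return f"Predicted adverse-event risk: {risk:.0%}"
    # Say explicitly that a score exists but is withheld, so no one
    # mistakes "no alert" for "the patient is fine."
    return ("A risk score was computed for this patient but is hidden "
            "as part of a randomized evaluation.")
```

Hashing the ID, rather than flipping a coin at display time, keeps each patient's assignment stable across visits and sessions, which is what makes the shown-versus-masked comparison interpretable.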
Here's a question from the audience that I think is interesting. As someone who has thought about building models and putting them into a health system, you always face administrators who have limited budgets, and they may not want to spend the extra money to get my specific model into the electronic health record. Should researchers think about insurance? Should we think about ways to get insurance companies to help support the systems we're building, as a way to distribute them? What do you think about that, Nicholson?

I think it's hard to avoid thinking about money; the question is where it comes in, and insurers are one possible source. It's much easier to convince administrators to put something in if you can say, look, we can bill for this. Another source is the institution itself, which is to say, making things more efficient or avoiding costs. To the extent that excess care or adverse events are associated with excess costs, that's a nice story to tell: look, I have avoided all of these problems, and therefore this will save money. To the extent that excess care is associated with revenue, that's a harder story to tell, which is problematic, right? If you say, I'm going to avoid these adverse events, but those adverse events turn out to be super profitable for the hospital, obviously no ethical administrator would say, well, that's a bad tool for that reason, but the incentives aren't there to push as hard to implement it. And if insurance gets pulled in at the level of paying for individual tools, one of the interesting facets is that you would expect higher incentives for developing, supporting, and integrating products that feel more like billable medical devices or procedures, which can be individually reimbursed, and relatively lower incentives for products that hum along in the background and improve quality but might not show a reportable impact on the bottom line. Leora, what do you think?

I don't think you have to be so narrow-minded about reimbursement. It depends on what the model is, but there are other ways one could imagine insurance companies paying. In the same way that a homeowner who reduces risk can get a discount on their insurance, you can make the case to insurance companies that we have these systems in place that allow us to take better care of patients and achieve better outcomes, and so we should get better rates, and people have built things like that into contracts. So I do think one can get that sort of infrastructure paid for that way. And insurance companies are by far the biggest users of AI in healthcare right now; they were among the first adopters. So if you're building models, insurers may well be interested.

I want to ask Nicholson a little more about his talk, where he describes a spectrum of AI in healthcare: some of it sits right at the sharp end of delivering care, some is in the middle, and some is in the back office. And the Science paper on bias in a widely used algorithm is a cautionary tale. We discussed this yesterday, but maybe we should discuss it again today: how do we make sure those things don't happen, that these tools aren't going to be biased?

I think about the biggest interventions for these issues as showing up on the front end, in the middle, and on the back end. In the middle: we should have the people who are developing this stuff thinking about these issues, and talks like the ones at this conference are really useful on that front. On the front end, I think about building out data infrastructure. A fair bit of the problem is data that come from a relatively constrained universe; to the extent that we can build out data infrastructure, either by providing resources to places that otherwise don't have them or by assembling big national data sets like the All of Us cohort, that sort of more representative data can help reduce bias on the front end. And on the back end, transparency strikes me as the important bit. The reason the authors of that paper were able to catch the problem was getting access to the data from a big health system. To the extent that these sorts of things can be less hidden in the shadows and behind the scenes, so that people can see what's going on and interrogate the data, it's more likely that we'll be able to catch this stuff on the back end.

So much of it is about looking at all, about even being aware that bias is a thing. Part of our process internally is to look for it specifically, and that always helps.

Leora, your talk also laid out this really interesting, sort of basic framing: when are models helpful and when are they not? Models are helpful when they don't just tell you the obvious, when they can actually help you; and they don't make decisions. In your framing, good discrimination is necessary but not sufficient; there's so much more that's important when thinking about applying a model to practice. I'm getting a lot of questions about this, and about other metrics and the ways you can test a model before you deploy it, to really convince yourself that it's worth pursuing, that it could really have an impact. I'd love you to speak more on this subject.

Sure. We see this all the time when people are building models. You build a model, and okay, we'll take the top five predictions every few days and look at the charts, trying to figure out whether it's working. And sometimes we do that and it's just obvious case after obvious case after obvious case, and I'm thinking, well, okay, I'm glad your model is working, but I don't need it; it's just going to be annoying to me. That's not always true, though. I'll literally give you the example of a mortality model where we were very concerned about the cutoff we applied, so we only flagged the patients at very high risk of dying. As a clinician, when I look at those charts, they're all really obvious, I mean really obvious, so I wasn't sure there was any point in even turning it on. But we did see that even for the really obvious cases, people were not acting on them. So there are times when the stakes are high enough that decision support may still be helpful, even for obvious cases; you have to think about the clinician's response alongside the model. But what I really want is a model that helps me with the hard cases, the ones in the middle, where the patient could go either way and I don't know, and where there are all these other data and patterns that I'm just not able to capture. That's what I'm always hoping for from a model. I want to be surprised, but I don't want to be surprised all the time, because then I stop trusting it altogether; and I don't want to never be surprised, because then I don't need it. I want to be surprised just the right amount, and that's the trick of making a model that's really good in practice. I wanted to send that message because I feel like pure computer science developers aren't always thinking about that, or about what a clinician can actually do with the output, being able to say, that's nonsense, and ignore it; the interaction is not always well designed.

Yeah, I think that's really important to say, and my sense is that this community recognizes that there's more to it than AUROC, which is, after all, a population-average measure of discrimination. But I really thought that categorization was helpful. So should we stop holding the AUROC up as the highest standard in all our machine learning for healthcare publications?

The issue is that it's prevalence-independent. It's not that it's unimportant; it needs to be good, and I would like to see a good AUROC. But when it comes to application, I'm not interested in the population; as a physician, I am interested in the patient in front of me. So when you pop up a message that says this patient is at high risk of X, or this patient probably has Y, I want to know how that information changes my pretest probability, and my posttest probability is entirely dependent on the prevalence of whatever that thing is in the group I'm looking at. If the condition is super rare and you tell me it's super likely, I still don't really believe it; it's not going to change my behavior, and I'll ignore the alert altogether. That's why we have to think about performance in the specific population a model will be applied to. I think of one example of a paper that struck me as egregious: a model for detecting papilledema, an indication of increased pressure in the brain, which we just don't see that much. They wrote this paper about an amazing AUROC, and I know that if I implemented the model, it would just be wrong all the time, because the prevalence is so low. You should at least acknowledge the population that you might be applying it to.

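Her point can be made quantitative with Bayes' rule. The short calculation below, using made-up but representative operating points, shows how the positive predictive value of the same alert collapses as prevalence falls, which is exactly why a model with a strong AUROC can still be "wrong all the time" in a low-prevalence clinic.

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """P(condition | positive alert), via Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# The same alert (90% sensitive, 90% specific) at three prevalences:
for prev in (0.20, 0.02, 0.001):
    print(f"prevalence {prev:6.1%}: PPV = {ppv(0.90, 0.90, prev):.1%}")
# prevalence  20.0%: PPV = 69.2%
# prevalence   2.0%: PPV = 15.5%
# prevalence   0.1%: PPV = 0.9%
```

At 20% prevalence, roughly two out of three alerts are right; at 0.1%, roughly one in a hundred is, even though sensitivity, specificity, and AUROC never changed.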
Yeah, point taken, and hopefully we'll all remember that the prevalence of a condition differs across institutions and across problems. We touched on this earlier, but the question keeps coming up: smaller institutions may not have the resources to develop a model. Nicholson, it sounds like you're a big proponent of sharing data, and everyone wants data. So what can we do to get the people who have the data to share it?

Honestly, I feel like this is a place where law steps in; it strikes me as a place where we want legal intervention. Right now there aren't strong, policy-driven incentives to spend a ton of money assembling all of this data across a bunch of systems and doing it well. On the incentive side, it's much easier to keep all your data secret, and privacy rules raise lots of issues besides. So we make it really easy for people to keep their data secret, and we give them very little incentive to share it. I think that's a place for law to step in and say: hey, that interoperability rule being proposed, the one that says everything has to be interoperable? You also have to share a lot more stuff for research purposes, unless you've got some particularly strong reason not to. Will people scream mercilessly? Of course they will. Does it have the potential for massive benefits at a social level? Yeah. So I think this is a legal-mandate kind of place. What can you do? Write your congressperson.

Well, I think that's about all the time we have. This has been really fun. Thank you both, Nicholson and Leora, for joining us for this conversation this afternoon. At this point I'm going to hand back the reins.


Similar talks

Ziad Obermeyer
MD, MPhil, Acting Associate Professor of Health Policy and Management at School of Public Health, UC Berkeley
Byron Wallace
Associate Professor at Northeastern University
John Morrison
Assistant Professor of Pediatrics at The Johns Hopkins University School of Medicine
Ali Jalali
Senior Data Scientist at Biofourmis
Hannah Lonsdale
Clinical Research Associate at The Johns Hopkins University
Paola Dees
Physician at Johns Hopkins All Children's Hospital
Brittany Casey
Pediatric Hospital Medicine Fellow at Johns Hopkins All Children’s Hospital
Mohamed Rehman
Eric Kobren Professor of Applied Health Informatics at The Johns Hopkins University School of Medicine
Luis Ahumada
Director, Center for Pediatric Data Science and Analytic Methodology at Johns Hopkins All Children's Hospital
Richard Peters
Technology Innovation Lead, Assistant Professor Population Health at Dell Medical School at the University of Texas at Austin
Ali Lotfi Rezaabad
Machine Learning Researcher at Bosch Center for Artificial Intelligence (BCAI)
Matthew Sither
Lead Software Engineer at Dell Medical School at The University of Texas at Austin
Abhishek Shende
Co-Founder/ ML Engineer at BrilliantMD
Sriram Vishwanath
President and Co-Founder at GenXComm
