Duration 18:10
REAdi Tool: Using Shiny as a Tool for Real World Evidence Evaluation (Brennan Beal, Beth Devine)

Brennan Beal
Quantitative Scientist at Flatiron Health
R/Medicine 2020
August 29, 2020, Online, USA

About speakers

Brennan Beal
Quantitative Scientist at Flatiron Health
Beth Devine
Professor at University of Washington

Trained in both health services research and health economics, Dr. Devine’s research intersects the disciplines of comparative effectiveness research, patient-centered outcomes research, and clinical informatics, where she employs methods from epidemiology, biostatistics, and decision analysis to answer research questions in her areas of interest.


About the talk

This video is part of the R/Medicine 2020 Virtual Conference.


Thank you. It's my privilege to introduce our next speakers, Brennan Beal and Beth Devine from the University of Washington CHOICE Institute. They'll be talking about the REAdi tool: using Shiny as a tool for real-world evidence evaluation.

Let me share my screen here. All right, we're good. Thanks for the introduction. I'll go ahead and introduce myself: my name is Brennan Beal, and I'm a second-year health economics fellow at the University of Washington. And Beth, would you like to introduce yourself? I'm Beth Devine, from the CHOICE Institute at the University of Washington, a health services researcher and health economist with expertise, for today, in real-world evidence and comparative effectiveness research.

So I'll go ahead and start with: what is real-world evidence? Probably everyone on the call is a little bit familiar with it. By definition, it is evidence derived from real-world data, which is not incredibly helpful. Essentially, it's data from outside a traditional randomized controlled trial: registry data, claims, chart reviews, things like that. Why does it matter, and why is it so important to medicine? One reason is that we simply have access to a ton of data right now. Another is that real-world evidence is a good way to capture aspects of an intervention, in our case as health economists (pharmacoeconomics is my focus), aspects of medications, that aren't captured in randomized controlled trials. Two really big parts of that: first, underrepresented populations. We've talked a little about underrepresented populations in medicine today already, and real-world evidence is one of the tools we can use there. Second, intervention comparisons: drug A could be better than drug B in a randomized controlled setting, but maybe drug B has other aspects in a real-world setting that are more amenable to good patient outcomes.

The problem we come across is that real-world evidence is fairly complicated, just because there are a ton of different study designs and a ton of different ways you can go about generating it. That's not such a problem for researchers, but ultimately we want that real-world evidence to inform adoption decisions. So our main focus with our tool, which I'll get into in a bit, was for payers to be able to access real-world evidence and make a quick, and not just quick but well-balanced and productive, decision about medication adoption. If you look through all the different guidelines for real-world evidence (we've got the GRADE handbook, AMSTAR, ROBINS-I, RoB 2, and the list goes on), when all you have is a hammer, everything looks like a nail. So Beth and I have been working on this problem for quite some time. She brought it up to me; her group had developed an initial phase of the tool and an initial rollout, and we wanted to do something more structured, something that would take the process all the way from evidence identification to recommendation: an online platform that provides a structured framework to walk users from real-world evidence identification all the way through making an evidence-based technology adoption decision. We chose Shiny because that's my area of expertise. I've been using R for four to five years now, though I've only been using Shiny for about a year and a half. But it seemed like a fun problem and something we were well equipped to do.

So I'm going to do a little bit of a live demo; everyone cross your fingers. This is our tool, and this is just the homepage. We have a nice little login where you can see the progress of all the studies you're working on. I'll get back to this demo later, but first I want to go over the tool itself. Let me confirm that we can see this. The tool is broken down into phases: the first is identifying the evidence; the second is reviewing and grading it, which is really the meat of the tool; the third is summarizing your graded evidence; and the last is making an evidence-based recommendation. I'm going to walk through a quick example with you all, live.

The first part is identifying real-world evidence, and this is really just going through the PICOTS criteria: population, intervention, comparator, outcome, timeframe, study design. Shiny is well equipped for this; for some of these inputs we can use popovers to guide the user, and not every field goes directly into the search string, but users can pick through them. Say we were focused on a type 2 diabetes population, an intervention such as pioglitazone versus, say, rosiglitazone, and we were interested in HbA1c as the outcome. We can then set a timeframe, and, like I mentioned, there are so many different types of real-world evidence we could look for; in this case we can just select all of them. Then we submit the form. When you submit it, the tool creates a search string and takes you to phase two; you can see phase two up here. You can click on the search string and navigate to it. This takes you to PubMed, which is really nice; if nothing else, at the end of the day we've created a search tool. Here you can identify all these studies, and we can see that we have type 2 diabetes.
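The phase-one step Brennan describes, turning PICOTS form inputs into a PubMed search string, can be sketched as a small helper. This is a minimal sketch in base R; the function name, field names, and query format are illustrative assumptions, not the REAdi tool's actual implementation.

```r
# Minimal sketch of phase one: turning PICOTS inputs into a PubMed-style
# search string. Field names and query format are illustrative, not the
# actual REAdi implementation.
build_search_string <- function(population, intervention, comparator,
                                outcome, designs) {
  # Quote each concept; OR the study designs so any selected design matches
  terms <- sprintf('"%s"', c(population, intervention, comparator, outcome))
  design_block <- paste(sprintf('"%s"', designs), collapse = " OR ")
  paste(c(terms, paste0("(", design_block, ")")), collapse = " AND ")
}

query <- build_search_string(
  population   = "type 2 diabetes",
  intervention = "pioglitazone",
  comparator   = "rosiglitazone",
  outcome      = "hba1c",
  designs      = c("cohort study", "systematic review")
)

# The string can then be URL-encoded into a clickable PubMed link
url <- paste0("https://pubmed.ncbi.nlm.nih.gov/?term=", utils::URLencode(query))
```

In a Shiny app the arguments would come from the form's `input` values on submit, and the link could be rendered with a simple anchor tag.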

A systematic review of pioglitazone in type 2 diabetes; cost-effectiveness of pioglitazone plus metformin. You can see this would be a good place to identify and collect your evidence. Once you've gathered it all and you're ready, you can move on to the next step: reviewing and grading the evidence. This is the hardest part, I think, from an end-user standpoint; researchers don't have trouble with it, but a lot of times end-users do. Let's say we identified a couple of studies. The goal of this page is to say: okay, if you have a pragmatic controlled trial, or say a quasi-experimental study, you should be using the ROBINS-I tool. We link out to that tool if you prefer it, but you don't have to leave, because we've got it all here. You can go through and answer the questions based on the ROBINS-I criteria, and the goal is simply to grade the evidence: do we think it was biased or not? At the end of these questions, tailored to the kind of study you're grading (for a systematic review, for instance, we give you the AMSTAR-2 tool), the end result is a judgment: unclear, low, moderate, or high risk of bias. I've taken the liberty of filling one of these out for us, so we can go back to the account progress page: phase one done, phase two done. Once you've done that, you can submit all your answers and grade everything. In this case we had multiple outcomes for this particular study, and it takes us to phase three.

Phase three is summarizing the literature. Here we take advantage of the plotly package, which I really like and which is really useful here; it's fun to make this interactive for the end-user. For example, if I'm only interested in my primary outcome, I can toggle everything else off and back on. If you've found a ton of literature, you can quickly get an overview: what is the literature telling me? Summarizing it back here, we can say that for our secondary outcomes it was half and half, low and moderate risk of bias, with none unclear and none high risk. Then we use the GRADE approach, walking end-users through the GRADE criteria: based on all the grading you've done already, what is your overall risk of bias? Here you have multiple different inputs, and Shiny surprised me; at the beginning we probably could have done this in any language, but the synthesis of Shiny with all the different packages made this really nice and really effective. We can go through it: are the studies consistent or inconsistent for our HbA1c outcome, what is the overall risk of bias, what do the results look like? And then we have one for mortality; in this case, we had two outcomes of interest.

Finally, with the gt package for tables, which is relatively new (within the last year, I think), we can create a really nice table of all of these responses from the end-user. So here we're seeing multiple packages at play on one screen: we have plotly, which I love, and we have the gt tables, which are very nice. Then comes the second-to-last phase. Once you've identified your literature, graded each individual study, and summarized it all, Shiny can take us into making a recommendation. I've already filled this one out; here's what it looks like. Again, this is designed for payers, but anyone in a medical profession could use it; I'm thinking, for instance, of formulary decision-making. I know there are people here from hospitals across the nation and even internationally, ex-US. So: was there any literature evidence available? Probably, or you wouldn't be here; yes. Did it answer your research question? Yes. And was it sufficient? Not only was evidence available, the pool was sufficient. Then comes general guidance on making a recommendation. There are a lot of questions to think about as end-users: the risk-benefit balance, whether adoption is feasible for a payer or a hospital, whether it makes sense, whether you can pay for the intervention. And then, and this is a big one for real-world evidence: can I equitably deliver the intervention across my population? Ideally, as a payer or a hospital or healthcare system, those are your key criteria: can I use it, and does it inform decision-making for my specific population of interest? Finally, we provide a couple of different options for the recommendation itself, things like performance-based risk sharing and coverage with prior authorization, plus an "other, please specify" option and a summary. This part is very much still in production; we're working on it.
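Stepping back to the phase-three summary described above: an interactive risk-of-bias overview per outcome can be sketched with plotly. The data frame here is a toy stand-in (the real tool builds it from the phase-two grading forms), and the column names are illustrative.

```r
library(plotly)  # interactive charts, as used in the tool's summary phase

# Toy grading results; in the real tool these come from the phase-two forms
grades <- data.frame(
  outcome = c("HbA1c", "HbA1c", "HbA1c", "HbA1c", "Mortality", "Mortality"),
  risk    = c("Low", "Low", "Moderate", "Moderate", "Low", "Moderate")
)

# Count studies per outcome and risk-of-bias level
tab <- as.data.frame(table(outcome = grades$outcome, risk = grades$risk))

# Stacked bars; clicking a legend entry toggles that risk level on and off,
# which is the interactivity described in the talk
p <- plot_ly(tab, x = ~outcome, y = ~Freq, color = ~risk, type = "bar")
p <- layout(p, barmode = "stack", yaxis = list(title = "Number of studies"))
```

The legend toggling comes for free with plotly, which matches the "toggle it off, toggle it back on" behavior in the demo.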

So any ideas you have would be welcome. The next phase takes you to a summary of everything you've done. Shiny has been really incredible, and I've been really impressed; honestly, we've taken it far beyond what I thought it would be capable of. We have our PICOTS criteria up here (population, intervention, pioglitazone, and so on), and then we can go into the summary of responses. We still have plotly right here, we have the gt tables like I said (this is phase four), and then we have the evidence-based recommendation. We decided in this case that coverage with prior authorization made sense. And that's our tool; that's how we used Shiny. At any step along the way you can save your progress, of course, and it goes into your account for that specific study; you can see I've saved it today. So that's the tool in a nutshell. I don't know how we're doing on time. Beth, do you have an idea? You're doing great. Do you have anything to add? I know that was a quick overview.

Brennan, thank you so much. I would just reiterate that the motivation for this came from a group of pharmaceutical companies that came to us and said: our health plans just don't know what to do with real-world evidence, because it is so complex, as Brennan suggested. So the goal was to create a guide for users who are evaluating evidence, not for researchers who are conducting the studies: users in health plans or health systems who are making health technology adoption decisions, that is, reimbursement decisions, related to pharmaceuticals, devices, diagnostics, or any other intervention of interest. The purpose is to do the search using the structured and well-accepted PICOTS criteria, then to bring to bear the different quality-rating tools so that users can get a handle on the evidence at hand, and then, as Brennan showed at the very end, a checklist, if you will, of things to consider in making a decision. Turning this into an R Shiny product has greatly improved it; we had it in REDCap before. We have a little time for questions.
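The gt table of graded responses mentioned in the demo could be sketched as follows. The study names and columns are hypothetical placeholders; the real tool fills the table from the end-user's grading responses.

```r
library(gt)  # publication-quality tables, as used in the tool's summary

# Hypothetical graded studies, standing in for the end-user's responses
summary_df <- data.frame(
  study   = c("Study A (cohort)", "Study B (systematic review)"),
  outcome = c("HbA1c", "HbA1c"),
  risk    = c("Low", "Moderate")
)

# Build the table, add a title, and relabel the columns for display
tbl <- gt(summary_df)
tbl <- tab_header(tbl, title = "Graded real-world evidence")
tbl <- cols_label(tbl, study = "Study", outcome = "Outcome",
                  risk = "Risk of bias")
```

In a Shiny app this would be wrapped in `render_gt()` from the gt package so it re-renders as grading responses change.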

A lot of folks in the chat are interested in using this immediately. What kinds of use cases do you expect, or are coming? I can take that, and Beth, you can add any ideas you have. The use cases in general are mostly for payers; I think we had payers in mind with this tool, especially as they're considering, you know, should we think about this with a prior authorization, how should we go about accepting this evidence, and is it applicable for our patient population? I think we had payers in mind before anyone else. Beth?

Yeah, payer decision-makers, definitely, because that's the world we live in, but certainly it can be adopted and adapted for other uses as well. We're working primarily in pharmaceuticals, but as I said, it's not limited to that. There's a question in the chat asking: could you edit this to make something specific for PRISMA-style systematic reviews? We could. Right now, you can include systematic reviews as a type of study design in your search, and the AMSTAR criteria for rating the quality of systematic reviews are already embedded; Brennan quickly went through that for systematic reviews. I think you two did a great job on time; we are in good shape, and I'm going to ask everyone to move on to the next room. Thank you very much. Thanks.
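As a footnote to the save-your-progress feature mentioned in the demo: a minimal way to persist a user's state between Shiny sessions is to serialize it with `saveRDS`. The function names, paths, and fields here are illustrative; the talk does not describe REAdi's actual storage backend.

```r
# Sketch of the save-your-progress feature: serialize the captured state
# per user and study with saveRDS. Paths and fields are illustrative; the
# talk does not describe REAdi's actual storage backend.
save_progress <- function(user_id, study_id, state, dir = tempdir()) {
  path <- file.path(dir, sprintf("%s_%s.rds", user_id, study_id))
  saveRDS(state, path)  # serialize whatever inputs have been captured so far
  path
}

load_progress <- function(path) readRDS(path)

saved <- save_progress("demo_user", "pioglitazone_t2dm",
                       list(phase = 3, graded = TRUE))
restored <- load_progress(saved)
```

On the next login, the app would look up the user's saved files and restore the corresponding inputs, which is what the per-account study list in the demo suggests.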
