Duration: 19:24

A Bayesian Hyperparameter Tuning Algorithm for Clinical Healthcare Models Built with {sparklyr}

Neil Dixit
Data Scientist, Machine Learning Engineer at Independence Blue Cross
Johnathon Kyle Armstrong
Mathematician & Data Scientist at Independence Blue Cross
R/Medicine 2020
August 28, 2020, Online, USA

About speakers

Neil Dixit
Data Scientist, Machine Learning Engineer at Independence Blue Cross
Johnathon Kyle Armstrong
Mathematician & Data Scientist at Independence Blue Cross

I am a motivated individual with a broad range of skills in programming, data science, and machine learning engineering. I primarily work in R, a statistical programming language, and Python, although I am proficient in other languages as well, including SQL, C, C++, HTML, and JavaScript.

View the profile

Extensive background in analytics, including a doctorate degree in mathematics encompassing linear algebra, topology, graph theory, and combinatorics. Expertise in application of mathematics, statistics, and data analysis to address complex business challenges. Experience as senior statistical lead on multiple projects responsible for drafting statements of work, statistical analysis plans, technical specifications for analytic data sets, tables, listings, figures, and programs. Collaborative and results-focused with excellent communication and interpersonal skills to build rapport with key stakeholders to ensure positive customer relations with excellent service in fast-paced, deadline-driven environments requiring adaptability and decisiveness.

View the profile

About the talk

This video is part of the R/Medicine 2020 Virtual Conference. (Neil Dixit, Johnathon Kyle Armstrong)

Transcript

Hello, and now I'll bring on our next speakers, Neil and Kyle, who are going to talk about a Bayesian hyperparameter tuning algorithm for healthcare models built with {sparklyr}. It's a recorded video, so as the video plays you can ask questions in the chat, and when we come back we'll take the rest of the questions then.

Good evening. My name is Kyle Armstrong, and I'm on the advanced analytics team at Independence Blue Cross. Today we will be presenting on tuning Spark machine

learning models. Machine learning models offer an amazing opportunity to learn complex relations from data. Many of these models require hyperparameters, and it's been shown that these can greatly affect performance, yet even something as seemingly simple as finding an optimal combination can be intractable. At Independence, we want to ensure that these hyperparameters are being optimally tuned across our various projects and models, so we created a model-agnostic Bayesian approach using R. Here's a brief overview.

Independence Blue Cross is a regional health insurance company headquartered in Philadelphia, Pennsylvania; in our local region we insure about 2.5 million people. Nationally, we also serve many employers, including Comcast NBCUniversal and Urban Outfitters, which means that we work with hundreds of hospitals and tens of thousands of health providers. At Independence we work with a diverse set of data, including demographics, such as age, gender, location, and product, as well as claims, which include cost, utilization, and chronic conditions.

Claims also cover procedure and diagnosis code roll-ups and risk scores, and we have lab codes and test results, pharmacy costs and codes, and data around the benefits and services covered. Briefly going over our infrastructure: we have an R box that has 32 cores, 188 gigabytes of RAM, and an NVIDIA P4 GPU. We also have a Spark cluster that has 10 data nodes, 320 cores, and 3.7 terabytes of RAM. We use {sparklyr} to interface R with the cluster.
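
As context for how R talks to such a cluster, here is a minimal {sparklyr} connection sketch; the master and resource settings below are hypothetical, not the presenters' actual configuration.

    # Minimal {sparklyr} connection sketch; sizing values are hypothetical
    library(sparklyr)

    config <- spark_config()
    config$spark.executor.memory <- "16g"  # illustrative executor sizing
    config$spark.executor.cores  <- 4

    # Connect R to the cluster (e.g., via YARN) through {sparklyr}
    sc <- spark_connect(master = "yarn", config = config)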

As a brief example to explain the algorithm's motivation: {sparklyr}'s random forest has seven hyperparameters, which include the number of trees, and there is a typical number of levels to try for each of these hyperparameters. For instance, you might try twenty, fifty, or a hundred trees for your random forest. The number of combinations to search is the product of these levels; at one minute per model, a grid search would take you roughly 255 years, so finding an optimal combination is an intractable problem. I'll now hand it over to Neil to discuss the algorithm.
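
To make that 255-year figure concrete, here is a back-of-the-envelope version of the arithmetic in R; the per-hyperparameter level counts are hypothetical, chosen only to reproduce the order of magnitude quoted in the talk.

    # Hypothetical number of levels for each of seven hyperparameters
    levels <- c(20, 10, 10, 10, 67, 10, 10)
    combos <- prod(levels)      # 134,000,000 combinations

    # At one minute per model, a full grid search takes ~255 years
    combos / (60 * 24 * 365)    # minutes -> years: ~254.9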

Thank you, Kyle. Our algorithm makes it feasible to search these intractable hyperparameter spaces, and we'll now go over how it works. We start by initializing the process, which includes defining the hyperparameter space, such as the one we just reviewed for a random forest; loading our data, which typically includes training, validation, and test data sets; setting the number of epochs to run, which equals the number of models we'll try; and finally setting how many space updates to perform. Our example will show how the algorithm works epoch by epoch, as if we were going to run 100 epochs. We'll also highlight which parts of the algorithm are performed in R versus in Spark.
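
A sketch of what this initialization might look like; the object names, bounds, and split proportions are illustrative, not the presenters' code.

    # Hypothetical initialization: space, data splits, epochs, update cadence
    hyper_space <- list(
      num_trees = c(20, 2000),   # bounds for a random forest
      max_depth = c(3, 20)
    )
    splits <- sdf_random_split(data_tbl,   # data_tbl: a Spark table
                               training = 0.6, validation = 0.2, test = 0.2,
                               seed = 42)
    n_epochs     <- 100   # total models to train
    update_every <- 20    # update the space after every 20 epochs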

We start with epoch one, which we'll categorize as a training step; this means no space updates will be made. The first step is to sample one set of hyperparameters. Next, we take our training data and the sampled parameters to train a model; {sparklyr}'s random forest classifier is an example of this. After training is complete, we calculate several model evaluation metrics on all three data sets; some examples are precision, recall, F1, and AUC-PR.
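
A sketch of one training epoch under those assumptions; ml_random_forest_classifier(), ml_predict(), and ml_binary_classification_evaluator() are real {sparklyr} functions, while the sampling scheme and the splits object carry over from the hypothetical setup above.

    # Sample one hyperparameter set from the (hypothetical) current space
    params <- list(num_trees = sample(20:2000, 1),
                   max_depth = sample(3:20, 1))

    # Train in Spark via {sparklyr}
    fit <- ml_random_forest_classifier(splits$training, label ~ .,
                                       num_trees = params$num_trees,
                                       max_depth = params$max_depth)

    # Evaluate AUC-PR on the training, validation, and test sets
    aucpr <- sapply(splits, function(tbl) {
      ml_binary_classification_evaluator(ml_predict(fit, tbl),
                                         label_col   = "label",
                                         metric_name = "areaUnderPR")
    })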

We save our results, some locally in R, such as our model evaluation metrics, but also the trained model itself, which is saved using Spark. We then move on to the next epoch. Now we're going to jump ahead to our first hyperparameter space update at epoch 20. The first four steps are the same as before: we take a sample, train a model, evaluate it, and save all of our results. Next, we use the results from the last 20 epochs to perform an update of our hyperparameter space. This includes building our Bayesian GLM, pruning the space, and saving results.

Now let's dive into the update step. When we first enter it from the training loop, we load the model evaluation results and hyperparameters. We only load the prior 20 epochs of information, and we're also only interested in the validation results. Next, we train a Bayesian generalized linear model, or Bayesian GLM for short. This model will learn how the hyperparameters impact our model performance objective, based on the results from our validation data set, which our random forest model never saw.

Once the GLM is trained, we run the model through {arm}'s simulate function to simulate our model coefficient priors. The priors are saved so we can reuse them in our next update step; this can be thought of as a memory for our model. With the finished GLM, we estimate our model performance objective across all hyperparameter combinations. We compare these estimates to our current best model and remove any regions that are lower performing.
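
The talk names the {arm} package, so the update step plausibly looks something like the sketch below: fit the performance objective on the last 20 results with arm::bayesglm(), simulate coefficient draws with arm::sim(), and store them as priors for the next update. The history data frame and the file name are hypothetical.

    library(arm)

    # history: hypothetical frame of the last 20 validation results
    glm_fit <- bayesglm(valid_aucpr ~ num_trees + max_depth,
                        data = history, family = gaussian())

    sims   <- sim(glm_fit, n.sims = 1000)   # simulate coefficient draws
    priors <- list(mean  = colMeans(coef(sims)),
                   scale = apply(coef(sims), 2, sd))

    # The "memory": reusable via bayesglm(prior.mean = ..., prior.scale = ...)
    saveRDS(priors, "priors_update_01.rds")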

Similarly to our priors, we save the pruned hyperparameter space. This completes the update step, at which point we exit back to our main loop. Now, here's a reminder of where we are in our training loop: we'll perform 20 more training steps after this before we re-enter the update step. The final piece of our algorithm can be illustrated with the second update step, done at epoch 40. Similar to our first update step, we load our last 20 validation results and hyperparameters. Unlike our first update, though, we also use the priors we saved previously.

We proceed by re-estimating and updating our priors using our new model; these should now be closer to our true coefficients, which will help us identify the highest-performing sections of our hyperparameter space. We finish this step by saving our new priors; we save all priors and pruned spaces separately so we can always go back. Next, we re-estimate the space objective using the pruned space that we saved during our first update step. Using the same methodology as before, we further prune our space.
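
Pruning could then be sketched as scoring every remaining hyperparameter combination with the GLM and dropping regions predicted to underperform the current best; grid, glm_fit, and best_objective are hypothetical carry-overs from the sketches above.

    # Estimate the space objective over the remaining grid
    grid <- expand.grid(num_trees = seq(20, 2000, by = 20),
                        max_depth = 3:20)
    grid$est <- predict(glm_fit, newdata = grid, type = "response")

    # Keep only regions estimated to beat the current best model
    pruned <- grid[grid$est >= best_objective, ]
    saveRDS(pruned, "space_update_02.rds")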

The update is completed by saving the pruned space as we exit back to the main training loop. Now that you have a good idea of the algorithm's procedure, how does it look in action? For illustrative purposes, we'll review a few batches of updates for a logistic regression model; the main takeaway is that we see increasing model performance with successive updates. The graph on the left shows our original space, with bounds of 2.56 and 1; this represents about 2.5 million combinations. The main objective when tuning a logistic regression is to find the right amount of regularization.
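
For reference, the two knobs being tuned in this example map onto real arguments of {sparklyr}'s logistic regression; the specific values below are illustrative only.

    # reg_param is the regularization strength; elastic_net_param is alpha
    # (0 = ridge, 1 = lasso)
    fit <- ml_logistic_regression(splits$training, label ~ .,
                                  reg_param         = 0.01,
                                  elastic_net_param = 0.5)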

We populate the scatter plot with the hyperparameters that were sampled for the first 20 epochs, along with the performance of each model, which we use as our space objective for the Bayesian model. We aggregate the results into a histogram on the right, where you can see that the performance of the models varies widely, from around 0.1 on up. At this stage we use these results and hyperparameters for our first update, and we're left with the regions the algorithm now recommends we search through, indicated by the tiny red triangle on the bottom.

In this case, our space was reduced by about 99% of its original size. For the next 20 epochs, we limit our samples to this newly reduced space, with a bound of roughly 0.049 for the regularization parameter. Results from our previous batch are still visible in the graphs; this represents our cumulative view. On the left, we again populate the 20 points that show which hyperparameters were sampled, now for the second set of 20. We're exceeding the results we observed in the first batch of models.

This is most apparent in the histogram on the right, where models now land as far to the right as the range between 0.97 and 0.98. We're now up to our second update. We use the priors we simulated during update one and the model results from batch two to update our space again. Following this procedure, we're left with the hyperparameters outlined by the red triangle, again in the bottom left corner of the graph. The reduction is not as pronounced as the first, but the algorithm is telling us that we don't need much regularization for this model.

For the next 20 epochs, we sample from the new bounds for the regularization parameter and alpha. One last time, we populate the 20 hyperparameters that were sampled, now for the third set of 20. The AUC-PR values are slightly higher than in the previous batch; in addition, you can rest assured that we are choosing the parameters based on information and insight, rather than just blindly assuming the model does not need to be regularized. Ultimately, the final model had a test AUC-PR of about 0.97: impressive final results.

However, the truly impressive nature of this algorithm is in its ability to sift past low-performing sections of the hyperparameter space toward those that are more likely to perform well, as indicated by the histogram on the right. With that, I'll hand it back to Kyle to wrap up our presentation.

Limitations include that the algorithm is very slow on large data sets, and currently we're only learning a linear boundary. Our next steps are experiments to see how random sampling compares, to see whether this algorithm benefits from nonlinear learning, and we're conducting experiments to better understand how the algorithm behaves.

Wrapping up here with some closing remarks: identifying optimal hyperparameter combinations presents an intractable problem. We presented a model-agnostic Bayesian approach using R to search hyperparameter spaces and increase the chances of maximizing model performance, and our analysis used it to greatly reduce our initial search space. That's all the time we have for tonight. Thank you very much.

I think you're on mute... unmute... oops, sorry about that. Thanks, Neil and Kyle, that was great. I enjoyed your presentation, and I got to see some of the questions scroll by in the chat.

There's only one question posted, so I can read that: how long did it take you to design and implement your R programming environment?

It's an ongoing process. We're constantly refining our algorithm and the code that we use to streamline this process. In fact, Neil here has just released to our team an exploration of this algorithm so that we can further streamline and further test it.

We're very excited about that, and we're hoping to get it written into a package to publish for public use, so that's part of that effort. I would say it took probably about a year to get set up with all of the infrastructure that we use, just to be able to efficiently work through our projects.

Are you pulling data directly out of your data warehouse?

Yeah, we have an enterprise information team that manages a lot of our data for us.

They're actually populating the data from our warehouse into our Hadoop environment, and then for all of the models that we run, we're pulling tens or hundreds of millions of records from there to train those models.

Okay, it looks like we have one more question, and then I guess we can actually wrap up a little ahead of schedule. While developing the solution, did you consider search methods other than Bayesian search?

Yeah, I would say that this is kind of an ongoing design, and we're continuing to refine how it's done.

I think we've tried grid-type searches as well as random search algorithms, and I think this is kind of the culmination, a good marriage of a combination of different designs. So you have grid search, which searches over all of the possible hyperparameters, and you have random search, which searches sort of at random, but then we have our algorithm, which selectively picks various spaces and then, from the trials and errors, reduces the search space.

So that's kind of how we arrived at this algorithm.

Thank you so much for your talk, and we'll move on from here to the next one. Thank you. Thanks.
