SigOpt Summit 2021
November 16, 2021, Online
Keynote: Boost AI Experimentation to Design, Explore, and Optimize Your Models

About the talk

Modeling is a scientific process that requires experimentation to get right. But experimentation is only as effective as the combination of tools and techniques applied to it. During this session, SigOpt GM Scott Clark discusses an intelligent approach to AI experimentation, including how to design experiments, explore model parameter spaces and optimize model hyperparameters. This discussion will include guidance on techniques and tools that make this workflow more efficient, effective, and scalable. You will leave the session with a framework for experimentation, including considerations around metrics, parameters, architectures, runs, compute and hyperparameter search methods. And you will receive a fully executable notebook that comes with free SigOpt access to begin to explore these lessons on your own.

About speaker

Scott Clark
Co-founder of SigOpt and General Manager at Intel

Scott Clark is the co-founder, former CEO, and current general manager of SigOpt, acquired by Intel in November 2020. Scott leads SigOpt's ongoing efforts to build a product and vision of an Intelligent Experimentation platform that accelerates and amplifies the impact of modelers everywhere. Scott has been applying optimal learning techniques in industry and academia for years in areas that include bioinformatics and production advertising systems. Before co-founding SigOpt, he worked on the Ad Targeting team at Yelp, where he led the charge on academic research and outreach with projects like the Yelp Dataset Challenge and open-source Metric Optimization Engine (MOE). Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell University. He also holds BS degrees in Mathematics, Physics, and Computational Physics from Oregon State University. He was recognized as one of Forbes' 30 under 30 in 2016.

Transcript

My name is Scott Clark; welcome to the SigOpt Summit. I'm super excited to talk to you today about intelligent experimentation and how we can help transform the way that you model. I'll start with the background that led me to be in front of you today, and where SigOpt sits within the modeling hierarchy and workflow. Then I'm going to talk a little bit about what intelligent experimentation actually is and go through its various aspects, the different trade-offs and things to think about when doing intelligent experimentation. And I'm going to end with a few high-level examples and a demo, so you can get started with intelligent experimentation for free today.

So, I'd like to tell you a little bit about how I came to be here. I'm the founder and former CEO of SigOpt, which was acquired by Intel about a year ago. My journey with intelligent experimentation really started about a decade ago, when I was doing my PhD at Cornell. I kept running into the exact same problem over and over again. I would do something really cool with my advisor, or a collaborator at a national lab or another university, to build a model, a new way of doing bioinformatics, whatever it might be. At the end of the process there was always this experimentation phase, where we would tune and tweak all the various knobs and levers within the system in order to get slightly better performance, however that was defined for the project. If we did it well, it meant a slightly better paper or a slightly better conference. Within my department we jokingly called this "grad student descent," because it often fell to grad students to do this high-dimensional optimization, oftentimes by trial and error in their heads. It was an incredibly huge waste of resources; I personally wasted many thousands and tens of thousands of hours of government supercomputer time on trial-and-error tuning and tweaking of these models.

So, as a mathematician is wont to do, I thought: there's got to be a better way. That's really the crux of the field of Bayesian optimization, and of global optimization more generally. I took some of that research with me when I went to Yelp after grad school, and I literally saw the exact same problem there. They had search, advertising, and fraud detection systems: great, complex algorithms built up by domain experts, with lots of tunable knobs and levers that could make them more powerful, more profitable, or more accurate. The difference was that instead of getting a slightly better paper if you did this right, at Yelp, and in industry more broadly, if you did this right, money came out the other end. That was really the impetus for open-sourcing MOE, the Metric Optimization Engine, one of the first application-focused Bayesian optimization packages, still being used worldwide today. And it's really why we formed SigOpt: we realized that this problem was pervasive across many different industries, many of which you'll be hearing from today at the summit, but also that it was a problem that really needed to be solved with proper tooling, with a proper product. It wasn't enough to have cool math; you needed a way to apply it to your problems, and to do so in a way that let you get more out of what you were trying to do. That's really the crux of intelligent experimentation. Over the last seven years we've had the fortune of building a great team, working with experts from around the world, many of whom you'll hear from today, and really bringing the concept of intelligent experimentation to life.

Now I need to take a step back for a second and talk a little bit about where SigOpt sits within the entire modeling workflow, and where intelligent experimentation sits. There are many variations of this picture, reflecting the multiscale, fractal nature of modeling: different components, with all the different jobs that need to be done. I'm going to focus on one section of it: tuning hyperparameters, experimenting with topologies, doing data augmentation, and so on. There are a lot of tunable knobs and levers within modeling and machine learning as they exist today, whether they're standard hyperparameters like learning rates or stochastic gradient descent parameters, architecture parameters like the topology of a neural network, or the way that you augment text, vision, or audio data before it's fed into the machine learning algorithm.
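
To make those knobs and levers concrete, here is a minimal sketch of what such a search space might look like. This is my illustration, not code from the talk; every parameter name and range below is hypothetical.

```python
# Illustrative search space covering the kinds of "knobs and levers"
# described above: optimizer hyperparameters, architecture parameters,
# and data-augmentation parameters. All names and ranges are hypothetical.
search_space = {
    # standard training hyperparameters
    "learning_rate": {"type": "double", "min": 1e-5, "max": 1e-1, "scale": "log"},
    "momentum":      {"type": "double", "min": 0.5,  "max": 0.999},
    "batch_size":    {"type": "int",    "min": 16,   "max": 512},
    # architecture parameters (network topology)
    "num_layers":    {"type": "int",    "min": 2,    "max": 12},
    "hidden_width":  {"type": "int",    "min": 64,   "max": 1024},
    # data-augmentation parameters
    "flip_prob":     {"type": "double", "min": 0.0,  "max": 0.5},
    "rotation_deg":  {"type": "int",    "min": 0,    "max": 30},
}
```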

Within this subsection of the machine learning stack, there is quite a bit going on. Like everything related to modeling, this is a very nonlinear process, and modeling itself is a very scientific process. It differs from traditional software engineering insofar as there's quite a bit of experimentation that goes into it. There's a lot of hypothesis testing: does this dataset work? Does this model work? Is this problem even solvable? Is there a better way to do this? And there's more quantitative optimization: once I have something that works pretty well, can I make it better? What's the best I can make it? What are the possible trade-offs I could make before I put something into production? All of this experimentation happens throughout the building process, it's inherently nonlinear, and it's a huge part of the time required to build models. You'll hear people call it different things, but whatever you call it, this experimentation is often one of the largest parts of the time required to get a model into production, and of a machine learning engineer's, data scientist's, or researcher's time as they're actually building the model.

We like to break this problem into three chunks, and I'll go into much more detail as I talk through the aspects of intelligent experimentation, but at the very highest level, intelligent experimentation is about designing experiments, which is asking the right questions; exploring experiments, which is gaining understanding from the questions you've already asked; and optimizing experiments: once you've asked a question, and used that understanding to make sure it's the right question, how do you get to the right answer as fast and efficiently as possible? SigOpt is about helping you along this journey, this nonlinear journey, to go back and ask different questions, to really open up the scope of questions you can ask, and then to get you to that answer as efficiently as possible, in terms of your time as the expert, but also in terms of computational time and the wall-clock time required to get these models out.

So, as an intelligent experimentation platform, SigOpt fits within the modeling stack at that part of the journey. But in terms of actually fitting within your modeling stack, it really just bolts on top of whatever you're doing today. You have a training environment with your model and your data; SigOpt provides a framework for managing, analyzing, and transparently sharing this experimentation, the ability to get to the optimal version of your model, whether you use our proprietary ensemble of optimization methods or bring your own optimizer into the platform, and finally an API able to handle the full volume, variety, and complexity of the models you're dealing with day to day, handling a million API requests an hour for customers around the world. Each one of these customers has a slightly different problem, whether it's a different dataset, a different approach, different time constraints, different metrics, and so on. And this is just a small subset of the people who've been using SigOpt over the seven-plus years we've been in business. You'll hear a lot from them later today: their stories and experimentation, and how they utilized intelligent experimentation to get significantly better results on their problems. So that's SigOpt: what we do and where we sit within the larger modeling stack.

Now I'm going to take another step back and talk about what intelligent experimentation is more broadly, because people can get caught up in many different things. The experimentation bucket includes everything from defining metrics, so understanding even the types of questions you can ask and what you want to shoot for, to visualization of the parameter space, to really gaining understanding, comparing different things, and stitching many different things together. People call this many different things, and people do it many different ways, but I guarantee that if you're building models today, you're doing experimentation. Fundamentally this is a relatively simple concept; the scientific method is built upon experimentation. The difference between traditional experimentation and intelligent experimentation is this: intelligent experimentation is designed to make recommendations, to help you make design decisions, to help you explore, and ultimately to recommend optimal solutions. Instead of a system of record that blindly records what you're doing, or a system that has no interaction with you and blindly tells you what to do, intelligent experimentation is about having a collaborator, something that learns as you're going, suggests as you're going, adapts as you're going, and ultimately helps you not only stand on your own shoulders as you achieve new results, but stand on the shoulders of other giants as well, to really accelerate this process of modeling.

Again, we think about this in three large buckets, and I'll go into more detail about the decisions, trade-offs, and aspects of each of these pillars of experimentation one by one. The first is designing experiments. This is asking the right questions, and it's actually a very difficult problem. You might say: I'm building a machine learning model, I just want the model with the highest accuracy. But if you're building a real-world application, you might want to trade off that accuracy for inference time. And is it even accuracy that you care about? Maybe it's some complex combination of precision and recall. Maybe it's not aggregate accuracy but specific subclasses that you need a threshold on, to make sure you don't drop below a certain accuracy. Maybe you want a larger profit from the system overall, while keeping inference time or memory footprint within a specific size so that you can actually deploy it in production. All of a sudden, the simple question of "I just want the best model" becomes a much more complex problem, and even designing the right questions to ask is a big part of it.

Exploration is about understanding, and better understanding the questions that you can ask: analyzing metrics and trade-off patterns, and ultimately taking that experimentation, that high-dimensional optimization, out of your head and into a tool that lets you get the most out of every hour you spend, and every hour the cluster spends, to get these results, and really get you to the best possible next step. Finally, optimization is about, once you've phrased the problem in a specific way, how you get to the best version of your model that satisfies that problem as efficiently as possible: as few computational cycles, and as few back-and-forths over different configurations, as you can manage. Maybe you want to maximize a few metrics, put constraints on a few metrics, put thresholds on a few metrics, and so on, and ultimately get to a model, or a collection of models, that you can then put into production.
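
As a concrete illustration of what phrasing a model as a multi-metric problem can look like in code, consider the sketch below. It is not from the talk; the function and its metric choices are my assumptions, built on standard scikit-learn metrics for a binary classifier.

```python
# A minimal sketch of evaluating a model against several metrics at once,
# rather than a single accuracy number. The metric choices are illustrative.
import time
from sklearn.metrics import accuracy_score, recall_score

def evaluate_model(model, X_test, y_test):
    start = time.perf_counter()
    predictions = model.predict(X_test)
    latency = (time.perf_counter() - start) / len(X_test)  # seconds per sample
    return {
        "accuracy": accuracy_score(y_test, predictions),   # metric to constrain
        "recall": recall_score(y_test, predictions),       # metric to maximize
        "inference_time": latency,                         # metric to minimize
    }
```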

First, diving into design: many different things go into designing models, and into designing the questions we can ask with an intelligent experimentation platform. The first, the bread and butter of any modeling endeavor, is the data itself: the data sources, versioning, cleaning the data, the different ways to do feature engineering or augmentation. Very often you're not just going to take a fresh dataset out of a data lake and immediately throw it into a model, especially as these datasets get more complex with computer vision, natural language processing, and so on; you'll hear some of this later today. Even the way that you're instrumenting and augmenting the data itself (flipping, rotating, changing colors, etc.) can have a massive impact, not only on how long it takes to train, but ultimately on how accurate your models are at the end of the day.

The second thing you need to choose when you're modeling is the model itself. Which frameworks are you using? Are you using a machine learning ensemble method, or a custom variation of one? Each of the frameworks has its own trade-offs; PyTorch might be better than TensorFlow for certain applications, and so on. Choosing the model itself is a kind of meta-choice, and it frames the design aspects of these models inherently. Once you've chosen the model, there's still the choice of what exactly you're having this model do. What is it learning from the data? What's the loss function it's optimizing against? There are different ways you might parameterize how you define what it's learning towards, and these loss functions themselves can have their own tunable parameters, which can ultimately affect whatever production metric you care about. So how do you decide the best possible thing to do? Very rarely do you get it perfectly the first time. Instead, what you should do is try a variety of things, but track everything. An intelligent experimentation platform gives you the ability to understand these trade-offs, analyze their differences, and ultimately get to that next stage of exploring the underlying models themselves. Start instrumenting and tracking all of these design decisions as you're making them, so you don't have to go back and say: wait a minute, that random forest I used way back when actually had pretty decent accuracy, even compared to all this deep learning stuff I've tried since; maybe I should go back there and do some tuning and tweaking. Keeping track of everything in one place lets you make those decisions from an informed position.

Exploration is really about, once you've got a model that works, once you've done the initial smoke testing and hypothesis testing of "can I get anything to work?", figuring out what you really want out of the system. How do I take the learnings from all of these individual models that I've run and really understand, not just the loss function the model is optimizing, but what I am trying to do as a modeler overall? Many different metrics go into this, and the lesson to take away from this section is: track all of them, so you can understand which ones you want to optimize, which ones to maximize or minimize, which ones to put thresholds or constraints on, and which ones don't matter. Ultimately, every model goes through a lifecycle. As you design it, you might discuss it with other people on your team; you might validate it against a simple dataset and see whether it even does what you're trying to do; you might run more precise experiments; and ultimately you want to see how it works in the real world: what happens if you're overfitting, what the observed results are, and so on.

So there are many different types of metrics that go into a machine learning problem. First, training metrics. These are what the model itself is training to achieve, the goalposts you give the model for the data it's being trained on. Tracking these is incredibly important, especially during training itself, because it can help showcase, for example, the brittleness of a specific model. If a model just jumps up to a higher accuracy, that might not actually constitute an interesting model; it might just be a very lucky stochastic gradient descent into an unstable equilibrium. Understanding how these things evolve over time, and the trade-offs that are made, is often an incredibly important part of more complex modeling endeavors in particular.

Validation metrics are the metrics that you're typically not training the model on. They're not necessarily what the model is shooting for, but they're one step closer to what you actually care about in production. These metrics might not be continuous, or differentiable, or convex, or easy to calculate. They might be something closer to "how much profit am I going to get out of this algorithmic trading strategy": not something you can directly put into a deep learning model to do gradient descent on, but something you actually want to calculate, maybe in a simulation environment, based on the signal that the deep learning model supplies to the overall strategy. Validation metrics come in many different forms, they're often very complex, and oftentimes they're things you're designing as metrics for yourself and will keep redesigning as you learn more. This iterative process of exploration changing the metrics is an incredibly important part of an intelligent experimentation process.

Finally, guardrail metrics. We see these come up all the time in production use cases. There might be certain things you're trying to maximize or minimize, and then certain things that simply need to be true of a model. It needs to be a certain speed to meet an SLA; it needs to be a certain memory size to fit on an edge device; it needs to satisfy some audit constraint, maybe around bias, that you want to take into account. You want to make sure that, as you're maximizing towards a business objective, you're still meeting these guardrail metrics, because having the most accurate model in the world and then trying to put it on the device and learning that it doesn't fit is a terrible place to be in as a modeler.
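
A guardrail check might look something like the following sketch. This is my illustration rather than SigOpt's API; the thresholds are hypothetical, and the `metrics` dict is the one returned by the earlier evaluation sketch.

```python
# Illustrative guardrail check: reject a candidate model unless it meets a
# latency SLA and fits the memory budget of the target device.
import pickle

MAX_LATENCY_SEC = 0.005        # hypothetical per-prediction SLA
MAX_MODEL_BYTES = 50 * 2**20   # hypothetical 50 MB edge-device budget

def passes_guardrails(model, metrics):
    model_bytes = len(pickle.dumps(model))  # rough serialized-size proxy
    return (
        metrics["inference_time"] <= MAX_LATENCY_SEC
        and model_bytes <= MAX_MODEL_BYTES
    )
```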

Beyond that, a big thing about SigOpt is that we provide a lot of advanced tooling to take all of these different kinds of optimization into account. It's much, much more than "I have a single output from a simple model and all I want to do is get to the highest value as easily as possible." Being able to zoom in on these trade-offs, these thresholds, the Pareto frontier, and so on, is an incredibly important part of that process and of the exploration pillar of intelligent experimentation. Production metrics, lastly, are the true measure of the modeling process and of what will actually make the business better. These are often things you cannot test in silico: you can't say ahead of time whether the model is actually going to perform well against the real market environment, or whatever your real world may be.

So you can keep track, in a single place, of the design decisions you made when building the model, the metrics you chose to optimize for, and the metrics you chose to care about in the exploration phase, and then ultimately go back and attach the production metrics that were observed in the real world. You can cross-correlate those with the training metrics, validation metrics, and so on, and again rephrase your metrics, rephrase your questions, and ultimately get to the right question to ask before you go into optimization itself. Having it all in one place is really the only way to do this. Keeping track of it across different systems, with online metric stores here, some of it in your head, and only the magic numbers checked into Git, is not a repeatable or transparent way to work.

The final pillar of intelligent experimentation is optimization itself: really boosting model performance by taking the model and getting it to its peak possible performance.

This is about taking what ultimately looks like a black-box system, a system with many different continuous, integer, or categorical parameters as inputs, which may have their own hierarchical dependencies or constraints, and observing the outputs of that system, whether it's a random forest, a gradient boosting machine, or some extremely complex deep learning or reinforcement learning algorithm, and whether it's running on your laptop in a Jupyter notebook or on a Department of Energy supercomputing cluster. The goal is to make sure that, once you've phrased the question, we're able to go through this optimization feedback loop as few times as possible in order to get to the best possible results.

Optimization, at the very highest level and intuitively, is about trading off exploration, which is learning more about how the underlying model's input parameters affect the output metric, against exploitation, which is using that information to get you higher and better output metrics. It's a global strategy because it trades both of these off. As you explore more, you learn more about the underlying model and the response surface, and you get better and better results; but it also makes sense for the algorithm to go back and explore more after it has already resolved a local optimum. There's nothing left to exploit once you've gotten to some local optimum, so the algorithms naturally trade off to explore and learn more, find other optima, resolve those, and go back and forth as efficiently as possible.
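
SigOpt's production optimizer is a proprietary ensemble, so as a stand-in, here is the classic expected-improvement acquisition function from the Bayesian optimization literature, which captures the explore/exploit trade-off described above: a candidate scores highly either because the surrogate's posterior mean `mu` is promising (exploit) or because its posterior standard deviation `sigma` is large (explore).

```python
# Classic expected improvement (EI) for maximization, illustrating the
# explore/exploit trade-off. Not SigOpt's method, just a textbook example.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_observed):
    """EI = E[max(f(x) - best_observed, 0)] under a Gaussian posterior."""
    sigma = np.maximum(sigma, 1e-12)  # avoid division by zero
    z = (mu - best_observed) / sigma
    return (mu - best_observed) * norm.cdf(z) + sigma * norm.pdf(z)
```

Maximizing this acquisition over the search space picks the next configuration to try; as the surrogate sharpens, the balance shifts automatically between probing uncertain regions and refining promising ones.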

I've already talked about this a bit in the exploration section, but one thing that we've seen many, many times, and if you take one takeaway from this talk let it be this: machine learning, or modeling, or whatever you're doing, will involve experimentation, and it should never involve just a single metric. Very rarely in life is there one thing you actually care about at the expense of all else. Making sure that you think about all of the different trade-offs, whether it's time and complexity, whether it's expense and accuracy, whether it's bias, whatever it may be, and making sure you're making the best possible trade-offs, is incredibly important. So don't blindly optimize a single metric. It's not about winning a Kaggle competition; very rarely is life about winning a Kaggle competition. It's more often about making intelligent trade-offs, and so optimizing many metrics, storing many metrics, and having these guardrails is incredibly important, and it's one of the tenets of, and one of the core capabilities of, our Bayesian optimization ensemble.

One thing that's important to note, though, and it's important for really any intelligent optimization system, is the ability to be flexible. We have a great proprietary Bayesian optimization system that we've built, with many of the papers and collaborations we've done over the years built into it. But one size doesn't fit all: the no-free-lunch theorem says there's no one algorithm that can beat every other algorithm on every single task. So being able to bring your own optimizer is an important part of this. Being able to switch between optimizers, to take a hand-tuned system that's really good at solving one specific problem and leverage it within the scale, transparency, management, and analytics of a large and robust intelligent experimentation platform, is incredibly important. We're developing more and more of this all the time, but it works out of the gate in terms of running your own optimizer.

In this talk I can't go into detail about all of the different aspects of the optimization framework. Several decades of engineering years have been poured into it, and it's been battle-tested on real-world applications, many of which you'll see today, which are very different from specific academic use cases. Really looking at what happens when you throw these optimization methods at the real world, what happens with outliers, what happens when things fail, what happens when you can break the problem into multiple tasks: all of these sorts of things have informed the different features we've built over the years, and they really widen the gap between an intelligent optimization and intelligent experimentation platform and just plugging in a random open-source iterator you found online.

In terms of methodology, then, there are three pillars to intelligent experimentation. Design is the series of decisions in how you phrase these questions from the very beginning. Exploration is about learning from those experiments, gaining understanding about your model, and using that to rephrase those questions and get a better idea of what actually matters for your model. And finally, optimization: once you have a good idea of what it is you're trying to achieve, getting there in as little data-scientist time, wall-clock time, and computational time as possible, and then going back and doing all of this again, running that whole cycle as efficiently as you possibly can.

So, a couple of quick examples. Numenta is one of our great customers, and they use us a lot on the design aspect of things. They were able to use us, in an intelligent experimentation way, to get to a sparse, high-accuracy version of ResNet. Here it was about making sure that you phrase the question well, because when you're tuning things like a sparse network and its hyperparameters all simultaneously, you need to be able to keep track of all the trade-offs you're making in order to phrase the question in a way that lets the optimizer do its job and ultimately lead to this novel result. Exploration, and I know you'll hear about this a little later as well, is about the fact that once you have an extremely complex model, like a graph neural network, there may be many, many different metrics you care about at the end of the day: different metrics of size, of complexity, and so on. Here it's about exploration really driving you to a better understanding of an incredibly complex model, which then ultimately opens the door for optimization, which of course is the third pillar. That's this example from Two Sigma, one of our customers for many years, who has used us to optimize a wide variety of models within their algorithmic trading strategies and machine learning models, getting significantly better results than the open-source optimization methods out there, and doing so in a way that takes full advantage of their large computational resources: being able to do asynchronous optimization across a hundred different nodes per experiment, for each one of these models, and getting to the best possible result in the extremely adversarial environment of algorithmic trading.

So, we've talked a little bit about intelligent experimentation, its aspects, and a few high-level examples. Now I'm going to dive into a demo, with code you can use to be off to the races doing intelligent experimentation yourself. You can go to sigopt.com and start right now, and probably be up and experimenting by the time I'm done with this demo.

I'm going to walk through this really quickly. I'm going to take a super simple dataset and a super simple model, so that we can focus on the intelligent experimentation aspects rather than the model itself. I'll take the breast cancer dataset that's available in scikit-learn and use the XGBoost classifier, one of the most popular frameworks in the world for gradient boosted machines, and we're going to end up tuning it to maximize the recall of both classes, for this medical application. The first stop is the entire model. It grabs the dataset from scikit-learn, and you can see that, at the very beginning, we're already starting to track the design decisions being made: the dataset, the model, and so on. At the very bottom we're tracking things like the various output metrics: averages and weighted averages over the classes, their F1 scores, precision, recall, support, and so on. All of this is being kept track of in SigOpt as we go.
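
The transcript doesn't show the notebook itself, but a runnable approximation of that baseline might look like this; the hyperparameter values are placeholders (they happen to be XGBoost's documented defaults).

```python
# Baseline sketch: scikit-learn's breast cancer dataset fed into an XGBoost
# classifier. Exact values from the demo notebook are not in the transcript.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.3)
model.fit(X_train, y_train)

# per-class precision, recall, F1, and support: the outputs being tracked
print(classification_report(y_test, model.predict(X_test)))
```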

One thing I'll point out here in the middle, though: this is a very typical example of what we've seen in the real world, where people build the model and either don't set these hyperparameters (max depth, learning rate, and so on) and just take the defaults, or start tuning and tweaking them by hand. A lot of the time, the defaults in these machine learning frameworks were chosen to make a specific example look good, or maybe run fast, but they might not actually be the best parameters for your particular application.

So, with one line of code, we're going to abstract away those constants and start tracking them within SigOpt, in addition to the modeling decisions and the output metrics themselves. We can still pass in the defaults, as you can see here: the number of estimators, the max depth, and so on. We set those, and instead of trying to manually tune this three-dimensional space in our heads, each one of those outputs is going to be tracked as we go, as we try a couple of different things to gain intuition.
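
The tracking step might look roughly like the sketch below, based on the SigOpt Python client's runs API as documented around the time of this talk; method names can differ across client versions, so treat this as an assumption rather than verbatim demo code.

```python
# Sketch of run tracking with the SigOpt client (sigopt>=8-era runs API).
import sigopt
from xgboost import XGBClassifier

with sigopt.create_run(name="xgboost-baseline") as run:
    run.log_dataset("sklearn breast cancer")
    run.log_model("XGBClassifier")
    # track the formerly hard-coded constants instead of burying them in code
    run.log_parameter("n_estimators", 100)
    run.log_parameter("max_depth", 6)
    run.log_parameter("learning_rate", 0.3)

    model = XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.3)
    model.fit(X_train, y_train)
    run.log_metric("accuracy", model.score(X_test, y_test))
```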

Instead of trying to keep track of a dozen different metrics and input parameters in our heads, we can quickly go to the dashboard and start to surface patterns: okay, certain learning rates seem to have a specific impact, we get more support in certain areas, and so on. Ultimately we can start looking at all the metrics we care about and ask which ones we might want to start optimizing towards. So we've gone through the design phase of selecting a model, and we're tracking everything. Now, in this exploration phase, we're really trying to find a good objective metric, or maybe a set of objective metrics, to optimize. One thing to note: even across these six runs that we did manually, we can start to see a pretty big spread in the accuracy itself, 90%, 94%, and so on. So while our main chase was to maximize the recall of both classes, we might want to start thinking about accuracy as something we care about as well.

SigOpt now lets us start phrasing the question, giving us something to optimize towards, again in a few lines of code. Here we're telling SigOpt that we're creating an experiment with three parameters, giving it ranges for the integer and double-precision parameters to tune, plus any transformations you might have on those parameters. Then we tell SigOpt about the metrics: certain ones we want to maximize, certain ones we want to minimize, and certain ones we just want to keep track of, like the precision metric. Recall we want to explicitly optimize for, so we're optimizing across those two recall metrics, and certain metrics we might just want to put a constraint on. I talked about accuracy as maybe being an important metric that we care about, so let's say we want the best recall we can get across both classes, with the constraint that accuracy must be at least 95% for the entire model.
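
The experiment definition being described might look roughly like this, again based on the SigOpt client documentation of that era (`create_experiment` with `strategy` and `threshold` fields on metrics); treat the exact field names, bounds, and budget as assumptions.

```python
# Sketch of the multimetric experiment: jointly maximize per-class recall,
# constrain accuracy to >= 95%, and store precision without optimizing it.
import sigopt

experiment = sigopt.create_experiment(
    name="xgboost-recall",
    parameters=[
        dict(name="n_estimators", type="int", bounds=dict(min=50, max=500)),
        dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
        dict(name="learning_rate", type="double",
             bounds=dict(min=1e-3, max=1.0), transformation="log"),
    ],
    metrics=[
        dict(name="recall_class_0", objective="maximize"),
        dict(name="recall_class_1", objective="maximize"),
        dict(name="accuracy", objective="maximize",
             strategy="constraint", threshold=0.95),
        dict(name="precision", strategy="store"),
    ],
    budget=60,  # hypothetical number of runs
)
```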

Now we just go through the loop, creating the model in exactly the same way we would have before, only now, while the experiment has not finished, we're pulling down suggestions from the SigOpt software-as-a-service, evaluating them, and reporting back those metrics. As we loop over this, we're able to get to a set of very interesting results quite quickly.
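
That suggestion/evaluation/report loop might be sketched as follows, using the client's `experiment.loop()` helper as documented at the time; again, this is an assumption, not verbatim demo code.

```python
# Sketch of the optimization loop: each run carries suggested parameters,
# and we report the metrics the experiment was configured with.
from sklearn.metrics import accuracy_score, precision_score, recall_score
from xgboost import XGBClassifier

for run in experiment.loop():
    with run:
        model = XGBClassifier(
            n_estimators=run.params.n_estimators,
            max_depth=run.params.max_depth,
            learning_rate=run.params.learning_rate,
        )
        model.fit(X_train, y_train)
        preds = model.predict(X_test)
        recalls = recall_score(y_test, preds, average=None)  # per-class recall
        run.log_metric("recall_class_0", recalls[0])
        run.log_metric("recall_class_1", recalls[1])
        run.log_metric("accuracy", accuracy_score(y_test, preds))
        run.log_metric("precision", precision_score(y_test, preds))
```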

Jumping back to the dashboard, we can see a set of grey points once we draw the Pareto frontier, the efficient trade-off between the recall of the two classes. The grey points don't meet that 95% accuracy threshold. The blue points do, but aren't as good as the orange points on the Pareto frontier: the points that are the best we can do on any one metric without giving up performance on the other. And so now, with a few lines of code, starting from the simple scikit-learn example, we've tracked everything, explored a bunch of different metrics, and designed an optimization problem.
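
For reference, identifying those Pareto-efficient "orange points" among the runs that meet the accuracy constraint can be done with a small generic helper (my code, not SigOpt's):

```python
# Generic Pareto-frontier helper for maximization over several metrics,
# e.g. applied to the logged (recall_class_0, recall_class_1) pairs.
import numpy as np

def pareto_frontier(points):
    """Boolean mask of points not dominated in any metric (maximization)."""
    pts = np.asarray(points, dtype=float)
    efficient = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        if efficient[i]:
            # mark every point strictly dominated by pts[i] as inefficient
            dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
            efficient[dominated] = False
    return efficient
```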

And we solved it across two different output metrics, with a threshold, finding two different models that satisfy our business constraints and our accuracy constraint, which we could ultimately push into production to see how well they do. And then, as we've framed it, there are many different ways, after you've actually done the optimization, to go back to exploration: start looking at how all these different metrics trade off against different parameters, and maybe gain insights into how we might want to shrink down or change the learning rate, in log space here, and so on. This is not a linear process, and what I'm showing you in this demo is just one potential path. There are many different paths, and it ends up being a cyclic problem, where we're constantly going back to ask new questions with different metrics. At the highest level, what this has allowed you to do, with a few lines of code after a quick "pip install sigopt," is design your experiments, explore your experiments, and ultimately optimize the experiments that your business needs.

So that's it for the demo. Again, all of this is available for free today: go to sigopt.com, sign up, and you can be up and optimizing by the time I'm done with my Q&A. We look forward to helping you achieve the best possible results for your models, spending less time on trial and error and more time with a collaborative, intelligent experimentation. Thank you.


