MLconf Online 2020
November 6, 2020, Online

Startup Showcase

About the talk

The MLconf Online 2020 Startup Showcase features four new ML-based applications and products. Each startup gives a ~5-minute presentation of its product, followed by 3-5 minutes of Q&A from our distinguished guest Mohan Reddy, Associate Director of Technology at the Stanford Human Perception Lab and CTO of The Hive.

Participants:

1) UnifyID

John Whaley, Founder & CEO, presents: Math, Motion, and Machine Learning: Passive Authentication in the Real World

How do you uniquely identify people? What is it that makes you, you? Certain aspects of human behavior can be as unique and as hard to spoof as a fingerprint. The way you walk, the way you move, the places you go, and your little idiosyncrasies have the promise of being more convenient and more secure than other forms of authentication such as passwords or biometrics. But there are big challenges in building a system that can authenticate a person to 99+% accuracy with just a few seconds of passive sensor readings while still maintaining user privacy. This requires lots of math, signal processing, ML, tricky engineering, and re-thinking existing security paradigms. Come hear about UnifyID's experience in building such a platform and get a glimpse into the future of authentication.
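
A rough, self-contained sketch of the kind of problem described above (not UnifyID's actual system, which is proprietary): train a classifier to verify one user from short windows of motion-sensor readings. All data is synthetic and the feature choices are illustrative assumptions.

```python
# Minimal sketch (not UnifyID's implementation): verify a user from short
# windows of accelerometer + gyroscope readings with a binary classifier.
# The data below is synthetic; a real system would use labelled sensor logs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def featurize(window):
    """Summarize a (samples x 6) accel+gyro window into simple statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

def make_windows(n, offset):
    """Synthetic 'walks'; the offset stands in for one person's gait dynamics."""
    return [rng.normal(loc=offset, scale=1.0, size=(100, 6)) for _ in range(n)]

X = np.array([featurize(w) for w in make_windows(300, 0.0) + make_windows(300, 0.3)])
y = np.array([1] * 300 + [0] * 300)   # 1 = target user, 0 = someone else

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```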

2) InAccel

Chris Kachris, CEO & Co-founder, presents: How to get 10x speedup on your ML applications using the power of accelerators

In this talk we present the easiest way to use the power of hardware accelerators to speed up your ML models. We will show how you can run your ML applications up to 10x faster, with zero code changes, from frameworks like Jupyter, Keras, and scikit-learn. Using InAccel Studio, you can speed up your applications for classification, clustering, and regression from your browser without any prior knowledge of FPGAs. InAccel provides a unique solution that utilizes the power of FPGA-based accelerators and offers a rich library of ML cores.
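
The talk does not spell out InAccel's package or class names, so the snippet below only illustrates the "zero code changes" pattern being described: the accelerated estimator is assumed to be a drop-in replacement for its scikit-learn counterpart. The `inaccel.sklearn` import and `FPGALogisticRegression` name are hypothetical placeholders; consult InAccel's documentation for the real API.

```python
# Hypothetical sketch of the "zero code changes" workflow described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Baseline: plain scikit-learn on the CPU.
model = LogisticRegression(max_iter=200).fit(X, y)

# Accelerated path: identical code except for the import, so the notebook,
# features, and evaluation logic stay untouched.
try:
    from inaccel.sklearn import FPGALogisticRegression  # hypothetical import
    model = FPGALogisticRegression(max_iter=200).fit(X, y)
except ImportError:
    pass  # no accelerator runtime available; keep the CPU model

print("train accuracy:", model.score(X, y))
```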

3) Blueshift

Anmol Suag, Data Scientist, presents: Learning to Rank for MarTech

Given a user, a product catalogue, and a history of interactions between users and products, we can create various types of recommendations for marketing. Some of these recommendations could be based on the user's category affinity, brand affinity, collaborative filtering, content-based filtering, next-best product by textual similarity, product popularity, etc. The list of products (recommendation candidates) coming out of each of these recommendation algorithms could be totally different and would vary from user to user based on their activity, location, interests, and so on. This candidate set is much smaller than the product catalogue. Once we are down to a small candidate set, how do we rank the candidates so that the most relevant products for a user appear on top? At Blueshift, we use a pair-wise Learning to Rank (LTR) model trained on historical marketing campaigns to re-rank the candidate set. The features used in the model fall broadly into these buckets: Recommendation Relevance, Product Quality, Product Fatigue, Category Affinity, and User Activity. Compared with other ranking techniques using MAP@K, the LTR model delivers the best performance.
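
Blueshift's production model and features are their own; as a minimal sketch of the pair-wise Learning to Rank idea, the example below fits xgboost's pairwise ranking objective on synthetic per-user candidate sets. The feature names only mirror the buckets listed above and the labels are simulated clicks.

```python
# Pair-wise learning-to-rank sketch on synthetic data (not Blueshift's model).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n_users, cands_per_user = 200, 10          # each group = one user's candidate set

# Features per candidate: [recommendation relevance, product quality,
# product fatigue, category affinity, user activity] (illustrative only).
X = rng.random((n_users * cands_per_user, 5))
# Simulated click labels, loosely correlated with the features.
score = 2 * X[:, 0] + X[:, 3] - X[:, 2] + rng.normal(0, 0.3, len(X))
y = (score > np.quantile(score, 0.8)).astype(int)

groups = [cands_per_user] * n_users        # candidate-set sizes, one per user

ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=100)
ranker.fit(X, y, group=groups)

# Re-rank one user's candidate set: highest predicted score goes on top.
user_X = X[:cands_per_user]
print("re-ranked candidate indices for user 0:", np.argsort(-ranker.predict(user_X)))
```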

4) PerceptiLabs

Martin Isaksson, CEO & Co-founder, presents: A New Visual Way to Build Machine Learning Models

PerceptiLabs is a dataflow-driven, visual API for TensorFlow. It is a free Python package (hosted on PyPI) for everyone to use. PerceptiLabs wraps low-level TensorFlow code to create visual components, which lets users visualize the model architecture as the model is being built. The settings/hyperparameters of each component (layer) can be set and tuned in the visual interface. Since these high-level settings generate low-level code, we can provide lots of support to the user, which is the key behind the benefits PerceptiLabs provides. This allows us to auto-generate granular visualizations for each component and to suggest settings (auto-configs) for the user in each component. PerceptiLabs gives the user warnings, errors, and tips during the modeling process to guide them towards building better models. When training starts, PerceptiLabs auto-generates visualizations for every underlying variable in the model, which can then be seen and analyzed in a statistics view. When training is done, the user can automatically test and validate the model before exporting it (e.g., pushing to production or sharing on GitHub).
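
PerceptiLabs generates its own TensorFlow code; the snippet below is only a plain-Keras illustration of the kind of low-level layer code that each visual component stands for. The mapping in the comments (data, convolution, dense, training components) is an assumption made for illustration, and the data is random.

```python
# Plain TensorFlow/Keras stand-in for the low-level code a visual component wraps.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))                     # "Data" component
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)   # "Convolution" component
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)   # "Dense" component

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",                                # "Training" component
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random data standing in for a real dataset.
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=256)
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
model.summary()
```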

About speakers

John Whaley
Founder at UnifyID
Christoforos Kachris
Founder and CEO at InAccel
Anmol Suag
Data Scientist at Blueshift
Martin Isaksson
CEO at PerceptiLabs

John Whaley is Founder and CEO of UnifyID. He was previously Founder and CTO of Moka5, and was a Visiting Lecturer in Computer Science at Stanford. He is an expert in computer security and has spoken at numerous conferences and industry events, including RSA Conference four times. He holds a doctorate in computer science from Stanford University, where he made key contributions to the fields of program analysis, compilers, and virtual machines. He is the winner of numerous awards including the Arthur L. Samuel Thesis Award for Best Thesis at Stanford, and has worked at IBM’s T.J. Watson Research Center and Tokyo Research Lab. John was named one of the top 15 programmers in the USA Computing Olympiad. He also holds bachelor’s and master’s degrees in computer science from MIT.

Christoforos is the founder and CEO of InAccel, which helps companies speed up their applications using hardware accelerators (FPGAs) in the cloud or on-premises. He is the editor of the book Hardware Accelerators in Data Centers. He has over 15 years of experience with FPGAs (reconfigurable computing), digital design, embedded systems (SoCs), and HW/SW co-design for machine learning, network processing, and data processing. He has more than 10 years of experience in EU-funded projects (proposal preparation/writing, researcher, principal investigator, WP leader). The total value of his research awards, grants, and funds is €9.2 million, of which his own share is €2.3 million.

Anmol Suag is a Data Scientist at Blueshift, a leading SmartHub Customer Data Platform (CDP) that combines customer data, AI, and omni-channel orchestration in an easy-to-use platform. Anmol has worked extensively on recommendation systems, ranging from hybrid implicit matrix factorization and BERT-based content filtering to Learning to Rank with pair-wise models. Anmol holds an MS in Computer Science/Artificial Intelligence from UMass Amherst and an MSc in Economics from BITS Pilani. In the past, Anmol has worked with Sprinklr and Opera, and has done research with Mitacs, IIT Bombay, and the Indian Space Research Organisation.

Martin is CEO and Co-Founder of PerceptiLabs, a startup focused on making machine learning easy. PerceptiLabs created a flexible machine learning tool that gives developers full transparency into the process of developing machine learning models. He received his M.Sc. in Robotics/Artificial Intelligence from the Royal Institute of Technology and UPC Barcelona in 2017. In the summer of 2017 he presented a deep neural network he built for segmentation of muscular stem cells in microscopy images at the first deep learning symposium in Sweden (SSDL2017).

Transcript

Welcome back. We are very happy that you stayed with us. This is an exciting part of our sessions, so let me introduce Joe. Joe, take it away.

I am happy to take it away. I wonder if you can hear me. Yes, we can hear you. Thank you, everybody, for joining us for our Startup Showcase session at MLconf Online 2020. During the Startup Showcase we are going to, as the name suggests, showcase the new movers and shakers in machine learning. If you get hyped when you hear about Kubernetes, you've come to the right place. This session will feature a presentation from each startup on a new machine-learning-based application or product. Each startup will be given five minutes to present, followed by two to four minutes of Q&A from our distinguished moderator, Mohan Reddy, who is the CTO of The Hive and Associate Director at Stanford University's Human Perception Lab. I believe that our very first guest is going to be John Whaley, founder and CEO of UnifyID. With that said, let me turn the floor over to you, Mohan.

Hi, my name is Mohan. I am the CTO of The Hive, a venture studio based in Palo Alto, and I will be moderating the session today. Each speaker has about five minutes to present, followed by a few questions. If you would like to get in touch with any of the showcased startups after the session, please feel free to reach out and we will be happy to facilitate that.

Excellent. Looking forward to hearing from our very first guest, John Whaley, founder and CEO of UnifyID.

Great, awesome. It is great to be here. I think everyone can see my presentation. So, we are UnifyID, and we do something called implicit authentication: we identify people based on their unique behavior. My name is John Whaley. I did my PhD at Stanford and taught at Stanford as well, and before that I did my undergrad and master's at MIT. Our starting point is a really fundamental question: what is it that makes you, you? We strongly believe that you're not a nine-digit number, you're not a piece of plastic with your photo on it, and you're certainly not a password with a capital letter, a number, and a symbol.

Authentication predates technology. How do I identify someone? It's something essential that everyone has to do. Even before technology, you would look at a person's face, the way they move, the way they talk; humans evolved over many thousands and millions of years to be very good at this. We look at what makes people unique and use that for authentication purposes. In fact, using just these types of passive signals, we can authenticate people with a very high level of accuracy, just by them being themselves, without requiring any explicit action. The primary sensor source today is the phone, and I've been holding my phone here this entire time. We make an SDK that you can link into any existing application. For example, I can click to dial into a call center and you can see what happens: "Thank you for calling in from a known phone, John. How can we help you today?" How did that happen? That call used our technology. An agent runs on this device, and when I clicked to dial, the IVR system already knew it was me, because I had been holding my device the entire time. I walked into this room, and my gait is unique enough, my motion is unique enough, that we can authenticate me, in fact with about the same accuracy as a physical fingerprint, around a one-in-50,000 false-positive rate, based purely on motion.

Here is another example: two people walking across the street, the same height, the same shoe size, who seemingly walk the same. On the right-hand side you can see a graph of the gyroscope values, and even visually you can see there is a difference between the two ways that we walk. We train a machine learning system that can identify these differences between the way that I walk and the way someone else walks, and likewise for other types of motion, and we get a very high level of accuracy, completely passively.

The reason you can do this now is that there are finally enough sensors in people's lives, not only in the phone but in IoT and other devices as well. Security has also hit a breaking point around passwords, two-factor codes, and things like that. With large amounts of data and end-to-end training, we can train very reliable models based on actual human behavior. We are deployed on over 34 million devices now, and that is really the key to getting very high levels of real-world accuracy.

But the problem is very non-trivial, with very difficult challenges. In a laboratory environment it is pretty straightforward; in the real world, where people are doing very different things, getting that kind of accuracy and performance is much harder. We run completely on the local device. We do a lot of work with on-device machine learning and on-device training, and we have a lot of background in that area, with over 30 publications in that space. Attacks on our system are not a theoretical concept; they are something we actually have to worry about. And how do you train a machine learning model where the data has to stay locally on the device? These are very interesting technical challenges, and we have a great team that has been working on solving them for years.
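
The point about training when the data must stay on the device is, in its general form, the federated learning setting; the talk does not detail UnifyID's actual method, so the sketch below is only a minimal federated-averaging illustration on synthetic data.

```python
# Minimal federated-averaging sketch (illustrative only, not UnifyID's system).
# Each "device" fits a linear model on its local data; only the learned weights,
# never the raw sensor data, are sent back and averaged by the server.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])

def local_data(n=200):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
    return w

devices = [local_data() for _ in range(10)]      # raw data never leaves a device
w_global = np.zeros(3)
for _ in range(5):                               # a few communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    w_global = np.mean(local_ws, axis=0)         # server averages weights only

print("recovered weights:", np.round(w_global, 2))
```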

We have also won a number of awards. We were the first-ever, and so far only, unanimous winner of the RSA Innovation Sandbox, and we have had a number of other accolades, including, most recently, being named the overall fraud prevention solution of the year. The key is that we have visibility into a type of data that is very different from what you typically get access to: we actually see the human behind the device, the person's behavior, which you usually don't get. That gives us a really unfair advantage, both in how we enroll the user and in terms of inference. Thank you very much.

Thank you for that background, John, and congratulations, by the way, on the recognition. Our moderator has a number of questions for you.

Thanks, John. Are you using this for cyber-physical systems as well, like IAM systems, for example for opening doors or Wi-Fi access in a building, based on who you are?

Yes. For physical access, people use key fobs today, but the industry is also moving to mobile credentialing systems, and with our technology you can just walk up to the door and it opens. Likewise for cars, automotive, and other physical-world interactions such as mobile payments and QR codes. With masks, for example, reliance on things like Face ID and face recognition doesn't work as well. Because we are running in the background, even before you open the app, scan a QR code, or make that call, we already know who you are. It isn't an instantaneous measurement of right now; it's based on what was happening five seconds ago, thirty seconds ago, five minutes ago, and that really increases the accuracy.

Thank you very much, John. I have one more question.

There could be unintended consequences: if I use UnifyID for identity and access management and somebody else, say my fiancée, goes to fix my car door and it opens, that is a potential security lapse. So how does your system balance security and usability?

If you look at security in the real world, the question is really how you make it usable for the end user. There are a lot of ways to make things secure: you could have people use very long passwords, never reuse them, and rotate them consistently, but people don't. Look at the breaches: the most common passwords people use every year are all the same, things like "123456" or the word "password". The truth is that it is very hard to change human behavior. We see this especially in machine learning, where initially a technology is not good enough and people have to adapt to its limitations, and then it matures. Speech recognition, for example, used to be very flaky, and now it has gotten good enough that it enables new modalities of interaction. I think the same thing is happening with authentication. In the future, you'll walk up to your house and it will recognize you, your car will recognize you, a brick-and-mortar store will recognize you as well, all based on these passive signals, and boarding a plane is going to be different. All of these things are going to become a much more natural experience with this technology.

Thank you very much, John. Our next speaker is Chris Kachris, CEO of InAccel. He is an expert in reconfigurable computing and the author of the book Hardware Accelerators in Data Centers. Please give a warm welcome to Chris Kachris of InAccel.

Thank you, and I'm happy to be here. I'm Christoforos, CEO and co-founder of InAccel, and what we do is help companies speed up their applications. Emerging applications need more powerful computing systems, but processors cannot keep increasing their performance the way they did in the past. So the industry has turned to specialized hardware accelerators like GPUs and FPGAs, which provide much higher performance compared to CPUs. FPGAs in particular can give better performance than CPUs, but they are very hard to use. What we do is leverage the power of the FPGA and allow data scientists to utilize it. We provide a framework for accelerated machine learning, so you can speed up your applications on FPGA-based accelerators without having to change your code at all. You can keep using Jupyter, Python, scikit-learn, Spark, and Scala, and at the same time run on FPGAs without having to know anything about the hardware, just by using InAccel's libraries with zero code changes. To give you an example for logistic regression: training a Spark logistic regression using our framework can speed up the total computation several times over. The main advantage is that you keep developing your application in the typical, familiar frameworks like Jupyter. The only thing you have to do is import InAccel's library; everything else you do stays the same. When you invoke a specific function, it is automatically offloaded to the accelerator, and this is how you get the speedup.

With a latency of less than 50 milliseconds, you can reach around 3,000 frames per second on ResNet-50 without changing your code. And of course you can scale it out on multiple FPGA cards instantly, as many as you want, and then the latency drops to about 7 milliseconds. Here is an example of logistic regression on InAccel: you can get around a 15x speedup. You can see how easy it is: it is the same Jupyter notebook, and just by switching to the InAccel engine, basically using FPGAs, you get around a 15x speedup. You can also see face detection running at around 200 frames per second. And of course, if you have a Jupyter notebook, you can run it on the cluster instantly. So how do we do it? We have developed a unique InAccel orchestrator that sits on top of a cluster deployment, so multiple users, for example working from scikit-learn, can send their requests to it, and all of these executions are scheduled onto the available FPGAs, so you get the best utilization of the cluster.

Thank you very much, Chris. I think our moderator has a number of questions regarding hardware acceleration.

Thanks, Chris. Just one question: are you model-agnostic, or does your acceleration depend on the model?

Next, joining us is Anmol Suag (feel free to correct my pronunciation), a machine learning engineer with Blueshift,

and we will now hear from him. Please give a warm welcome to Anmol.

Thank you very much. Thanks, Joe. I'm Anmol, based in San Francisco. I'm at Blueshift, where I work on recommendations as a machine learning engineer, and I will be talking about learning to rank. First, what is a SmartHub CDP? In one simple phrase, it creates a single view of the customer. Say we have one customer, Eric, from San Diego, California: the platform unifies his entire profile, his transactions and interactions, and optimizes different marketing campaigns on top of that, all at a very big scale, for millions of users and millions of products.

My work is on recommendations, which are at the core of personalization. For a given user we can generate many kinds of recommendation candidates: products similar to the ones he bought, products based on his browsing behavior, and so on. All of these products are candidates, and we may have hundreds of them, which is still small compared to the size of the catalogue. But we have limited space to show recommendations, maybe in an email or on-site, so which ones do you show? The right answer differs from user to user, and the way we resolve this is with learning to rank: we train a pair-wise learning-to-rank model on historical campaigns, and the features we use fall into a few buckets. The first is recommendation relevance, for example the cosine similarity between the user and the product.

Then there is user activity: how recently, how frequently, and how strongly the user has engaged is an important signal, so we use historic recency, frequency, and magnitude features. We also look at partial dependence plots to check that the model matches our intuition. For example, in the first plot, as the cosine similarity between the user and the product increases, the user becomes more and more likely to buy the product, and the category affinity plot behaves similarly. We compared the learning-to-rank model against other ranking techniques, including randomly ordered candidates, using MAP@K, and the LTR model delivers the best performance. We are being used by a lot of good brands, and I'm happy to take any questions.

One question: do you dynamically change the affinity scores, for example across different product categories?

Thank you so much, Anmol. Thanks. Well, it is now my pleasure to introduce Martin Isaksson,

CEO and co-founder of PerceptiLabs.

You should see my screen now. I'm going to present a new visual way to build machine learning models, which we have built here at PerceptiLabs. My name is Martin Isaksson, and I'm CEO and co-founder of PerceptiLabs. I'm going to walk you through the challenges of machine learning modeling and of debugging models, and then what PerceptiLabs is and what it means to do modeling with it.

There are several challenges with modeling and debugging models. First of all, it is difficult to follow the model architecture. When we build models in code with TensorFlow or PyTorch, it can be tricky to keep track of complex model architectures. Just imagine that you worked on a model six months ago and have to jump back in and start working on it again: it can be difficult to understand the architecture and what parameter settings you had, and so on. When debugging models we need feedback from the model, and the way we do it today, building models in code with TensorFlow or PyTorch, is that we add a print statement and execute the script, add another print statement and execute the script again, and we keep doing this in a very inefficient way. This leads to my next point: in order to understand or interpret models we need visualizations as well, and we don't necessarily know which visualizations we will need when we start building the model. So we end up writing boilerplate visualization code every time we create a new model, which is also very inefficient. And when working with machine learning, as many of us are aware, we need to think about loading data in a distributed way, distributing the workload to make it efficient, and considering all the resources we have. Sometimes we need threading and parallel processing, and if you have worked with Python, that can be non-trivial.

PerceptiLabs is a visual modeling tool. Robert, my co-founder, and I worked with an early beta version of TensorFlow back in the day, and in 2016 we came up with the idea that we could make this more efficient and more intuitive. A machine learning model is really a graph, where the different operations in the model,

different layers? Corresponds to an old in a graph and data flow is connected ass purposes. And here's where the dragon drop functionality, fits perfectly with a dragon real functionality. You can create this graph which represents our machine learning model. And we also approach where we generate all the rest of the say, she just because it's so important in my opinion when we want to interpret and understand our models. So we created this Dragon drop interface. You can see it here in the screenshots. So when you drag and drop a component on the workspace, each

represent a layer in a Machinery model, you can see it as a visual API error 400. So, you'd automatically generates the low-level cancel Falco. Then you can see the different settings to the right of the workspace. And since what takes its visual approach, we can include some automation like we can automatic automatically suggest their configurations and settings and we can create this the Diamond Dimensions. So it just becomes a little bit more efficient and

and seamless to do remodeling. And that's what I talked about. The floor, with the bug models that we have to answer to Spring sections on and on and on and execute the script. We don't have to do nothing except the labs, you get an instant feedback without explicitly executing the script. So in this way, you can the bog, the model and the code. And with all of the origin, origin of the stations for each component that you can see your space research, debugging, understand and model, as well. And that is one part of the buggy when you do the modeling, the

second part is during training when the mall was actually learning and here for such lives as a statistic view which shows basically all the different variables you have in your model and we are trying to visualize it in an intuitive way since it's a lot of things to visualize so you can follow the gradient flow and you can see all the different variables sex way to buy yourself a soldering run time. And in this way you can see if something goes wrong. You can go back to the motel and change something. Start raining again and so on. And last but not least we think is good to be organized. So
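
The statistics view itself is part of the PerceptiLabs product; as a plain-TensorFlow illustration of the kind of signal it surfaces, the sketch below logs per-layer gradient norms for a single training step on a small synthetic model.

```python
# Watching "gradient flow" per layer during one training step (plain TensorFlow,
# not PerceptiLabs code); a tool would log these values every step automatically.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
opt = tf.keras.optimizers.Adam()

X = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

with tf.GradientTape() as tape:
    loss = loss_fn(y, model(X, training=True))
grads = tape.gradient(loss, model.trainable_variables)

for var, g in zip(model.trainable_variables, grads):
    print(f"{var.name:25s} grad L2 norm = {tf.norm(g).numpy():.4f}")
opt.apply_gradients(zip(grads, model.trainable_variables))
```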

Last but not least, we think it is good to be organized, so we created a model hub where you can keep track of all your models and experiments, collaborate in teams, and share your models, for example on GitHub. And of course it is free, as all great software is, and it is very easy to install as a Python package. It also exists as a Docker version and an OpenShift version in order to run the workload distributed. I'm looking forward to hearing your questions. Thank you.

Thank you very much, Martin. Mohan, do you have a number of questions? I think you might be muted, so I just want to make sure.

Yes, sorry. I asked: can you suggest corrections and scaffold the code?

Can I suggest corrections and scaffold code? I'm not sure I understand what you mean by that question.

I mean, if I have written something and made some mistakes, can it automatically suggest improvements, like "the model would be better if you did this", or "you should have this many layers", or point to a different way of doing things, say different types of activation functions?

Maybe we can take that offline and meet after the session, and I can go through these questions then.

Thank you very much, Martin. More information about the showcase will be available on MLconf's website, and we appreciate hearing from Martin, CEO and co-founder of PerceptiLabs. And again, Mohan, thank you as always for your incisive and thoughtful questions. We really appreciate everybody joining us so far for our Startup Showcase. I think we have one more guest joining us in just a moment. There's Rich.

Hey Joe, thanks. Yeah, I think that completes our showcase lineup. Thank you very much for running it, Joe, appreciate it.

Thank you so much, and thank you everybody for joining us at our Startup Showcase.
