About the talk
In the past five years, graph neural networks (GNNs) have emerged as a powerful family of neural network architectures that operate directly on graph data structures. GNNs can model tasks at the level of a dataset's (often) more natural graph representation, but the algorithms that enable this have side effects when it comes to scaling GNNs to large graphs. These are memory- and compute-hungry models, so going beyond benchmark datasets to real-world biology, physics, e-commerce, and social network problems is challenging and will require researching and developing new techniques. Join us for a conversation with industry-leading researchers from AWS, PayPal, and Intel Labs to discuss how they think about optimizing and scaling graph neural networks, and what data scientists should know if they want to work with these models.
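To see why scaling is hard, consider a minimal sketch (not from the talk, and not any specific library's API) of one message-passing layer: each node averages its neighbors' features and applies a learned transformation. Because every layer touches the features of every neighbor, and k layers pull in the full k-hop neighborhood, memory and compute grow quickly on large, dense graphs. All names and weights below are hypothetical.

```python
import numpy as np

def gnn_layer(features, adjacency, weight):
    """One illustrative message-passing layer.

    features:  (num_nodes, in_dim) node feature matrix
    adjacency: list of neighbor-index lists, one per node
    weight:    (in_dim, out_dim) learned projection (hypothetical here)
    Returns a (num_nodes, out_dim) matrix of updated node features.
    """
    aggregated = np.zeros_like(features)
    for node, neighbors in enumerate(adjacency):
        if neighbors:
            # Mean-aggregate neighbor features; on a large graph this
            # step reads every neighbor's feature vector per node.
            aggregated[node] = features[neighbors].mean(axis=0)
    # Combine self and neighborhood information, transform, apply ReLU.
    return np.maximum(0.0, (features + aggregated) @ weight)

# Toy graph: nodes 0-1-2 form a triangle; node 3 is isolated.
adj = [[1, 2], [0, 2], [0, 1], []]
x = np.eye(4)                 # one-hot node features
w = np.full((4, 2), 0.5)      # hypothetical fixed weights
h = gnn_layer(x, adj, w)
print(h.shape)                # (4, 2)
```

Production systems such as the panelists' (e.g. DGL) avoid the dense all-neighbor pass above with techniques like neighborhood sampling and distributed feature storage, which is precisely the engineering territory the panel discusses.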
About speakers
Sasikanth Avancha (Member, IEEE) received the B.E. degree in Computer Science and Engineering from the University Visvesvaraya College of Engineering (UVCE), Bengaluru, India, in 1994. He received M.S. and Ph.D. degrees in Computer Science from the University of Maryland at Baltimore County (UMBC), Baltimore, MD, USA, in 2002 and 2005, respectively. He is currently a Senior Research Scientist with the Parallel Computing Lab in Intel Labs and is based out of Intel India in Bengaluru. He has over 20 years of industry and research experience. He has four patents and over 25 articles spanning security, wireless networks, systems software, computer architecture and deep learning. His current research focuses on high-performance algorithm development, analysis and optimization of large-scale, distributed deep learning and machine learning training and inference, on different data-parallel architectures, including x86 and accelerator architectures, across application domains.
For the last few years I have been working closely with SigOpt customers to help them design better experiments and better optimize their models. Since we joined Intel one year ago, my goals have pivoted to extending insights from these often heavily involved, custom projects and using them to build product features that empower data scientists to spend more of their time working on the tasks that are most important to them, like attending SigOpt conferences to learn about fancy new types of neural networks.
Venkatesh is a Director of Data Science at PayPal, where he leads several applied research initiatives, including ML on graphs. He has over 25 years of experience designing, developing, and leading teams to build scalable server-side software and AI/ML research. In addition to being an expert in AI/ML and big data technologies, Venkatesh holds a Ph.D. in Computer Science with a specialization in Machine Learning and Natural Language Processing (NLP), and has worked on various problems in the areas of anti-spam, phishing detection, and face recognition.
Da Zheng is a senior applied scientist at AWS AI, where he develops deep learning frameworks including MXNet, DGL (Deep Graph Library), and DGL-KE. His research interests include high-performance computing, scalable machine learning systems, and data mining. He received his Ph.D. in Computer Science from Johns Hopkins University, Baltimore, USA, in 2016, his Master of Science from École polytechnique fédérale de Lausanne, Switzerland, in 2009, and his Bachelor of Science from Zhejiang University, Hangzhou, China, in 2006.
