About the talk
Learn more about Code BEAM V America, held 10-12 March 2021 at https://codesync.global/conferences/code-beam-v-america-2021/
There has been a lot of excitement in the Erlang Ecosystem following José's announcement and demonstrations of Nx. We thought it was a great opportunity to bring the community together for a Meetup in preparation for Code BEAM V America to discuss how Elixir can reach the machine learning community.
Our panel consisted of Bruce Tate (author of Seven Languages in Seven Weeks), Justin Schneck (Nerves core team, working on integrating Elixir and machine learning), Brian Troutwine (well-known Erlang developer currently using Rust for speed), and Garrett Smith (founder of Guild AI).
Erlang Developer, Blogger & Trainer. Working at AdRoll through BairesDev. Community manager and organizer of SpawnFest, BeamBA, and SAtG.
Welcome, everybody. I think we're about ready to get started; our panelists are here, and we're here to talk a little bit about machine learning, some of the recent announcements by José and Dashbit, and their impact on the BEAM. Before we get started, I'd like to thank our gracious sponsor, Erlang Solutions, and to make an announcement: Code BEAM is still coming up, so you can register for free, and we've got some information for you here.

And so, here on this panel — this is not the exact right list anymore; we have a substitute. My name is Bruce. We also have Brian, we have Justin, and David is out celebrating his firstborn (am I right that it's his first one?), so Garrett is going to be joining us instead. Let's take a quick lap around the room and introduce ourselves. Justin, starting with you. Sure — my current project is a software engineering consultancy, and
we get a great opportunity to apply Nerves, NervesHub, Elixir — the entire ecosystem — to projects and products where it fits, but we also have a great many people who work across disciplines, including machine learning and back-end development. The most exciting bit is that I get to watch it all come together — not just Elixir for back-end development, but everything coming together into one big concert that we can see producing great results.

Yeah, I can imagine things are pretty exciting around there right now. What about you, Garrett? Oh yeah, hi, it's good to see you. My name is Garrett Smith, and I'm the founder of a startup called Guild AI. We do tooling around machine learning — experiment tracking and comparison for deep learning. So I sit in between machine learning, classical data science, mathematics, and engineering, and try to bring tools to the open-source market that help data scientists and machine learning engineers develop models more efficiently. The focus is on measurement, so our tooling actually reaches out and gets into both the systems side and model performance across a bunch of different vectors — an interesting overlap of the data science/math side and the engineering side.

It seems like the BEAM has, for a long time, been all around the edges of that whole machine learning stack, and it's pretty cool to see
that last little domino start to topple over a little bit, and we get to talk about that today. Thanks, Garrett. And what about you, Brian?

Hi, I'm joining from Berkeley, California, for all of those of you in the chat who are saying hello from your local areas. Some of you may have interacted with me as "blt" online or seen me at conferences. I'm particularly obsessed with optimization: how can we build mechanical sympathy with the computers that we program? So I do a lot of work in Erlang, I've done quite a bit of work in systems programming languages recently in my career, and I've been working in data engineering — infrastructure support for data science groups — lately. As this subfield has exploded, supporting it has definitely become my specialization. I'm a little bit the wild-card panelist in that I am not a machine learning engineer, but I am very interested in how to make programs run quickly and run well. So yeah, looking forward to fielding everyone's questions and being a little ignorant about
it. Bruce, how about yourself? What are you up to? Thanks for not forgetting me, Brian. I am at a company called Groxio. We teach the BEAM — mostly OTP and LiveView — but we also knock on the door of a lot of different programming languages. Recently we've been spending a lot of time in the Julia space, where we've been talking about the Julia data science stack and Flux, which has some of the goodies that José and Dashbit have been working on with the Nx library. So we get to talk about that.

I'm really excited to talk to each of you today, because it's a cool story, right? In BEAM land we have been surrounding this whole machine learning area. There's this whole thing about the data wrangling that you have to get right: you have to go get the data, and you have to have the frameworks to take it apart cleanly and sample things. Clearly you have the Internet of Things with Justin, and we're knocking on the doors and riding around the edges with Garrett's company. So it's exciting to see that we get to start talking about that last domino in the middle. The first question I'd like to throw out to the panel is: what is this machine learning, anyway? Maybe start with the data. What does it mean to have data that's ready for a machine learning application, and why is that significant?

A race condition! This is a classic race condition — who has the lock? I can certainly chime in to start here. I mean, it's really kind of the eighty percent problem in machine learning,
or maybe more. You can't really do a whole lot in machine learning without data that's learnable, and that's a whole field of problems in itself. The infrastructure required to do this at scale is considerable. Considering data as a generic problem, there just isn't a single shop doing machine learning that doesn't have a humongous investment in data processing — just moving data around. And then, in the end, you get to the general problem of engineering your data for learning. There's data cleaning, there's feature engineering, there are all sorts of different methods for train/test/validation splits — it's a really, really big topic.

Justin, working in the Internet of Things, one of the on-ramps to establishing data is this whole idea of sensors and how you get them close to things, and you've been involved in launching companies that are all about getting sensors close to important things to measure in the field.
Well, I mean, data is just one aspect of it — it's the influx at the beginning of the problem, right? Once you get your hands on massive amounts of data, then you can start churning through it and doing something with it. When we talk about what machine learning is in general, I'm almost focused directly on its mystique. I started coming into contact with a lot of data scientists, learning how to communicate with them and speak similar languages across disciplines, and I had this feeling where I was just taking what they told me as the end-all-be-all truth — they perform this magic, and there's this mystique. And it was funny, because hardware has the same situation: people who aren't engineers are sort of led to this mystique as well, where they see a piece of hardware and think it's infallible — whoever built it must also carry this kind of mystique.

But you asked about the sensors, and it is very interesting. Coming from the cloud-data-architecture sort of approach that a lot of us as developers are used to, out on the other side, the edge of the internet is where sensors are becoming more ubiquitous in our lives, trying to stream more and more data. And we're sort of caught in this contention, I feel, where we want to be able to surface the data, but surfacing the data is expensive, especially over wireless connections, right? In those cases, what we usually do is take that juicy, juicy data — which could be so useful for producing quality machine learning models — and turn it into a model on the device, so that we can stop ourselves from having to stream so much expensive data. It's a self-fulfilling prophecy of a problem. We've worked in several situations so far — notably, we see a lot in the industrial space, where
the sensor data is coming mostly from machines, warehouse automation, things like that. And as you can imagine, due to the side effects of unfortunate circumstances with COVID, the industry has sort of seen the light: they realize they're behind, and that we need to be able to operate facilities in safer, less labor-intensive ways while keeping effective outputs. So we're knocking on that door now.

So, when I have this data feeding into a model, we kind of iterate on the model for a while. Rather than having a function that is purpose-built for a project, we have a function with lots of knobs and levers that you can configure, right? And that's kind of the "learning" in machine learning. So now we get to this place where Elixir needs to be good at something that it's just not — it needs to have something that it just doesn't have. Those are some of the things that are missing
in machine learning. So enter the announcement that we had — what was it, two weeks ago now? I can't remember; it seems like a long time. What are some of the things in that announcement that caught your attention, Brian?

I think what caught my attention — if I put my infrastructure hat on, when I look at an organization that's trying to do machine learning, there is excitement around the machine learning, but a little less excitement around the data engineering bit. And there is almost no excitement around the data collection, because collection gets handed off to new people. And at each point in between those groups, there's some sort of interface that has to manipulate the thing going in one direction. The problem that I've generally seen in organizations, as they get larger, is that the collection is an I/O problem, and they will be working in languages or technologies that are suitable for I/O-bound problems, and then it gets narrower and narrower and narrower, and at each junction you switch languages, you switch jargon, you switch your ability to talk back and forth with one another. I've definitely worked on projects where we would sit in the data engineering middle, deliver the whole thing, and then find out, when it actually got to the machine learning people, that because they couldn't read our programs it was just trash — very expensive trash. We delivered it very nicely, but it wasn't usable, in part because the tiny sample that we were able to produce as a proof of concept was not indicative of the total set. So what really interests me — the Holy Grail — is: can we have a consistent vocabulary of jargon back and forth across the whole pipeline? And the biggest limitation so far: BEAM stuff works really well for I/O-bound things, but when you've got to do numbers, you stop, right? Can we do that now? That is a very interesting question to me, and I'll be very curious to see what happens on the machine learning side of things.

Which brings us to that first announcement, right? There's this idea
that we have these arrays that we somehow have to make work in a functional language. In machine learning we have tensors, where we basically have to deal with data in bulk — not just one or two dimensions, but sometimes many-dimensional data: very large data sets that have to move very, very efficiently, sometimes on specialized hardware. So, a little bit about the idea of the Nx tensor and the abstractions around it that allow us to do things like, oh, I don't know, matrix multiplication with low-level operations, rather than the typical Enum map-and-reduce approach that you usually see.

Yeah — to Brian's point, if you're working in Elixir today, or I guess two weeks ago, you really didn't have any option at all, and you were forced into a translation scenario. From a systems standpoint, the computation is quite a nightmare, because you are dealing with moving huge amounts of data through a pipe just to compute, and that's going to be a problem in any environment.
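To make the contrast concrete, here is roughly what that bulk, whole-tensor style looks like with the Nx library, as opposed to walking lists with Enum. This is a minimal sketch; the Mix.install line assumes Elixir 1.12+ with network access, and pulls whatever Nx version is current.

```elixir
# Pull in Nx so the script is self-contained (assumes network access).
Mix.install([:nx])

# Two 2x2 tensors; Nx stores them as packed binaries, not linked lists.
a = Nx.tensor([[1, 2], [3, 4]])
b = Nx.tensor([[5, 6], [7, 8]])

# Whole-tensor operations instead of element-by-element Enum code.
c = Nx.dot(a, b)  # matrix multiplication
d = Nx.add(a, b)  # element-wise addition

IO.inspect(Nx.to_flat_list(c))  # [19, 22, 43, 50]
IO.inspect(Nx.to_flat_list(d))  # [6, 8, 10, 12]
```

The point is less the arithmetic than the shape of the API: operations are expressed over the whole tensor, which is what lets a backend hand them to optimized native code or an accelerator in one go.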
And if you happen to be using Elixir — sitting, of course, on top of the distributed powerhouse of the Erlang BEAM ecosystem — there is a tremendous lever there to do distributed compute. If you look at the classical environments for doing this kind of work, it's C++, it's Python, and there are a lot of systems that are being used operationally but where people have very different comfort levels — a big problem. I think that addressing compute — getting on a track to address compute and to build out distributed computation libraries, similar to what you would see in the Python ecosystem — has huge potential. It's a game-changer for organizations that want the best of both worlds: robust, systems-oriented application development without the real technical problem of getting data through these kind of convoluted pipelines in order to operate on it. I think the
general platform of Elixir with this extension could potentially be extremely compelling.

Justin, go ahead. Oh, I was going to follow on from what Garrett was saying, regarding the idea of handling I/O-bound problems really well. The BEAM is really good at that, and it's exciting to have this venture into processing numbers where you don't have to go outside for the numeric portion of it. I've always seen the BEAM as exceptional at orchestration and transformation pipelines — the ability to flow information through and react to failure conditions thanks to the process model, those kinds of things. We love and appreciate its use for embedded state-machine designs. And we talked about machine learning as this giant umbrella, with all these different alleyways that you have to be able to go down — all these different subset problems that make the entire system a whole. Having that effective control plane means that we can efficiently move data through the pipeline, but we've been saying we have to reach outside to some other language to compute on the hardware itself, which holds massive amounts of data. These pipelines are literally trying to move it through the processor, through memory, through fast-access areas, so that we can be most efficient at these kinds of operations. And what's also very interesting is that we were given the ability to process these kinds of workloads where external memory and hardware acceleration can exist, yet we can still see and control the pipelines — we got that through this backends-and-compilers mechanism. EXLA, built on the XLA compiler, is sort of the first of potentially many backends that we could expose through this mechanism, because it's a building block for giving us a vector into the outside world in an accelerated way — not just for numbers or tensors.

Yeah, that sounds like it really starts to tie together the architecture of the BEAM, the architecture of the Elixir language, and the architecture of Nx, to start to really move the ball forward beyond where we are now — onto the GPUs on the desktop, or the purpose-built chips in the cloud or in the Internet of Things. And that's a really cool thing. So there's one more
announcement I don't want to skip before we open up questions to the panel. Just to review where we've come from: we have this data; we've got to grab it and prepare it, and then we feed it into this model, right? And the model is not a purpose-built thing that computes, you know, the color of the moon or whatever it is you're computing — it is a model with all these knobs and levers, a general-purpose function that we feed to a training algorithm. And then there are three steps: we have the training, then we evaluate the loss, and then, based on that loss, we make an optimization to those knobs and levers, right? And for the optimization, or the evaluation step, what José announced was that we could take a loss function and translate it to, basically, its rate of change — differentials, derivatives — that could basically give us the trajectory of how bad the model is getting and how fast, and then we can act on that, right? So why was this such a big announcement, this autograd, or auto-diff? Garrett, take that if you care to.

Yeah, I think it's part of the bigger picture, which is the ability to do optimization — important computation on algorithms. In this case it's part of gradient descent, which is part of almost every standard learning algorithm. Really, differentiation of any kind is inherent to many, most machine learning methods, and
that fills out the story. I do think, though, that we're talking a lot about machine learning, and I've been thinking about it a little bit in terms of generalized compute. There is a really, really large ecosystem of existing models, algorithms, frameworks, et cetera, in other languages, and it's not clear to me that just because you can do it in Elixir, everyone is going to start doing it in Elixir. But being able to do any kind of computation efficiently with large data sets gives this community the ability to start building out frameworks, libraries, and other things. I think having optimized operations, the way they've been exposed currently, is a good example of many things to come, so that, with this as a platform and an architecture, those who are building libraries can go ahead and implement these optimizations and machine learning methods. But I look at it more as filling out a kind of template: this is where the stack is going to be. The approach the team is taking, I think, is a good one, and I see this as early-stage, almost a proof of concept. I'm really interested to see where the community takes this and how it evolves.

Yeah — to sort of add on to that, I've been spending a lot of time reading papers about how to structure data on disk if you can assume the disks are very fast at random access, and a recent trend has been learned index
structures. Basically, you start to inject very simplistic neural nets or other machine learning approaches into these structures to build out your index, and in some cases they get just absolutely outrageous speedups for some workloads. Now, if your language can't express those, you're going to get left behind — you've got to have the pieces in your base language to be able to express these types of computation. That just popped into my head, Garrett, while you were talking. Absolutely — these are just the pieces that we're probably going to need in languages moving forward, as we just start to assume that they're there. There couldn't be a better example of the type of thing that doesn't pop up as a traditional machine learning application. In this particular case, you can just forget about the big AI story and look at it as: hey, what can I do now? And Brian's example is really great. I'm actually surprised to see how long it's taken computer science in general to start using these things, and a platform like this, within a distributed environment — I think it's
going to be a really great laboratory to start building some of this stuff out. There's a great comment in the chat that I want to read and get some comments from the panel: "I think the BEAM is special because we're also used to thinking about data as not something you can move from place to place for free, and every other ML framework I have seen has not had the mindset to think about that important problem, which is truly important for ML." That's interesting.
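That mindset is visible in Nx's own API: a tensor lives on an explicit backend, and moving its data is an explicit call rather than something hidden. A small sketch — the commented EXLA.Backend line is a hypothetical showing the accelerator path and requires the separate :exla package:

```elixir
Mix.install([:nx])

t = Nx.tensor([1.0, 2.0, 3.0])

# Moving data between backends is an explicit, visible step in Nx.
# With an accelerator backend installed you would write something like:
#   Nx.backend_transfer(t, EXLA.Backend)  # hypothetical: needs :exla
# Here we transfer to the default pure-Elixir binary backend.
moved = Nx.backend_transfer(t, Nx.BinaryBackend)

IO.inspect(Nx.to_flat_list(moved))  # [1.0, 2.0, 3.0]
```

Because the transfer is in the source code, the cost of moving data is something you see and budget for, rather than something a framework does for you silently.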
I read that before, and it struck me about that answer, because we've been saying these things are becoming more apparent as table stakes for future programming languages. It's also become clear to me that ML frameworks, and a lot of the mindset, mainly focus on the training portion of the problem, which is very expensive and difficult; therefore a lot of the tooling and data movement is optimized for that portion of things. But when you think about where a model is going to be used, you need to be able to perform optimizations to deliver it to that location, and you need to be able to continue to iterate on your designs and test for those locations as well.

As for the usefulness of these kinds of applications: the practicality of even just a subset of machine learning — as far as neural networks are concerned — can really be useful in providing new types of sensory input. We see this in industrial IoT designs for factories. You can clearly and easily create electrical, logical sensors for common, easily testable things that give you logical, binary feedback — the kind of feedback that we as programmers are already used to programming against, building logical structures to handle and react to. But there's always this guy who can look at a machine and say, "that's probably about to fail — I've seen this stuff so many times, I've seen this before." He himself is like this irreplaceable sensor. You can't really model him in some simple way, because his predictions are based on probabilities, right? And that's what we're given now with this new addition. That's why it's so important for languages and frameworks to embrace this as the new normal: no longer are we bound to just solving problems with boolean logic — we can now intermix it with probable output, probability. That's the machine learning aspect that is really interesting to me, because it's this new brush that I can paint with. Yeah, that's it.
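The knobs-levers-and-loss loop described a moment ago can be sketched with Nx's autodiff. This is a hedged illustration rather than the announced demo: the toy loss (w - 3)^2 and the learning rate are made up for the example, while grad/2 is Nx's documented differentiation primitive inside defn.

```elixir
Mix.install([:nx])

defmodule Autodiff do
  import Nx.Defn

  # d/dx sin(x), computed automatically rather than by hand.
  defn grad_sin(x), do: grad(x, &Nx.sin/1)

  # One gradient-descent step on the toy loss (w - 3)^2: nudge the
  # "knob" w against the gradient to shrink the loss.
  defn step(w, lr) do
    g = grad(w, fn w -> Nx.pow(Nx.subtract(w, 3), 2) end)
    Nx.subtract(w, Nx.multiply(lr, g))
  end
end

# cos(0) = 1, recovered without writing the derivative ourselves.
IO.inspect(Nx.to_number(Autodiff.grad_sin(Nx.tensor(0.0))))  # 1.0
```

Iterating step/2 walks w toward 3; that loop, scaled up to millions of knobs, is the training story in miniature.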
That's an excellent take, and I was thinking very much the same thing when we invited you — your work goes hand-in-hand with this. There's another great comment in the chat: machine learning is a huge numerical processing field, and he's curious whether Nx and XLA will become a new standard — or at least the basis of a standard — for numerical data processing, not limited only to machine learning (which was just the point that you made earlier, Garrett), but also for other interesting processing problems such as digital signal processing and almost anything related to statistics or math.

Yep. So, XLA gives you access to the ecosystem of accelerators. It's a profoundly important technology, so a language taking advantage of it is really a big deal. So yes, all of that — and I think that's actually the more likely trajectory; it's what I suspect off the bat. There is a certain amount of momentum that exists within machine learning at large, and keep in mind that machine learning is driven by data scientists, and data scientists are using Python primarily, some other languages. To tell them, "hey, check out this new language" — it's not going to happen. But people who are building systems and solving these difficult problems are, I think, going to be driven to look at the BEAM and its architecture, and the ability to basically ride XLA with it, which is really, really sweet.
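"Riding XLA" concretely means routing a numerical function written with defn through a pluggable compiler. A sketch under assumptions: the softmax function is illustrative, and the commented default_options line is the mechanism for switching to the XLA-backed compiler, which requires the separate :exla package built for your hardware.

```elixir
Mix.install([:nx])

defmodule Softmax do
  import Nx.Defn

  # Numerical code written once in Elixir; defn restricts it to the
  # Nx subset so a backend compiler can take the whole function.
  defn softmax(t) do
    e = Nx.exp(t)
    Nx.divide(e, Nx.sum(e))
  end
end

# With :exla installed, the same function is compiled through Google's
# XLA (CPU/GPU/TPU) instead of the pure-Elixir evaluator:
#   Nx.Defn.default_options(compiler: EXLA)  # requires the :exla package
probs = Softmax.softmax(Nx.tensor([1.0, 2.0, 3.0]))
IO.inspect(Nx.to_number(Nx.sum(probs)))  # sums to ~1.0
```

The function body never mentions the accelerator; swapping the compiler option is the whole migration, which is the composability being praised here.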
There was a good question from Jim: he's asking which companies on the panel are actually using machine learning. So, I just recently switched companies, and I have promised the community manager at Datadog that I won't say much there, but I can talk about my previous company: I worked for Goodwater Capital, which is a small venture capital firm whose main differentiator is that it does venture capital through machine learning — that's just how it operates. Justin and Garrett, rather than answer this question directly, just give us an interesting problem that has caught your interest as you look at these announcements.

I've found it's become more apparent that our clients for machine learning services are not asking for them, but don't know that they need them. One specific one was an intrusion detection system for network traffic that would monitor your network, with the problem itself sitting on-premises in the application. From what we've seen, there's a common misconception that machine learning is all about neural networks and very specific subsets, but as mentioned, this is an opportunity for us to accelerate any kind of statistics or numerics — there's a lot that is simply compute-intensive. What about you?

I mean, there are a lot of applications of machine learning that are under-exploited because of the engineering difficulties of doing real-time
compute on massive data streams, in real time or near real time — something like a security application with anomaly detection, where you're processing streams of data in real time, and maybe retraining models in very soft real time or near real time in that application. That wouldn't necessarily require any fancy mathematics, but it would require a pretty careful balance of distributed systems development and compute, and as has been discussed, that's difficult. I wouldn't say that there aren't frameworks doing this, but I would say that anyone doing machine learning at that scale is dealing with the problem of moving data around, whether from main RAM to GPU memory, or CPU memory to a pipeline. That's a big problem that has a lot of people working on it — just moving data around networks — so people who are doing it are focused on that. The one thing I think is also under-exploited, or under-addressed, is the problem of reliability. Of course the system needs to work, for any given definition of "work," and the technologies out there are really scary in terms of that topic. I would love to see the sensibilities of the BEAM community brought to bear in this area, because I never hear it talked about — this is the one thing I would say is not talked about in this problem domain: building reliable systems, systems that work well under duress.

Let me tell an application story I heard once. I was at a conference in 2019, and there was another speaker there — she was from the Planetary Institute. We had a
fascinating conversation on the side stage. One point was about the applications she's investigating: the Planetary Institute is investigating machine learning applications for an upcoming mission, where they were trying to map an asteroid to identify a landing site. (Note that Perseverance, which recently landed on Mars, was instrumented with a very similar technique in its landing systems as well.) On board, it captured image data coming from the underside of the rover as it was descending towards the surface, and it was able to find and identify a site by performing some machine learning — models for understanding and measuring, visually, the size of obstacles that existed on the ground — to find the place that would be the smoothest. Basically, the problem they were suffering from was this: their goal was to build a model that could exercise the actual descent of the spacecraft into the safest area on the asteroid, and they needed to roll that all the way back to the data processing portion of it. That meant they first had to build an engaging experience for a community — people in the community coming in, being given photographs, and marking lines in the photographs, measures that would effectively mark the edges of rocks visually — because this was the labeled data necessary to feed these machine learning training systems the pictures they needed to make proper predictions. You have to give it positives in the data set so it can understand what it's trying to learn. So that was just a fun conversation. I don't know about the others here, but one of my fascinations is with space, and it would be wonderful if we could have Elixir driving processing systems like that.

So, Jim has another question. He asks: does Nx have numerical libraries, or is work being done to develop standard numerical libraries,
especially things like matrix math and so forth? I know that there's some early work underway — who wants to take a shot at that one? I'm not qualified, but from what I've seen it's very early, and the goal is to develop things like NumPy; they've cited NumPy and JAX as the architectural blueprints to follow. So I would expect something along the lines of NumPy, which would be driven centrally by a dataframe type of implementation, and then all sorts of compute and query capabilities on top of that. That would be my guess.

Yeah, I agree — we're really early on that front. I'm very interested in seeing where it goes, because I'm not going to say that NumPy got it right as far as layering and implementing other libraries. The part that concerns me in this realm, especially, is hiding what's occurring under the hood, because we've talked about layering more and more libraries on top of things to make it easier for more and more programmers to have access to, and maybe at a higher level an understanding of, these things — but it's easy to shoot yourself in the foot when it comes to that. I hope that we can take this opportunity to be a little bit different, and maybe question some of the structures of how many abstractions get layered on top.

Yes — I think there were a couple of talks that were really interesting, from the very early days of Elixir, right? José gave this talk about what Elixir is about, and he was talking about the abstractions that we
were building on Taco, top afraid. And then I'm probably three or four years ago. Not sure if it was that long but he gave this talk that Elixir he's done and basically not to say that Elixir has done but the kind of signal that the development open the flood gates, write the development you step in the water, the water spine. Now, we're going to move the Innovation to their libraries around and it kind of also strikes me what happened with with plug? It's
essentially the whole move was to establish a tool kit for building web servers. So I think it's It's great strategy. It's early strategy, and I'm not sure where things were going to go. Putting in the part. That's very good composability. That's what I love about. It's like, it's so easy to be able to just compose things in a way that makes sense, and such as an X and O end soarrc. There's a deep focus on these different subsections on what they're good at and that the composability between all of them, really beautiful. Yeah, yours, there's another one for you. Justin. It's
"while getting fat paychecks from solving real problems in data science jobs is great" — not that we don't all agree on that — "I was wondering when we can get speech recognition for Nerves, for my Nerves project, so I can get on-board voice control for my Raspberry Pi. Bonus points for voice synthesis." Justin? Obviously the best problems are the ones where you can speak to a device and have it synthesize a voice back to you. This shines a light on a tough part of this problem that I deeply feel, which is exercising some of this stuff — performing inference on models on edge devices like Raspberry Pis. There are not a lot of resources on these devices when we look at what's available on the market. I mean, the Raspberry Pi has become quite powerful, but that's mainly because the market the mobile phones have been dictating has become a lot more generous with the CPU, and the GPU itself is getting better. But there are a lot of other devices out there that are more of a GPU-centric offering. Nvidia, for example, has a series of devices called the Jetson series, and it's effectively an ARM64 processor strapped to an Nvidia graphics card, which you need to be able to perform this kind of stuff. A lot of the community is caught up in building and training very large models on very large datasets, because they want them to be precise, and they want to measure them so that they perform very well in lab conditions — in the cloud, where you have massive farms able to operate on them. That's fine, but when you have this tiny little device — we've seen Python-based facial recognition on edge devices like Raspberry Pis operating under heavy load at maybe one inference per second. You're not really going to be doing very high-end video pipeline processing with precision in those locations. And so we need to be able to more properly leverage all of our devices so we can be very efficient with them. We equally need to clean up our mess when it comes to the inefficiencies, or the mess that we've created, with these data
models. And that also produces the need — this sort of circular thing — to be able to accelerate the training aspects so that we can very quickly test. Elixir made testing a first-class citizen; it kind of broke free from the model that held for a very long time in other languages, where testing would take a long time and you had to decide how to spend it. So the market isn't quite ready yet, because what Nvidia is offering is effectively — it's great, it's a standard, it runs close to what desktop machines run — but it runs pretty hot, and I believe there are newer pieces of hardware on the horizon that are going to give us the ability to accelerate these kinds of things, but in a way that might be a little off the beaten path. It's becoming clear that we're not just going to be targeting graphics processors with an Nvidia back end or an AMD back end. We're seeing a lot of these tensor-processing-unit ASICs coming out — specialized. Each manufacturer is going to have a different way to get the data into the processor so that it can run efficiently, which is also what makes it really important for us to create these robust and steady pipelines, with confidence in deploying to these devices — building confidence that what we trained in the data centers will actually be able to operate efficiently on the target. Yeah, I think we're getting pretty close to time to wrap up, and I'd love to take one more lap around the room and talk a little bit about frontiers
for the BEAM. Where are we going? What's the low-hanging fruit? Who wants to get started with this one? My Zoom has been crashing, so I actually missed it all — would you like to get started, Brian? Or do you want to take a pass on this one? Justin, or Garrett, do you want to take a crack at it? I mean, I was going to ask Brian what he thought. In the context of this discussion, I think that, before we get into frontiers, I'd like to see what the problem is. I think it's going to be that the expertise we have in the BEAM community may not be driving some of the machine learning, you know, aspirational goals. So are we going to get a NumPy equivalent — are we going to get the sort of equivalent of what exists in the Python ecosystem, in Elixir and Nx? That will be very interesting. It's a fantastic amount of work, and it will take a lot of people with a lot of experience, a lot of skill. I think what will be interesting to see, along the lines of what Brian mentioned, is a very tactical application of this to solve problems — maybe start to use the word "compute" in driving these things, and back off of ML a little bit. That will be something that opens up a lot of very interesting applications that absolutely cannot be built without this. So I think that could be a frontier, but it remains to be seen what's next in terms of applications that will be built on the near-term
libraries that come out of this. Yes, I definitely agree with the notion that there's an entire class of problems that we don't really solve today because we don't have the basic tool set to solve them. The learned indexes that I've been reading about are intentionally extremely simplistic, both so that they can be implemented in languages like C that, you know, are not really amenable to this type of work, and also to get people who are not traditionally interested in these types of algorithms to realize that they can work for non-traditional applications. I think the other thing that strikes me as a frontier — I think about the classic cell tower deployments of Erlang, where it just keeps ticking over, and if one of the pair of computers dies, it's in the middle of nowhere. Computing at the edge is going to be the big theme. Processors are not really getting faster; we are getting more of them. Networks are not really getting faster — although we do have fiber coming online more and more — but there is fundamentally a limitation to how much data you can ship to a central place to have it processed. So then the question becomes: how do you start to distribute that computation? And I think the BEAM is sort of ideally suited for edge computation, where that edge can't ever go down, or has to go down in known, predictable ways. So those are two future avenues that I think are very interesting to me.
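The data-shipping limitation described here can be sketched in a few lines. This is plain illustrative Python (the function and variable names are made up for the example, not from any library): each edge node reduces its own samples locally and ships only a tiny summary upstream, instead of streaming every raw sample to a central place.

```python
# Illustrative sketch: distributing computation to the edge instead of
# shipping raw data to a central server. All names here are hypothetical.

def central_mean(nodes):
    # Naive approach: every node ships all raw samples upstream.
    # Network cost grows with the total number of samples.
    all_samples = [x for node in nodes for x in node]
    return sum(all_samples) / len(all_samples)

def edge_mean(nodes):
    # Edge approach: each node reduces locally and ships only (sum, count).
    # Network cost grows with the number of nodes, not the number of samples.
    summaries = [(sum(node), len(node)) for node in nodes]
    total = sum(s for s, _ in summaries)
    count = sum(c for _, c in summaries)
    return total / count

# Three edge nodes, each holding its own local samples.
nodes = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
assert central_mean(nodes) == edge_mean(nodes)  # same answer, far less traffic
```

The two functions return the same result; the difference is only in how much data crosses the network, which is the trade-off behind pushing computation out to edge nodes.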
Yeah, edge compute, man. I've definitely been obsessed with this over the last year and a half. Bruce, when I saw you at Lone Star, I was actually a little on fire over a talk that you gave several Lone Stars ago regarding distributed programming — sort of the pendulum in technology, much like fashion, coming in and out of style, where we're going to swing in one direction. Now we've seen this shift of the cloud becoming unsustainable, swinging back in the other direction, where we need to distribute the load across the network and get it to different places, because we're literally running out of air — there's no space to cool these kinds of things. So why not distribute them to places where there's more air, right? And so I think this all ties together. Beyond the obvious fascination with all of this, the thing that really fascinates me is seeing a solution to our data traffic problems — having to send lots of data over the network, back and forth. I'd like to see us leveraging aspects of machine learning to digest this into a smaller subset so that we can be less wasteful, and to better program the communication of the data that we have across that distance — make that edge even more robust and reliable. I'm going to say I agree with Justin for the most part, but I'm looking for the deep learning types of things where you train most of the model somewhere else, and then actually put these interfaces into the hands of the edge, whether that's voice, gesture, facial recognition, handwriting recognition. I think all of these become virtual problems and embedded problems, and I think the BEAM is a natural place for such things to live. I don't actually think that you have to have Nx or any of the new libraries everywhere. I just think you have to be able to build a model efficiently from somewhere in the stack and get the
relatively simple, working model out to the edge for free. I think that's what it's all about. So, let's see — let me share screens real quick. Having Zoom difficulties for a second. All right, so I'm going to share screens to wrap up. So, I want to thank our sponsors and invite you to come join us at Code BEAM V. It's coming up, and there's a 20% discount for registration, and we have group tickets available also, so you can come check those out. And a big thanks to the panelists — Garrett, Brian, Justin — and congratulations if you're listening, David. Thank you so much to the audience for being here. We really enjoyed talking to you today. Thanks, Bruce. Thank you, everyone, for coming.