About the talk
Jacquard is an ML-powered ambient computing platform that takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose. We'll describe how we have trained and deployed resource-constrained machine learning models that get embedded seamlessly into everyday garments and accessories, like your favorite jacket, backpack, or a pair of shoes that you love to wear.
Okay, we just heard from the TensorFlow Lite team that it's getting even easier to run machine learning directly on devices, and I'm sure this has many of you thinking about what's possible here. Now I'm going to tell you how we're using Jacquard to do exactly this: embed machine learning directly into everyday objects. Before I jump into the details, let me tell you a little bit about the Jacquard platform. Jacquard is a machine-learning-powered ambient computing platform that extends everyday objects with extraordinary abilities.

At the core of the Jacquard platform is the Jacquard Tag. This is a tiny embedded computer that can be seamlessly integrated into everyday objects like your favorite jacket, backpack, or pair of shoes. The Tag features a small embedded ARM processor that allows us to run ML models directly on the Tag, with only sparse gesture or motion predictions being emitted via BLE to your phone when detected. What's interesting is that the Tag has a modular design, where the ML models can either run directly on the Tag as a standalone unit, or on additional low-power compute modules that can be attached along with other sensors, custom LEDs, or haptic motors.

A great example of this is the Levi's trucker jacket that I'm wearing; let me show you how this works. So if you can switch over to the overhead camera: here I can take the Jacquard Tag and dock it to a specifically designed sensor module, which is integrated into the jacket. The Tag talks to an M0 processor that's running on the jacket itself, which in turn reads the conductive sensor lines woven into the jacket. The M0 processor not only reads data from the sensor lines, it also allows us to run ML directly on the jacket. This lets users perform gestures on the jacket to, for example, control music. So I can do a double-tap gesture, and this starts to play some music, or I can use a cover gesture to silence it. Users can also use swipe-in and swipe-out gestures to control their music, drop pins on maps, or whatever they like, depending on the abilities. What's important here is that all the gesture recognition is actually running on the M0 processor. This means that we can run these models at super low power, sending only the detected events to the user's phone via the Jacquard Tag over BLE.
So I'm sure many of you are wondering how we're actually training and deploying these ML models, in this case into a jacket; and by the way, this is a real product that you can go to your Levi's store and buy today. As most of you know, there are three big on-device ML challenges that need to be addressed to enable platforms like Jacquard. First, how do you train high-quality ML models that can fit on memory-constrained devices? Second, assuming we solve problem one and have a TensorFlow model that's small enough to fit within our memory constraints, how do we actually get it running on low-compute embedded devices for real-time inference? And third, even if we solve problems one and two, it's not going to be a great user experience if users have to keep charging their jackets or backpacks every few hours. Models must always be ready to respond to users' actions when needed, while still providing multi-day experiences on a single charge. Specifically for Jacquard, these challenges have meant deploying models as small as 20 kilobytes in the case of the Levi's jacket, and running ML models on low-compute microcontrollers like the Cortex-M0+ that's embedded here in the cuff of the jacket.

To show you how we've addressed these challenges for Jacquard, I'm going to walk you through a specific case study for one of our most recent products; so recent, in fact, that it actually launched yesterday. First I'll describe the product at a high level, and then we'll review how we trained and deployed the ML models that, in this case, fit in your shoe.
The latest Jacquard product is called GMR. This is an exciting new product that we built in collaboration between Google, Adidas, and the EA Sports FIFA Mobile team. With GMR, you can take the same Tag that's inserted in your jacket, insert it into an Adidas insole, and go out into the world and play soccer. You can see here where the Tag inserts, and the ML models in the Tag will be able to detect your kicks, your motion, your sprints, how far you've run, and your top speed; we can even estimate the speed of the ball as you kick it. Then, after you play, the stream of predicted soccer events is synced with your virtual team in the EA FIFA Mobile game, where you're rewarded with points for completing various weekly challenges. This is all powered by our ML algorithms running directly in your shoe as you play. GMR is a great example of where running ML inference on device really pays off, as players will typically leave their phone in the locker room and play for up to 90 minutes with just the Tag in their shoes. Here you really need the ML models to run directly on device, and to be smart enough to turn off when the user is clearly not playing soccer, to help save power.

This figure gives you an idea of just how interesting a machine learning problem this is. Unlike, say, normal running, where we would expect to see a nice, smooth, periodic signal over time, soccer motions are a lot more dynamic. For example, in just eight seconds of data here, you can see the player move from a stationary position on the left, start to run, break into a sprint, kick the ball, and settle back into a jog, all within that eight-second window. For GMR, we needed our ML models to be responsive enough to capture these complex motions and to work across a diverse range of players.

Furthermore, all of this needs to fit within the constraints of the Jacquard Tag. For GMR, we have the following on-device memory constraints: around 80 kilobytes of ROM, which needs to hold not just the model weights but also the required ops, the model graphs, and of course the supporting code required for plumbing everything together so it can be plugged into the Jacquard OS; and around 16 kilobytes of RAM, which is needed to buffer the raw incoming sensor data and also serves as scratch buffers for real-time ML inference.
So how do we train models that can detect kicks, a player's speed and distance, and even estimate the ball's speed within these constraints? Well, the first step is: we don't, at least initially. We train much larger models in the cloud to see just how far we can push the models' performance. In fact, we do this using TFX, which is one of the systems that was shown off earlier today. This helps us map out the problem space and guides what additional data needs to be collected to boost the models' quality. After we start to achieve good model performance without the constraints in the cloud, we then use these learnings to design much smaller models that start to approach the constraints of the firmware. This is also when we start to think about not just whether the models can fit within the low-compute and low-memory constraints, but how they're going to run at low power to support multi-day use cases. For GMR, this led us to design an architecture that consists of not one but four neural networks that all work together coherently. This design is based on the insight that even during an active soccer match, a player only kicks the ball during a small fraction of gameplay. We therefore use much smaller models, tuned for high recall, that first predict whether a potential kick or active motion is present; if not, there's no need to trigger the larger, more precise models in the pipeline.
So how do we actually get our multiple neural networks to run on the Tag? To do this, we built a custom C model exporter. The exporter is a Python tool that maps each model's graph to a set of C ops from a lookup table, and then generates custom C code: a lightweight ops library that can be shared across multiple graphs on the device, plus dedicated C code for each model. This gives us zero dependency overhead, which matters when every byte counts. Here, for example, you can see one of the C ops that would be called by the library. This is a rank-3 transpose operation, which supports multiple I/O types such as int8 or 32-bit floats. So with this, you can see how we take our neural networks and actually get them running on the Jacquard Tag.
I hope that you're inspired by a product like Jacquard, and that this makes you think about what you could do with tools like TF Lite Micro in your own embedded ML applications.