AI = your data | Rasa Summit 2021

Rachael Tatman
Senior Developer Advocate at Rasa
Rasa Summit 2021
February 11, 2021, Online, USA

About speaker

Rachael Tatman
Senior Developer Advocate at Rasa

I’m a Data Scientist & have a PhD in Linguistics from the University of Washington. My research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. I’m especially interested in emoji, and how dialects are produced and perceived in computational contexts.

Here are some of the things I’m up to these days:

- Temporarily on hiatus: I stream livecoding most Fridays at 9:00 AM Pacific and live paper reading Wednesdays at 9:00 AM Pacific (it’s more interesting than it sounds, I promise!). You can join me on Twitch.
- I’m one of the organizers of R-Ladies Seattle.
- I moderate the LingStatsChat Slack channel, a place for friendly discussion & help on linguistic and statistical topics. If you’re interested, you can join at this link.


About the talk

New algorithms may get the press, but the real heart of any AI project is data collection and curation. This talk will show you why getting to know your data is so important and provide best practices for improving your data curation and annotation.

Presented by Rasa Senior Developer Advocate, Dr. Rachael Tatman at the 2021 Rasa Summit https://rasa.com/summit/.

Rachael is a Senior Developer Advocate at Rasa. She has a PhD in linguistics and her research focused on the intersection of sociolinguistics and NLP, but these days she's more focused on helping folks build helpful virtual assistants.

- Learn more about Rasa: https://rasa.com

- Rasa documentation: http://rasa.com/docs

- Join the Rasa Community: https://forum.rasa.com

- Twitter: https://twitter.com/Rasa_HQ

- Facebook: https://www.facebook.com/RasaHQ

- Linkedin: https://www.linkedin.com/company/rasa

#conversationalAI #opensource #aichatbot

Transcript

I'm going to talk about how AI is really your data, and what that means when you start to build a virtual assistant. The caveat is that this is focused on the Rasa way of building assistants. So, a question I get kind of a lot is: why do you bother working on chatbots? The subtext there is generally that the person I'm talking to doesn't like conversational AI projects; they've had experiences that weren't a good time. That has led me to think a lot about what makes them not work. Very fundamentally, I think a conversational AI project isn't helpful when it doesn't help people do what they need to do, and do it faster than they could any other way. If you've ever had a delightful chatbot experience, it's because it was very fast, very easy, and very natural, and you were in and out of there and done.

So how do you know what people need to do? There are several approaches. One is making an educated guess. Another is doing research: asking people, talking to them, watching them walk through their work processes. Or you can look at the data and use that to infer what people's needs are. I would say the educated guess approach is the traditional way of building a conversational assistant: you map out every pathway you need, everything somebody would want to do, and have it ready by the time the bot is launched. There are definitely situations where this is a good approach; an example is video games, where you want people to have basically the same conversation every time.

The second approach is UX research. I know there are UX researchers in the audience today; y'all are rockstars, but it's not my specialty. If you can do UX research during your bot's development, design, and production, and while improving it over time, please, please do. What I'm going to talk about today is the third approach: you can look at what people say to figure out what it is they need to do. The biggest benefit of this is that if you're using modern machine learning methods, like we do, you get a really flexible system: people can do things in different orders and phrase things in different ways and it will still work. The caveat is that this doesn't mean all you need is a bunch of user-generated data. I've been using the same cautionary example for a couple of years now, and just this year there was unfortunately another situation where a chatbot trained directly on user conversations ended up having a really negative impact: very soon after launching it started saying some bigoted things, and researchers also found that, with not that much effort, you could get the training data back out of it, things like account names and people's addresses that had been used in the training data. So that's no good. I would suggest a happy medium, where you provide additional structure and organization so that you don't get unexpected negative side effects.

When I say "data" in this context, what do I mean?

The data in conversational AI is almost always text, and the majority of it is in pre-trained models that you will probably not touch while developing your first assistant: pre-trained things like language models and word embeddings, the data that was used to learn how to turn text into numbers. You're probably not going to have to mess with that. What's more relevant is user-generated text, the things users have said or written to a system, and conversations, that is, what order things have happened in in the past. A really good example of this would be customer support logs: maybe somebody's chatting with a customer support agent and, if your privacy policy allows for it (talk to a lawyer; I'm not a lawyer), you could use that as your training data.

I do need to say something about code here, and that is that you don't actually need to change a lot of machine learning code to build a really good assistant. Most assistants will have more or less the same underlying machine learning code; that's why a conversational AI framework like Rasa is possible, and it's something you don't have to be an expert in to build an assistant. The thing you really do have to be an expert in is your data: what your users need to do and how they communicate what they need to do.

So let's get down to the practical advice. What do you do? What steps should you take? I'm going to talk about intents: how to start working with intents if you already have data, how to approach them even if you don't, and then how to check that it all works. Intents: I like to think of an intent as something that a user wants to do and that they're going to communicate to you in a turn of a conversation.

The litmus test when you're adding a new intent, or reviewing your intents afterwards, is: is it a verb, or at least mostly a verb? If you had to come up with a name for the thing somebody wanted to do, would it be a doing-word, or would it be a few pieces of information, or half a sentence, or something hard to distinguish from your other intents? I'll talk in a bit about why those last ones are a little dangerous. If you have user data but don't yet know what your intents are, I'd recommend a modified content analysis for your first pass at figuring them out. Content analysis is a qualitative research technique from corpus studies. You go through your data, or a sample of it, and for each utterance, each turn, you assign it to a content category; if there isn't a category that fits yet, you add a new one. At given intervals, maybe every hour or every hundred data points, go back and check your categories, and do this two or three times, so you know your groups are pretty stable and you can consistently agree on which intent a given utterance should go into.
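
A hypothetical sketch of that first annotation pass, assuming a human reviewer supplies each category label at the prompt (the helper name and workflow are illustrative, not something from the talk):

```python
# Illustrative content-analysis pass: walk a sample of utterances, assign each one
# to a category, and create new categories as they come up. input() stands in for
# the human judgment call that the technique actually relies on.
from collections import defaultdict

def content_analysis(utterances):
    categories = defaultdict(list)
    for utterance in utterances:
        print(f"\nUtterance: {utterance!r}")
        print(f"Existing categories: {sorted(categories)}")
        label = input("Category (existing or new): ").strip()
        categories[label].append(utterance)
    return categories
```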

Now, if you come from more of a machine learning background, you may be wondering whether you can automate this with some sort of unsupervised text clustering. Maybe. I will say it's a hard problem, and evaluating the clusters, seeing whether they're any good, is particularly hard. The goal here is to help people do the things they need to do. And even if you don't have data yet, the next steps still apply.
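
For the automated first pass mentioned above, a minimal sketch with TF-IDF vectors and k-means, assuming scikit-learn is available; the utterances and the choice of three clusters are invented, and judging whether the clusters are any good is still on you:

```python
# Unsupervised first pass at intent discovery: vectorize utterances, cluster them,
# then eyeball each cluster. This suggests candidate intents; it does not decide them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

utterances = [
    "hi there", "good morning", "book me a flight to Boston",
    "I need a train ticket to Leeds", "bye", "see you later",
]

vectors = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, utterance in sorted(zip(labels, utterances)):
    print(label, utterance)
```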

The first thing to focus on, and if you've done a content analysis this still applies, is your most common intents. In a conversation, or a series of conversations, the things people want to do are not going to be uniformly distributed across everything somebody might want to do. Just thinking about real life: someone is probably not going to propose marriage in every single conversation, right? Even though it may be very important when they do. This uneven distribution means you can get a lot of mileage out of starting with the things that happen most often, and I really recommend drawing on the experts in your institution, especially people like support staff; they are trained, knowledgeable experts and they can help you out. A really good example of this from when I was at Kaggle: about 80% of the tickets our customer support team had to deal with concerned a single issue. If we'd had a conversational assistant that could handle that one issue really well, we would have dramatically improved the lives of our support staff.
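
A minimal sketch of surfacing those most common needs once a sample has been labelled (for example via the content analysis above); the labels are invented:

```python
# Count labelled turns and build for the top of the list first.
from collections import Counter

labelled_turns = [
    "reset_password", "greet", "reset_password", "billing_question",
    "reset_password", "goodbye", "reset_password",
]

for category, count in Counter(labelled_turns).most_common(3):
    print(f"{category}: {count}")
```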

Keep the set of intents as small as possible. You do want, at a minimum, an out-of-scope intent, and when it triggers I'd expect you to have something you do, like: "Hey, I noticed you're trying to do something we can't help with right now. Here's the page to go to for more information," or the contact details of the person they need to talk to, or the phone number they need to call.
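
A hypothetical sketch of that graceful hand-off; the reply text, URL, and function are invented for illustration and are not Rasa's API:

```python
# Anything the assistant can't handle yet gets a helpful hand-off instead of a dead end.
OUT_OF_SCOPE_REPLY = (
    "I noticed you're trying to do something I can't help with yet. "
    "You can find more information at https://example.com/help, "
    "or call our support line at the number on that page."
)

def respond(intent, handlers):
    """Route a classified intent to its handler, falling back to the out-of-scope reply."""
    handler = handlers.get(intent)
    return handler() if handler else OUT_OF_SCOPE_REPLY
```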

Why pare down your intents? There is definitely a style of conversational design where you have an intent for everything your user might want to do, and you figure those all out before you launch your assistant. That's not the style we use. With conversation-driven development, which you've probably heard about in the other talks today, you start with what's most popular and what you absolutely need, you have a way to handle everything else through that out-of-scope intent, and then you add intents as people actually need them. That way you're spending your time on what people are really using and trying to do. Why else is it a good idea to have fewer intents? On the human side, the more intents you have, the more documentation it takes: you need to tell everyone working on your assistant what each intent is supposed to cover, and that's much easier to do with ten intents than with ten thousand. It's also much easier to correctly assign an utterance, or to check that it was sorted correctly, into 10 bins than into 10,000 bins. On the machine learning side, the cost of training a classifier scales with the number of classes, so more intents means more training time and cost, particularly if you're using something lightweight.

So how do you pare down intents? The number one anti-pattern I see new developers fall into when they start designing intents is storing information in them. Earlier I mentioned that an intent should be a verb, not a noun phrase with some additional pieces of information glued on; this is what I was talking about. If you have a piece of information you need to store because it will change the course of the conversation, put it in a slot. It doesn't necessarily need to come from an entity, though it might. Slots are what you use to keep track of pieces of information, not intent names. Here I have two intents, booking a train and booking a plane, and if you look at the tokens, the words, you'll notice there's a lot of overlap between the two sets. So I would recommend combining these into a single intent, then picking out the tokens that matter for deciding what to do next as entities and saving those in slots.
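
A small sketch of that token-overlap check, with invented example utterances; high overlap is a hint that two candidate intents are really one intent plus an entity:

```python
# Compare the vocabulary of two candidate intents.
book_train = ["book a train to Berlin", "I want a train ticket for tomorrow"]
book_plane = ["book a plane to Berlin", "I want a plane ticket for tomorrow"]

def tokens(examples):
    return {tok.lower() for text in examples for tok in text.split()}

train_toks, plane_toks = tokens(book_train), tokens(book_plane)
overlap = train_toks & plane_toks
jaccard = len(overlap) / len(train_toks | plane_toks)

print(f"shared tokens: {sorted(overlap)}")
print(f"Jaccard overlap: {jaccard:.2f}")
# High overlap suggests a single book_trip intent with a "vehicle" entity
# ("train" / "plane") kept in a slot, rather than two separate intents.
```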

A very general rule of thumb before you get your assistant in front of users: I would max out at around 10 intents, with around 20 pieces of training data per intent, to start. You can always add more if they're needed. For the training data for an intent, if you have it, use user-generated data: if you did a content analysis, take the groups you used there and turn them into your training data. You can hand-write examples, try to imagine what people might say, or use paraphrasing, and that can be helpful, but user-generated data is better. Chat-based interactions, certainly in the English-speaking communities I'm part of, tend to be more informal, so if all my examples look like formal writing they're probably not a very good sample of what my users are actually going to say. The samples here are from Sara, so this is user-generated data. Also, every utterance in your training data should match a single intent. If a piece of text, like the ones at the bottom here, "good day", "ciao", or "aloha", could be used both to greet and to say goodbye, I would not put it in the intent training data. I would handle it with end-to-end learning, where the training conversation, instead of having an intent for that turn, just has the raw text of what the user said. And if you want to verify that humans can sort your utterances into your intents, have multiple people do it and then use a measure like inter-rater reliability to see how much agreement they have.
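
One common inter-rater measure is Cohen's kappa; a minimal sketch assuming scikit-learn, with two invented annotators labelling the same five utterances:

```python
# Agreement between two annotators on the same utterances, corrected for chance.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["greet", "book_trip", "goodbye", "book_trip", "out_of_scope"]
annotator_b = ["greet", "book_trip", "goodbye", "greet", "out_of_scope"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # closer to 1.0 means the intent labels are reproducible
```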

All right, that was most of the talk on intents. In the older style of conversational design, most of your work goes into figuring out exactly what happens in what order; that's much less true with Rasa and the newer style of conversational AI. So, stories. Stories are little patterns of conversation, training data that helps your assistant decide what to do next. If it sees exactly what it saw in the training data, it does that; if it sees something sort of like what it saw in the training data, it extrapolates from there. So if you're using Rasa you don't need to map out and cross off every single conversational path, and in fact trying to do that will probably just make you unhappy; it's a lot of unnecessary work.
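
Conceptually, a story is just a short alternating pattern of user intents and assistant actions; the sketch below is an illustrative data structure with invented step names, not Rasa's actual YAML story format:

```python
# One "happy path" story for a trip-booking assistant, as a plain list of steps.
book_trip_happy_path = [
    ("user", "greet"),
    ("bot", "utter_greet"),
    ("user", "book_trip"),
    ("bot", "utter_ask_destination"),
    ("user", "inform_destination"),
    ("bot", "utter_confirm_booking"),
]
```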

So where do you get stories? If you have conversational data, start with the patterns you see there, the things you want your assistant to do. When you find a new intent, something where you think, "oh yeah, I see this pattern a lot, users ask for this a lot," go back to the previous steps. For generating new conversational patterns, use interactive learning, where you talk with your model, annotate the conversation, correct any wrong classifications, and then save that as training data. You'll be starting with the most common things, the things you want people to be able to do, and then adding the errors you see, or the ones you think might come up, as soon as possible.
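
Rasa ships this workflow as interactive learning; the snippet below is only a hypothetical illustration of the underlying annotate-and-save loop, not Rasa's API:

```python
# The model proposes an intent, a human confirms or corrects it, and the corrected
# pair is appended to the training data for the next training run.
def review(utterance, predicted_intent, training_data):
    answer = input(f"{utterance!r} -> {predicted_intent}? [enter to accept / type correction] ")
    training_data.append((utterance, answer.strip() or predicted_intent))

training_data = []
review("good morning", "greet", training_data)
print(training_data)
```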

As soon as your assistant is actually usable, get it in front of users. They can be test users, people on your team internally; we tend to send each other links to our bots. But you are never going to be able to guess everything somebody might want to do with your system, so user data is tops. And if you do need more complex conditional logic in your conversations, then you can start looking at rules, but I wouldn't start with rules.

All right: "Rachael, how will I know if I've actually created a usable product?" The number one way is by reviewing user conversations: looking at what people said to your assistant, judging whether each conversation went well for that user, and then adjusting your assistant to improve over time. Then there are tests. Tests, in the Rasa context, are whole conversations that you know should always happen in a certain order, with the intents identified a certain way, and you want those to be one hundred percent correct. Model validation is more from the machine learning world, and it's checking how well your model can guess on data it hasn't seen. If somebody told me they could get something one hundred percent accurate, I'd be a little suspicious, and the same goes for models: you do not want one hundred percent accuracy on your validation, especially if you're starting with something like 20 pieces of training data per intent. Above about 98% I start to suspect you might be overfitting. So: one hundred percent on tests, yes; one hundred percent on validation, probably not.
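
A minimal sketch of model validation on a toy intent dataset, assuming scikit-learn; the utterances are invented, and the point is that a suspiciously perfect score on a tiny validation set is a red flag rather than a victory:

```python
# Hold out some utterances, train on the rest, and check accuracy on the held-out part.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["hi", "hello there", "book a flight", "I need a plane ticket",
         "bye", "see you later", "good morning", "train to Leeds please"]
labels = ["greet", "greet", "book_trip", "book_trip",
          "goodbye", "goodbye", "greet", "book_trip"]

X_train, X_val, y_train, y_val = train_test_split(texts, labels, test_size=0.25, random_state=0)
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(X_train, y_train)
print(f"validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.2f}")
```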

Some takeaways, and I know this has been a little bit of a whirlwind: language data is what makes your assistant work, and working with that language data, sorting it and annotating it, is the core and the bulk of the work you need to do to get your system off the ground. Providing structure for that language data is the first step of building an NLP system. First you need a corpus, and the corpus needs to be organized in a certain way; if there's ever a project where you're not doing that work first, it's because somebody else has already done it for you ahead of time. Start with the fewest possible, most popular things: the most likely conversational flows, the most likely intents. Add things as they're needed, and you'll know they're needed because you're getting your prototype in front of users as soon as possible, getting user data in there, and making sure that it works.
