About the talk
Lab to Live: Using AI to improve human outcomes when it matters most
The stakes are at their highest when AI is in charge of decisions that will impact people’s lives — even entire populations. Mind Foundry is helping governments and organisations harness the power of AI systems when the impact is greatest. Join this session to dig into Mind Foundry’s fascinating case study involving the Scottish government, and learn how to anticipate the challenges when developing a national AI strategy.
Albert King - Chief Data Officer - Scottish Government
Alessandra Tosi - Senior Research Scientist - Mind Foundry
Brian Mullins - CEO - Mind Foundry (Moderator)
Brian: I am Brian Mullins, CEO of Mind Foundry. We were founded by Professors Steve Roberts and Mike Osborne from the University of Oxford, and we focus on important applications of AI: high-stakes applications that affect the lives of individuals. I'm joined by two great panelists today. First, my colleague Dr. Alessandra Tosi. She's a Senior Research Scientist at Mind Foundry and an expert in Bayesian optimisation and probabilistic machine learning. She leads a range of government projects in the sectors of energy and sustainability, and throughout her career she's focused on pushing the boundaries of AI, with a passion for the importance of ethics and moral considerations in bringing AI to real-world problems. Welcome, Ally. I'm also joined today by Albert King, who is the Chief Data Officer for the Scottish Government and is responsible for data and digital. Throughout his career, he's worked in a vast range of sectors, including financial services, logistics, healthcare, and the public sector, to support the effective use of data and analytics. Across the Scottish Government, he has helped to foster
innovation and create policy and a strategy for the Scottish Government on the use of AI, which we will talk more about in this panel. Welcome, Albert. So, as we get started, I wanted to begin with the story of how we came to work together. It started through a challenge in the CivTech programme in Scotland on the explainable and ethical use of AI. From the Mind Foundry perspective, we were very excited by the work you were doing because of our focus on explainability throughout a system: not just explainability for experts in the field of AI, but explainability for the people who use the systems, and ultimately for the citizens whose lives are affected by the decisions of those systems. I think it might be helpful for anyone in the audience to also know a little about the specific programme, because when it comes to AI and emerging technology, the ability to work with government is just as important as the project itself in helping young companies to navigate it. Albert,
maybe you could give a brief overview of that programme and its goals, to frame up the conversation. Albert: Sure. Its aim is no less than to drive digital innovation in the public sector, collaboratively, to solve challenges that make people's lives better. The CivTech programme encourages people to come forward with challenges that we don't already have solutions for, and invites people from industry, from academia, from wherever, to come forward with solutions, with new ideas and new ways of thinking about those challenges. That was the spirit in which we approached this question about the explainable and ethical use of AI, and how explainability plays out in real-world cases. And I should say, to finish off, I'm just going to plug CivTech 6, so do find out more. Brian: That's excellent, and we've had a great experience; we definitely encourage anyone to look into it as well. So, based on that programme, we did some interesting work together. But I'm going to start at a high level today: let's talk about what could go wrong.
Alessandra, maybe you could start from the level of the technology: through the interaction of the technologies in a system, what could go wrong? Alessandra: Yes, there are many things that can go wrong, starting at the very first step. When your system is learning from data that were not well collected, you have problems right from the start. But you can have problems even with correct, fair and unbiased data, because things evolve in time. We know that a system deployed in real life will see change, and data get updated, and it might be that the system is no longer suitable for the new conditions, and you might not notice. So it's very important that your system is built keeping in mind that you need monitoring, to make sure that it stays up to date with the changes that happen in the outside world. Brian: Building on that, and really thinking about the work we did, we looked at how getting that right is important
because of the journey a citizen goes on when they interact with government. If bias is introduced at the first point of contact, even in something that looks from the outside like a simple automation task, it can define the next steps for that person. In the systems and services that the government offers and delivers for citizens, that first digital impression is going to have a tremendous impact. And there's another piece, Albert, that extends from there, which is what we refer to as the direction of the intervention: what steps do you take as a result of the system? So when you think about it from the government's perspective, and you think about what could go wrong, you must have a great perspective on what that could lead to, and the responsibility that comes with it. Albert: Very much so, and I think a good example is the claims we have seen of using image processing to detect criminality from faces. As we develop these technologies, we have to take care not to amplify those messages, and to be very clear about the risks. Brian: Yeah, I think that's a great point, and a great story to really focus on how high the stakes can be from a technology standpoint. In a case like that, using facial recognition, looking at an image of someone to detect criminality from a quick impression: is that possible? Is that responsible? We all know the answer is no,
but tell us why. Alessandra: The tools available today are classifiers that try to detect a match between two images, and these tools are not neutral. When we train our algorithms on how to perform this recognition, are we sure we are doing it without bias? That is very unlikely, because we need to provide training data, and those data come from processes where humans have made choices, for example, data about people who have been arrested, as opposed to the population in general. So we inherit the biases that humans had when the data were created. That's a very big problem, and added to it is the fact that it's very difficult to define metrics that evaluate how correct a system is in a way that reflects our actual ethical concerns. And then there is interpretability: why has a specific image been matched, or why has this person been judged in this way? Is there a way to extract interpretable and transparent results from the system? Here, the ethics of the result is more important than the efficiency of the algorithm. Brian: Absolutely, and I think it's an important takeaway that when you're talking about using AI in applications with lower stakes, it's easy to get drawn in by what may be possible and try it out. But when it comes to the lives of individuals that are going to be affected by the decisions, and
the intervention that happens based on that result, you can't afford to get it wrong. It's not like playing a game of chess, where you can afford to lose two billion games; you have to make this decision right the first time, and every time, because of the stakes. And so, when we think about our collaboration and the issues we were able to explore, one of the things that was the backbone of it was the Scottish AI strategy. So I think this is probably a good segue into talking a little bit about what led up to the Scottish AI strategy. What were you using before that? What was the catalyst? And what does the future look like? Albert: Yeah, I mean, I guess we needed to work through and address both the risks and the opportunities of AI, which was having an increasing impact on almost all aspects of life. We couldn't afford to ignore the challenges, or the benefits it could bring, and we wanted to put people at the heart of the way we approach adoption. That was the spirit in which we embarked on developing the strategy, and it reflected a fantastic collective effort: events which brought together working groups and experts, and the collective leadership that we've established around a responsible approach to adoption. Brian: Excellent. I'd actually like to dig into a couple of things you said, particularly about the public dialogue and its value, but I'm going to pause that and come back, because I'd like to have a little bit of fun first. When I looked at the Twitter account for the CogX event, they actually advertised this panel
as "Should AI run a country?", so I'd like to ask that question as well. What do you say, Albert? And then I'll follow up and ask you too, Ally. Albert: Yeah, I mean, I guess AI can do many things, but I'm not sure it can run a country. Often it's really a recipe to automate mundane tasks and to help us make better decisions, and understanding those limits is important in our society. Brian: You know, obviously there is a lot of talk, and maybe hype, or even a kind of religion, about the idea of a superintelligence: an AI model so complex it becomes self-aware. The truth is probably closer to your response, Albert, as to their limitations and understanding how they fit in. There is a pretty compelling case to be made that the superintelligences that exist in the world today are large organisations, like governments, where individual agents are working together on a collective set of goals and objectives, with resources that can be applied on their behalf. Those people working together begin to look like a superintelligence. Alessandra, from a technology standpoint, maybe the other way round: what could we do to make AI help run a government? Alessandra: Yes, I think AI puts power into the hands of those who have the data and the knowledge, and that can concentrate power, even in government. But maybe we can reverse that, so that everyone gets that knowledge and we create positive outcomes out of it. Maybe that's the superintelligence we want: the collective goal of connecting together, putting together all our knowledge and what we believe is the greatest good, so that all the pieces come together and make things better. Brian: That feels right. It feels like, if we take a step back and look at the potential of our organisations and governments to act as a superintelligence, we can add AI in a way that humans can collaborate with the AI, and humans can collaborate with each other, and create a system where they all do what's best. So, thanks for indulging the tangent there, but coming back to creating the Scottish AI strategy: the first of the two questions I have is about the public dialogue. Because if we're talking about an organisation where you have brilliant people of all types, that includes citizens. They're not just passive; they are what makes up a country, and not just a consideration: they have to be involved. Albert, how did you find that process? How did you make
room for the public dialogue? And what did you learn? Albert: Given the fact that we had to do everything remotely, thanks to Covid, it took longer than anticipated, but it gave us the time to engage people in Scotland in a way which is responsible. And I think the thing was that there were consistent messages through both the public dialogue and the public consultation: people recognised the risks, but they were also really optimistic about the potential. I think it demonstrates that good public dialogue can help us to make better policy, and to make better use of AI. The other thing, one of the things we're building into the way we take the work forward, is that continued engagement. And maybe the final thing I'd say is that I think that is the real strength of our approach: we're going to keep engaging with people on the really important issues that AI will be applied to. Brian: And so, when you think about engaging the public, my second question was: how do you handle cultural differences in that? How do you think about the differences in populations and in groups? There's a larger question across nations about the different understandings of social norms, and even of ethics itself, which a lot of very intelligent people for thousands of
years have been trying to understand. How do we think about that? How do we think about the human outcomes, and bias in the system? And maybe the real question is: can we objectively remove bias from an AI system? That's for both of you. Alessandra: I can pick that one up. We might be able to remove some bias, but the question is whether we have the right tools to keep monitoring our system and to be able to say that it stays that way. We need metrics to ask whether our system is moving in the right direction: what is the desired behaviour of the system, what is the correct behaviour, and how far are we willing to stray from the initial objective? Albert: Yeah, I guess I would agree with everything you say. And, you know, when you start to bring in different cultural contexts, we can't assume that we can just lift and shift these technologies from one setting to another. Brian: I think there is an important follow-up to that as well, which is a pretty similar question. A lot of our work together focused at the fundamental level on really understanding all the potential for bias. I was reminded, when you were answering, of the proposed EU AI regulation, and in particular the carve-out for high-quality data: if you were to assess your data to be of high quality,
you actually kind of got a pass in some situations. The follow-up is: is there such a thing as high-quality data? Can we get to high quality? Do we have to wait until we get to high quality? Again, this is for both of you. Alessandra: There is a technical implication as much as there is a logistical one in actually collecting the data. I guess quality is all relative: data that are good enough for one purpose might not be for another, so it might be difficult ever to be satisfied with a single definition of high quality. Albert: What matters is the investment in data standards, and the commitments you make when processing citizens' data, recognising the importance of privacy in these technologies. Most of the time, it's important to explain that the data can be used for the benefit of individuals and of society, because a lack of transparency can be a blocker to data sharing. Brian: I want to leave a few minutes at the end for questions from the audience.
So I'll ask one more question, and that's this: as we have discussed the problems with bias and with the data, and the importance of engaging with the population, Albert, what do you see as the next big challenge you have to get right to do AI in the public sector? What's the next one we're going to have to look at? Albert: I mean, I guess I think possibly the big one is skills and understanding. So not just technical skills, but skills and understanding across the wider workforce and the population, which I think is really important if we are to work in collaboration with AI. Alessandra: I think, for me, it's still probably the fundamentals: the challenges range from transparency to a real understanding of the technologies, so that we solve these problems in a way that works for the population and leads to better outcomes. Brian: Exciting, and it's really an important use of the technology. Let's take a few questions from the audience. The first one, going back to criminality detection: is part of the problem the absence of a solid scientific basis for the relationships that are analysed between facial features and criminal tendencies? I think I'll take that one
first, and then either of you feel free to jump in afterwards. Absolutely, if we didn't make it clear: the first jump was the idea that those things could actually be equated to each other, and that was probably a fundamentally flawed pursuit at the very basic level. Added to that are the more specific technological limitations that Alessandra mentioned, which we should be sceptical about in any application. It just really paints a very clear picture that that's not a direction the technology can go in. We should not be taking actions with the assumption that AI is magic and can do anything. Feel free to jump in if I didn't make it clear enough, or if you want to add any points. Alessandra: I would only add that there is no proven agreement between facial features and the behaviours that are often used to justify the outcomes, as you said at the very beginning. Brian: Yes. We have time for one more: isn't there a good, positive bias we should hold on to, such as things like ethnicity, to ensure they're not overlooked?
Alessandra: I'm not sure I would call that positive bias, but I would say that, for example, when you are curating a dataset, you should definitely balance the representation of groups. Brian: I would add just one thing to that, from a system design standpoint. We see a lot of approaches that involve trying to just remove what people judge to be sensitive categories, things like gender and race. But that information is all over the rest of the data, and you can infer those things back from the other data points that you have, so taking a naive approach of trying to purge it from the data isn't the right approach. You can't remove bias that way. The best thing you can do is understand it, protect it from moving around, and try to understand where it might end up. Simply removing those fields doesn't give you the results you're looking for; you need a more well-orchestrated approach, to understand where the bias is in the system, and then to be able to account for it and control it. That's all we have time for today. Thank you so much for coming. This was a great discussion; really appreciated it.
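The closing point about proxy bias, that deleting a sensitive column does not remove the information it carried because correlated features let you infer it back, can be illustrated with a short sketch. Everything here is hypothetical: the feature names, the dataset, and the 90% correlation are synthetic, and only the Python standard library is used.

```python
# Sketch: removing a sensitive column does not remove the information it carried.
# All names and data here are synthetic and purely illustrative.
import random

random.seed(0)

# Build a synthetic dataset: 'group' is the sensitive attribute we will drop,
# but 'postcode' is strongly correlated with it (as proxies often are).
rows = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # 90% of group A live in postcode district 1, 90% of group B in district 2.
    if group == "A":
        postcode = 1 if random.random() < 0.9 else 2
    else:
        postcode = 2 if random.random() < 0.9 else 1
    income = random.gauss(30000, 5000)
    rows.append({"group": group, "postcode": postcode, "income": income})

# Naive "debiasing": delete the sensitive column entirely.
scrubbed = [{k: v for k, v in r.items() if k != "group"} for r in rows]

# A one-line proxy rule recovers the dropped attribute from what remains.
predicted = ["A" if r["postcode"] == 1 else "B" for r in scrubbed]
accuracy = sum(p == r["group"] for p, r in zip(predicted, rows)) / len(rows)
print(f"recovered the 'removed' attribute with {accuracy:.0%} accuracy")
```

Even though the `group` field is gone from the scrubbed data, a single correlated field recovers it around nine times out of ten, which is why a naive purge of sensitive categories gives a false sense of fairness.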