As Littler's first Chief Data Analytics Officer, I lead the firm's data analytics practice and Big Data strategy. Leveraging an extensive background focused on the intersection of technology, business, and the law, my team of lawyers, analysts, statisticians, and data scientists works with clients and case teams to harness the power of data to build superior legal strategies and improve legal outcomes. The team also advises the firm and its clients on the development, implementation, and use of analytic models and AI-infused systems, and on using data visualization and data storytelling to enable insight, drive understanding, and extract value. Littler's long-held dedication to innovation is exemplified in this team, and the firm is one of the first in the United States to hire a leader dedicated exclusively to data analytics and Big Data into its C-suite. Because of my professional experience, which includes time as an associate and shareholder at Littler, a stint as Senior Associate General Counsel responsible for eDiscovery at the number one company on the Fortune 500 list, and time as General Counsel and Vice President of Strategy at an artificial intelligence start-up, I have a unique understanding of the pressures clients face and their need to leverage technology to extract maximum value from partnerships with outside counsel.
Athena is the CEO and founder of HiredScore, an artificial intelligence HR tech company powering the Fortune 500. HiredScore leverages the power of data and machine learning to drive deep recruitment process efficiencies, enhance talent mobility, and help organizations move to a fully optimized TA function. Prior to founding HiredScore, Athena was an investor in NYC, most recently at Altaris Capital, where she managed and sourced business process automation and healthcare investments in highly regulated data environments. Before that, she was an investment banker at Bank of America Merrill Lynch, focused on public technology and media companies. Athena serves on the board of Community Education Alliance of West Philadelphia, a network of charter schools with more than 800 disadvantaged students, focused on preparing youth for the workforce and a successful life. She founded Belmont Sprouts, which builds urban gardens and healthy eating experiences. She is a member of the World Economic Forum's Global Shapers and the Thousand Network, and a member of the 2018 class of Henry Crown Fellows within the Aspen Global Leadership Network at the Aspen Institute. Athena received a B.S. from Georgetown University's School of Foreign Service.
About the talk
Recruitment teams are increasingly reliant on AI, machine learning and automation for sourcing, CRM, and assessment, yet the use of these technologies has raised a number of ethical questions, from the potential for bias in data sets and algorithms to the potential elimination of human agency in the workplace. A point-counterpoint discussion with the CEO of an AI company and a leading expert in the mitigation of bias in corporate tech products, exploring strategies for identifying and eliminating such prejudices.
Please join me in welcoming Aaron and Athena for a point-counterpoint discussion on the ethics of artificial intelligence and talent acquisition. Great, thank you, Peter, for having us. Hopefully you can hear me. It's always a pleasure to be joined by Aaron Crews, who is one of my favorite thinkers and leaders in legal, data, and applied AI in our space. Thank you for joining. I'm going to kick off with brief introductions and then a framing for the discussion: what is our definition of
ethics for the purpose of this discussion, and then some great point-counterpoint for today. So, Aaron, maybe you want to give a quick intro? Sure. I'm the Chief Data Analytics Officer at Littler, Aaron Crews, and we spend a lot of time in the space where data, legal, and business collide. We have a fairly significant consulting practice around machine learning and artificial intelligence, helping companies bring these tools in as they use them in the course and scope of business.
As mentioned by Peter, I'm the founder of a company called HiredScore. We do artificial intelligence for total talent management solutions for the global Fortune 500. For us, ethics and the framing of this dialogue are critical to what we build, what we don't build, how it gets deployed, and how we advise it not be deployed. So, without further ado, a framing of how we're going to define ethics for the purpose of this discussion: it's really not, from our standpoint, a black-or-white good-versus-bad type of discussion.
It is much more, you know, not "are we using these technologies for murder and harm," clearly crossing lines, but the space in between, what Aaron calls the beige area: the ethical framework that we're applying to the problems at hand and to leveraging automated or augmented technologies to solve them, and really how we think about each ethical decision, the framework we apply, how we determine harms and risks, and how we weigh the risk within that. That's really, for us, the "could we do something" versus "should we do something," and the "should we" is often a spectrum: in some places it's a yes, in some places a no. That's what we find so fascinating. Aaron, as the legal expert in this space, it would be great if you could kick us off with an understanding of: are there laws governing the use of AI in HR? How clear are those laws? Are they national, or state by state? Do they explicitly forbid things or permit things? We would love an update on where we stand legally right now. Because I spent $100,000 going to law school to learn how to say "it depends," I'll give you the lawyer answer, which is: it depends. At a high level, there are some very loose rules at the edges of the road. There are things like the Illinois Biometric Information Privacy Act, and there are things like the GDPR and the CCPA, the California Consumer Privacy Act, and those, generally speaking, are data protection and privacy acts. They govern how you collect data,
how you use data, and how you build these systems. There are laws on the books that govern that, and there are laws on the books that govern how the output of these kinds of tools gets evaluated, and really that is old-school discrimination and disparate impact rules that existed long before AI was a thing. Going all the way back to the 1950s, these have been the rules and the case law. Outside of those two very far edges, there is very little in the way of a legal or regulatory framework that would really govern this, and so inside of those very broad rules of the road, there's a lot of room to move. Yeah, and I think that's a great differentiation between laws that govern how we use data and what data we use, versus what technologies are deployed and what problems they solve. It's a really interesting debate. I'm sure we could spend the next five hours, as you and I often do when we get on calls, talking about the layers of the onion of how explainable
should systems be, and the key "should," because it's not necessarily clearly defined or codified yet: what data should be used, what testing should be applied. I would love your thoughts on how you guide companies, maybe along the spectrum of risk, or by the type of decision, or the input a human has, or the overriding capabilities of a human versus automation. Sure, happy to. It's a really complex topic, right? The level of transparency and explainability
really goes directly to the issue of how you are using it and what you are doing with it, the use case. I say all the time that I don't really need to understand why Netflix keeps recommending that I watch My Little Pony. That could be because I have very small children who watch My Little Pony on my account, or alternatively it could be that Netflix has a real window into my soul that even I don't understand. But at the end of the day, I don't really need to understand how that algorithm is working or why it's recommending My Little Pony. But if an algorithm is touching
things that really reach the top of Maslow's pyramid, the pyramid of needs, people's ability to get a job, people's ability to deal in the financial world, people's ability to buy a home, things that really go to core human need, then, generally speaking, we advise a high level of transparency and explainability. Things that come up in the HR and employment space meet such a high level of human need, and that's generally a space where there's a fair amount of legal construct that governs how things should play out. And so in that space we generally think black boxes are bad.
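The kind of "show your work" explanation described here can be illustrated with a minimal sketch: a transparent, weighted-feature score where each data element's contribution is visible. The feature names and weights below are hypothetical and purely illustrative; real screening models are far more involved.

```python
# Sketch of a transparent scoring explanation: each data element the
# "machine" looked at, and how it contributed to the overall score.
# FEATURE_WEIGHTS is hypothetical, not any vendor's actual model.

FEATURE_WEIGHTS = {
    "years_experience": 0.4,
    "required_skills_matched": 0.5,
    "relevant_certifications": 0.1,
}

def score_candidate(features: dict) -> tuple[float, list[str]]:
    """Return an overall score plus a per-feature explanation."""
    total = 0.0
    explanation = []
    for name, weight in FEATURE_WEIGHTS.items():
        value = features.get(name, 0.0)  # each feature pre-scaled to 0..1
        contribution = weight * value
        total += contribution
        explanation.append(f"{name}: {value:.2f} x weight {weight} = {contribution:.2f}")
    return total, explanation

score, why = score_candidate({
    "years_experience": 0.8,
    "required_skills_matched": 0.6,
    "relevant_certifications": 1.0,
})
print(f"score = {score:.2f}")  # score = 0.72
for line in why:
    print(line)
```

The point is not the arithmetic but the audit trail: every line of `why` is something a person can read back to a rejected candidate, or to a court.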
If somebody came and said, "I applied for this job and I didn't get it. Why didn't I get it?" and you come back with "the computer said no," they're probably going to be kind of annoyed with you, and you're probably going to get sued. As opposed to being able to say, "Well, here are the data elements that the machine was looking at, here's how it scored those elements, and based on that, here's how you compared to the people who were ultimately considered or hired." That's a much more powerful conversation and probably makes a lot more sense. And if you're a buyer of these kinds of tools, that's something you can actually put some stock in and stand behind, versus, "I don't know, the computer said don't hire you, so we didn't." Right? Yeah. And I think a common misconception is that all algorithms are black-box, unexplainable entities that can't be tested and can't be validated. When you go to the core of the ethics debate, if I can't even see what led you to recommend what you recommended, or to fast-track someone versus someone else, we will have
more of a dilemma than if I can go in, inspect under the hood, and see what led to it. So maybe it's even worth noting (I know we work with Littler because you're able to provide just this intersection of data and data science expertise with legal) how you're seeing technologies be released that are more explainable, that can be tested, that can be audited, so it becomes more of a "we see that this is being used in this model; do we want to remove it or keep it, is it carrying a very heavy weight, or is it being used just as a UI feature with a low weight." Maybe talk a little bit more about some of the trends you're seeing there. Yeah. I mean, I think generally speaking, the trend in this space is toward explainable and transparent. And I think one of the things you're hitting on is something we counsel on all the time, which is the importance of the human in the machine. I think people who are in this space a lot often forget that science fiction,
going all the way back to Jules Verne, and Hollywood more recently, has conditioned people to believe that these technologies are both artificial and intelligent, and they're not, right? They are algorithms running complex statistical analysis on data that's fed to them, and there's not a decision being made; it's an output. And we talk a lot about where and how those decisions get made. If it's a decision tree that a human or a group of humans has already mapped out, and it says if this thing
has happened or is true then X should happen, that can be automated, and a human doesn't really need to be in that decision loop, because humans have already decided what should happen. But in a situation like HR or recruiting or anything having to do with this space, what you're talking about is an output that really should be informing human decision-making. A human should be looking at it and asking: does that output make sense? Can we rely on it? Should we use it? How much weight should we give it in the decision-making process? So the trend that we are arguing for, and the way we counsel clients, is the trend toward transparency and toward human decision-makers taking the input and saying, "Yes, we're going to use this in the following way, and we're going to make this decision based on it," and then being able to stand up and defend it against challenges. Yeah, and I think about what we use and how we use it. I see, for example, clients asking us: do we need to change our consent forms if we work with your algorithms? Should we change the consent forms? Is that different by geography?
So, operating in the EU versus operating in California versus operating somewhere else. I think there's a level of making sure the end entity, whether that's a candidate, an employee, or otherwise, is aware of what's being used and how it functions. And then something that we've spent a lot of time on as a company is, after you reach that point, how do you translate zeros and ones into something human-readable, audited, and logged, so that a company, if and when it goes to court, doesn't need to hire teams of data scientists to go through a code base, understand what led to what, figure out whether there was a problem, and then have to defend something that was problematic. I don't know if you've seen technologies that either pitched themselves as being able to do that and then got to court and couldn't "speak human," or clients having to hire teams of data scientists to decipher things and then defend them. You're not seeing a ton of litigation around this yet; we see spurts of it here and there,
but you know, we generally think that there's probably a wave of what I call algorithmic-driven class action litigation coming. People are going to bring these large-scale claims essentially saying, "Your company uses this algorithm and it discriminates against XYZ protected group," and in having to defend that, the ability to have somebody stand up and say, "Here's how I used it, here's what it showed me, here's how I took that information and used it to make a decision," is really, really crucial to the defense of those things. In the bulk of technologies in the space right now, what you're seeing is a pivot toward that. But then you're seeing these kinds of new-ish technologies making their way into the space, technologies that, for instance, purport to read somebody's truthfulness based on facial expression. But look at my Zoom photo, right? Half my face is kind of gray because of a shadow, so I look a bit like a Bond villain. The
limits of these technologies are not well understood, nor are a lot of the risks. Those technologies are largely built in the West, based on feature sets that don't have a great deal of racial or even sex-based diversity in them. And we know that facial recognition tends not to work as well across different racial groups because of how it's been trained. That isn't always made apparent in these technologies, and you have companies that dive in and end up caught by a challenge, having not really thought this out. That's really the worst place to be. Also, because people call this new technology, or because it is new technology, we don't think about how old rules apply. A lot of states have rules that prevent the use of polygraphs in the workplace. Well, I can make a really strong argument that an algorithm designed to read my facial features and score my truthfulness is using my bodily response to a question to score truthfulness, that it operates, at least in substance, like a polygraph, and that it probably would be considered one by a court, which is problematic. And so getting
those things figured out is where the rubber meets the road on this. Let me segue into the spectrum of where and how you apply algorithms under the framework of ethics. Say we're applying AI to surface people who were rejected before, for example, who would otherwise sit in that database, never likely to be seen again, or to surface employees who have not raised their hand and have not been slated for promotion but who meet the qualifications, to actually try to mitigate manager bias, or bias that might exist across the organization, probably subconscious. That's one thing. Versus if you're using algorithms, and you and I have discussed this, to support RIFs or layoffs, or to automate hiring end to end, or to do other things where there isn't human involvement, human oversight, or human validation, where the output means something much more dramatic than "surface me and then call me and see if I'm interested." It goes back to the human-need piece, right? Are you preventing somebody from getting a job, or are you potentially surfacing candidates who haven't been seen before? All of those have potential discriminatory aspects to them if done badly, but some are much more overtly risky than others. Surfacing jobs and saying, "You might want to apply for this because you seem to have the skills," and then you apply, certainly carries some risk, but that's very different from, "Hey, Athena, we picked you for this job as opposed
to the other hundred candidates who applied." And when the other candidates get upset that they weren't picked, having them file lawsuits, having to defend and explain: those are very different risks and areas. Thinking about what the application is and what we're trying to get to, and then working backwards to, okay, how would you do this in a transparent and defensible way: that's really the game. Yep. And I think it goes back to that definition we're applying to ethical frameworks, the risk and the harm, or the lack of risk and lack of harm, and it's a spectrum, not polar opposites, not a metaphysical good versus bad. Right, exactly. We have a question from the audience: "I'm an executive recruiter. In essence, when you ask people to apply for a job, you are taking a risk of getting sued if they don't get the job. Does using AI mitigate that risk?" Aaron, maybe that's a question for you, and I can weigh in too. When you have a technology, if you're using a high-quality, vetted, proven, validated technology that has been in market for a number of years and at a number of companies, you're getting a standardized,
consistent process, and a standardized, consistent process that has been vetted and tested and validated is a good thing. But if you're applying a standardized, consistent process that might use data it shouldn't use, or might not have documented what data is used or excluded, or what frameworks were used to test it, then I think there is not a black-or-white answer from my standpoint. It can, yes, probably mitigate risk, but it could also make things worse if done without the proper construct of testing, data-use choices, and documentation. So it's actually even more complicated than that, which is screwed up. Human process is relatively well accepted to date in the legal field, legal being a world that lives on precedent, meaning what happened before. And so you have a lot of people who are really risk-averse saying, "We're not getting into it. We understand there's potential human bias, but we've dealt with that for a long time." The sheer scale of
what people are trying to do, and the speed at which people are trying to move from a business standpoint, motivates people to move toward these kinds of technologies. Technology, if it's well built, has the ability to let us treat people very equally in a very transparent way: all the people are being assessed in the same way, the same data points are being looked at, similar data points are being scored in the same way, so the weighting is the same. There is a way in which that has a risk-mitigating function, right?
But, and here's the lawyer in me, letting just the machine do that actually carries its own risk, and it's called pattern-and-practice discrimination. If somebody finds a hole in the process that impacts a particular protected group in a negative way, and all you're doing is letting the machine make those decisions, because everybody's being treated equally by the machine, you open yourself up to an entire class of risk known as pattern-and-practice discrimination. And that's where the fusion of the machine and the human is really the best mitigation strategy. The machine allows individuals to be assessed in a uniform way, and then the human decision-maker uses those assessments to decide who the best fit is, based on their experience, their job function, et cetera. Those two things together, well documented, are the best risk mitigation strategy, one that allows you to really leverage these technologies to move the business forward, and it's really powerful. And I think about whether you're going to auto-complete, where, why, how, in every place, and the risk and harm of the auto-complete decision versus the benefit to the business. I can't tell you how many people believe I'm mad at them because auto-complete jumped in and the human was not in the machine, right? Exactly. So our last topic to cover here is the scale and velocity of business needs and business challenges, especially right now, and the detriment to companies of
the lack of human, manual process time, energy, work, even ability. A lot of recruiting orgs or HR orgs might not have access to data for a whole number of reasons, good reasons, but that data could unlock things. For example, when we take our algorithms and merge employee data with learning and training data, employees can be unlocked for new career paths and new opportunities that, if you just used the employee's resume, they would never have been considered for. Because how can I know that you're spending your weekends learning about cloud-based servers, an area of high growth, when you sit in my supply chain office, an area being shut down for any number of reasons, or outsourced, for example? So I would love, Aaron, if you can weigh in on some of the exciting areas where you're seeing people thinking about deploying next-generation technologies, whether because the recent pandemic exposed a huge amount of work or new capability needs, or otherwise. And I think it's good to also talk about the challenges we see in some of these
new applications. I think, after all of the military deployments that have gone on at wide scale in the last twenty or so years, the ability to hire veterans is something people are really pushing, and this kind of technology actually really improves that. Veteran resume translation to the civilian market is something where there's generally a massive gap, and an algorithm can learn what a master chief is in a way that some civilian hiring director can't, so the translation is a super use case. You're seeing those kinds of things come in en masse, and veterans are just one example of being able to unlock skill sets that previously have been hard to identify or hard to translate. I think this whole pandemic is creating a really interesting space, because you have organizations that have had to let thousands of people go (30 million people unemployed in two months is staggering), and thinking about what that's going to look like when the economy recovers is going to be really interesting. How are you going to rehire for those positions? Are you going to rehire for all of those positions? Has your organization been thinking about automating technologies that would reduce how many people you need to bring back, or about leveling up some of your employees to continue the job with automation? I think you're going to see a lot of that, and COVID is going to push automating-technology adoption into overdrive as soon as this thing comes back. And so you may have an employer who laid off 40,000 people, and they realize this is probably
not the only pandemic that they're likely to run into in the next handful of years, because of global travel and an interconnected world and whatnot. Having seen the frailty of an all-human organization, they may bring in automating technology and realize: we don't need 40,000 people back; we need 20,000, now up-leveled by tech, and they can do that same set of jobs. So how do you pick out those 20,000? Maybe you want to look at the best of the 40,000 you laid off and give them first right of refusal for those jobs. How do you rank those? What do you do to tease
them out? I think technologies like the ones we're talking about today can really fast-forward that process, and I think you're going to see a lot of organizations move in that direction post-COVID, because the speed and velocity at which they're going to need to hire is going to be tremendous. I mean, before the pandemic, we had a client we were helping bring automated recruiting technology in because they were trying to hire, I forget exactly, I want to say 500 data scientists in nine weeks. Good luck; that's going to be really hard.
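Ranking and rehiring at that volume is usually paired with the kind of disparate-impact testing discussed earlier. One common rule of thumb is the EEOC "four-fifths rule": flag the process if any group's selection rate falls below 80% of the highest group's rate. A minimal sketch, with hypothetical group labels and counts:

```python
# Sketch of a four-fifths-rule adverse-impact check on selection outcomes.
# The groups and numbers below are made up for illustration.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applied); returns group -> rate."""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict, threshold: float = 0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Impact ratio: each group's rate relative to the most-selected group.
    flags = {g: (r / best) < threshold for g, r in rates.items()}
    return rates, flags

outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
rates, flags = four_fifths_check(outcomes)
# group_b's rate (0.30) is 60% of group_a's (0.50), below the 80% line,
# so group_b gets flagged for human review.
print(flags)
```

A flag here is not a legal conclusion; it is exactly the kind of signal that, per the discussion above, should route the process to a human for documented review.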
But there is no way a human recruiting team could have gotten through that at that volume, and they were relatively successful leveraging these kinds of technologies part of the way through. Yeah. One of the things we're getting asked to do right now is actually pretty fascinating, and its harm-risk profile is quite low when you think about the alternative: employees who have been made aware, or will be made aware, of being at risk of redundancy, at risk of job loss. Every time there's a new job,
automatically going through that list of people and surfacing them if they're relevant. As long as you have consent for how and in what ways that data of theirs is being used, I don't think anyone would say, "I'm losing my job, but no thank you, I don't want a new job," or refuse to opt into something that's going to help with that. To your point, you then get into: how do we think
about companies that aren't using that? If I'm an employee at a company that just RIF'd me and has some outsourced placement firm that I go to, while my friend is at a company that is using technology that automatically considers them for jobs? Or if the auto-consideration system is using data that exists only for the professional workforce, and the blue-collar workforce doesn't have that data digitized and entered into a system, are we deepening a disparity? So I think it does raise some questions of where that's being used,
how that's being used. And again, it goes back to the ethical decisions: if I'm a company and I do that for this round of layoffs and I don't do it for another round of layoffs, is that fair? And again, what's the risk of harm to some of those people versus others from that decision? It's about deciding what the end goal is and then working backward through your process and your technology application, to make sure that you're comfortable with it before you roll it out, so that you can treat everybody uniformly and fairly in the legal sense, right? That tends to be a big piece of the problem. Quite honestly, when problems around these kinds of technologies develop, it's largely because the wings were nailed on after the plane was pushed off the cliff, instead of the whole thing being built and then launched with a certain amount of intentionality and thought ahead of time. Yeah. Well, something we talked about was being even more experimental in using new technologies, which is really exciting, but facial recognition, you know, is a great
question and a great debate. There are a lot of benefits to it, or to employee monitoring, or monitoring of employee productivity when people are remote. I think we're hearing a lot of "how might we monitor our employees to see who's effective and who's not?" Well, a whole new wave of technology, to your point, is going to take off, and then you're going to nail wings to it once it gets in the air, or think about those things later, right? Absolutely. I was listening to a podcast today about a group of researchers who are working on discerning emotion from people's walk, from their gait. They believe they can effectively discern emotion from gait, and as I listened I was just shaking my head, thinking, oh my God, it's going to be an absolute mess if that starts making its way into this space. Yeah. Well, I know we're up on time. We do have another minute or two if there are any questions from the crowd. Seeing no questions from the community at this time: always such a pleasure to be in discussion with you, and we could have spent all day. Maybe just a reiteration for the group here of how exciting I think it is, in this time right now, to use technology to solve hard problems that at a human scale we would be unable to solve, while also keeping in mind that ethical framework of harm, risk, and benefit and how to weigh it, which Aaron and I can't answer for you, but hopefully we've added some color on the debate and a way of thinking about it, if nothing else.
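As a closing illustration, the human-in-the-machine pattern the speakers describe (uniform machine assessment feeding a documented human decision) might be sketched like this; the class names and fields are hypothetical, not any vendor's actual API:

```python
# Sketch: the algorithm assesses everyone uniformly, but a human makes
# and logs the final call, preserving the audit trail discussed above.

from dataclasses import dataclass, field

@dataclass
class Decision:
    candidate: str
    machine_score: float
    hired: bool
    reviewer: str
    rationale: str  # documented human reasoning, for later defense

@dataclass
class AuditedPipeline:
    log: list = field(default_factory=list)

    def assess(self, candidates: dict) -> list:
        # Uniform machine step: the same scoring applied to every candidate.
        return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

    def decide(self, candidate, score, reviewer, hired, rationale):
        # Human step: the output informs, but does not make, the decision.
        d = Decision(candidate, score, hired, reviewer, rationale)
        self.log.append(d)
        return d

pipeline = AuditedPipeline()
ranked = pipeline.assess({"ada": 0.9, "bob": 0.7})
top, top_score = ranked[0]
pipeline.decide(top, top_score, reviewer="hiring_mgr",
                hired=True, rationale="Top score plus strong panel interview")
print(len(pipeline.log))  # 1
```

The log of `Decision` records is the piece both speakers keep returning to: being able to stand up later and say "here's what the machine showed me, and here's how I used it."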