BCG’s chief AI ethics officer says using AI at work can lead to a ‘virtuous cycle’, with workers reporting better job satisfaction and efficiency.

Artificial intelligence may be the hot topic today, but there is still a lot of hesitation about adopting the rapidly changing technology. More than one in three American workers are afraid that artificial intelligence could replace them, and some HR leaders feel anxious about its unknown effects on their roles and employees.
HR Brew recently sat down with Stephen Mills, chief AI ethics officer at Boston Consulting Group, to demystify some of the risks and opportunities associated with artificial intelligence.
This conversation has been edited for length and clarity.
How do you deal with workers’ hesitations and concerns about AI?
Once people start using technology and realize the value it can offer them, they actually start using it more, and there’s a bit of a virtuous cycle. They actually report higher job satisfaction. They feel more efficient. They feel like they are making better decisions.
However, we also think it’s really important to educate people about technology, including what it’s good at, what it’s not good at, and what you shouldn’t use it for. Personally, I sit somewhere in the middle.
Where do you see the biggest risks related to AI?
For us (Boston Consulting Group), we have a whole process: if a use case falls within what we consider a high-risk area, there’s a full review process to ask, “Are we comfortable using AI in this way?”
Let’s say we’re going to build a piece of technology. We systematically map out all the risks, which could be things like: what if it gives an answer that’s actually incorrect, or what if it unintentionally steers users toward a bad decision? And then, as we build the product, we ask what the acceptable level of risk is across these different dimensions.
Some people fear that improperly deployed AI could learn to reinforce biases and create more opportunities for discrimination. How can we ensure there is a diversity of ideas within AI?
We want to evaluate inputs and outputs from a product perspective. Again, it’s about looking at the potential risks, which could be different types of bias, whether that’s bias against a protected group or something like urban versus rural areas. These things can be found in models. We talk a lot about responsible AI by design. It can’t be an afterthought: you have to be conceptualizing the product with these things in mind, designing for them from scratch, and engaging users in a meaningful way.
What are you hearing from HR leaders about their feelings about AI transformation?
A lot of HR leaders are very passionate about productivity and unlocking the value of technology and want to put it in the hands of their employees. The concern is that we want to make sure that people use technology and feel empowered to use it, but do so in a responsible way.
I like to show funny failures, where a system does something silly that makes you laugh, because it’s a really good illustration that these tools aren’t perfect at everything. When people see that, it helps them realize, “I have to think about how I use this.”
We work hard with our employees to make sure they understand that they can’t use AI to do their jobs for them. Use it as a thought partner. Use it to help improve your work, but you need to own your work product at the end of the day.
How can small employers set limits for AI?
For small businesses in particular, it can be as simple as leadership walking into the room and having a discussion about where they feel comfortable using AI. Ultimately, part of this comes down to organizational values, which is why you need to engage the organization’s senior leaders in dialogue. It doesn’t have to be fancy. It can be a literal informal document like, “Here’s how to use it. Here’s how not to use it.”
Do you think AI can impact productivity requirements?
We want to ensure that employees use AI to achieve productivity benefits, but not in a punitive way. It should be more like: if they don’t get those benefits, it’s because we failed. Then we empower them, build their skills, and help them learn how to use the tools.
How are you using artificial intelligence in your own work?
I use it a lot as a thought partner… I might share a slide deck that I’m going to use in a big meeting and say, “What questions would you ask if you were the chief risk officer?” It’s just a way to help me with prep. I also use it to give me counterpoints to the arguments I make. It’s important to still have your own ideas, but using this (AI) as a thought partner, something that challenges your thinking, is very powerful.
This report was originally published by HR Brew.


