DEI Insights with Camilla Bruggen: Rethinking DEI in the age of AI

When it comes to AI and diversity, equity and inclusion (DEI), much of the public conversation still focuses on bias in algorithms or the risk of machines making unfair decisions. But as Camilla Bruggen, DEI leader and former Global Head of DEI at WPP, argues, the deeper shift may be in how AI changes the way we think.

In a recent conversation, we spoke with Camilla about how automation is transforming the human side of inclusion, from the quiet dangers of “metacognitive laziness” to the extraordinary potential of AI to make work more accessible and equitable.

Rethinking how we think

“Metacognition,” Camilla explained, “is about how we think, learn and reflect as human beings. We all know that you don’t just read something, take it in, and it’s there, fully understood and perfectly clear. We’ll often read something or be thinking about something and find it hard to refine our point of view or know how to apply the information. It’s not until you step away from your desk or go for a walk that you have a ‘eureka!’ moment, because your brain has had an opportunity to unconsciously digest and reflect on the information.”

But when large language models can generate essays, reports or strategies in seconds, that reflective loop can be short-circuited. “It’s so easy,” she said, “that ideas can start to bounce off your brain rather than sink in.”

The risk isn’t simply intellectual; in the case of DEI, it can rapidly become an organisational issue. In DEI work, critical reflection is everything. If teams accept AI-generated ideas at face value, they risk replacing thinking with templates. “We’re cognitive misers,” Camilla noted. “We like shortcuts. But inclusion and organisational change are never one-size-fits-all.”

When AI flattens nuance

That lack of nuance shows up quickly when you ask AI to draft a DEI strategy. Because most large language models are trained on vast English-language datasets, much of it U.S.-based, they tend to reproduce American frameworks, language and priorities.

The result is a homogenous DEI narrative: the same definitions, examples and slogans repeated everywhere.

“A lot of what AI produces,” she said, “is rooted in a U.S. context with its own history, legislation and culture. What works there won’t necessarily work elsewhere.”

This U.S. bias seeps into the tone and terminology of AI-generated content too: it often sounds polished but not locally relevant. The risk is that strategies feel disconnected from the realities of different markets, sectors and organisational cultures.

There has also been interesting discussion about the ‘decolonisation’ of AI: most English-language datasets ignore input from the global south and much of the rest of the world, which builds a huge amount of bias into AI responses. The AI for Good coalition is doing interesting work in this area.

When AI reinforces poor practice or disproven approaches

The danger isn’t only that AI outputs are generic; it’s that they can perpetuate outdated or ineffective practices.

Because these models are trained on decades of corporate material, they continue to surface interventions that research has already shown to have limited impact.

Camilla pointed to unconscious-bias training as a classic example. “Despite evidence showing it has little effect on behaviour, AI still promotes it as part of an effective DEI strategy,” she said. “It’s been found to give people a false impression of progress and to reduce accountability. It’s about organisations trying to change people rather than structures. What we need to be doing is looking at outcomes that show behavioural change, not whether people have just attended training sessions.”

Similarly, AI often recommends events, panels or themed days that raise visibility. “While they can be great for raising awareness and having good discussions,” Camilla said, “holding a panel on International Women’s Day does very little to shift gender equity within organisations. It can almost be a diversion from actually making impactful change.”

Her concern is that poor practice becomes self-reinforcing: “AI amplifies what’s already out there, and a lot of what’s out there in inclusion simply hasn’t worked.”

Generative AI and representation

If language models risk flattening nuance, image models risk distorting representation.

Camilla’s background in advertising gives her a sharp eye for this: “Women are often under-represented, overly sexualised or unrealistically idealised: blonde, thin, blue-eyed. Even when we know the images are fake, they still shape our perceptions of beauty and power.”

For Camilla, this is where DEI teams can play a broader organisational role. “Inclusion specialists should be in the room when companies use generative AI,” she said. “Representation doesn’t happen by accident; it needs deliberate prompting, guidance and guardrails.”

She pointed to initiatives such as Dove’s Real Beauty Prompt Guidelines as examples of how brands can embed inclusive thinking into creative AI use.

AI as an enabler of equity

Despite the risks, Camilla is optimistic. “AI can be a huge force for equity if we use it consciously,” she said.

From accessibility tools to assistive language models, the technology is already helping more people participate fully in the workplace. “For people who are neurodivergent, especially those with dyslexia, AI reduces anxiety around written work and improves accuracy,” she explained. “It lets people focus on their ideas rather than their typos.”

Accessibility, she added, is where AI has made some of its greatest inclusion gains, from real-time captioning and live translation to visual-description tools that support people with sight loss. “It’s an absolute game-changer for equity,” she said.

But only if businesses know what’s available. Camilla urged DEI and People teams to build closer partnerships with technology providers: “Leaders need to understand what these tools can do and lobby for investment. It’s not just about fairness, it’s about performance and retention.”

Governance, ethics and the future

Looking ahead, Camilla sees the challenge not just in what AI can do, but in how thoughtfully we deploy it. “Think about what you need as an organisation, but also, from a CSR point of view, what’s the social impact and what’s the enhanced customer experience? Then think about quality control when it comes to AI. What I would really urge people to do is maintain critical thinking and make sure they’re bringing subject-matter experts in to do the right thing, not just the easiest thing.”

“Agentic AI (systems that can make autonomous decisions) is already here. But just because we can do something doesn’t mean we should.”

She advocates for clear governance and ethical frameworks (“traffic-light systems of risk,” as she described them) to help organisations evaluate new tools responsibly. “It’s about taking time to reflect,” she said. “Asking: what problem are we actually trying to solve? What’s the social impact? What are the unintended consequences?”

Her closing point brought the conversation full circle:

“AI shouldn’t replace human judgement. It should expand it. The goal isn’t faster thinking; it’s better thinking.”
