How to Deal with Bias in AI Models?

White mannequin wearing a white mask. Photo by Pawel Czerwinski on Unsplash.

By Frank La Vigne, Chairman of the Board, Disruptive Technologists, Inc.

We, as a society, face a unique challenge when it comes to AI. And no, it’s not about preventing machines from taking over the world like in science fiction. Rather, the dilemma is ensuring that AI benefits everyone equally without leaving people at the mercy of cold, soulless algorithms.

As the Chairman of a STEM nonprofit dedicated to creating opportunities for people of all backgrounds and abilities, I hold this issue close to my heart. As an AI practitioner, I also know that no algorithm is perfect. AI doesn’t predict the future; it turns the real world into data and finds patterns. It’s math, not magic.

Never trust your models blindly; always verify them.

Frank La Vigne

Even the best AI models make mistakes.

We’re trained to be skeptical of models claiming 100% accuracy. Why? Because it usually means the model has overfit to its training data, making it less effective in the real world. A model with 99% accuracy sounds great until you realize that still means 10,000 mistakes for every million decisions. Multiply that across a large population, and the errors pile up.

But here’s where the real issue lies: these errors often aren’t random. Bias in the training data can lead to certain groups being disproportionately affected. That’s the elephant in the room when it comes to AI bias.
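To make that concrete, here is a minimal sketch in Python. The population split and the per-group error rates are hypothetical, chosen only to show how a model can report 99% overall accuracy (the same 10,000 mistakes per million) while one group absorbs a disproportionate share of those mistakes:

```python
# Hypothetical illustration: aggregate accuracy can hide group-level disparity.
population = 1_000_000

# Assume 90% of decisions involve group A and 10% involve group B.
group_a = int(population * 0.9)   # 900,000 decisions
group_b = population - group_a    # 100,000 decisions

# Assume the model errs on 0.5% of group A cases but 5.5% of group B cases.
errors_a = int(group_a * 0.005)   # 4,500 mistakes
errors_b = int(group_b * 0.055)   # 5,500 mistakes

overall_accuracy = 1 - (errors_a + errors_b) / population

print(f"Overall accuracy:   {overall_accuracy:.1%}")      # 99.0%
print(f"Group A error rate: {errors_a / group_a:.1%}")    # 0.5%
print(f"Group B error rate: {errors_b / group_b:.1%}")    # 5.5%
```

On paper the model is 99% accurate, yet group B’s error rate is eleven times group A’s. That is exactly the kind of non-random error the aggregate number conceals.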

Learning from Cybersecurity: Assume Bias

This is where we can take a page from the cybersecurity playbook. In cybersecurity, there’s a concept called Zero Trust, which operates under the assumption that a breach has already occurred. It’s proactive, not reactive. The same mindset should apply to AI: assume bias exists and plan accordingly.

In cybersecurity, Zero Trust means continuously verifying users, devices, and systems—trust no one by default. In AI, we need to adopt a similar philosophy: never trust your models blindly, always verify them. Don’t assume your AI is bias-free just because it’s working well today. Models need constant validation and reevaluation to ensure they remain fair and accurate.
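As a rough sketch of what “always verify” can look like in practice (the model interface, the metric, and the threshold below are illustrative assumptions, not a standard), a scheduled job can re-score a deployed model on fresh labeled data and raise an alert when performance drifts:

```python
# Minimal sketch of continuous verification: periodically re-evaluate a
# deployed model on fresh labeled examples and alert on drift.
# `model.predict`, the threshold, and the data source are assumptions.

ACCURACY_FLOOR = 0.97  # hypothetical acceptance threshold

def evaluate(model, examples):
    """Return accuracy of `model` on a list of (features, label) pairs."""
    correct = sum(1 for features, label in examples
                  if model.predict(features) == label)
    return correct / len(examples)

def verify(model, fresh_examples):
    """Raise if the model no longer meets the acceptance threshold."""
    accuracy = evaluate(model, fresh_examples)
    if accuracy < ACCURACY_FLOOR:
        # In production this might page an on-call engineer or trigger
        # a rollback to a previously validated model version.
        raise RuntimeError(
            f"Model accuracy {accuracy:.1%} fell below {ACCURACY_FLOOR:.0%}; "
            "revalidate before continued use."
        )
    return accuracy
```

The same pattern extends beyond raw accuracy: any fairness metric you care about can be recomputed on fresh data and held to a floor.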

Here’s how some of these cybersecurity principles can translate to AI:

  • Assume Bias Exists: Just like security experts assume a breach, AI practitioners should assume bias is lurking in their models. Bias could come from training data, model design, or even the way the model is used. Starting with this mindset forces us to stay vigilant.
  • Continuous Verification: Models shouldn’t just be evaluated once and forgotten. Continuous testing on new data and scenarios ensures that models stay accurate and don’t unintentionally reinforce harmful biases.
  • Micro-Segmentation: In cybersecurity, networks are split into smaller sections to limit breaches. Similarly, segmenting data by demographics or other factors can help detect biases that might not be obvious when looking at aggregate data (a minimal sketch follows this list).
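A minimal sketch of that last idea: group labeled outcomes by a demographic field and compare error rates per segment rather than relying on one aggregate number. The record fields here (`group`, `correct`) are hypothetical:

```python
from collections import defaultdict

def error_rates_by_segment(records):
    """Compute the error rate for each demographic segment."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if not record["correct"]:
            errors[record["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy data: the aggregate error rate looks small (about 1.8%),
# but segment B's error rate is ten times segment A's.
records = (
    [{"group": "A", "correct": True}] * 990 +
    [{"group": "A", "correct": False}] * 10 +
    [{"group": "B", "correct": True}] * 90 +
    [{"group": "B", "correct": False}] * 10
)
print(error_rates_by_segment(records))  # {'A': 0.01, 'B': 0.1}
```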

Why This Mindset Matters

Adopting an “assume bias” approach doesn’t mean we’ll completely eliminate bias in AI, but it helps us stay proactive rather than reactive. If we assume bias is already present, we can build safeguards, conduct regular audits, and keep improving models to reduce harm over time. This approach is especially crucial when AI is used in high-stakes areas like hiring, lending, or criminal justice.

Bias may never be fully eradicated, just as breaches can never be fully prevented. But with the right mindset—one that assumes bias and continually works to address it—we can make AI fairer and more reliable for everyone.

The challenge isn’t just technical; it’s ethical. By adopting this approach, we’re not just building smarter AI; we’re building systems that reflect our commitment to fairness and accountability.

Author

  • Frank La Vigne, Chairman of the Board, Disruptive Technologists

    AI and quantum engineer with a deep passion for using technology to make the world a better place. Published author, podcaster, blogger, and live streamer.
