Artificial intelligence is a game-changer. There’s no denying that. But it’s important to recognize that AI has the potential to change the game in ways both good and bad.
On one hand, AI can add value to almost any activity in any industry while freeing employees to focus on more meaningful work. On the other, it can legitimize existing prejudice and bias under a veneer of innovation.
Unfortunately, this latter possibility has already been observed.
In her 2016 book Weapons of Math Destruction, Cathy O’Neil highlights how AI systems making biased decisions can destroy people’s lives and livelihoods. One striking example was the use of demographic data in an AI model that determined criminal sentencing. Under the model, Black and Latino males received harsher sentences than individuals of other ethnicities who were convicted of similar offenses. Transparency was nonexistent, and unfair sentences were upheld solely because “the computer said so.”
If you are introducing AI into your enterprise or organization, regardless of whether that AI is homegrown or off the shelf, you have a duty to ensure its decisions are ethical, unbiased, and free of human prejudice.
Doing this properly is hard, but not impossible, particularly with senior executive support.
Leadership can take a variety of actions to ensure that as AI advances and proliferates, it evolves to be free of the biases inherent in the humans who design and build it. Here are three steps to get you started on the journey to fair, ethical AI.
1. Build checks and balances
Creating bias-free AI systems starts well before system analysis and solution design. The very first way to address bias is to build diversity into the team that does the work.
Diversity of thought equals diversity of requirements. This tends to yield systems that have built-in checks and balances. Everything from which algorithms the AI system will use to what testing and auditing procedures will be implemented should be addressed collectively before a single line of code is written.
Even agreeing on the definition of fairness is a dauntingly complex task. That helps explain why prominent AI researchers Andrew Ng and Sharon Zhou devote a full week of lectures to it in their recently launched course on Generative Adversarial Networks.
The course readings highlight at least 10 (and likely many more) ways to define and evaluate fairness. As a result, we can’t rely on intuition to distinguish fair and unfair AI systems. Instead we need a set of clearly defined, verifiable metrics to evaluate the solutions we create.
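One of the simplest such metrics is demographic parity: do different groups receive positive outcomes at similar rates? Below is a minimal sketch in plain Python; the loan-approval data and group labels are hypothetical, and real audits would use a dedicated library and several metrics side by side.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Gap between the highest and lowest positive-outcome rates
    across groups. 0.0 means perfect demographic parity."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in members if o == positive) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan approvals (1 = approved) by a protected attribute.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5 (0.75 vs 0.25)
```

A metric like this only becomes meaningful once the team has agreed, in advance, on which definition of fairness applies and what threshold triggers action.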
At this early stage, consider hiring a statistician with knowledge of the techniques that detect and eliminate bias. Too many projects of all types fail because an expert was not employed from the beginning, and this is no different.
2. No black boxes
AI algorithms and the data they are trained on must both be transparent. This principle is known as explainability, and it allows humans to understand why an algorithm arrived at the solution or decision it did.
Although Stanford’s 2021 AI Index Report named lack of explainability one of the top three risks organizations face when implementing AI, it’s fairly rare to find true explainability in modern, top-performing algorithms and off-the-shelf AI solutions. As such, they become black boxes to end customers. Without supporting documentation, we have no way of knowing if bad, biased data is leading to bad, biased decisions.
Open source development, in which the world gets a chance to both see and participate in AI development and testing, offers one way to validate solution transparency. It encourages diversity of thought by definition. Still, as critics of transparent algorithms correctly point out, the most powerful AIs today are based on deep neural networks, which rely on billions of impossible-to-trace interactions that occur at the neuron level within those nets.
To that end, conscientious, “good citizen” organizations will need to invest in the research and development of new AI-explainability techniques such as LIME and SHAP.
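The core intuition behind model-agnostic explainability tools like LIME and SHAP can be shown with a toy sensitivity probe: perturb each input feature and measure how strongly the model's output reacts. The model, features, and weights below are entirely hypothetical stand-ins, not the actual algorithms those libraries implement.

```python
import random

def black_box_model(features):
    # Hypothetical opaque model: income legitimately matters, age does
    # not, and a zip-code risk score acts as a proxy for bias.
    income, zip_risk, age = features
    return 0.7 * income + 0.3 * zip_risk + 0.0 * age

def sensitivity(model, sample, n_trials=200, seed=0):
    """Average absolute change in the model's output when each feature,
    one at a time, is replaced with random noise."""
    rng = random.Random(seed)
    base = model(sample)
    scores = []
    for i in range(len(sample)):
        total = 0.0
        for _ in range(n_trials):
            perturbed = list(sample)
            perturbed[i] = rng.random()
            total += abs(model(perturbed) - base)
        scores.append(total / n_trials)
    return scores

scores = sensitivity(black_box_model, [0.5, 0.5, 0.5])
print(scores)  # a non-trivial zip_risk score flags a possible bias proxy
```

A probe like this cannot untangle billions of neuron-level interactions, which is exactly why dedicated explainability research remains necessary.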
3. Monitor, audit, retrain, redeploy
Ultimately, the only way to ensure ethical AI is to continuously monitor and audit these solutions and solicit customer feedback on their impact.
Model drift—in which the underlying AI system assumptions change with time and lead to AI that is less relevant and eventually obsolete—is a well-known issue in data science even without accounting for fairness and bias. When this is the case, an AI should be retrained and redeployed and the end-user base informed of that drift and what was done to fix it.
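One widely used drift check is the population stability index (PSI), which compares a feature's distribution at training time against what the deployed model sees now. The sketch below is illustrative: the bin setup, sample data, and rule-of-thumb thresholds are common conventions, not universal standards.

```python
import math
import random

def psi(expected, actual, bins=10, lo=0.0, hi=1.0, eps=1e-6):
    """Population stability index between two samples of one feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 retrain."""
    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        return [(c / len(values)) + eps for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

rng = random.Random(1)
train = [rng.random() for _ in range(5000)]                     # training data
live = [min(rng.random() * 1.3, 0.999) for _ in range(5000)]    # shifted inputs
print(f"PSI vs. itself: {psi(train, train):.4f}")  # near 0: no drift
print(f"PSI vs. live:   {psi(train, live):.4f}")   # elevated: investigate
```

Running a check like this on a schedule, per feature and per model output, turns "monitor and audit" from an aspiration into a concrete pipeline step.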
The same is true if an audit or customer feedback reveals that a system no longer meets the fairness criteria defined in step one. Such models must be taken offline and redeployed only once those requirements are satisfied again.
Constant vigilance is required. I advocate for it not because I don’t believe in AI but because I strongly do. If deployed fairly and consistently, I believe AI can create trillions of dollars of added value across the global economy.
While the three steps above won’t result in entirely fair systems, they’re a start. At the moment, with the entire industry acknowledging that AI is often biased by design, that’s exactly what we need.