
An Insightful Look At AI, Algorithms And Their Risks To Society

Coded Bias is Netflix’s deepest dive yet into the state of artificial intelligence, and the issues it confronts are uncomfortably relevant.

As the film highlights, algorithms and AI models are designed to mirror and predict real life as closely as possible; however, commercially available facial recognition programs have a severe algorithmic bias against women and people of color.

“The systems weren’t as familiar with faces like mine (as those of mostly men and lighter-skinned people),” M.I.T. Media Lab researcher Joy Buolamwini says in the film. In her research, Buolamwini found that facial analysis services from Microsoft, IBM, and Amazon all exhibited similar inaccuracies, performing worst on darker-skinned women.

Coded Bias examines how flaws in facial recognition technology can spread discrimination across all spheres of daily life. Other AI applications have been shown to develop biases in everything from who gets hired and what kind of medical treatment someone receives to who goes to college, what credit we are offered, and how long a prison term someone serves.

This is the opposite of a healthy society, one in which technology systems are neutral and serve as a foundation for opportunity and advancement for all people, particularly protected classes.

Coded Bias shines a bright light on the opportunity to improve AI systems, from the data they draw on to how they are developed, deployed, and used. We can shift the frame so that solutions center the people who bear the greatest burden of these harms.

As Director Shalini Kantayya says, “This is a moment where we are all in a lot of pain. This moment is asking us to drop into a deeper place in our humanity to lead.”

So, what is the way forward when we know of countless examples of algorithmic harm? There is rapid innovation within the field of AI/ML, including innovation that will change how AI models are observed and measured to ensure they are fair, ethical, and free of bias.
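One concrete form that kind of measurement can take is an error-rate audit disaggregated by demographic group, the same style of comparison Buolamwini's research applied to commercial systems. The sketch below is illustrative only: the data and group labels are entirely hypothetical, and a real audit would run a deployed model against a labeled benchmark rather than hand-written tuples.

```python
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

def group_error_rates(records):
    """records: list of (group, prediction, label) tuples.

    Returns {group: error_rate}, so any disparity between groups
    is visible at a glance instead of being averaged away.
    """
    by_group = {}
    for group, pred, label in records:
        preds, labels = by_group.setdefault(group, ([], []))
        preds.append(pred)
        labels.append(label)
    return {g: error_rate(p, t) for g, (p, t) in by_group.items()}

# Hypothetical audit results: label 1 = a face that should be detected,
# prediction 1 = the system detected it correctly.
records = [
    ("group A", 1, 1), ("group A", 1, 1),
    ("group A", 1, 1), ("group A", 1, 1),
    ("group B", 1, 1), ("group B", 0, 1),
    ("group B", 0, 1), ("group B", 1, 1),
]
for group, rate in sorted(group_error_rates(records).items()):
    print(f"{group}: {rate:.0%} error rate")
```

An aggregate accuracy number over this data would look acceptable; only the per-group breakdown exposes that one group fails far more often, which is precisely the pattern the film describes.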

We need to start by equipping the people who develop and deploy AI systems with better tools for accountability. Even as the headwinds of society’s existing race-, class- and gender-based inequities blow strong, as an industry, we can make it easier for marginalized populations to benefit from better technology and experience a better world.
