Opinion: AI For Good Is Often Bad

11.18.2019

Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.

Designed to mitigate poaching, Intel’s TrailGuard AI still won’t detect poaching’s likely causes: corruption, disregarding the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. PHOTOGRAPH: CAROLYN VAN HOUTEN/THE WASHINGTON POST/GETTY IMAGES

Excerpt:

“While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.”

“Even when a company’s intentions seem coherent, the reality is that for many AI applications, the current state of the art is pretty bad when applied to global populations. Researchers have found that facial recognition software, in particular, is often biased against people of color, especially those who are women. This has led to calls for a global moratorium on facial recognition and cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes. AI for good projects often amount to pilot beta testing with unproven technologies. It’s unacceptable to experiment in the real world on vulnerable people, especially without their meaningful consent. And the AI field has yet to figure out who is culpable when these systems fail and people are hurt as a result.”

To read the full article: https://www.wired.com/story/opinion-ai-for-good-is-often-bad/