The ethical questions that haunt facial-recognition research

Nature 587, 354–358 (2020). doi: https://doi.org/10.1038/d41586-020-03187-3

18 NOVEMBER 2020

Journals and researchers are under fire for controversial studies using this technology. And a Nature survey reveals that many researchers in this field think there is a problem.

Richard Van Noorden

In September 2019, four researchers wrote to the publisher Wiley to “respectfully ask” that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity [1].

China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern region of Xinjiang — which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces.

As a result, many researchers found it disturbing that academics had tried to build such algorithms — and that a US journal had published a research paper on the topic. And the 2018 study wasn’t the only one: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) had also published peer-reviewed papers that describe using facial recognition to identify Uyghurs and members of other Chinese minority groups. (Nature’s news team is editorially independent of its publisher, Springer Nature.)

The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research. It’s important to denounce controversial uses of the technology, but that’s not enough, ethicists say. Scientists should also acknowledge the morally dubious foundations of much of the academic work in the field — including studies that have collected enormous data sets of images of people’s faces without consent, many of which helped hone commercial or military surveillance algorithms. (A linked feature explores concerns over algorithmic bias in facial-recognition systems.)

