Source: Nature WORLD VIEW, 06 APRIL 2021

https://www.nature.com/articles/d41586-021-00868-5

Kate Crawford

During the pandemic, technology companies have been pitching their emotion-recognition software for monitoring workers and even children remotely. Take, for example, a system named 4 Little Trees. Developed in Hong Kong, the program claims to assess children’s emotions while they do classwork. It maps facial features to sort each pupil’s emotional state into categories such as happiness, sadness, anger, disgust, surprise and fear. It also gauges ‘motivation’ and forecasts grades. Similar tools have been marketed for the surveillance of remote workers. By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.

There is deep scientific disagreement about whether AI can detect emotions. A 2019 review found no reliable evidence that a person’s emotional state can be inferred from their facial movements. “Tech companies may well be asking a question that is fundamentally wrong,” the study concluded (L. F. Barrett et al. Psychol. Sci. Public Interest 20, 1–68; 2019).

And there is growing scientific concern about the use and misuse of these technologies. Last year, Rosalind Picard, who co-founded an artificial intelligence (AI) start-up called Affectiva in Boston and heads the Affective Computing Research Group at the Massachusetts Institute of Technology in Cambridge, said she supports regulation. Scholars have called for mandatory, rigorous auditing of all AI technologies used in hiring, along with public disclosure of the findings. In March, a citizens’ panel convened by the Ada Lovelace Institute in London said that an independent, legal body should oversee the development and implementation of biometric technologies (see go.nature.com/3cejmtk). Such oversight is essential to defend against systems driven by what I call the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.

Countries around the world have regulations to enforce scientific rigour in developing medicines that treat the body. Tools that make claims about our minds should be afforded at least the same protection. For years, scholars have called for federal entities to regulate robotics and facial recognition; that should extend to emotion recognition, too. It is time for national regulatory agencies to guard against unproven applications, especially those targeting children and other vulnerable populations.

