Nov 22, 2019 | Original story from Stanford University
Credit: Photo by Lukas on Unsplash https://unsplash.com/@hauntedeyes
Artificial intelligence has moved into the commercial mainstream thanks to the growing prowess of machine learning algorithms that enable computers to train themselves to do things like drive cars, control robots, or automate decision-making.
But as AI starts handling sensitive tasks, such as helping decide which prisoners get bail, policymakers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.
A team led by researchers at Stanford and the University of Massachusetts Amherst published a paper Nov. 22 in Science suggesting how to provide such assurances. The paper outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.
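The core idea can be illustrated with a toy sketch: instead of simply returning whichever model scores best, the algorithm subjects a candidate model to a statistical safety test and returns it only if a high-confidence upper bound on the unwanted behavior (here, the gap in mean prediction error between two demographic groups) stays below a user-chosen tolerance; otherwise it refuses to deploy a model at all. Everything below is an illustrative assumption, not the paper's actual method: the function names, the tolerance of 0.2, and the crude Hoeffding-style confidence bound are stand-ins for the tighter machinery the researchers use.

```python
import math
import random

def mean(xs):
    return sum(xs) / len(xs)

def upper_bound_on_bias(errors_a, errors_b, delta=0.05):
    """High-confidence upper bound on the absolute gap in mean error
    between two groups. Illustrative only: a Hoeffding-style slack term
    assuming errors lie in [0, 1]; the paper uses tighter bounds."""
    gap = abs(mean(errors_a) - mean(errors_b))
    n = min(len(errors_a), len(errors_b))
    slack = math.sqrt(math.log(2 / delta) / (2 * n))
    return gap + slack

def safety_test(errors_a, errors_b, tolerance=0.2, delta=0.05):
    """Seldonian-style safety test: accept the candidate model only if,
    with probability at least 1 - delta, its bias is below tolerance."""
    if upper_bound_on_bias(errors_a, errors_b, delta) <= tolerance:
        return "accept"
    # Refuse rather than risk returning an unfair model
    return "no solution found"

random.seed(0)
# Hypothetical held-out prediction errors for two demographic groups
group_a = [random.uniform(0.10, 0.20) for _ in range(500)]
group_b = [random.uniform(0.12, 0.22) for _ in range(500)]
print(safety_test(group_a, group_b))
```

The design point the sketch captures is the interface change: the user supplies the precise mathematical criterion (the tolerance and confidence level), and the burden of proving the criterion holds shifts from the user onto the algorithm, which may answer "no solution found" rather than return an unsafe model.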
“We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems,” said Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper.
Read more at: https://www.technologynetworks.com/informatics/news/bots-behaving-badly-can-we-trust-in-an-algorithm-controlled-society-327530