by Kalev Leetaru
Deep learning’s magic-like capabilities mask its very real limitations. To the general public, AI systems stand poised to sweep aside humanity itself, with algorithms replacing human workers and killer robots ready to run amok. Policymakers rush to consider legislation for self-aware machines, the future of capitalism in a job-free world and arms treaties that consider the impact of mechanized precision and speed on wartime strategy. The press runs breathless headlines that alternate between “the machines are taking over” and “the machines are useless.” The truth lies somewhere between these extremes.
Today’s most advanced AI systems are little more than glorified spreadsheets with exciting backstories: machines that can perform captivating parlor tricks but whose reality is far less impressive.
Even when not given the Hollywood treatment, deep learning algorithms are portrayed to the public as superhuman intelligences that can learn about the world faster and better than any human.
Most troubling, deep learning algorithms are often portrayed as a solution to society’s underlying biases, with “unbiased” machines able to render judgment free of emotion or discrimination.
Unfortunately, today’s AI systems lack the ability to acquire generalized world knowledge or to distill their lessons into their own distinctive worldview through which they can contextualize new information. Instead, their entire knowledge of the world itself comes through the small amounts of training data they are fed. Lacking external knowledge that could counter the biases in this training data, the resulting algorithms have little choice but to simply encode the biases of their input data into their own programming.
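The mechanism described above can be made concrete with a deliberately simplified toy sketch (this is illustrative pseudocode-style Python of my own construction, not any real hiring system or production model): a “model” that merely memorizes the most common historical outcome for each group in its training data. If the historical labels are biased against one group, the trained model reproduces that bias faithfully, because it has no outside knowledge with which to question its inputs.

```python
# Toy illustration: a "model" that memorizes the majority historical
# outcome for each (group, qualified) pair. All names and data here are
# hypothetical, invented purely to show how bias in training labels
# passes straight through into the learned behavior.
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (group, qualified, hired) tuples from history."""
    outcomes = defaultdict(Counter)
    for group, qualified, hired in examples:
        outcomes[(group, qualified)][hired] += 1
    # The "learned model" predicts whatever outcome was most common
    # in the historical data for each (group, qualified) pair.
    return {key: counts.most_common(1)[0][0] for key, counts in outcomes.items()}

# Biased history: equally qualified candidates, different outcomes by group.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

model = train(history)
print(model[("A", True)])  # True:  qualified group-A candidates get hired
print(model[("B", True)])  # False: equally qualified group-B candidates do not
```

Real deep learning models are vastly more complex, but the failure mode is the same in kind: with no external worldview to push back against its inputs, the model treats historical prejudice as ground truth.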
Once encoded in software, however, we no longer think of these patterns as biases, laundered as they are through the magic of mathematics. A biased human is considered biased. A biased algorithm is considered to merely be expressing a societal truth through the precision of mathematics encoded in software and executed by infallible machines.
Lacking an understanding of how machines learn, policymakers are unable to craft legislation that might standardize the auditing of algorithms used in critical areas like judicial sentencing, predictive policing and facial recognition.
In the end, as deep learning infuses every corner of modern life, the necessary societal conversations about AI’s role will require an informed press, public and policymaking body that can help shape our increasingly automated future.