If you haven’t been following tech news, you might have missed the fact that artificial intelligence has been insinuating itself into every corner of society. It’s making decisions for us, it’s taking our jobs, it’s spying on us and it’s getting more and more personal over time. AI is being used to judge art, rate beauty, and predict criminal behavior before it happens. You know, the kind of stuff we would normally reserve for the wisdom, versatility, experience and diplomacy of humans.
Artificial intelligence evangelists claim that AI can be programmed to be an objective judge of quality and human psychology. It will be Spock-like, in other words, avoiding the illogical human errors that plague progress. Except that assumption doesn’t square with the fact that AI implementations are consistently surfacing perspectives that amplify, rather than avoid, our more socially unsavory prejudices. The term “artificial,” in this context, has yet to earn its synonymity with “higher.”
The AI That Judged A Beauty Contest
Science is starting to identify certain markers of facial structure, complexion, build, and hair type that are highly correlated with “attractiveness.” Humans and other animals share a preference for symmetrical faces, for instance. Beauty.AI used those patterns to create the first AI-judged beauty contest: it trained a machine on data from the faces of highly attractive people so that users could submit their own photos and receive an evaluation of their relative beauty.
Unfortunately, the attractive people chosen to represent the ideal of beauty were almost entirely white, so the artificial intelligence came away under the impression that beautiful people can’t have dark skin. When the 44 most attractive participants were announced, nearly all of them had light complexions, and only one was black.
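To see how that happens mechanically, consider the deliberately toy sketch below. This is hypothetical code, not Beauty.AI’s actual model or data: a classifier trained on an “attractive” sample that is almost entirely light-skinned learns to penalize dark skin, even though skin tone had nothing to do with the trait it was supposed to score.

```python
# Hypothetical sketch only -- not Beauty.AI's actual model or data. It shows how
# a "beauty" classifier trained on a skewed sample learns to penalize skin tone
# even though tone had nothing to do with the trait being scored.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Toy features per face: symmetry in [0, 1] and skin tone (0 = light, 1 = dark).
symmetry = rng.uniform(0, 1, n)
tone = rng.integers(0, 2, n)

# In this toy world, "attractiveness" depends only on symmetry (plus noise)...
attractive = (symmetry + rng.normal(0, 0.2, n) > 0.6).astype(int)

# ...but the training set keeps almost none of the dark-skinned attractive faces,
# mirroring a dataset of "ideal" examples that is overwhelmingly white.
keep = (attractive == 0) | (tone == 0) | (rng.uniform(0, 1, n) < 0.05)
X, y = np.column_stack([symmetry, tone])[keep], attractive[keep]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two test faces, identical in every respect except skin tone:
print(model.predict_proba([[0.8, 0], [0.8, 1]])[:, 1])
# The darker face gets a markedly lower "beauty" score -- learned, not observed.
```

The point isn’t the specific numbers; it’s that the bias lives in the training sample, and the model faithfully reproduces it.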
Beauty.AI has since acknowledged this error and has added “ethnicity” as a subcategory of attractiveness. The hundreds of Indian and African submissions that were deemed unattractive by the algorithm’s racist beauty standards got the message though. Again.
Tay, The Microsoft Millennial Chatbot
Even when the bar is as low as a teenage girl on Twitter, AI has a way of lowering it. Microsoft released its “millennial” chatbot Tay to Twitter, where it built an online persona based on the data it received from other Twitter users. Within 24 hours it had devolved into a sexist neo-Nazi, proclaiming in fewer than 140 characters:
“@brightonus33 Hitler was right I hate the jews.” -TayTweets
Once again, human inputs translate into human offenses. Should we be surprised? Artificial intelligence isn’t programmed to be our conscience; it’s programmed to do whatever there’s a precedent for. Through this lens, it starts to look less like Spock and more like a megaphone for the vitriol that we’re creating every day.
Chicago’s Predictive Policing Program
Here’s where human-all-too-human veers from the annoying to the invasive. The Chicago Police Department recently received $2 million in grant funding from the National Institute of Justice to administer its “predictive policing” program. Using machine-learning software, police are now able to identify “hotspots,” and even pay house calls to “high-risk” people, or those who qualify for the “Strategic Subject List.” The NIJ’s press liaison has claimed that the house visits involve people who are, according to the model, hundreds of times as likely to commit a violent crime.
Once again, though, they’re multiplying human assumptions by programmatic intelligence and then labeling the product as objective. Identification logic aside, the assumption is that paying someone a visit and telling them they’re being watched will reduce crime overall. It’s the same rationale behind stop-and-frisk, only this time they approach you at your own home. What if constantly harassing members of “problem neighborhoods” even before they’ve committed a crime actually increases their hostility? What if it further breaks down trust between police and communities? Will AI be programmed to determine those answers?
A new report by the RAND Corporation, which was granted access to every aspect of the new program, found no evidence to suggest it had saved lives. Part of the problem was the lack of direction on what to do once subjects were identified as high-risk. The actual “dealing with people” part continues to escape both police intelligence and artificial intelligence. And as the Daily Mail pointed out earlier this year, “shootings are on the rise in Chicago, with the threshold of 1,000 shooting victims crossed one or two months earlier in 2016 than in previous years.” Would it have been worse without the AI? Maybe we need an algorithm to answer that.
The AI That Predicts Lucrative Films
Beauty and propensity to commit crime may come down to a few features, but art must be more complicated, right? Not from the standpoint of money, according to ScriptBook. ScriptBook is designed to make an “objective” assessment of a film’s commercial value based on a few key variables: actors, plot, and ending. It also looks at the number of male and female characters, and the “mood” of a script, similar to Pandora’s mood algorithm or Netflix’s categorization system. ScriptBook is also faster than human readers, meaning more scripts can be “considered.” That could be good news for aspiring screenwriters, though how the attachment of a “star” will play out is yet to be seen.
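In spirit, that kind of system can be sketched in a few lines. The code below is hypothetical and not ScriptBook’s actual model; it only illustrates the general approach of reducing a script to a handful of features and regressing past box office on them.

```python
# Hypothetical sketch -- not ScriptBook's actual model. It illustrates the general
# approach: reduce a script to a handful of features and regress past box office
# on them, so every forecast is an extrapolation from what has already worked.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy features per script: [star_power, n_male_leads, n_female_leads,
# mood_score, has_happy_ending]. A real system would derive these from the text.
X_train = np.array([
    [0.9, 2, 1, 0.7, 1],
    [0.2, 1, 2, 0.4, 0],
    [0.6, 3, 0, 0.8, 1],
    [0.1, 1, 1, 0.3, 0],
    [0.7, 2, 2, 0.5, 1],
])
box_office_millions = np.array([350.0, 40.0, 180.0, 12.0, 220.0])  # made-up outcomes

model = LinearRegression().fit(X_train, box_office_millions)

new_script = [[0.5, 1, 2, 0.6, 1]]
print(model.predict(new_script))  # an "objective" forecast, learned entirely from the past
```

Whatever its sophistication, a model fit this way can only extrapolate from what has already worked.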
Like most movie executives, ScriptBook is likely to be good at picking a sure thing, but it shows every sign of struggling to predict black swan events, those incalculable moments when art takes a big step forward, or to the left or right. ScriptBook hasn’t demonstrated the ability to catch that magic in a bottle yet. With 1.2 million in recent seed funding, the company will have the opportunity to expand its predictive power, but this may be one area in which “power” is of little importance.
There is one other concern with ScriptBook: if it’s judging art against what has come before it, its fate may be to perpetuate some of the racist assumptions prevalent in the contemporary entertainment industry. When hundreds of emails between Sony employees were leaked in 2014, they revealed a lot of racist (and patently false) beliefs about how black actors and female leads couldn’t sell movies. The following year, Star Wars broke box office records with a white female lead and a black male supporting actor. That’s the black swan that committed artists are always looking for, but that may elude the AI the executives enlist. Would it have predicted 12 Years A Slave to win an Oscar? I can’t imagine it would have caught Birdman.
Algorithms Aren’t Neutral
In order for an AI to think, it has to be taught how to think. Just like a human, it has to be exposed to stimulus. As long as that stimulus is the raw data of human experience, artificial intelligence is going to amplify human tendencies: more sequels with predictable plot lines, more women with uniform features, and more arrests in the same neighborhoods that are arguably over-policed (and perhaps under-understood).
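That last point is worth making concrete. The simulation below is hypothetical, not any real department’s system: two neighborhoods with identical true crime rates, one of which simply starts with a few more incidents on the books, and patrols allocated wherever the records point.

```python
# Hypothetical simulation -- a minimal sketch of a predictive-policing feedback
# loop, not any real department's system. Two neighborhoods have identical true
# crime rates; one simply starts with a few more incidents on the books.
import numpy as np

true_rate = np.array([0.5, 0.5])     # identical underlying crime rates
recorded = np.array([12.0, 10.0])    # but the historical records are skewed

for year in range(10):
    hotspot = np.argmax(recorded)           # the model flags the "high-risk" area
    patrols = np.ones(2)
    patrols[hotspot] += 1.0                 # extra patrols go to the flagged area
    recorded += true_rate * patrols * 10    # you only record what you watch
    print(year, recorded)

# After a decade the flagged neighborhood has nearly twice the recorded crime,
# even though the underlying behavior in the two neighborhoods never differed.
```

Nothing in the loop ever measures a real difference between the neighborhoods; it only measures, and then amplifies, where we were already looking.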
So far, artificial intelligence seems to have provided us data to justify our choices as often as it has given us reason to question them. If we want to see more creativity, more success and more profound progress in the humanities and social services, and we want to build AI to help us do it, that AI should be a lot smarter than this. I’m not sure it’s in the cards as long as we’re playing dealer.
(Feature image via UWMpost)