By Tristan Greene
The biggest actual threat humans face from AI has nothing to do with robots. It's biased algorithms. And, like almost everything bad, it disproportionately affects the poor and marginalized.
Machine learning algorithms, whether in the form of AI or simple shortcuts for sifting through data, are incapable of making rational decisions because they don't rationalize; they find patterns. That government agencies across the US put them in charge of decisions that profoundly affect human lives seems incomprehensibly unethical.
When an algorithm manages inventory for a grocery store, for example, machine learning helps humans do things that would otherwise be harder. The manager probably can't keep track of millions of items in his head; the algorithm can. But when it's used to take away someone's freedom or children, we've given it too much power.
Two years ago, the bias debate broke wide open when ProPublica published a damning article exposing apparent bias in COMPAS, a risk-assessment algorithm that scores defendants on dozens of factors to inform bail and sentencing decisions. Basically, the report clearly showed several cases where it was obvious that the big fancy algorithm effectively predicts recidivism based on skin tone.
In an age where algorithms are helping government employees do their jobs, if you're not straight, not white, or not living above the poverty line, you're at greater risk of unfair bias.
That's not to say straight, white, rich people can't suffer at the hands of bias, but they're far less likely to lose their freedom, children, or livelihood. The point here is that we're being told the algorithms are helping. They're actually making things worse.
Writer Elizabeth Rico believes unfair predictive analytics software may have influenced a social services investigator to take away her children. She wrote about her experience in an article where she describes how social services, whether intentionally or not, preys upon those who can't afford to avoid the algorithm's gaze. Her research revealed a system that equates being poor with being bad.
In the article, published on UNDARK, she says:
the 131 indicators that feed into the algorithm include records for enrollment in Medicaid and other federal assistance programs, as well as public health records regarding mental-health and substance-use treatments. Putnam-Hornstein stresses that engaging with these services is not an automatic recipe for a high score. But more information exists on those who use the services than on those who don't. Families who don't have enough information in the system are excluded from being scored.
If you're accused of being an abusive or neglectful parent, and you've had the means to treat any addictions or mental health problems you've had in a private facility, the algorithm may just skip you. But if you use government assistance or have a state- or county-issued medical card, you're in the crosshairs.
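To see why record availability alone can skew a score like this, consider a toy sketch of the mechanism the article describes. The indicator names, weights, and scoring logic below are invented for illustration; this is not the actual 131-indicator model. The point is simply that a family visible in public systems accumulates signal, while an otherwise identical family that paid for private care generates none.

```python
# Toy illustration of data-availability bias in a records-based risk score.
# Indicator names and weights are hypothetical, invented for this sketch;
# this is NOT the model described in the Undark article.

TOY_WEIGHTS = {
    "medicaid_enrollment": 1.0,
    "federal_assistance_program": 1.5,
    "public_mental_health_treatment": 2.0,
    "public_substance_use_treatment": 2.5,
}

def toy_risk_score(public_records: set[str]) -> float | None:
    """Sum the weights of every indicator found in a family's public record.

    Returns None when no indicators exist, mirroring the article's point
    that families with no data in the system are excluded from scoring.
    """
    hits = public_records & TOY_WEIGHTS.keys()
    if not hits:
        return None  # no public records: invisible to the algorithm
    return sum(TOY_WEIGHTS[h] for h in hits)

# Two families with identical underlying circumstances: both sought
# treatment for the same issues. Only the route of care differs.
family_on_medicaid = {"medicaid_enrollment", "public_substance_use_treatment"}
family_private_care: set[str] = set()  # private care leaves no public record

print(toy_risk_score(family_on_medicaid))   # 3.5 -- scored and flagged
print(toy_risk_score(family_private_care))  # None -- never scored at all
```

Nothing in the sketch measures parenting at all; the score tracks only which systems a family has touched, which is exactly the dynamic Rico describes.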
And that's the problem in a nutshell. The best intentions of researchers and scientists are no match for capitalism and partisan politics. Take, for example, the Stanford researchers' algorithm that purportedly predicts gayness. It doesn't, but that won't stop people from thinking it does.
It isn't dangerous in the Stanford machine learning lab, but the GOP-helmed federal government is increasingly anti-LGBTQ+. What happens when it decides that applicants have to pass a gaydar test before entering military service?
Matters of sexuality and race may not be intrinsically related to poverty or disenfranchisement, but the marginalization of minorities is. LGBTQ+ individuals and black men, for example, already face unfair legislation and systemic injustice. Using algorithms to perpetuate that is nothing more than automating cruelty.
We cannot fix social problems by reinforcing them with black-box AI or biased algorithms: it's like trying to fight fire with fire. Until we develop 100 percent bias-proof AI, using them to take away a person's freedom, children, or future is just wrong.
This article was previously published on The Next Web.