RacistComputers

ThoughtStorms Wiki

Context : MachineGatekeepers

HP Computers are racist: the 2009 viral video showing an HP webcam's face-tracking following a white face but failing to follow a Black one.

Twitter apologises for its racist image-cropping algorithm:

https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm

Discussion

These "mistakes" are really just cultural presumptions. For example, where I live, a lot of people give their children the names of famous people for aspirational reasons. A girl might well be called "BillGates". Why assume that "Bill Gates" has to mean that particular person. (There are surely tens of thousands of others). Or that it must be a male name?

One of the great problems with this kind of "helpful AI" that's becoming more prevalent is that it bakes all these cultural assumptions and norms into a rigid technological medium that doesn't really have the contextual awareness or sensitivity to know when they're inappropriate, as in Eric Meyer's story of inadvertent algorithmic cruelty:

http://meyerweb.com/eric/thoughts/2014/12/24/inadvertent-algorithmic-cruelty/

To me this seems like another branch of the "NetNeutrality" issue. The more we want computers to make inferences about what we want and think, the more we're going to bake in and prioritise assumptions about what's "normal" or "mainstream", and the less useful, and more difficult, we're going to make them for everyone who isn't "normal", is outside the mainstream, or is just having an unusual experience.

My QuoraAnswer to "Are Google Racist for Calling Black People Gorillas?":

Racism is more than just a moral failing of individuals. It's a universal miasma that permeates society, instantiated in the configuration of all our systems. Many of us try not to be racist people. But none of us can avoid finding ourselves participating in racism sometimes.

I don't suppose that anyone thinks Google engineers sat down to be explicitly rude about black-skinned people. Or that they would assent to racial stereotypes or prejudices.

But when largely lighter-skinned engineers, living in white suburbs in California and working surrounded mainly by lighter-skinned colleagues in the lab, sat down to look at millions of photographs and pick their prototypical examples of "humans" for the training sets for their machine-learning algorithms, who knows if there was any unconscious bias?

They undoubtedly included photos of black skinned people. But did they use as many as they needed to capture all the important distinctions?

We hear that they had a problem with whiter faces being misclassified as dogs and had to tweak their algorithms and training data to fix it. Did black faces receive as much care and attention to distinguish and differentiate them? Or are Black faces "second class" to Google's algorithms, in that they receive only rougher, coarser-grained distinction-making?
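To make that worry concrete, here's a toy sketch in Python (nothing to do with Google's actual pipeline; the groups, numbers and "rules" below are all invented for illustration). When one group dominates the training data and the groups differ, a classifier fits the majority's pattern and quietly fails on the minority:

```python
# Toy demonstration: under-representation in training data leads to
# worse accuracy for the under-represented group. All numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, rule):
    """Draw n two-feature examples, labelled by the group's 'true' rule."""
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

rule_a = lambda X: X[:, 0] + X[:, 1] > 0   # the pattern in the majority group
rule_b = lambda X: X[:, 0] - X[:, 1] > 0   # a different pattern in the minority group

Xa, ya = sample(10_000, rule_a)            # plenty of majority-group examples
Xb, yb = sample(300, rule_b)               # very few minority-group examples

clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Audit per group, not just overall: the headline number hides the gap.
Xa_t, ya_t = sample(2_000, rule_a)
Xb_t, yb_t = sample(2_000, rule_b)
print("accuracy on majority group:", clf.score(Xa_t, ya_t))  # high, ~0.95+
print("accuracy on minority group:", clf.score(Xb_t, yb_t))  # ~0.5, a coin-flip
```

Nobody in that sketch holds a prejudice; the gap falls straight out of the composition of the training set, and an overall accuracy figure, dominated by the majority group, would never reveal it.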

Full credit to Google for owning this. I don't think there's a need for blame ... either for Google or the particular engineers that worked on this. Nothing was deliberate.

But we should all recognise that racism, the systematic disadvantage of people because of their skin colour or other characteristics, can still be operating, even when we are well intentioned and completely unaware of it.

In fact, it's easier, when we are dealing with impersonal technical systems, to accidentally allow racial (and other) biases to get baked into them without recognising it.

And there should be a huge red warning light / buzzer, flashing and screaming at us here. There's a tsunami breaking over us, of technologies that substitute machine perception and decision-making for human. This goes from Google's own self-driving cars, to phones that schedule our day, to systems for assessing credit-worthiness in banks, to systems that predict criminality and terrorism.

When you deal with human prejudice, it can be easy to spot, possible to argue with, and ultimately challenged in law.

When we deal with racism baked into algorithms in machines, it will be MUCH HARDER to recognize (no body language), there'll be no challenging it at the point it's applied ("computer says no!"), and there'll be no legal redress ("it's an honest mistake, just an incorrect weighting in our dataset"). Yet the decisions could be repeated anywhere from tens to millions of times a day, while proving them wrong and getting them fixed could be a major undertaking ... re-balancing massive machine-training programs that have built up databases over years.

This isn't only about racism. It's about any machine-decision being only as good as the data it was fed. Google's self-driving cars have had a lot of training to recognize and avoid other cars. How much training on bikes? Or Segways? Or stray kangaroos running across the road? Of course they have had SOME training with all these types of obstruction. But the tendency of any technology that's applied at scale is to prioritize the most common use-cases to the detriment of the rarer and more obscure.
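A second toy sketch (class names, counts and geometry all invented) shows the same dynamic with long-tailed data: train an ordinary nearest-neighbours classifier where "car" examples outnumber "kangaroo" examples two thousand to one, and the rare classes get out-voted:

```python
# Toy long-tailed "obstacle" classifier: rare classes get swamped by common ones.
# Class names, counts and feature geometry are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)

counts  = {"car": 20_000, "pedestrian": 2_000, "bike": 200,
           "segway": 30, "kangaroo": 10}
centres = {"car": (0, 0), "pedestrian": (3, 0), "bike": (0, 3),
           "segway": (3, 3), "kangaroo": (1.5, 1.5)}

def draw(name, n):
    """n noisy 2-D 'sensor readings' clustered around the class's centre."""
    return rng.normal(loc=centres[name], scale=1.0, size=(n, 2))

X = np.vstack([draw(name, n) for name, n in counts.items()])
y = np.concatenate([[name] * n for name, n in counts.items()])

clf = KNeighborsClassifier(n_neighbors=25).fit(X, y)

# Test on a BALANCED set: on the road, every obstacle type matters equally.
Xt = np.vstack([draw(name, 500) for name in counts])
yt = np.concatenate([[name] * 500 for name in counts])
print(classification_report(yt, clf.predict(Xt), zero_division=0))
```

Recall falls away with training frequency: near-perfect for "car", close to zero for "kangaroo", because a rare class's few examples are simply out-voted by its common neighbours. No engineer decided that kangaroos don't matter; the long tail of the data decided for them.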

We're about to be cocooned in machinery that makes decisions about us and for us. And those machines are ALL going to reflect the unconscious biases of the data and machine-learning engineers about what's normal and what's important. Google were not "evil" here. But we should all educate ourselves to become aware of the risks and what we can do to mitigate them.

NeuralNetworks-driven FaceApp makes photos "hotter" by making skin whiter: https://www.theguardian.com/technology/2017/apr/25/faceapp-apologises-for-racist-filter-which-lightens-users-skintone

Bookmarked 2020-09-27T17:33:31.392607: https://www.technologyreview.com/2017/08/16/149652/ai-programs-are-learning-to-exclude-some-african-american-voices/

Bookmarked 2020-12-05T14:32:00.073164: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

Bookmarked 2021-04-15T12:37:16.986989: https://www.theverge.com/2020/12/7/22158501/timnit-gebru-team-google-public-statement-fired