EvilAlgorithms

ThoughtStorms Wiki

BiasInAlgorithms

FakeFriends

See also:

ThreadView

Thinking more about the latest Google revelations, I suspect they're largely a consequence of the modern trend towards data-driven decision making. (🧵)

You may recall a post from a decade ago by a design lead who left Google after Marissa Mayer insisted on running an experiment to choose the shade of blue for a button.

That's a red flag for anyone who cares about artistry, but it actually has much more sinister implications…

By delegating product decisions to data, you effectively remove any sense of moral responsibility - it's not a human making a decision, it's just facts. Facts can't be immoral, so nobody has to be uncomfortable about acting on them.

Deciding to deliberately make users unhappy, or to slow down a service to make another one look good? Those would be reprehensible acts.

But what about running an experiment to test the effects of those things?

Well, running an experiment on a small sample group isn't so bad. You can turn it off any time, and it will teach you something useful.

It's only a handful of users, for a short time, and it's all in the name of science after all!
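To make that concrete, here's a minimal sketch (Python, with invented names; not any real company's flag system) of how a feature typically gets gated to "only a handful of users": hash each user into a stable bucket and compare against a rollout percentage.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, rollout_percent: float) -> bool:
    """Stable bucketing: hash (experiment, user_id) into [0, 100)
    and expose the user if they fall below the rollout threshold."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = (int(digest[:8], 16) % 10000) / 100.0  # stable value in [0, 100)
    return bucket < rollout_percent

# "It's only a handful of users": start the experimental arm at 1%.
for uid in ["alice", "bob", "carol"]:
    arm = "experiment" if in_experiment(uid, "engagement_tweak_v2", 1.0) else "control"
    print(uid, "->", arm)
```

The same function is the off switch: set the percentage to 0 and the experiment vanishes, which is exactly what makes it feel so harmless at this stage.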

But what if (surprise!) the experiment shows user engagement is up, and you make more money?

Well, now that you know it will increase profits, you have a duty to your shareholders to enable it for all users. You'd never have proposed it, but you can't unlearn what you now know.

So the engineer who added the feature has a clear conscience, because it was only done as an experiment to test a theory.

And similarly, the manager who set it to 100% didn't ask for the feature, but faced with the numbers they really had no choice but to enable it.

Data-driven design basically turns products into evolution-driven predators. Design choices become little more than random variation plus selection, where every permutation is tried and features survive purely on how well they extract revenue from users, not on whether anybody likes or wants them.
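In caricature, the "fitness function" this implies fits in one line. A toy sketch with invented metrics, just to show which signal decides survival:

```python
import random

def measure(variant: str) -> dict:
    """Stand-in for a real experiment: returns fake metric deltas."""
    return {"revenue_delta": random.uniform(-1, 1),
            "satisfaction_delta": random.uniform(-1, 1)}

# Try every permutation; keep whatever makes money.
variants = [f"variant_{i}" for i in range(10)]
survivors = [v for v in variants if measure(v)["revenue_delta"] > 0]
# Note what never enters the decision: satisfaction_delta.
print("shipped:", survivors)
```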

My guess is that the next logical step for big tech companies will be to eliminate the human element altogether. Feature changes will be proposed by ML models and then switched on automatically based on the experiment results.
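Wired end to end, that loop would look something like this sketch (everything here is invented; it just strings the thread's steps together, it's not any real system):

```python
import random

def propose_change() -> str:
    """Stand-in for an ML model suggesting the next feature tweak."""
    return random.choice(["autoplay_videos", "hide_close_button",
                          "reorder_feed_by_outrage"])

def run_experiment(change: str, percent: float) -> float:
    """Stand-in for an experiment framework; returns a fake revenue delta."""
    return random.uniform(-1, 1)

change = propose_change()
delta = run_experiment(change, percent=1.0)    # the small, "harmless" trial
if delta > 0:
    run_experiment(change, percent=100.0)      # rollout is just a bigger experiment
    print(f"auto-launched {change} (revenue delta {delta:+.2f})")
```

No engineer proposed it and no manager approved it; the only human fingerprint left is the threshold.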