ThoughtStorms Wiki

Context: OnPhilanthropy


I feel guilty that I haven't really been engaging with this community and its ideas.

Though I'm not sure whether it's really anything more than people who have discovered utilitarianism and are still figuring out its limitations.

I think the idea of applying a bit of measurement and logic to your altruism is obviously perfectly sensible. The idea of tying it into long-term thinking is cool too.

The question becomes, though, does focusing on very abstract future scenarios like the AIAlignmentProblem justify not addressing more immediate and local concerns? How much is it a displacement activity for CalifornianDreaming geeks who think their moral credits can be earned by writing speculations about AIs while working in industries that are increasingly obviously damaging our current MemeticEcosystem, etc.?

It certainly shouldn't be used to justify financial fraud : https://forum.effectivealtruism.org/posts/j7sDfXKEMeT2SRvLG/ftx-faq

Longtermism has some attractions, but I don't believe we have obligations to PotentialPeople.

Discounting the Future

Seems to me the main issue here is trying to do Utilitarianism without discounting the future for uncertainty.

If I can baldly state that my actions today get huge utility in the hedonic calculus from their cumulative effects millions of years in the future, but I don't have to discount that utility due to my uncertainty about what will happen in the future, or about the causal chain between my actions and future events, then I can effectively help myself to any degree of justification for any action I like.
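To see how much work discounting does here, consider a toy calculation. This is a sketch, not anyone's actual model: suppose there's some fixed probability, per year, that the causal chain from my action to the claimed far-future outcome stays intact. The 0.99 figure and the utility numbers are invented purely for illustration.

```python
# Toy illustration: compound a per-year probability that the causal
# chain from action to outcome survives. All numbers are invented.
def discounted_utility(raw_utility, years, p_survives_per_year=0.99):
    """Expected utility after per-year uncertainty compounds."""
    return raw_utility * (p_survives_per_year ** years)

# A near-term action keeps most of its claimed value...
print(discounted_utility(100, 5))        # roughly 95
# ...while a trillion "hedons" a million years out rounds to nothing.
print(discounted_utility(1e12, 1_000_000))
```

Even with an absurdly generous 99% annual survival probability for the causal chain, any finite utility a million years out is discounted to effectively zero. Refusing to discount at all is what lets the far-future numbers dominate the calculus.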

This is obvious nonsense to anyone who thinks about it for five seconds. For any action I take today, there is some probability it achieves good things in the predictable near term. But jump more than 5 or 10 years out, and its effects will be swamped by other actions and decisions.

Maybe my grandchild will make the scientific breakthrough that takes us to the stars. Or maybe my grandchild is the asshole who kills the genius who would have taken us to the stars in a pointless drunken car accident.

Neither possibility is sufficiently knowable to figure in my calculation as to whether to have children. And that's just two generations out.

I sometimes characterize RightLibertarians as people who have no understanding of non-linear relationships; and who think the fastest way for everyone to get out of the burning theatre is for everyone to run for the door as quickly as possible.

It seems to me a similar mindset here : the idea that the fastest way for humanity to get to the stars is to throw everything into a Hail Mary pass of sending a couple of cans of humans to Mars ASAP, rather than, say, building up the robotic infrastructure first (PlanB / SpaceColonization). Or that maximizing the birthrate today, on a planet we don't yet know how to live on sustainably, rather than working on sustainability and then growing the birthrate over a couple of millennia, is the obvious strategy to maximize people a billion years hence. Both show a complete lack of understanding of the non-linear twists and turns of history.

Spend a moment contemplating a simulation based on the Lotka–Volterra equations, or how a hugely dense pattern in ConwaysLife can suddenly collapse into nothing, and you'll realize just how laughably absurd the claim is that "we must all have more children today because that will mean a hundred-trillion extra people a billion years hence".
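The non-linearity point can be made concrete with a toy simulation. What follows is a minimal sketch, stdlib-Python only, using naive Euler integration of the Lotka–Volterra predator-prey equations; the parameter values are arbitrary, chosen only to make the boom-and-crash cycle visible.

```python
# A minimal Lotka-Volterra (predator-prey) sketch, naive Euler
# integration, stdlib only. Parameters are arbitrary; this is an
# illustration of non-linear dynamics, not a serious model.
def lotka_volterra(prey, pred, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.001, steps=40_000):
    prey_history = []
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # births minus predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # predation minus deaths
        prey, pred = prey + d_prey, pred + d_pred
        prey_history.append(prey)
    return prey_history

prey_path = lotka_volterra(10.0, 10.0)
# The prey population doesn't grow smoothly: it crashes, recovers and
# overshoots, which is exactly what naive extrapolation misses.
print(f"start 10.0, min {min(prey_path):.3f}, max {max(prey_path):.1f}")
```

Even in two coupled equations, "more prey now" doesn't mean "more prey later": the population crashes far below its starting point before recovering. Extrapolating a population a billion years forward from today's birthrate assumes away precisely this kind of behaviour.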

Of course, more haste can mean less speed. Which is why you should prioritise near-term problems with intelligible solutions over far-future problems with high degrees of uncertainty.