Impact requires a market
Thank you for writing this piece. It is hard to argue against the claim that GDP and economic growth are the surest ways to reduce "suffering" in the medium term; 100% agree.
Have you come across any explanation for the apparent decline in self-reported happiness in the countries currently experiencing the most growth? I did an analysis recently comparing GDP per capita vs. HDI vs. happiness over the past 10 years and am having trouble making sense of the results.
Some thoughts on the post (from someone who recently came across EA and has been reading up on it):
1. Diversification of worldviews is needed among EA orgs, as noted by Holden Karnofsky (openphilanthropy.org/research/worldview-diversification). Charity recommendations are great for people donating small amounts: they do not want to spend time researching better options and just want to see their money well spent. But there should be more diversity of projects for people donating large sums.
2. EA provides a framework for how NGOs pitch for funding, and it creates a culture of analyzing impact and tractability. The same is true of startups pitching to investors by showcasing market demand and potential.
3. A lot of EA practice is about finding the best interventions with small amounts of capital. I think that as capital increases, orgs with the EA philosophy will fund riskier ventures that can produce much higher returns.
Thanks for the shoutout, Rohit! Having spent a bit more time among the EA community, I deeply agree with the critique that much of EA functions as "McKinsey for NGOs" - too much analysis, not enough iteration.
It sounds like impact certs (e.g. as described by Scott Alexander in https://astralcodexten.substack.com/p/impact-markets-the-annoying-details) would line up with your mainline proposal for a solution? Impact certs (equity in NGOs) act exactly as a prediction market for how impactful a particular project or org will be. Manifold is looking to launch some kind of impact cert ecosystem before the end of the year - if you (or others) are interested in investing/helping out, please reach out to email@example.com!
I've spent more time thinking about AI xrisk than about EA in general. But of course they're closely related, as AI xrisk is one of the causes embraced by EA. It's my understanding that EA didn't start out with a focus on long-termism; that emphasis emerged later.
The problem, as your title indicates, is that we're dealing with radical uncertainty. In the case of AI xrisk the fundamental problem is we don't know how to think about AGI in terms of mechanisms, as opposed to FOOM-like magic. The AI xrisk people respond by creating these elaborate predictive contraptions around something where meaningful quantitative reasoning is impossible. You're arguing that the EA folks are doing this as well.
At some point it seems to me that the mechanisms of community have overwhelmed the objectives the community was created to address. So now those objectives function as a reason for engaging in this elaborate ritual intellection. The community is now more engaged in elaborating its rituals than in dealing with the world. How does that happen and why?
We've got community orientation (CO) and reality orientation (RO). CO should be subordinate to RO and should serve it. What has happened is that RO has become subordinate to CO. Put your old McKinsey hat on: How do you measure the CO and RO of a group and plot their evolution over time? What's going on at the tipping point where CO surpasses RO? I think that happened in the AI xrisk space at about the time Bostrom published Superintelligence.
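(One hypothetical way to operationalize the question above: suppose each year's output of a community could be scored as community-oriented vs. reality-oriented, and the tipping point is the first year CO strictly exceeds RO. The scores and years below are invented purely for illustration.)

```python
# Toy CO/RO time series: find the first year community orientation
# overtakes reality orientation. All values are hypothetical.

co_by_year = {2010: 0.2, 2012: 0.35, 2014: 0.5, 2016: 0.6, 2018: 0.7}
ro_by_year = {2010: 0.8, 2012: 0.65, 2014: 0.5, 2016: 0.4, 2018: 0.3}

def tipping_point(co, ro):
    """Return the first year in which CO strictly exceeds RO, or None."""
    for year in sorted(co):
        if co[year] > ro[year]:
            return year
    return None

print(tipping_point(co_by_year, ro_by_year))
```

The hard part, of course, is not the arithmetic but constructing CO/RO scores that aren't themselves an exercise in ritual intellection.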
Science Fiction: Back in 1989 Ted Turner created the Tomorrow Fellowship for a work of fiction "offering creative and positive solutions to global problems." It was only awarded once, in 1991. https://en.wikipedia.org/wiki/Turner_Tomorrow_Fellowship_Award
Thank you for writing a piece that both made me think and solidified my opinion on a complex subject. For someone who's only touched on the larger issues surrounding altruism in the past, this piece was enlightening. Well done!