Hypothesis: from the perspective of currently living humans and those who will be born in the current <4% growth regime only (i.e., pre-AGI takeoff, or I guess stagnation), donations currently earmarked for large-scale GHW, GiveWell-type interventions should instead be invested (maybe in tech/AI-correlated securities), with the intent of deploying them for the same general category of beneficiaries in <25 (maybe even <1) years.
The arguments are similar to those for old-school “patient philanthropy,” except that the present moment in particular seems exceptionally uncertain, because of AI, about how best to help humans.
For example, it seems plausible that the most important market the global poor lack access to is literally the NYSE (rather than, say, the market for malaria nets), because ~any growth associated with (AGI + no ‘doom’) will by default leave the global poor no better off (i.e., absent redistribution or immigration reform), unlike, e.g., middle-class Westerners who might own a bit of the S&P 500. A solution could be for, e.g., Open Philanthropy to invest on their behalf.
(More meta: I worry that segmenting off AI as fundamentally longtermist leaves a lot of good on the table; e.g., insofar as this isn’t already the case, I think OP’s GHW side should look into what kinds of AI-associated projects could do a lot of good for humans and animals in the next few decades.)
I’m skeptical of this take. If you think sufficiently transformative + aligned AI is likely in the next <25 years, then from the perspective of currently living humans and those who will be born in the current <4% growth regime, surviving until transformative AI arrives would be a huge priority. Under that view, you should aim to deploy resources as fast as possible to lifesaving interventions rather than sitting on them.
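Whether the patient-philanthropy math favors waiting turns on a handful of parameters. Here is a minimal back-of-the-envelope sketch in Python; every number in it (return rate, cost growth, takeoff probability, cost per life) is an illustrative assumption, not an estimate:

```python
# Toy model of the "invest vs. give now" tradeoff discussed above.
# All parameter values are illustrative assumptions, not estimates.

MARKET_RETURN = 0.07       # assumed annual real return on AI-correlated equities
COST_GROWTH = 0.02         # assumed annual growth in the cost per life saved
P_TRANSFORMATIVE = 0.5     # assumed chance transformative AI arrives within the horizon
HORIZON_YEARS = 25
BUDGET = 1_000_000         # dollars earmarked for GHW-type interventions
COST_PER_LIFE_NOW = 5_000  # rough GiveWell-style figure; treat as an assumption

def lives_if_given_now() -> float:
    return BUDGET / COST_PER_LIFE_NOW

def lives_if_invested() -> float:
    grown_budget = BUDGET * (1 + MARKET_RETURN) ** HORIZON_YEARS
    future_cost = COST_PER_LIFE_NOW * (1 + COST_GROWTH) ** HORIZON_YEARS
    # Crude simplification: if transformative AI arrives (takeoff or doom),
    # assume the deferred donation no longer matters at the margin, so only
    # the no-takeoff branch benefits from having waited.
    return (1 - P_TRANSFORMATIVE) * grown_budget / future_cost

print(f"Give now:        {lives_if_given_now():,.0f} lives")
print(f"Invest 25 years: {lives_if_invested():,.0f} lives")
```

Under these particular assumptions, investing wins (~330 vs. 200 lives), but the comparison flips readily: a higher takeoff probability, faster cost growth, or lower returns favors giving now. The skeptical reply above amounts to a further objection the toy model omits: lives saved today also persist into any post-TAI world, so "give now" may capture the upside of both branches.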