There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website), only to cease to exist six months later, and because of fraud to boot.
This. We can taboo the words “existential risk” and focus instead on Longtermism. It’s damning that the largest philanthropy focused on Longtermism—the very long term future of humanity—didn’t even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.
Perhaps, you know, they were focused more on the long term and not the short term?
You can’t build a temple that lasts 1000 years without first ensuring that it’s on solid ground and has secure foundations. (Or even a house that lasts 10 years for that matter.)
Are we trying to build a temple?
My understanding of the thinking behind most longtermist causes and interventions is that they are mostly about slightly decreasing the probability of a catastrophic event; or to put it differently, the idea is that there is a high probability that the intervention does nothing and a small probability that it does something incredibly important.
From that perspective I’m not sure that institutional longevity is really a priority, and I certainly don’t think we can infer that longtermists aren’t in fact focused on the long term.
Longtermism is wider than catastrophic risk reduction—e.g. it also encompasses “trajectory changes”. It’s about building a flourishing future over the very long term. (Personally I think x-risk from AGI is a short-term issue and should be prioritised, and Longtermism hasn’t done great as a brand so far.)