Thanks for sharing! I confess I had been wondering about moving my donations elsewhere due to my lack of knowledge about LTFF’s processes, but this and other recent posts make it likely that I will continue donating to LTFF in the near future.
We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.
Which definition of global catastrophic risks are you using? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as “events that cause roughly 10 million deaths or $10 trillion in damages or more”. Maybe it would be better to be explicit about the severity of the events on the website?
Note that this grant [in bio] would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.
I would be curious to know how you compare grants across different areas. For example, could you share what fraction of applications in each area (e.g. AI, bio, nuclear, or other) are successful? I understand you consider AI and bio to be the most pressing areas (emphasis mine):
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
You also only mentioned grants in AI and bio in the OP. However, even if applications in other areas were as likely to be funded as those in AI, they would still be unlikely to be (randomly) selected for the OP, because applications outside of AI and bio represent only a small fraction of the total.
Which definition of global catastrophic risks are you using? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as “events that cause roughly 10 million deaths or $10 trillion in damages or more”. Maybe it would be better to be explicit about the severity of the events on the website?
I don’t think that, as an organisation, we have a specific definition in mind. I think it’s still worth saying we are most focussed on reducing global catastrophic risks, as opposed to pursuing other goals like instilling care for future generations as a societal value or promoting economic growth.
In practice we direct funding towards activities that we think reduce catastrophic risks, but we are most focussed on existential risks.