If one disagreed with an HRAD-style approach for whatever reason but still wanted to donate money to maximize AI safety, where should one donate? I assume the Far Future EA Fund?
I am very bullish on the Far Future EA Fund, and donate there myself. There’s one other possible nonprofit that I’ll publicize in the future if it gets to the stage where it can use donations (I don’t want to hype this up as an uber-solution, just a nonprofit that I think could be promising).
I unfortunately don’t spend a lot of time thinking about individual donation opportunities, and the things I think are most promising often get partly funded through Open Phil (e.g. CHAI and FHI), but I think diversifying the funding source for orgs like CHAI and FHI is valuable, so I’d consider them as well.
Not super relevant to Peter’s question, but I would be interested in hearing why you’re bullish on the Far Future EA Fund.
On the meta side of things:
I found AI Impacts recently. There is a group I am loosely affiliated with that is trying to make a MOOC about AI safety.
If you care about doing something about risks of immense suffering (s-risks), you might like the Foundational Research Institute.
There is an overview of other charities, but it is more favourable towards HRAD-style work.
I would like to set up an organisation that studies autonomy and our response to making more autonomous things (especially with regard to administrative autonomy). I have a book slowly brewing on this, so if you are interested, get in contact.