1) What level of funding or attention (or other metrics) would longtermism or AI safety need to receive for it to no longer be considered “neglected”?
2) Do OpenPhil or other EA funders still fund OpenAI? If so, how much of that funding goes towards capabilities research? How is this justified if we think unsafe AI is a major risk to humanity? And how much EA money is going into capabilities research generally?
(This seems like something that would have been discussed a fair amount, but I would love a distillation of the major cruxes/considerations, as well as what would need to change for OpenAI to no longer be worth funding in the future.)
See here. (Separating importance and neglectedness is often not useful; just thinking about cost-effectiveness is often better.)
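(A rough sketch of why, using the standard 80,000 Hours ITN decomposition rather than anything stated in the answer above: importance, tractability, and neglectedness are just a factorization of cost-effectiveness, i.e. marginal good done per extra dollar, so once you are estimating cost-effectiveness directly, reporting importance and neglectedness as separate scores adds little.)

$$
\underbrace{\frac{\text{good done}}{\text{extra \$}}}_{\text{cost-effectiveness}}
\;=\;
\underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
$$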
No.
Thanks!
This makes sense. In my head, AI safety feels like a cause area with room for a lot more funding, but unlike nuclear war or engineered pandemics, which seem to have clearer milestones for success, I don't know what those milestones look like in the AI safety space.
I’m imagining a hypothetical scenario where AI safety is overprioritized by EAs, and wondering whether, and how, we would discover this and respond appropriately.