Hi Abergal,
I hope you don’t mind me returning to this post. I wanted to follow up on a few things, but it has been a super busy month and I have not had the time until now.
Essentially, I think we did not manage to engage with each other’s cruxes at all. I said I expected the crux was “connected to EA funders’ approach to minimising risks”. You did not discuss this in your reply (except to agree that there are downsides to policy work), but you suggested that maybe a crux is “our views of the upsides of some of this work”. And then I replied to you without discussing that point at all!!
So overall it felt like we talked past each other and didn’t fully engage with each other’s points of view. I felt I should come back and respond to you about the upsides rather than leave it as it was.
– –
Some views I have (at least from a UK perspective; I cannot comment on the US):
DISAGREEMENT: I think there is no shortage of policy proposals to push for. You link to Luke’s post, and I reply directly to that here. There is the x-risk database of 250+ policy proposals here!! There is work on policy ideas in Future Proof here. The more clear-cut policies are on general risk management and biosecurity, but there are good ideas on AI too, such as: government being cautious in military use and development of AI (see p42 of Future Proof), or ensuring government has incentives to hire competent risk-aware staff. At no point in my two years working on policy did I find myself short of low-downside things to advocate for on AI. So maybe we disagree there. Very happy to discuss further if you want.
AGREEMENT: I think policy may still be low impact. AI policy in particular (in the UK) has a very minimal chance of reducing AI x-risks, as the topic is not currently in the policy Overton window. I expect bio and general risk policy is more likely to be directly useful, but even there the effects on x-risks are likely to be small, at least for a while. (That said, I think most current endeavours to reduce AI x-risks or other x-risks have a low probability of success; I put most weight on technical research, but even that is highly uncertain.) I expect we agree there.
POSSIBLE DISAGREEMENT: I think AI is not the only risk. I didn’t bring up AI at all in my post, and none of the examples I know of that applied to the LTFF were AI-focused. Yet you and Habryka bring up AI multiple times in your responses. I think AI risks are ~3x more likely than bio and unknown-unknown risks (depending on the timeframe), and I think bio and unknown-unknown risks are ~6x easier to address through policy work (in the UK). Maybe this is a crux. If the LTFF thinks AI is 1000x more pressing than other risks, then maybe this is why you do not value policy work. Could discuss this more if helpful. (If this is true, it would be great if the LTFF was public about this.)
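To spell out the rough arithmetic behind those numbers: if AI risks are ~3x more likely but bio and unknown-unknown risks are ~6x more tractable through policy, then naively policy work on non-AI risks comes out at roughly 6/3 = 2x the cost-effectiveness of AI policy work. These are of course very rough personal estimates, but on them the gap in pressingness would need to be more like 1000x, not 3x, before deprioritising non-AI policy looks justified.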
NEUTRAL: I think policy change is pretty tractable and cheap. The APPG for Future Generations seems to consistently drive policy changes, at a rate of roughly one for every 9 months of work / £35k spent (see here). I don’t think you implied you had any views on this.
– –
Anyway, I hope that is useful and maybe gets a bit more at some of the cruxes. I would be keen to hear whether you think any of this is correct, even if you don’t have time to respond in depth to each point.
Thank you so much for engaging and listening to my feedback – good luck with future grantmaking!!!