I was a bit confused by some of these. Posting questions/comments here in case others have the same thoughts:
Earning-to-give buy-out
You’re currently earning to give, because you think that your donations are doing more good than your direct work would. It might be that we think that it would be more valuable if you did direct work. If so we could donate a proportion of the amount that you were donating to wherever you were donating it, and you would move into direct work.
This made more sense to me after I realised that we should probably assume the person doesn’t think CEA is a top donation target. Otherwise they would have an empirical disagreement about whether they should be doing direct work, and it’s not clear how the offer helps resolve that (though it’s obviously worth discussing).
Anti-Debates / Shark Tank-style career choice discussions / Research working groups
These are all things that might be good, but it’s not obvious how funding would be a bottleneck. Might be worth saying something about that?
For those with a quantitative PhD, it could involve applying for the Google Brain Residency program or AI safety fellowship at ASI.
Similarly I’m confused what the funding is meant to do in these cases.
I’d be keen to see more people take ideas that we think we already know, but that have never been put down in writing, and write them up in a thorough and even-handed way; for example, why existential risk from anthropogenic causes is greater than the existential risk from natural causes.
I think you were using this as an example of the type of work, rather than a specific request, but some readers might not know that there’s a paper forthcoming on precisely this topic (if you mean something different from that paper, I’m interested to know what!).
Thanks Owen!

Re EtG buy-out—yes, you’re right. For people who think that CEA is a top donation target, hopefully we could just come to agreement, as a trade wouldn’t be possible, or would be prohibitively costly (if there were only slight differences in our views on which places were best to fund).
Re local group activities: These are just examples of some of the things I’d be excited about local groups doing, and I know that at least some local groups are funding constrained (e.g. someone is running them part-time, unpaid, and will otherwise need to get a job).
Re AI safety fellowship at ASI—as I understand it, that is currently funding constrained (they had great applicants who wanted to take the fellowship but ASI couldn’t fund it). For other applications (e.g. Google Brain) it could involve, say, spending some amount of time during or after a physics or math PhD in order to learn some machine learning and be more competitive.
Re anthropogenic existential risks—ah, I had thought that it was only in presentation form. In which case: that paper is exactly the sort of thing I’d love to see more of.
Re Anti-Debates / Shark Tank etc.—these might be things that local groups would organise, but they wouldn’t make a plan and evaluate it unless they had more time to do that.