I’d fund Apart Research significantly less (maybe $50k?) and not fund the debate (also because I’ve updated away from public outreach as a valuable strategy).
What caused this update? Perhaps I just need to listen to the talk linked below it, but would be interested if you had any more pointed thoughts to share.
I used to not actually believe in heavy-tailed impact. On some level I thought that early rationalists (and to a lesser extent EAs) had “gotten lucky” in being way more right than academic consensus about AI progress. And I thought on some gut level that e.g. Thiel and Musk and so on kept getting lucky, because I didn’t want to picture a world in which they were actually just skillful enough to keep succeeding (due to various psychological blockers).
Now, thanks to dealing with a bunch of those blockers, I have internalized to a much greater extent that you can actually be good, not just lucky. This means that I’m no longer interested in strategies that involve recruiting a whole bunch of people and hoping something good comes out of it. Instead I am trying to target outreach precisely at the very best people, without compromising much.
Relatedly, I’ve updated that the very best thinkers in this space are still disproportionately the people who were around very early. The people you need to soften/moderate your message to reach (or who need social proof in order to get involved) are seldom going to be the ones who can think clearly about this stuff. And we are very bottlenecked on high-quality thinking.
(My past self needed a lot of social proof to get involved in AI safety in the first place, but I also “got lucky” in the sense of being exposed to enough world-class people that I was able to update my mental models a lot—e.g. watching the OpenAI board coup close up, various conversations with OpenAI cofounders, etc. This doesn’t seem very replicable—though I’m trying to convey a bunch of the models I’ve gained on my blog, e.g. in this post.)
Still working my way through the talk and post mentioned, so pardon the tardiness, but does that mean you expect the highest-quality talent will naturally find its way to the field?
I suppose I see a tension between “outreach only to the best” and generally walking away from outreach. E.g. do the fellowships seem like a reasonable bet to you now that they’re super competitive and raising their bar, or are they still too general in scope and we should instead be doing something like running an exclusive side event at NeurIPS?
Put more succinctly: should we be raising the bar for the quality of talent reached, or working to pivot outreach to those who already show strong signs of success in relevant fields?
Helpful updates though, thanks for taking the time to share them.