“It now feels to me like the systematic, weighted-factor-model approach we used for project research wasn’t the best choice. I think that something more focused on getting and really understanding the views of central AI x-risk people would have been better.”
I’d be interested in a bit more detail about this if you don’t mind sharing? Why did you conclude that it wasn’t a great approach, and why would better understanding the views of central AI x-risk people help?
Like a lot of this post, this is a bit of an intuition-based ‘hot take’. But some quick things that come to mind:
i) iirc our initial intuitions didn’t seem very different from the weighted-factor-model results,
ii) when we filled in the weighted factor model I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results),
iii) I came to believe more strongly that it just matters a lot that central AI x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink), so understanding their views is very helpful,
iv) having a deep understanding of the project and the space just seems very important for figuring out what, if anything, should be done and what kinds of profiles might be best for the potential founders.