One thing I disagree with: the importance of forming inside views for community epistemic health. I think it’s pretty important. E.g. I think that ~2 years ago, the arguments for the long-term importance of AGI safety were pretty underdeveloped; that since then lots more people have come out with their inside views about it; and that now the arguments are in much better shape.
I want to push back against this. The aggregate benefit may have been high, but when you divide it by all the people trying, I’m not convinced it’s all that high.
Further, that’s an overestimate—the actual question is more like ‘if the people who are least enthusiastic about it stop trying to form inside views, how bad is that?’. And I’d both guess that impact is fairly heavy tailed, and that the people most willing to give up are the least likely to have a major positive impact.
I’m not confident in the above, but it’s definitely not obvious.
Thanks, good points. I’m not very confident either way now.