Agreed this would be bad if it happened, but I’m not actually sure that people trust this? I do hear comments like “maybe it was in one of those secret Google docs”, but I wouldn’t really say that those people trust that process.
FWIW, I feel like I’ve heard a fair number of comments suggesting that people basically trust the process, though maybe these became less frequent over time. Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.
I’m glad when things do get published. E.g. Eric Drexler’s Reframing Superintelligence used to be a collection of Google docs.
But I find it hard to say to what extent non-published Google docs are suboptimal, i.e. worse than the alternatives. E.g. to some extent it does seem correct that I give a bit more weight to someone’s view on, say, AI timelines, if I hear that they’ve thought about it enough to write a 200-page document about it. Similarly, there can be good reasons not to publish documents: either because they contain information hazards (though I think that outside of bio many EAs are way too worried about this, and overestimate the effects that marginal publication by non-prominent researchers can have on the world), or because the author’s time is better spent on other things than making these docs publishable.
My best guess is that the status quo is significantly suboptimal and could be improved. But that is based on fairly generic a priori considerations (e.g. “people tend to be more worried about their ‘reputation’ than warranted, and so tend to be too reluctant to publish non-polished documents”) that I could easily be wrong about. In some sense, the biggest problem is that the whole process is so opaque that it is hard to ascertain from the outside how good it is.
It also means that trust in the everyday sense plays a really important role. As a result, people outside EA circles who don’t have independent reasons to trust the people involved (e.g. because of social/personal ties or independent work relationships) won’t give as much epistemic weight to it, and they will largely be correct in doing so. I.e. perhaps the main cost is not to epistemic coordination within EA, but rather to EA’s ability to convince skeptical ‘outsiders’.
Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.
I agree people trust MIRI’s conclusions a bunch based on supposedly good internal reasoning / the fact that they are smart, and I think this is bad. However, I think this is pretty much limited to MIRI.
I haven’t seen anything similar with OpenAI, though of course it is possible.
I agree with all the other things you write.