Most of this was about large internal documents on AI safety and strategy that allegedly exist within OpenAI and MIRI.
I agree that people place a lot of trust in MIRI's conclusions based on its supposedly good internal reasoning / the fact that its people are smart, and I think this is bad. However, I think this is pretty much limited to MIRI. I haven't seen anything similar with OpenAI, though of course it's possible.
I agree with everything else you write.