the proper ‘bar’ for new ideas: consideration of the details, and refutation of those details. If they cannot refute those details, then they have no defense against your arguments!
Yes, and to be clear: we have very much been working on writing up those details in ways hopefully more understandable to AI Safety researchers.
But we are really not working in a context of “neutral” evaluation here, which is why we’re not rushing to put those details out onto the Alignment/LW/EA Forum (many details though can already be found across posts on Forrest’s blog).
Thank you too for responding here, Anthony. It feels tough trying to explain this stuff to people around me, so having someone point out what is actually needed to make constructive conversations work here is helpful.