Thank you for speaking up, even as they again cast doubt: Gregory Lewis supposed that the way to find truth was that “We could litigate which is more likely—or, better, find what the ideal ‘bar’ insiders should have on when to look into outsider/heterodox/whatever work, and see whether what has been presented so far gets far enough along the ?crackpot/?genius spectrum to warrant the consultation.” He entirely ignores the proper ‘bar’ for new ideas: consideration of the details, and refutation of those details. If they cannot refute those details, then they have no defense against your arguments! Yet they claim such a circumstance as their victory, by supposing that some ‘bar’ of opinion-mongering should decide a worthy thought. This forum is very clearly defending its ‘turf’ from outsiders; the ‘community’ here in the Bay Area is similarly cliquish, blacklisting members and then hiding that fact from prospective members and donors.
> the proper ‘bar’ for new ideas: consideration of the details, and refutation of those details. If they cannot refute those details, then they have no defense against your arguments!
Yes, and to be clear: we have very much been working on writing up those details in ways that are hopefully more understandable to AI Safety researchers.
But we are really not working in a context of “neutral” evaluation here, which is why we are not rushing to put those details out onto the Alignment/LW/EA Forum (though many details can already be found across posts on Forrest’s blog).
Thank you too for responding here, Anthony. It feels tough trying to explain this stuff to the people around me, so it helps just to have someone point out what is actually needed to make constructive conversations work here.