Per the LW discussion, I suspect you'd fare better spending effort actually presenting the object-level case rather than offering meta-level bulverism to explain why these ideas (whatever they are?) are getting a chilly reception.
Error theories along the lines of "Presuming I am right, why do people disagree with me?" are easy to come by. Suppose Landry's/your work is indeed a great advance in AI safety: then perhaps it is being neglected thanks to collective epistemic vices in the AI safety community. Suppose instead this work is bunk: then perhaps epistemic vice on your part explains your confidence in it (complete with persecution narrative) despite its lack of merit.
We could litigate which is more likely. Or, better, we could work out the ideal "bar" insiders should set for when to look into outsider/heterodox/whatever work (too high, and the existing consensus becomes too entrenched and too many diamonds in the rough are missed; too low, and expert time is squandered sifting through dross), then see whether what has been presented so far gets far enough along the crackpot-to-genius spectrum to warrant the consultation and interpretive labour you assert you are rightly due.
This would be an improvement on the several posts so far that just offer "here are some biases which we propose explain why our work is not recognised". Yet it would still largely miss the point: the "bar" of how receptive an expert community will be is largely a given, and seldom that amenable to protests from those currently screened out that it should be lowered. If the objective is to persuade this community to pay attention to your work, then whether in some platonic sense their bar is "too high" is neither here nor there: you still have to meet it, or else they will keep ignoring you.
Taking your course of action instead has the opposite of the desired effect. The base rates here are not favourable, and extensive "rowing with the ref" whilst basically keeping the substantive details behind the curtain, with a promissory note of "this is great, but you wouldn't understand its value unless you were willing to make an arduous commitment to carefully study why we're right", is a further adverse indicator.
Thanks for the thoughts.
Yes, we will gradually present the object-level arguments. Just not in one go, because it takes time for people to sync up on the definitions and distinctions.