I worry less about EAs conforming. I think it’s mostly lower-competence people who are tempted to irrationally conform to grab status, while higher-competence people are tempted to irrationally diverge to grab it.[1]
I’m wary of pushing the “go out and explore” angle too much without adequately communicating just how much wisdom you can find in EA/LW. Of course there’s value in exploring far and wide, but there’s something uniquely valuable that you can get here, and I don’t know where else you could cost-effectively get it. I also want to push against the idea that people should prioritise reading papers and technical literature before they read a bunch of wishy-washy LW posts. Don’t Goodhart on the impressive and legible; just keep reading whatever you notice speeds you through the search tree the fastest (minding that you’re still following the optimal tree search strategy).
Except I worry about our implicit social structures sending the message “all the cool people hang around the centrally EA spaces” in a way that doesn’t really support people in actually making these exploratory moves while staying engaged with and encouraged by EA.
Umm, maybe think of them like “decoy prestige”? It could be usefwl to have downwards-legibly competent[2] people who get a lot of attention, because they’ll attract the attention of people at competence levels below them, and this lets higher-competence people congregate without interference from below. Higher-competence people have an easier time discerning people’s true competence around that level, so decoy prestige won’t dazzle them. And it’s crucial for higher-competence people to find other higher-competence people to congregate with, since this fine-tunes their prestige-seeking impulses to optimise more cleanly for what’s good and true.[3]
I suspect a lot of high-competence alignment researchers are suitably embarrassed that their cause has gone mainstream in EA. I’m not of their competence, but I sometimes feel like apologising for prioritising AI simply because it’s so mainstream now. (“If the AI cause is mainstream now, surely I’m competent enough to beat the mainstream and find something better?”)
That is, their competence is legible to people way below that competence level. So even people with very low competence can tell that this person is of higher competence.
Case in point: if I were surrounded by people I judged to be roughly as competent as me at what I do,[4] I wouldn’t be babbling such blatant balderdash in public. Well, I would, because I’m incorrigible, but that’s beside the point.
“competent at what I do” just means they have a particular kind of crazy epistemology that I endorse.[5]
The third level of meta is where all the cool footnotes hang out.
Thanks, I liked all of this. I particularly agree that “adequately communicating just how much wisdom you can find in EA/LW” is important.
Maybe I want it to be perceived more like a degree you can graduate from?
Yes. Definitely. Full agreement there. At the risk of seeming inconsistent, let me quote a former mentor of mine.
(God, I hate the rest of that poem though, haha!)