Note: edited significantly for clarity the next day
Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness.
I know I used the terms “nuanced” and “high-fidelity” first but after thinking about it a few more days, maybe “legibility” more precisely captures what we’re pointing to here?
My hunch that the advice “don’t be weird” would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you’ve very much convinced me you can avoid sounding weird by just not communicating any substance. Legibility seems to capture what community builders should do when they sense they are being weird and alienating.
EA community builders probably should stop and reassess when they notice they are being weird; “weirdness” is a useful smoke alarm for a lack of legibility. They should then aim to be more legible. To be legible, they’re probably strategically picking their battles on which claims they prioritize justifying to newcomers. They are legibly communicating something, but they’re probably not making alienating, uncontextualized claims they can’t back up in a single conversation.
They are also probably using clear language the people they’re talking to can understand.
I now think the advice “make EA more legible” captures the upside without the downsides of the advice “make EA sound less weird”. Does that seem right to you?
I still agree with the title of the post. I think EA could and should sound less weird by prioritizing legibility at events where newcomers are encouraged to attend.
Noticing and preventing weirdness by being more legible seems important as we get more media attention and brand lock-in over the coming years.
Yeah I’m generally pretty happy with “make EA more legible”.
Cool. I’m curious: how would this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?
I’m curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.
I think my intuitions change when I update towards shorter AI timelines because legibility (the community building strategy outlined above) takes longer to pay off. Managing reputation and goodwill seem like good strategies if we have a couple of decades or more before AGI.
If we have time, investing in goodwill and legibility to a broader range of people than the ones who end up becoming immediately highly dedicated seems way better to me.
Legible high-fidelity messages are much more spreadable than less legible messages, but they still take more time to disseminate. Why? The simple bits sound like platitudes, and the interesting takeaways require too many steps in logic from those platitudes to go viral.
However, legible messages that require multiple steps in logic still seem like they might spread exponentially by word of mouth (just with a lower growth rate than simpler viral messages).
If AI timelines are short enough, legibility wouldn’t matter in those possible worlds. Therefore, if you believe timelines are extremely short then you probably don’t care about legibility or reputation (and you also don’t advise people to do ML PhDs because by the time they are done, it’s too late).
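To make the growth-rate point concrete, here’s a toy sketch. All the numbers (growth rates, starting reach, target audience) are made up purely for illustration, not estimates of anything real; the point is just that a lower exponential growth rate means the payoff arrives much later, which is why short timelines cut against the legibility strategy:

```python
# Toy sketch with made-up numbers: how long a slower-growing but legible
# message takes to reach the same audience as a faster-growing viral one.
import math

initial_reach = 100    # hypothetical number of people reached at the start
viral_rate = 1.0       # hypothetical yearly growth rate of a simple viral message
legible_rate = 0.4     # hypothetical (lower) rate of a legible, multi-step message
target = 1_000_000     # hypothetical audience needed for the strategy to "pay off"

def years_to_reach(target, start, rate):
    """Years until start * exp(rate * t) >= target."""
    return math.log(target / start) / rate

print(f"viral message:   {years_to_reach(target, initial_reach, viral_rate):.1f} years")
print(f"legible message: {years_to_reach(target, initial_reach, legible_rate):.1f} years")
# With these made-up numbers the legible message still gets there,
# but needs ~23 years instead of ~9.
```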
Does that seem right to you?
Idk, what are you trying to do with your illegible message?
If you’re trying to get people to do technical research, then you probably just got them to work on a different version of the problem that isn’t the one that actually mattered. You’d probably be better off targeting a smaller number of people with a legible message.
If you’re trying to get public support for some specific regulation, then yes by all means go ahead with the illegible message (though I’d probably say the same thing even given longer timelines; you just don’t get enough attention to convey the legible message).
TL;DR: Seems to depend on the action / theory of change more than timelines.