tl;dr:
- When effective altruism is communicated in a nuanced way, it doesn't sound weird.
- Rushing to the bottom line and leaving it unjustified means the person ends up with a good understanding of neither the conclusion nor the reasoning.
- I want newcomers to have a nuanced view of effective altruism.
- I think newcomers understanding only a rushed version of the bottom line, without the reasoning, is worse than them understanding only the very first step of the reasoning.
- I think it's fine for people to go away with 10% of the reasoning. I don't think it's fine for them to go away with the conclusion and 0% of the reasoning.
- I want to incentivize people to communicate in a nuanced way rather than rush to a bottom line they can't justify in the time they have.
Therefore, I think “make EA ideas sound less weird” is much better than no advice.
Imagine a person at MIT comes to an EA society event, has some conversations about AI, and then never comes back. Eventually, they end up working at Google and making decisions about DeepMind's strategy.
Which soundbite do I want to have given them? What is the best one-dimensional message (it's a quick conversation; we don't have time for the two-dimensional explanation) I could possibly leave this person with?
Option 1: "an AI might kill us all" (they come away thinking we believe a Skynet scenario is really likely, and that we have poor reasoning skills, because a war with walking robots is not that plausible)
Option 2: “an AI system might be hard to control and because of that, some experts think it could be really dangerous” (this statement accurately applies to the “accidentally breaks the child’s finger” case and also world-ending scenarios, in my mind at least, so they’ve fully understood my meaning even if I haven’t yet managed to explain my personal bottom line)
I think they will be better primed to make good decisions about safe AI if I focus on conveying my reasoning before I try to communicate my conclusion. Why? My conclusion is not that helpful to a smart person who wants to think for themselves but lacks all the context that makes that conclusion reasonable. If I start with my reasoning, even if I don't take this person all the way to my bottom line, someone else down the road who believes the same thing as me can take them through the next layer up. Each layer of truth matters.
If it sounds weird, it’s probably because I’ve not given enough context for them to understand the truth and therefore I haven’t really done any good by sharing that with them (all I’ve done is made them think I’m unreasonable).
My guess is that this person who came to one EA event and ended up being a key decision-maker at DeepMind is going to be a lot less resistant when they hear about AI alignment in their job if they heard option 2 rather than option 1. Partly because the groundwork for the ideas was better laid. Partly because they trust the "effective altruism" brand more, since they have the impression that the effective altruism people, associated with "AI alignment" (an association that could stick if we keep going the way we've been going), are reasonable people who think reasonable things.
What matters is whether we've conveyed useful truth, not just technically true statements. I don't want us to shy away from communicating about AI, but I do want us to shy away from communicating about it in a confrontational way that is counterproductive to giving someone a very nuanced understanding later on.
I think the advice "make AI sound less weird" is better than no advice because communicating my reasoning well (which won't sound weird, because I'll build it up layer by layer) is more important than communicating my current bottom line as quickly as possible (which leaves an impression of my bottom line with none of the context that makes it meaningful, let alone nuanced and high-fidelity).
PS: I still don't think I've done a good job of laying out the reasoning for my views clearly here, so I'm going to write a post at some point (I don't have time to fix the gaps I see now). It is helpful for you to point out the gaps you see explicitly so I can fill them in future writing if they can be filled (or change my mind if not).
In the meantime, I wanted to say that I’ve really valued this exchange. It has been very helpful for forcing me to see if I can make my intuitions/gut feeling more explicit and legible.
I agree that if the listener interprets “make EA sound less weird” as “communicate all of your reasoning accurately such that it leads the listener to have correct beliefs, which will also sound less weird”, then that’s better than no advice.
I don't think that's how the typical listener will interpret "make EA sound less weird"; I think they would instead come up with surface analogies that sound less weird but don't reflect the underlying mechanisms, which listeners might notice, leading to all the problems you describe.
I definitely don’t think we should just say all of our conclusions without giving our reasoning.
(I think we mostly agree on what things are good to do and we’re now hung up on this not-that-relevant question of “should we say ‘make EA sound less weird’” and we probably should just drop it. I think both of us would be happier with the advice “communicate a nuanced, accurate view of EA beliefs” and that’s what we should go with.)
Note: edited significantly for clarity the next day
Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness.
I know I used the terms “nuanced” and “high-fidelity” first but after thinking about it a few more days, maybe “legibility” more precisely captures what we’re pointing to here?
My hunch that the advice "don't be weird" would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you've very much convinced me that you can avoid sounding weird by simply not communicating any substance. Legibility seems to capture what community builders should do when they sense they are being weird and alienating.
EA community builders should probably stop and reassess when they notice they are being weird; "weirdness" is a useful smoke alarm for a lack of legibility. They should then aim to be more legible. To be legible, they will probably have to pick their battles strategically about which claims they prioritize justifying to newcomers. They are legibly communicating something, but they're probably not making alienating, uncontextualized claims they can't back up in a single conversation.
They are also probably using clear language the people they’re talking to can understand.
I now think the advice “make EA more legible” captures the upside without the downsides of the advice “make EA sound less weird”. Does that seem right to you?
I still agree with the title of the post. I think EA could and should sound less weird by prioritizing legibility at events where newcomers are encouraged to attend.
Noticing and preventing weirdness by being more legible seems important as we get more media attention and brand lock-in over the coming years.
Yeah I’m generally pretty happy with “make EA more legible”.
Cool. I'm curious: how would this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?
I’m curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.
I think my intuitions change when I update towards shorter AI timelines because legibility, and the community-building strategy outlined above, takes longer to pay off. Managing reputation and goodwill seem like good strategies if we have a couple of decades or more before AGI.
If we have time, investing in goodwill and legibility to a broader range of people than the ones who end up becoming immediately highly dedicated seems way better to me.
Legible high-fidelity messages are much more spreadable than less legible messages, but they still take some more time to disseminate. Why? The simple bits sound like platitudes, and the interesting takeaways require too many steps in logic from the platitudes to go viral.
However, word-of-mouth spread of legible messages that require multiple steps in logic still seems like it might be exponential (just with a lower growth rate than simpler viral messages).
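The intuition above can be sketched as a toy model (my own illustration, not from this thread; the `carriers_after` helper and all the numbers in it are made up for the sake of the example): if each hop of a message only survives when the listener follows every step in the chain, a multi-step message still grows exponentially in the number of rounds, just with a smaller growth factor.

```python
def carriers_after(rounds, contacts_per_round, p_follow_one_step, n_steps):
    """Toy word-of-mouth model: expected number of message carriers.

    Each carrier talks to `contacts_per_round` people per round; a hop
    succeeds only if the listener follows all `n_steps` steps of reasoning,
    each followed independently with probability `p_follow_one_step`.
    """
    p_hop = p_follow_one_step ** n_steps      # chance one conversation transmits the message
    growth = 1 + contacts_per_round * p_hop   # per-round growth factor (carriers keep spreading)
    return growth ** rounds                   # exponential in `rounds` whenever growth > 1

# A one-step "viral" soundbite vs. a three-step legible message,
# with identical (invented) contact rates and follow-through odds:
viral = carriers_after(rounds=10, contacts_per_round=3, p_follow_one_step=0.5, n_steps=1)
legible = carriers_after(rounds=10, contacts_per_round=3, p_follow_one_step=0.5, n_steps=3)
# Both curves are exponential, but the multi-step message compounds far more slowly.
```

Under these made-up parameters the legible message still compounds (its growth factor stays above 1), which is the point of the claim: slower, not non-exponential.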
If AI timelines are short enough, legibility wouldn’t matter in those possible worlds. Therefore, if you believe timelines are extremely short then you probably don’t care about legibility or reputation (and you also don’t advise people to do ML PhDs because by the time they are done, it’s too late).
Does that seem right to you?
Idk, what are you trying to do with your illegible message?
If you’re trying to get people to do technical research, then you probably just got them to work on a different version of the problem that isn’t the one that actually mattered. You’d probably be better off targeting a smaller number of people with a legible message.
If you’re trying to get public support for some specific regulation, then yes by all means go ahead with the illegible message (though I’d probably say the same thing even given longer timelines; you just don’t get enough attention to convey the legible message).
TL;DR: Seems to depend on the action / theory of change more than timelines.
Goal of this comment:
This comment fills in more of the gaps I see that I didn’t get time to fill out above. It fleshes out more of the connection between the advice “be less weird” and “communicate reasoning over conclusions”.
Doing my best to be legible to the person I am talking to is, in practice, what I do to avoid coming across as weird/alienating.
- There is a trade-off between contextualizing and getting to the final point.
- We could be in danger of never risking saying anything controversial, so we do need to encourage people to still get to the bottom line after giving the context that makes it meaningful.
- Right now, we seem to often state an insufficiently contextualized conclusion in a way that seems net negative to me: we cause bad impressions, and we do so while communicating points I see as less fundamentally important to communicate.
- Communicating our reasoning/way of thinking seems more important than communicating the bottom line without the reasoning.
- AI risk can often take more than a single conversation to contextualize well enough for it to move from a meaningless topic to an objectionable claim that can be discussed with scepticism but still some curiosity.
- I think we're better off trying to get community builders to be more patient and jump the gun less on the alienating bottom line.
- The soundbite "be less weird" probably does move us in a direction I think is net positive.
I suspect that most community builders will lay the groundwork to more legibly support conclusions when given advice like "get to the point if you can, don't beat around the bush, but don't jump the gun and say something weird without the context the person you are talking to needs to make sense of what you are saying".
I feel like making arguments about things that are true is a bit like sketching out a maths proof for a maths student. Each link in the chain is obvious if you do it well, pitched at the level of the person you are taking through the proof; but if you start with the final conclusion, they are completely lost.
You have to make sure they’re with you every step of the way because everyone gets stuck at a different step.
You get away with stating your conclusion without the proof in maths because there is a lot of trust that you can back up your claim (the worst thing that happens is the person you are talking to loses confidence in their ability to understand maths if you start with the conclusion before walking them through it at a pace they can follow).
We don’t have that trust with newcomers until we build it. They won’t suspect we’re right unless we can show we’re right in the conversation we made the claim in.
They'll lose trust, and therefore interest, very fast if we make a claim that requires at least three months of careful thought to come to a nuanced view on. AI risk takes a tonne of time to develop inside views on. There is a lot of deference, because it's hard to think the whole sequence through for yourself and explore various objections until you feel like it's your view and not just something dictated to you. Deference is weird too (and gets a whole lot less weird when you just admit that you're deferring a bit, and explain what exactly made you trust the person you are deferring to to come to reasonable views in the first place).
I feel like "don't sound weird" ends up translating to "don't say things you can't back up to the person you are talking to". In my mind, "don't sound weird" sounds a lot like "don't make the person you are talking to feel alienated", which in practice means "be legible to the person you are talking to".
People might say much less when they have to make the person they are talking to understand all the steps along the way, but I think that’s fine. We don’t need everyone to get to the bottom line. It’s also often worse than neutral to communicate the bottom line without everything above it that makes it reasonable.
Ideally, community builders don’t go so glacially slowly that they are at a standstill, never getting to any bottom lines that sound vaguely controversial, but while we’ve still got a decent mass of people who know the bottom line and enough of the reasoning paths that can take people there, it seems fine to increase the number of vague messages in order to decrease the number of negative impressions.
I still want lots of people who understand the reasoning and the current conclusions; I just don't think starting with an unmotivated conclusion is the best strategy for achieving this. "Don't be weird", plus some other advice to stop community builders from stagnating and never getting to the point, seems much better than the current status quo.