I liked this comment!
In particular, I think the people who are good at “not making EA seem weird” (while still communicating all the things that matter – I agree with the points Rohin is making in the thread) are also (often) the ones who have a deeper (or more “authentic”) understanding of the content.
There are counterexamples, but consider, for illustration, that Yudkowsky’s argument style and the topics he focuses on would seem a whole lot weirder if he wasn’t skilled at explaining complex issues. So, understanding what you talk about doesn’t always make your points “not weird,” but it (at least) reduces weirdness significantly.
I think that’s mostly beneficial: fewer people coming into contact with EA ideas, but hearing them, where they do, from exponents with a particularly deep, “authentic” understanding, seems like a good thing!
I think it is counterproductive for people who don’t understand the argument they are making well enough to put the arguments into plain English to instead parrot off some jargon.
Instead of (just) “jargon” you could also say “talking points.”
tl;dr:
I am not sure that the pressure on community builders to communicate all the things that matter is having good consequences.
This pressure makes people try to say too much, too fast.
Making too many points too fast makes reasoning less clear.
We want a community full of people who have good reasoning skills.
We therefore want to make sure community builders are demonstrating good reasoning skills to newcomers.
We therefore want community builders to take the time they need to communicate the key points.
This sometimes realistically means not getting to all the points that matter.
I completely agree that you could replace “jargon” with “talking points”.
I also agree with Rohin that it’s important not to shy away from getting to the point when it’s possible to make that point in a well-reasoned way.
However, I actually think that less pressure to communicate “all the things that matter” could be quite important for improving the epistemics of people new to the community. At the very least, I think there needs to be less pressure to communicate all the things that matter all at once.
The Sequences are long for a reason. Legible, clear reasoning is slow. I think too much pressure to get to every bottom line in a very short time makes people skip steps. That means we are not only failing to show newcomers what good reasoning processes look like; we are also putting off the people who want to think for themselves and aren’t willing to make huge jumps that skip important parts of the logic.
Pushing community builders to get to all the important key points, with their many bottom lines, may make it hard for newcomers to feel they have permission to think for themselves and make up their own minds. Feeling rushed to a conclusion, and feeling that you must come to the same conclusion as everyone else, will always make clear thinking harder, no matter how important that conclusion is.
If we want a community full of people who have good reasoning processes, we need to create environments where good reasoning processes can thrive. I think this, like most things, is a hard trade-off and requires community builders to be pretty skilled or to have much less asked of them.
If it’s a choice between effective altruism societies creating environments where good reasoning processes can occur and communicating all the bottom lines that matter, I think it might be better to focus on the former. I think it makes a lot of sense for effective altruism societies to be about exploration.
We still need people to execute. I think having AI-risk-specific societies, bio-risk societies, broad-longtermism societies, poverty societies (and many other more conclusion-focused mini-communities) might help make this less of a hard trade-off, especially as the community grows and there is room for more than one effective-altruism-related society on any given campus. It is much less confusing to be rushed to a conclusion when that conclusion is well-labelled from the get-go (and effective altruism societies can then point interested people in the right direction to find out why certain people think certain bottom lines are sound).
Whatever the solution, I do worry that rushing people to too many bottom lines too quickly does not create the community we want. I suspect we need to ask community builders to communicate less (we may need to triage our key points more) in order for them to communicate those key points in a well-reasoned way.
Does that make sense?
Also, I’m glad you liked my comment (sorry for writing an essay objecting to a point made in passing, especially since your reply was so complimentary; clearly succinctness is not my strength, so perhaps other people face this trade-off much less than I do :p).