Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.
My 2 cents:
> 2 is complicated—when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.
Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question). We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.
> I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either—I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.

The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".
I don't know if it makes any sense; really curious to hear your thoughts, as you have certainly thought about this more than I have.
Thanks, Lorenzo!

> I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Agree about the "not holding water"; I was trying to say that "addresses cruxes you don't have" might look similar to this bad thing, but I'm not totally sure that's true.
I disagree about getting people to join your group—that definitely seems like an instrumental goal, though definitely "get the relevant people to join your group" is more the thing—but different people might have different views on how relevant they need to be, or what their goal with the group is.
> Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).

I kind of agree here; I think there are things in EA I'm not particularly uncertain of, and while I'm open to being shown I'm wrong, I don't want to pretend more uncertainty than I have.
> The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.

I've definitely heard that frame, but it honestly doesn't resonate for me. I think some people are wrong about what values are right and arguing with me sometimes convinces them of that. I've definitely had my values changed by argumentation! Or at least values on some level of abstraction—not on the level of solipsism vs altruism, but there are many layers between that and "just an empirical question".
> I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".

I incorporate an inside view on my values—if I didn't think they were right, I'd do something else with my time!