Hey Michael, sorry I am slightly late with my comment.
To start, I broadly agree that we should not be misleading about EA in conversation; however, my impression is that this is not a large problem (although we might have very different samples).
I am unsure where I stand on moral inclusivity/exclusivity, although, as I discuss later, I think this is not actually a major problem, as most people do not have a set moral theory.
I am wondering what your ideal inclusive effective altruism outreach looks like?
I am finding it hard to build up a cohesive picture from your post and comments, and I think some of your different points don’t quite gel together in my head (or at least not in an inconvenient possible world).
You give an example of this: beginning a conversation with global poverty before transitioning to explain the diversity of EA views by:
Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn’t accidentally wipe itself out, etc.
For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk in which he said something like (can’t remember exactly): “some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it’s probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves.” At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.
However, trying to make this match the style of an event a student group could actually run, it seems like the closest match (other than a straightforward intro to EA event) would be a talk on effective global poverty charity, followed by an addendum at the end on EA being broader. (I think this is due to a variety of practical concerns, such as there being far more good speakers and big names in global poverty, and it providing many concrete examples of how to apply EA concepts, etc.)
I am, however, skeptical that an addendum at the end of a talk would create nearly as strong an impression as the subject matter of the talk itself, and people would still leave with a much stronger impression of EA as being about global poverty than e.g. x-risk.
You might say a more diverse approach would be to have talks etc. roughly in proportion to what EAs actually believe is important: so if, to make things simple, a third of EAs thought global poverty was most important, a third x-risk, and a third animal suffering, then a third of the talks should be on global poverty, a third on x-risk, etc. Each of these could then end with this explanation of EA being broader.
However, if people’s current perception that global poverty events are the best way to get new people into EA is in fact right (at least in the short term), whether through better attendance or better conversion ratios, this approach could still lead to the majority of new EAs’ first introduction to EA being through a global poverty talk.
Due to the previous problem of the addendum not really changing people’s impressions enough, we could still end up in the situation you say we should want to avoid, where:
People should not feel surprised about what EAs value when they get more involved in the movement.
I am approaching this all more from the student group perspective, and so don’t have strong views on the website stuff, although I will note that my impression was that 80k does a good job of being inclusive, and GWWC’s issue is more a lack of updates than anything else.
One thing you don’t particularly seem to be considering is that almost all people don’t actually have strongly formed moral views that conform to one of the common families (utilitarianism, virtue ethics, etc.), so I doubt (but could be wrong, as there would probably be a lot of survivorship bias in this) that a high percentage of newcomers to EA feel excluded by the implicit assumptions that might often be made, e.g. that future people matter.
Ah ok, I think I generally agree with your points then (that intro events and websites should be morally inclusive and explain, to some degree, the diversity of EA). My current impression is that this is not much of a problem at the moment. From talking to people working at EA orgs and reading the advice given to students running intro events, I think people do advocate for honesty and moral inclusiveness, and when/if it is lacking, this is more due to a lack of time or honest mistakes than to conscious planning. (Although possibly we should dedicate much more time to it, to ensure it is never neglected?)
In particular, I associate the whole ‘moral uncertainty’ thing pretty strongly with EA, and especially with CEA and GWWC (though this might just be due to Toby and Will’s work on it), which strikes fairly strongly against part 3 of your main post.
How much of a problem do you think this currently is? The title and tone (use of ‘plea’, etc.) of your post make me think you feel we are currently in pretty dire straits.
I also think that generally student-run talks (and not specific intro to EA events) are the way most people initially hear about EA (although I could be very wrong about this), and so the majority of the confusion about what EA is really about would not get addressed by people fully embracing the recommendations in your post. (Although I may just be heavily biased towards how the EA societies I have been involved with have worked.)