Is “EA aligned” a useful phrase? (Yes) Polis survey and results
Tl;dr
Someone tweeted about this so I ran a small poll
Most people think “EA aligned” is a useful shorthand for agreement across a range of useful norms/behaviours
Those who are less engaged in social activities or who care less about x-risk are much more likely to feel they aren’t “ingroup” enough
Most people thought the phrase “aligned” was worse than “EA aligned”
People were uncertain if “mission-aligned” was a normal phrase among other subcultures
Here is the poll, run on Pol.is (a tool featured on an 80k podcast). 63 people have already voted. Vote on it: https://pol.is/6ktynyaffe
Try to write comments that create useful cruxes
There are currently two main clusters:
Welfare-maximising utilitarian EAs who think the phrase is fine
Other kinds of EAs, and non-EAs, who don’t like it. This group is characterised by being less part of the social scene and worrying that they aren’t ingroup enough.
Then I give some context to this discussion
Current survey results (n=104, not random or representative)
Full results here: https://pol.is/report/r7ksdnnjzbsmnewa4yevm
A nice graph showing the 4 big groups
First, we’ll look at the things everyone thought, then look at the four subgroups A to D. There are some people in none of the subgroups.
Majority
As you can see, most people think “EA aligned” is a pretty useful phrase. They mostly think that EA should be welcoming to people with different “vibes” (energies, modes of being, social presentations, non-central beliefs). Most people didn’t feel they had been dismissed for “not being EA enough”.
Group A
Group A is characterised by calling themselves EAs. They care about x-risk and they don’t worry about not being ingroup enough.
Group B
Group B don’t work in EA fields and don’t like “EA aligned” as a phrase.
Group C
Group C is characterised by not liking “EA aligned”, working in an EA field, and having heard people describe others as “not aligned”.
Group D
Group D is characterised by worrying they are not ingroup enough and thinking EA is a bit culty. They are often involved in EA social life.
Tentative Conclusions
Do these current clusters hold? I guess they will (70%), but I don’t know, please answer and we can see https://pol.is/6ktynyaffe
A big chunk of those who wouldn’t identify as EAs work in EA fields. It seems potentially valuable that they aren’t anxious about their place due to “not being EA enough”. And I mean potentially valuable in the $5mn - $50mn range.
Context
Someone tweeted that they didn’t like it when EAs used the phrase “EA aligned”. It got 21 retweets and quote tweets and a number of comments, so seemingly people thought it was notable.
As we can see above, it’s not clear that the phrase itself is that relevant but it does speak to a deeper sense among some EAs that they aren’t “EA enough”. Can you think of a way to fix this?
If you thought this was interesting please do the poll: https://pol.is/6ktynyaffe
Polis seems cool, especially after hearing about it on Audrey Tang’s podcast. But I found it frustrating to have to pick ‘agree’, ‘disagree’, or ‘unsure’. Some claims I really endorsed, while for others I felt the claim didn’t represent my opinion but I landed marginally closer to ‘agree’ than any other option. Both cases are treated identically in Polis. How much more difficult would it be to build qualified agreement into Polis? The simplest case would be a Likert scale (e.g. strongly disagree to strongly agree); more complex would be two 100-point scales, one for agreement and one for confidence. Maybe that’d make analysis intractable, but I dunno.
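A minimal sketch of why graded votes needn’t break the analysis. Polis clusters participants from their vote matrix (it actually uses dimensionality reduction plus k-means, so this is illustrative only); here we just compare participants with cosine similarity. The names and vote values are hypothetical, and Likert scores (e.g. -2..2) slot into the same maths that the current -1/0/1 votes use:

```python
import math

# Hypothetical vote matrix: rows = participants, columns = statements.
# Polis records agree/pass/disagree as 1/0/-1; a Likert scale would
# simply widen the range (here -2..2) without changing the maths.
votes = {
    "alice": [2, 1, -2],
    "bob":   [1, 2, -1],
    "carol": [-2, -1, 2],
}

def cosine(u, v):
    """Cosine similarity between two participants' vote vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_ab = cosine(votes["alice"], votes["bob"])    # high: likely same cluster
sim_ac = cosine(votes["alice"], votes["carol"])  # negative: opposite cluster
```

Graded votes would, if anything, give the clustering more signal to work with; the cost is on the participant side (more effort per statement), not in the analysis.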
I don’t think this is the key bottleneck. I think Pol.is is just too hard to use and get feedback from.
I’d love to see Polis polls like this as a first-class forum feature. Imagine what we’d know about each other’s beliefs if we always got clusters like this. I imagine it would allow us to have much better discussions!