I’m the chief of staff and also work on strategy at 80k. Before that, I was the initial program officer for global catastrophic risk at Open Phil. Comments here are my own views only, not my present or past employers’, unless otherwise specified.
+1
Indeed, IIRC, EAs tend to be more progressive/left-of-center than the general population. I can’t find the source for this claim right now.
The 2019 EA Survey says:
“The majority of respondents (72%) reported identifying with the Left or Center Left politically and just over 3% were on the Right or Center Right, very similar to 2018.”
I figured some people might be interested in whether the orientation toward longtermism that Michelle describes above is common at EA orgs, so I wanted to mention that almost everything in this post could also be describing my personal experience. (I’m the director of strategy at 80,000 Hours.)
I think this request undermines how karma systems should work on a website. ‘Only people who have engaged with a long set of prerequisites can decide to make this post less visible’ seems like it would systematically prevent posts people want to see less of from being downvoted.
I really like Holly Elmore’s blogpost “Kicking an Addiction to Self-Loathing.”
Most native English speakers from outside of particular nerd cultures would also have no clue what it means.
Fair enough.
Fwiw, the forum explicitly discourages unnecessary rudeness (and encourages kindness). I think tone is part of that and the voting system is a reasonable mechanism for setting that norm. But there’s room for disagreement.
If the original poster came back and edited in response to feedback or said that the tone wasn’t intentional, I’d happily remove my downvote.
I downvoted this. “Please, if you disagree with me, carry your precious opinion elsewhere” reads to me as more than slightly rude and effectively an intentional insult to people who disagree with the OP and would otherwise have shared their views. I think it’s totally reasonable to worry in advance about a thread veering away from the topic you want to discuss and to preempt that with a request to directly answer your question [Edited slightly], and I wouldn’t have downvoted without the reference to other people’s “precious views.”
Lobbying v. grassroots advocacy
This is just a semantic point, but I think you probably don’t want to call what you’re proposing a “lobbying group.” Lobbying usually refers to one particular form of advocacy (face-to-face meetings with legislators), and in many countries[1] it is regulated more heavily than other forms of advocacy.
(It’s possible that in the UK, “lobbying group” means something more general, but it doesn’t in the U.S.)
[1] This is true in the U.S., which I know best. Wikipedia suggests it’s true in the EU but appears less true in the UK.
Who else is working on this?
Here are a couple of small examples of things being done along these lines, though I agree there is little overall:
-Resolve to Save Lives claims to do some advocacy for epidemic preparedness in low-income countries in collaboration with the Global Health Advocacy Incubator. The latter group seems to be hiring an Advocacy Director, though the posting is old, so I wouldn’t be surprised if it’s out of date.
-PATH has done some advocacy to encourage the U.S. government to invest in global health security.
I didn’t actually become a member until after the wording of the pledge changed, but I do vividly remember the first wave of press because all my friends sent me articles showing that there were some kids in Oxford who were just like me.
Learning about Giving What We Can (and, separately, Jeff and Julia) made me feel less alone in the world and I feel really grateful for that.
Hi RandomEA,
Thanks for pointing this out (and for the support).
We only update the ‘Last updated’ field for major updates, not small ones. I think we’ll rename it ‘Last major update’ to make it clearer.
The edit you noticed wasn’t intended to indicate that we’ve changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided we could make this point better by using illustrative figures that are more conservative than 80k’s actual rough guess and made the edit. We’ve tinkered with the wording to make it clearer that these figures are not actual cost-effectiveness estimates.
Also, note that in both cases the paragraph was about hypothetical effectiveness if you only cared about present lives, which is very different from our actual estimate of cost effectiveness.
Hope this helps clear things up.
Not an expert but, fwiw, my impression is that this is more common in CS than philosophy and the social science areas I know best.
I’m very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media and am really glad you’re learning about this.
I think it’s especially dangerous to use this word when talking about high schoolers, particularly given the number of cult and near-cult groups that have arisen in communities adjacent to EA.
Seems reasonable
“People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive”
I haven’t read enough of your original research to know whether this applies in your case, but just flagging that most original research has a much narrower target audience than the summaries/collections, so I’d expect fewer people to find it useful (and a relatively broad survey to be biased against it).
That said, as you know, I think your summaries/collections are useful and underprovided.
This all seems reasonable to me though I haven’t thought much about my overall take.
I think the details matter a lot for “Even among individual researchers who work independently, or whose org isn’t running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys.”
A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it’s often possible to close a survey after a certain number of responses.
A counterargument is that the people who respond earliest might be unrepresentative. But for a lot of purposes, it’s not obvious to me you need a representative sample. “Among the people who are making the most use of my research, how is it useful” can be pretty informative on its own.
[Not meant to express an overall view.] I don’t think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There’s also a risk of survey fatigue if EA researchers all double down on surveys.
I find it off-putting, though I don’t endorse my reaction, and overall I think the time savings mean I’m personally net better off when other people use it.
I think for me, it’s about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that’s a status thing?
Also seems relevant that both 80k and CEA went through YC (though I didn’t work for 80k back then and don’t know all the details).