Are you happy with where EA as a movement has ended up? If you could go back and nudge its course, what would you change?
HenryStanley
Promote formal diversity and inclusion programs
I’m sceptical; diversity programs often don’t work (Google spent $300m+ on diversity programs and didn’t move the needle) and in many cases reduce diversity.
This seems to be “not even wrong”—FTX’s business model isn’t and never was in question. The issue is Sam committing fraud and misappropriating customer funds, and there being a total lack of internal controls at FTX that made this possible.
I guess I don’t even really understand her relevance. Fully a third of the TIME article is about her mediation in an EA house, and makes her bad behaviour out to be emblematic of problems at the core of EA, but she’s… just some random person, right?
From some online digging: she’s listed as an attendee at EA Global 2016. She appeared on the Clearer Thinking podcast in 2021. She’s never posted on the EA Forum or LessWrong, at least not under her own name that I can find. Her relationship with EA seems at the most to be very, very slight. Am I missing something about her relevance in this whole thing?
Thanks for putting this together. I haven’t had a chance to go through your cost-effectiveness estimate in detail, but I do plan to. However:
I would attribute XR 10-50% of the credit for shifting the previously agreed net-zero date from 2050 to 2030, due to their Overton Window-shifting demand of net-zero by 2025 and huge popularity in the UK
YouGov compiles a list of famous UK charities and their popularity; XR is the second-most disliked charity on the list (38% of those surveyed say they dislike it). The only more-disliked charity is the far-right English Defence League. The majority of Britons are against XR’s protests. If your estimate of 10–50% is based partly on them being popular, I would view that as suspect.
(As an aside, I think there could have been reputational risks to EA if we had publicly endorsed XR at the height of their power. Partly this is down to XR’s unpopularity and the divisiveness of their protest tactics, but also because their poor epistemic practices could have reasonably led others to question our own.)
Thoughtful post!
I don’t agree with your analysis in (3) - neglectedness to me is asking not ‘is enough being done’ but ‘is this the thing that can generate the most benefit on the margin’.
For climate change it seems most likely not; hundreds of billions of dollars (and likely millions of work-years) are already spent every year on climate change mitigation (research, advocacy, or energy subsidies). The whole EA movement might move, what, a few hundred million dollars per year? Given the relatively scarce resources we have, both in time and money, it seems like there are places where we could do more good (the whole of the AI safety field has only a couple hundred people IIRC).
Huge congratulations on the book!
My question isn’t really related – it was triggered by the New Yorker/Time pieces and hearing your interview with Rob on the 80,000 Hours podcast (which I thought was really charming; the chemistry between you two comes across clearly). Disregard if it’s not relevant or too personal or if you’ve already answered elsewhere online.
How did you get so dang happy?
Like, in the podcast you mention being one of the happiest people you know. But you also talk about your struggles with depression and mental ill-health, so you’ve had some challenges to overcome.
Is the answer really as simple as making mental health your top priority, or is there more to it? Becoming 5–10x happier doesn’t strike me as typical (or even feasible) for most depressives[1]; do you think you’re a hyper-responder in some regard? Or is it just that people tend to underindex on how important mental health is and how much time they should spend working at it (e.g. finding meds that are kinda okay and then stopping the search there instead of persisting)?
How should talented EA software engineers best put their skills to use?
I remember going to a ‘fireside chat’ at EAGxOxford a few years ago—the first such conference I’d been to. The topic was general wellbeing amongst EAs. Hearing Will and the other participants talk candidly about difficulties they’d faced was very humbling and humanising.
I don’t think we should necessarily shy away from such questions.
Much of this argument could be short-circuited by pulling apart what Scott means by ‘eugenics’ - it’s clear from the context (missing from the OP’s post) that he’s referring to liberal eugenics, which argues that parents should have the right to have some sort of genetic choice over their offspring (and has almost nothing in common with the coercive “eugenics” to which the OP refers).
Liberal eugenics is already widespread, in a sense. Take embryo selection, where parents choose which embryo to bring to term depending on its genetic qualities. We’ve had chorionic villus sampling to check an embryo for Down syndrome for decades; it’s commonplace.
Just dropping the word “eugenics” again and again with no clarification or context is very misleading.
“What about eugenics? Do I support eugenics? No, not as the term is commonly understood.”—This is just not a useful thing to mention in an apology about racism, or at least, not in this way
I actually think this was quite reasonable. He’s a bioethicist, after all – ‘eugenics’ has a bunch of different meanings in that field and it’s important to distinguish between them
My response to (b): the word is probably beyond rehabilitation now, but I also think that people ought to be able to have discussions about bioethics without having to clarify their terms every ten seconds. I actually think it is unreasonable of someone to skim someone’s post on something, see a word that looks objectionable, and cast aspersions over their whole worldview as a result.
Reminds me of when I saw a recipe which called for palm sugar. The comments were full of people who were outraged at the inclusion of such an exploitative, unsustainable ingredient. Of course, they were actually thinking of palm oil (palm sugar production is largely sustainable) but had just pattern-matched ‘palm’ as ‘that bad food thing’.
An open letter from 500 of ~700 OpenAI employees to the board, calling on them to resign (also on The Verge).
Suggests there’s an enormous amount of bad feeling about the decision internally. It also seems like a bad sign that the board was unwilling to provide any ‘written evidence’ of wrongdoing, though maybe something will appear in the coming days.
But all told it looks pretty bad for EA. Seems like there’s an enormous backlash online—initially against OpenAI for firing everyone’s favourite AI CEO, and now against “EA” “woke” “decelerationist” types.[1][2]
It’s also seemed to trigger a flurry of tweets from Nick Cammarata, saying that EAs are overwhelmingly self-flagellating and self-destructive and that EA caused him and his friends enormous harm. I think his claims are flatly wrong (though they may be true for him and his friends), and some of the replies seem to agree, but it has 500K views as I publish.
Seems like the whole episode (combined with at least one prominent EA seemingly saying it’s emblematic of something dreadful and toxic) has the potential to cause a lot of reputational damage, especially if the board chooses not to clarify its actions (although it’s possibly too late for that).
Of note: “ACE is not able to share any additional information about any of the anonymous allegations”, and yet ACE turned down GFI’s offer to investigate the complaints further:
GFI would be happy to participate fully in an investigation of the complaints to better understand and address them, and we offered to hire an external investigator. ACE declined
Which makes it sound as though GFI were willing to make efforts to resolve/investigate these anonymous complaints but ACE were not willing to pursue this.
As Pablo noted, concerns over the uncertain impact of cell-cultured products aren’t new, so it would be surprising if that was the real reason GFI was stripped of their title. Feels like ACE is burying the lede here.
Larks’ view that they “do not place any value on diversity.”
I assume Larks means ‘racial diversity’ in the context of this thread (and based on their comment, which talks about increasing diverse viewpoints through other means).
You might be aware of this, but Lant Pritchett largely agrees with your criticisms at the end of the piece—that the focus on RCTs isn’t likely to be helpful in finding interventions that accelerate economic development.
On a meta level, I’m surprised by how unpopular Sjlver’s and DukeGartzea’s comments are in this discussion relative to others’.
For me it was seeing arguments made from emotion (“It is very clear that violence against men is less of an issue than violence against women”, no evidence provided) when responding to comments that contained data on men being the majority of victims of violence. When challenged they performed a bait-and-switch by offering stats for sexual assault (which is indeed more common in women, and a deeply serious issue, but is a subset of assault generally).
Agreed that FGM is horrifying beyond belief. But the flippancy from Sjlver around male circumcision and its purported sexual benefits to men (which are not backed by the evidence), accompanied by a winky face, was enough to earn a downvote from me.
Maybe I’m missing something, but it seems like the wiki isn’t labelled as such—as in, there isn’t a part of the site called the ‘wiki’. There’s also the ‘tags portal’ which refers to the ‘EA Forum Wiki’, but as I understand it that page essentially is the wiki. The language is confusing.
Should there be a section of the site called the ‘wiki’ that lists all these pages? Or maybe even consider renaming tags to ‘wiki’, so that posts on the forum can be ‘tagged’ with a wiki article.
(The URLs for tags should probably more conventionally be in the format `/tags/<tagname>`, not `/tag/<tagname>`. Going to `/tag` gives a 404.)
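One low-effort fix would be a path-normalising redirect. A minimal sketch, assuming hypothetical route names (these are not the forum’s actual routes):

```typescript
// Hypothetical sketch: map legacy /tag/<tagname> paths onto /tags/<tagname>,
// and send a bare /tag (which currently 404s) to the tags portal.
// The path conventions here are assumptions, not the forum's real routing.
function normalizeTagPath(path: string): string {
  if (path === "/tag" || path === "/tag/") {
    return "/tags"; // bare /tag points at the portal instead of a 404
  }
  const match = path.match(/^\/tag\/(.+)$/);
  return match ? `/tags/${match[1]}` : path; // leave non-tag paths untouched
}
```

A helper like this could run server-side as a 301 redirect, so old `/tag/<tagname>` links keep working while the canonical form becomes `/tags/<tagname>`.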
Even though ‘utilitarianism’ gets several times the search traffic of terms like ‘effective altruism,’ ‘givewell,’ or ‘peter singer’, there’s currently no good online introduction to utilitarianism. This seems like a missed opportunity.
What similar gaps in easily-accessible EA topics do you think exist?
(I think Rob Wiblin’s now-archived effective altruism FAQ was the best intro to EA around—much better than anything similar offered ‘officially’. I’ve also toyed with writing up some of David Pearce’s work in a more accessible format.)
(I’m sorry your experience has been so bad.)
It feels like there’s a motte and bailey here.
Motte: powerful men who wield control over EA money shouldn’t use that power for sexual gain. Baileys, as I see them: EAs shouldn’t get into relationships with one another, we should implement strict rules to enforce this, women who are “redpilled” have basically been brainwashed by polyamorous EAs, EAs sleeping together somehow contributed to the FTX debacle(?).
Your point about Title IX seems especially strange—as I understand it Title IX has led to universities dealing with sexual misconduct claims internally, the opposite of your proposal to have the police deal with them (which I totally agree with).