It seems somewhat irresponsible to title this post “every mention of EA in Going Infinite” when it only includes a handful of the many mentions of EA in Going Infinite. Appreciate you clarifying!
burner
Wrong lessons from the FTX catastrophe
Tentative Reasons You Might Be Underrating Having Kids
This post is a bit weak in making its case, but it is blindingly obvious that Helena is a grift, and I’m a bit unimpressed by galaxy-brained reasons (hits-based giving, etc.) for thinking it might be good.
But in the big picture, occasionally a grant is bad. We can’t treat every bad grant as a scandal.
It’s surprising to me that polyamory continues to be such a sacred cow of EA. It’s been highly negative for EA’s public image, and now it seems to be connected to a substantial amount of abuse. There are a number of reasons our priors should suggest that non-monogamous relationships in high-trust, insular communities can easily lead to abuse. It’s always seemed overly optimistic to think EA could avoid these problems. Of course, there have been similar ongoing discussions in the Berkeley Rationalist community for a number of years now.
This seems like one of the most important community issues to reflect on.
From Ben: “After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened.”
Could you comment?
FYI anything you write to Teddy could end up in an article. I suggest you read some of his pieces before engaging. I worry this piece makes him sound more EA than he is, although of course he could be pivoting.
When I have read grants, most have (unfortunately) fallen closer to “This idea doesn’t make any sense” than to “This idea would be perfect if they just had one more thing”. When an application falls into the latter category, I suspect recipients do often get advice.
I think the problem is that most feedback would be too harsh and fundamental—these are very difficult and emotionally costly conversations to have. It can also leave applicants more frustrated and spread low-fidelity advice about what the grantmaker is looking for. A rejection (hopefully) encourages the applicant to read and network more to form better plans.
I would encourage rejected applicants to speak with accepted ones for better advice.
I am very surprised by the warm reception to this post. To my mind, this is exactly the type of rhetoric we should be discouraging on the Forums. It’s insinuating all kinds of scandals
(I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement)
without making any specific allegations or points, which becomes somehow acceptable within the emotional frame of “I am TIRED.” Presumably many other people, including those directly impacted by these things, are tired too, and we need to use reason to adjudicate how we should respond.
I’m very surprised by this. There are a number of anthropological findings which connect monogamous norms to greater gender equality and other positive social outcomes. Recently, arguments along these lines have been advanced by Joseph Henrich, one of the most prominent evolutionary biologists.
Phrases like “EA elevates people” are becoming common, but it is very unclear what they mean. Nick Bostrom created groundbreaking philosophical ideas. Will MacAskill has written extremely popular books and built communities and movements. Sam Bankman-Fried became the richest man under 30 in a matter of months. All of these people have influenced and inspired many EAs because of their actions.
Under any reasonable sense of the word, people are elevating themselves. I think EA is incredibly free from ‘cult of personality’ problems—in fact it’s amazing how quickly people will turn against popular EAs. But in any group, some people are going to get status for doing their work well.
but I am very concerned with just how little cause prioritization seems to be happening at my university group
I’ve heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It’s not surprising to me that most people converge on the most popular position within the broader movement.
Could you (or someone else) actually make the case for “good apologies” (in the sense you outline in this post) that goes beyond PR concerns?
I understand the desire to know what Bostrom really thinks, but the attention on the structural quality of his apology seems completely undue. None of these elements would presumably reveal more about how Bostrom really thinks than his actual apology.
In fact, it seems like if our preference is to understand how Bostrom really feels, your “good apology” approach might take us further away from that! Your emphasis is on appearing “sincere and genuine” which again, fair enough for PR concerns, but presumably we are after some sort of larger reconciliation here that necessitates being honest and forthright?
If an apology were terribly written—but was in fact genuine and sincere—that seems preferable? If a good apology is just a way to “sell forgiveness”, what could the point be beyond PR?
My apologies if I am missing something here, but you seem to be writing a guide for some kind of dishonesty? And if you mean it to be about true honesty, I think this scheme really fails.
I disagree with this. For one, OpenPhil has a higher bar now. There’s a lot of work that needs to be done. ASB and others might already think this was a very bad grant. There’s a cost to dwelling on these things, especially as EA Forum drama rather than a high-quality post mortem.
Influencing the creation of Professor Quirrell in HPMOR and being influenced by Professor Quirrell in HPMOR both seem to correlate with being a bad actor in EA—a potential red flag to watch out for.
Something that is above question or criticism (see here), in this case because discourse is often cast as intolerant or phobic
Thanks, that’s helpful context!
I find it a bit weird—possibly unhelpful—to blend a big picture cause prioritization argument and the promotion of a specific matching campaign.
GiveDirectly, Effective Altruism Australia, EA Aotearoa New Zealand, Every.org, The Life You Can Save
What’s going on with the coauthorship here—multiple organizations wrote this post together? Should this be read as endorsements, or something else?
I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don’t think is a good use of the Forum. For what it’s worth, I haven’t seen your Twitter or anything from you.
I should have emphasized more that there are consistent critics of EA who I don’t think are acting in bad faith at all. Stuart Buck seems to have been right early on a number of things, for example.
Your Bayesian argument may apply in some cases but it fails in others (for instance, when X = EAs are eugenicists).
Just apply Bayes’ rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.
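To make the update rule above concrete, here is a minimal numeric sketch. All probabilities are illustrative assumptions chosen for the example, not claims about the actual events or hypotheses discussed:

```python
def bayes_update(prior: float,
                 p_evidence_given_x: float,
                 p_evidence_given_not_x: float) -> float:
    """Return P(X | evidence) via Bayes' rule."""
    # Total probability of the evidence under both hypotheses.
    p_evidence = (p_evidence_given_x * prior
                  + p_evidence_given_not_x * (1 - prior))
    return p_evidence_given_x * prior / p_evidence

# If the evidence is twice as likely under X as under not-X,
# a 10% prior rises to roughly 18%.
posterior = bayes_update(prior=0.10,
                         p_evidence_given_x=0.2,
                         p_evidence_given_not_x=0.1)
print(round(posterior, 3))  # 0.182
```

Note the condition in the comment above is exactly when the likelihood ratio exceeds 1: if the evidence is equally likely under X and not-X, the posterior equals the prior and no update occurs.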
I also emphasize that there are a few people who I have strong reason to believe are engaged in a “deliberate effort to sow division within the EA movement”, and this was the focus of my comment, publicly evidenced (NB: this is a very small part of my overall evidence) by them “taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees.” I do not think a productive conversation is possible in these cases.
Most of this seems focused on Alice’s experience and allegations. As I understand it, most parties involved—including Kat—believe Chloe to be basically reliable, or at least much more reliable.
Given all that, I’m surprised that this piece does not do more to engage with what Chloe herself wrote about her experience in the original post: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=gvjKdRaRaggRrxFjH