Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Robi Rahman
(For what it’s worth, I don’t think you’re irrational; you’re just mistaken about Scott being racist and about what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don’t do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don’t do anything about it, then they made the right call and you’d be upset due to mistaken beliefs, but conditional on those beliefs, it isn’t irrational to be upset.)
Thanks, that’s a great reason to downvote my comment and I appreciate you explaining why you did it (though it has gotten some upvotes so I wouldn’t have noticed anyone downvoted except that you mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.
However, you’re incorrect that those factual errors aren’t relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn’t be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.
Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became “EA Adjacent” when Scott Alexander’s followers attacked a journalist for daring to scare him a little—that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.
Scott Alexander isn’t in EA leadership. This is also extremely factually inaccurate—every clause in the part of your comment I’ve italicized is at least half false.
This is actually disputed. While so-called “bird watchers” and other pro-bird factions may tell you there are many birds, the rival scientific theory contends that birds aren’t real.
Birds are the only living animals with feathers.
That’s not true; you forgot about the platypus.
When a reward or penalty is that small, it can be less effective than no incentive at all, sometimes because it replaces an implicit incentive.
In the study, the daycare had a problem with parents showing up late to pick up their kids, forcing the staff to stay late to watch them. The daycare tried to fix this by introducing a small fine for late pickups, but it had the opposite of the intended effect: parents decided they were okay with paying the fine, treating it as a cheap price for extra childcare rather than as a signal that late pickups were wrong.
In this case, if you believe recruiting people to EA does a huge amount of good, you might think that it’s very valuable to refer people to EAG, and there should be a big referral bounty.
From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn’t! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
When I’m talking to non-philosophers, I prefer an “existential risk” framework to a “long-termism” framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it’s non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we’re all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
working on AI x-risk is mostly about increasing the value of the future, because, in his view, it isn’t likely to lead to extinction
Ah yes I get it now. Thanks!
What is maxevas? Couldn’t find anything relevant by googling.
Hope I’m not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
On the current margin, improving our odds of survival seems much more important for the long-term value of civilization than working directly to improve the value of the future. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and which are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival; we will have more time to improve the value of the future after that.
(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it’s invented might be just as front-loaded in terms of setting values as it is in terms of extinction risk.)
stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID
I’m out of the loop; who’s this alleged EA who works at DOGE?
The idea of haggling doesn’t sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative.
Counterpoint: some people are more price-sensitive than typical consumers, and really can’t afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
I think EA’s have the mental strength to handle diverse political views well.
No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don’t. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as racist, or perhaps just downvoting it because of vague right-wing associations they have with the authors.
If you don’t aim to persuade anyone else to agree with your moral framework and take action along with you, you’re not doing the most good within your framework.
(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don’t care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)
embrace of the “Meat-Eater Problem” inbuilt into both the EA Community and its core ideas
Embrace of the meat-eater problem is not built into the EA community. I’m guessing a large majority of EAs, especially the less engaged ones who don’t comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.
I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?
I actually found it more persuasive that buying broilers from a reformed scenario seems to get you both a reduction in pain and a more climate-positive outcome
How did you conclude that? How are the broilers reformed to not be painful?
Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.
Now that the election is over, I’d love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.
Hey! You might be interested in applying to the CTO opening at my org:
https://careers.epoch.ai/en/postings/f5f583f5-3b93-4de2-bf59-c471a6869a81