Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Robi Rahman
Morality is Objective
There’s no evidence of this, and the burden of proof is on people who think it’s true. I’ve never even heard a coherent argument in favor of this proposition that doesn’t assume God exists.
This doesn’t answer the question for people who live in high-income countries and don’t feel envy. Should they abstain? Should they answer about whether they would envy someone in their own position if they were less advantaged?
If you’re someone with an impressive background, you can answer this by asking yourself if you feel that you would be valued even without that background. Using myself as an example, I...
went to a not so well-known public college
worked an unimpressive job
started participating in EA
quit the unimpressive job, studied at a fancy university
worked at high-status ingroup organizations
posted on the forum and got upvotes
Was I warmly accepted into EA back when my resume was much weaker than it is now? Do I think I would have gotten the same upvotes if I had posted anonymously? Yes and yes. So on the question of whether I’m valued within EA regardless of my background, I voted agree.
EA Forum posts have been pretty effective in changing community direction in the past, so the downside risk seems low.
But giving more voting power to people with lots of karma entrenches the position and influence of those who are already prominent under the community’s current direction, so it would be an obstacle to changing that direction through forum posts.
If you think it’s important for forum posts to be able to change community direction, you should be against vote power scaling with karma.
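To make the entrenchment worry concrete, here’s a toy simulation; the vote-weight rule is an invented stand-in, not the forum’s actual formula:

```python
# Toy model of the concern above: if vote weight grows with accumulated karma,
# users who are already high-karma compound their influence over time.
# The weighting rule below is invented for illustration only.

def vote_weight(karma: int) -> int:
    """Assumed rule: one extra point of vote strength per 100 karma."""
    return 1 + karma // 100

karma = {"long-time poster": 1_000, "newcomer": 0}

# Each round, both write an equally good post and receive one upvote from a
# similarly established ally, so the incoming vote's weight tracks their own karma.
for _ in range(20):
    for user in karma:
        karma[user] += vote_weight(karma[user])

print(karma)  # the initial karma gap widens rather than closes
```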
Vote power should scale with karma
@Ben Kuhn has a great presentation on this topic. Relatedly, nonprofits have worse names: see org name bingo
Hey! You might be interested in applying to the CTO opening at my org:
https://careers.epoch.ai/en/postings/f5f583f5-3b93-4de2-bf59-c471a6869a81
(For what it’s worth, I don’t think you’re irrational, you’re just mistaken about Scott being racist and what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don’t do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don’t do anything about it, they made the right call and you’d be upset due to the mistaken beliefs, but conditional on those beliefs, it wasn’t irrational to be upset.)
Thanks, that’s a great reason to downvote my comment and I appreciate you explaining why you did it (though it has gotten some upvotes so I wouldn’t have noticed anyone downvoted except that you mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.
However, you’re incorrect that those factual errors aren’t relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn’t be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.
Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became “EA Adjacent” when Scott Alexander’s followers attacked a journalist for daring to scare him a little—that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.
Scott Alexander isn’t in EA leadership. This is also extremely factually inaccurate—every clause in the part of your comment I’ve italicized is at least half false.
This is actually disputed. While so-called “bird watchers” and other pro-bird factions may tell you there are many birds, the rival scientific theory contends that birds aren’t real.
Birds are the only living animals with feathers.
That’s not true, you forgot about the platypus.
When a reward or penalty is sufficiently small, it can be less effective than no incentive at all, sometimes because it displaces an implicit incentive.
In the study, the daycare had a problem with parents showing up late to pick up their kids, making the daycare staff stay late to watch them. They tried to fix this by introducing a small fine for late pickups, but it had the opposite of the intended effect: parents decided they were okay with paying the fine, and the fine displaced the social pressure to show up on time.
In this case, if you believe recruiting people to EA does a huge amount of good, you might think that it’s very valuable to refer people to EAG, and there should be a big referral bounty.
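As a toy sketch of that logic (every number below is made up, not a real estimate): a token bounty can crowd out the goodwill that already motivates referrals, while a large one clears that bar and is still cheap relative to the value of a counterfactual attendee.

```python
# Stylized crowding-out model; all numbers are hypothetical illustrations.
IMPLICIT_MOTIVATION = 50     # $-equivalent of "I refer friends because I care"
VALUE_PER_ATTENDEE = 5_000   # assumed altruistic value of one marginal EAG attendee

def effective_motivation(bounty: float) -> float:
    """Motivation to refer once a bounty is announced (stylized)."""
    if bounty == 0:
        return IMPLICIT_MOTIVATION   # no bounty: goodwill still does the work
    return bounty                    # with a bounty: referring becomes a priced transaction

for bounty in (0, 10, 1_000):
    print(f"bounty ${bounty:>5}: effective motivation ${effective_motivation(bounty):.0f} "
          f"(worthwhile while well under ${VALUE_PER_ATTENDEE} per counterfactual attendee)")
```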
From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn’t! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
When I’m talking to non-philosophers, I prefer an “existential risk” framework to a “long-termism” framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it’s non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we’re all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
working on AI x-risk is mostly about increasing the value of the future, because, in his view, it isn’t likely to lead to extinction
Ah yes I get it now. Thanks!
What is maxevas? Couldn’t find anything relevant by googling.
Hope I’m not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies that I expect will be invented soon and that are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future afterwards.
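Here’s a minimal expected-value sketch of that argument; the numbers and functional forms are invented purely for illustration:

```python
# Toy two-period model; every number here is made up.
# Period 1: dangerous technologies arrive, so extinction risk is high and
#           responsive to effort. Period 2: survivors have lots of time left
#           to improve the value of the future.

def expected_value(effort_on_survival: float) -> float:
    """Long-run EV given the fraction of today's effort spent on survival (0 to 1)."""
    p_survive = 0.5 + 0.4 * effort_on_survival      # assumed: effort moves 0.5 -> 0.9
    value_work_now = 10 * (1 - effort_on_survival)  # value gains from today's remaining effort
    value_work_later = 100                          # value gains still available after the risky period
    return p_survive * (value_work_later + value_work_now)

for effort in (0.0, 0.5, 1.0):
    print(f"effort on survival = {effort:.1f} -> EV = {expected_value(effort):.1f}")
```

Under these made-up numbers, shifting today’s effort toward survival dominates, because the value-improvement work is still available afterwards while the risk reduction is not.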
(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it’s invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)
stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID
I’m out of the loop, who’s this allegedly EA person who works at DOGE?
No, that wouldn’t prove moral realism at all. That would just show you and a bunch of aliens happen to have the same opinions.