Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc. at https://twitter.com/ryancareyai.
Is this very different from $100k/yr of GDP/cap adjusted for purchasing power differences?
Any updates on this, now that a couple of years have passed? Based on the website, I guess you decided not to hire a chair in the end? Also, was there only $750k granted in 2025?
It would do your readers a service to choose a title that explains what your post is arguing.
Another relevant comment:
Overall a nice system card for Opus 4! It’s a strange choice to contract Apollo to evaluate sabotage, get feedback that “[early snapshot] schemes and deceives at such high rates that we advise against deploying this model....”, and then not re-contract Apollo for the final evals.
I think we should keep our eye on the most important role that online EA (and adjacent) platforms have played historically. Over the last 20 years, there have always been one or two key locations for otherwise isolated EAs and utilitarians to discover like minds online, get feedback on their ideas, and then become researchers or real-life contributors. Originally it was (various incarnations of) Felicifia, and then the EA Forum. The rationalist community benefited in similar ways from the extropians mailing list, SL4, Overcoming Bias and LessWrong. The sheer geographical coverage and the element of in-depth intellectual engagement aren’t practically replaceable by other community-building efforts.
I think that fulfilling this role is a lot more important than growing the EA community, or the other goals that the EA Forum might have, and that it is worth doing until a better venue comes along. Currently, I don’t think a better venue exists. I don’t think r/effectivegiving or LessWrong would be a great successor. You could make a case for Substack+Twitter, but that may flip to something else in a few years’ time; how people want to connect online can change completely on that kind of timescale. Overall, I think it is important to keep things running for the next 5-10 years as the future of EA and the future of online discussion declare themselves.
Of course, this role could be performed without a lot of new technology.
The other thing I wonder is: if the online team stopped stewarding the EA Forum’s content, would it really turn into a mere bulletin board? I’m not so sure. I can imagine that plenty of people might continue to use the Forum to discuss EA matters and to post original research. If so, then this might be another way to cut costs with less change to the forum’s core role, compared to declaring it a bulletin board or moving conversation to a different platform.
Nice, I’ll look forward to reading this!
How is EAIF performing in the value proposition that it provides to funding applicants, such as the speed of decisions, responsiveness to applicants’ questions, and applicants’ reported experiences? Historically your sister fund was pretty bad to applicants, and some were really turned off by the experience.
I guess a lot of these faulty ideas come from the role of morality as a system of rules for putting up boundaries around acceptable behaviour, and for apportioning blameworthiness more so than praiseworthiness. Similar to how the legal system usually gives individuals freedom so long as they’re not doing harm, our moral system mostly speaks to harms (rather than benefits) from actions (rather than inaction). By extension, the basis of the badness of these harms has to be a violation of “rights” (things that people deserve not to have done to them). Insofar as morality serves as a series of heuristics for people to follow, having a negativity bias and an action bias is not necessarily wrong. It causes problems, however, if this distorted lens is used to make claims about intrinsic right and wrong, or to support the idea that non-existence is an ideal.
Another relevant dimension is that the forum (and Groups) are the most targeted to EAs, so they will be most sensitive to fluctuations in the size of the EA community, whereas 80k will be the least sensitive, and Events will be somewhere in-between.
Given this and the sharp decline in applications to events, it seems like the issue is really a decrease in the size of, or enthusiasm in, the EA community, rather than anything specific to the forum.
I’m sure I have some thoughts, but to begin with, it would be helpful for understanding what’s going on if the dashboard told us how 2024 went for the events and groups teams.
Worth noting that although high EA salaries increase the risk to EA organisations, they reduce risk to EA individuals, because people can spend less than their full salary, thereby saving for a time when EA funding dries up.
I think the core issue is that the lottery wins you government dollars, which you can’t actually spend freely. Government dollars are simply worth less, to Pablo, than Pablo’s personal dollars. One way to see this is that if Pablo could spend the government dollars on the other moonshot opportunities, then it would be fine that he’s losing his own money.
So we should stipulate that after calculating abstract dollar values, you have to convert them, by some exchange rate, to personal dollars. The exchange rate simply depends on how much better the opportunities are for personal spending, versus spending government money.
The fact that opportunities can get larger than your budget size seems not to be the core issue, for the reason that you mention: at realistic sizes of opportunity, it is possible to instead buy a lottery ticket for a chance at the opportunity.
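To put the exchange-rate idea in symbols (the notation here is mine, just to illustrate the reasoning above): suppose a ticket costs $c$ personal dollars, pays out $G$ government dollars with probability $p$, and a government dollar is worth only a fraction $r$ (with $0 < r < 1$) of a personal dollar to the person holding it. Then the lottery is worth buying only when

$$p \cdot r \cdot G > c.$$

So the usual expected-value comparison still applies, just after converting the prize into personal-dollar terms; the lower the exchange rate $r$, the less attractive the government-dollar prize.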
Also Nick Bostrom, Nick Beckstead, Will MacAskill, and Ben Todd, some of whom have been lifelong academics.
Probably different factors in different cases.
It sounds like you would prefer the rationalist community prevent its members from taking taboo views on social issues? But in my view, an important characteristic of the rationalist community, perhaps its most fundamental, is that it’s a place where people can re-evaluate the common wisdom, with a measure of independence from societal pressure. If you want the rationalist community (or any community) to maintain that character, you need to support the right of people to express views that you regard as repulsive, not just the views that you like. This could be different if the views were an incitement to violence, but proposing a hypothesis for socio-economic differences isn’t that.
In my view, what’s going on is largely these two things:
[rationalists etc] are well to the left of the median citizens, but they are to the right of [typical journalists and academics]
Of course. And:
biodeterminism… these groups are very, very right-wing on… eugenics, biological race and gender differences etc.-but on everything else they are centre-left.
Yes, ACX readers do believe that genes influence a lot of life outcomes, and favour reproductive technologies like embryo selection, which are right-coded views. These views are actually not restricted to the far-right, however. Most people will choose to have an abortion when they know their child will have a disability, for example.
Several of your other hypotheses don’t ring true to me. I think:
People aren’t self-deceiving about their own politics very much. They know which politicians and intellectuals they support, and who they vote for.
Rationalist leadership is not very politically different from the rationalist membership.
Sexual misbehaviour doesn’t change perceived political alignment very much.
The high % of male rationalists is at most a minor factor in the difference between perceived and actual politics.
This was just a “where do you rate yourself from 1-10” type question, but you can see more of the questions and data here.
I think the trend you describe is mostly an issue with “progressives”, i.e. “leftists”, rather than an issue for all those left of centre. And the rationalists don’t actually lean right in my experience; they average more like anti-woke and centrist. The distribution in the 2024 ACX survey below has perhaps a bit more centre-left and a bit less centre and centre-right than the rationalists at large, but not by much, in my estimation.
There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened.
I tend to think EA did. Back in September 2023, I argued:
EA contributed to a vast financial fraud, through its:
People. SBF was the best-known EA, and among the earliest 1% of EAs. FTX’s leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA’s main leader, and another intellectual leader of EA.
Resources. FTX had some EA staff and was funded by EA investors.
PR. SBF’s EA-oriented philosophy on giving and his purported frugality served as cover for his unethical nature.
Ideology. SBF apparently had an RB ideology, as a risk-neutral act-utilitarian who argued on Felicifia, a decade ago, that stealing was not in-principle wrong. In my view, his ideology, at least as he professed it, could best be understood as an extremist variant of EA.
Of course, you can argue that contributing (point 1) people-time and (2) resources is consistent with us having just been victims, although I think that glosses over the extent to which EA folks at FTX had bought into Sam’s vision, and the extent to which folks at FTXFF may have had milder lapses in judgment. And we could regard (3) the PR issue as minor. But even so, (4) the ideology is important. FTX wasn’t just any scam. It was one that a mostly-EA group was motivated to commit, to some degree or other, based on EA-style/consequentialist reasoning. There were several other instances of crypto-related crimes in and around the EA community. And the FTX implosion shared some characteristics with those events, and with other EA scandals. As I argued:
Other EA scandals, similarly, often involve multiple of these elements:
[Person #1]: past sexual harassment issues, later reputation management including Wiki warring and misleading histories. (norm-violation, naive conseq.)
[Person #2]: sexual harassment (norm-violation? naive conseq?)
[Person #3] [Person #4] [Person #5]: three more instances of crypto crimes (scope sensitivity? norm-violation, naive conseq.? naivete?)
Intentional Insights: aggressive PR campaigns (norm-violation, naive conseq., naivete?)
Leverage Research, including partial takeover of CEA (risk appetite, norm-violation, naive conseq., unilateralism, naivete)
(We’ve seen major examples of sexual misbehaviour and crypto crimes in the rationalist community too.)
You could argue still that some of these elements are shared with all financial crime. But then why have EAs committed >10% of the largest financial frauds of all time, while making up only about one millionth of the world’s population, and less than 0.1% (perhaps 0.01%) of its startups? You can suppose that we were just unlucky, but I don’t find that particularly convincing.
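As a rough back-of-envelope, just dividing the figures above through (no new data here):

$$\frac{\text{EA share of the largest frauds}}{\text{EA share of the population}} \gtrsim \frac{0.1}{10^{-6}} = 10^{5}, \qquad \frac{\text{EA share of the largest frauds}}{\text{EA share of startups}} \gtrsim \frac{0.1}{10^{-3}} = 10^{2}.$$

That is, an over-representation factor of something like $10^5$ relative to the general population, and at least $10^2$ (or $10^3$, on the 0.01% figure) relative to startups, which is the size of the coincidence the “just unlucky” story has to explain.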
I think that at this point, you should want to concede that EA appears to have contributed to FTX in quite a number of ways, and not all of them can be dismissed easily. That’s why I think a more thorough investigation is needed.
As for PR, I simply think that shouldn’t be the primary focus, and that it is far from the most important consideration on the current margin. First, we need to get the facts in order. Then we need to describe the strategy. And then, based on what kind of future EA deserves to have, we could decide how and whether to try to defend its image.
Yeah, the cost of cheap shared housing is something like $20k/yr in 2026 dollars, whereas your impact would be worth a lot more than that, either because you are making hundreds of thousands of post-tax dollars per year, or because you’re forgoing those potential earnings to do important research or activism. Van-living is usually penny-wise but pound-foolish.