Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc at https://twitter.com/ryancareyai.
Yeah I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.
It looks like, on net, people disagree with my take in the original post.
I just disagreed with the OP because it’s a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.
My response to the third link is: https://forum.effectivealtruism.org/posts/i7DWM6zhhPr2ccq35/thoughts-on-ea-post-ftx?commentId=pJMPeYWzpPYNrhbHT
On the meta-level, anonymously sharing negative psychoanalyses of people you’re debating seems like very poor behaviour.
Now, I’m a huge fan of anonymity. Sometimes one must criticise a vindictive organisation or a political orthodoxy, and anonymity is needed to avoid unjust social consequences.
In other cases, anonymity is inessential: one simply wants to debate in an aggressive style while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, then writing anonymously is no longer a political statement, and we no longer see anonymous writing as a sign that someone might be hiding from unjust retaliation.
Now, Aprilsun wants EA to mostly continue as normal, which is a majority position in EA leadership. And not to look too deeply into who is to blame for FTX, which helps to defend EA leadership. I don’t see any vindictive parties or social orthodoxies being challenged. So why would anonymity be needed?
I’m sorry, but it’s not an “overconfident criticism” to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.
This interaction is interesting, but I wasn’t aware of it (I’ve only reread a fraction of Hutch’s messages since knowing his identity) so to the extent that your hypothesis involves me having had some psychological reaction to it, it’s not credible.
Moreover, these psychoanalyses don’t ring true. I’m in a good headspace, giving FTX hardly any attention. Of course, I am not without regret, but I’m generally at peace with my past involvement in EA—not in need of exotic ways to process any feelings. If these analyses being correct would have taken some wind out of my sails, then their being so silly ought to put some more wind in.
Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn’t notice this info in the post when I previously viewed it). And we do know why there is money missing: FTX stole it and invested it in their hedge fund, which gambled it away.
The annual budgets of Bellingcat and Propublica are in the single-digit millions. (The latter has had negative experiences with EA donations, but is still relevant for sizing up the space.)
It’s hard to say, but the International Federation of Journalists has 600k members, so maybe there are 6M journalists worldwide, of which maybe 10% are investigative journalists (600k IJs). If they are paid like $50k/year, that’s $30B used for IJ.
Surely from browsing the internet and newspapers, it’s clear that less than 1% (<60k) of journalists are “investigative”. And I bet that half of the impact comes from an identifiable 200-2k of them, such as former Pulitzer Prize winners, Propublica, Bellingcat, and a few other venues.
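To make the arithmetic above explicit, here is a minimal sketch. The 10x extrapolation from IFJ membership, the two investigative shares (the 10% guess vs the <1% revision), and the $50k salary are all assumed round numbers, not data:

```python
# Fermi estimate of global spending on investigative journalism (IJ).
# All inputs are assumed round numbers, not measured figures.
ifj_members = 600_000
journalists_worldwide = ifj_members * 10   # assumes IFJ covers ~10% of journalists
annual_salary = 50_000                     # assumed average pay, USD/year

for investigative_share in (0.10, 0.01):   # original 10% guess vs revised <1% guess
    investigative_journalists = journalists_worldwide * investigative_share
    total_spend = investigative_journalists * annual_salary
    print(f"{investigative_journalists:,.0f} IJs -> ${total_spend / 1e9:.0f}B/year")
```

Under those assumptions the two scenarios give roughly $30B/year and $3B/year respectively.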
Anthropic is small compared with Google and OpenAI+Microsoft.
I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.
There are also big incentive gradients within longtermism:
To work on AI experiments rather than AI theory (higher salary, better locations, more job security)
To work for a grantmaker rather than a grantee (for job security), and
To work for an AI capabilities company rather than outside (higher salary)
this is something that ends up miles away from ‘winding down EA’, or EA being ‘not a movement’.
To be clear, winding down EA is something I was arguing we shouldn’t be doing.
I feel like we’re closer to agreement here, but on reflection the details of your plan here don’t sum up to ‘end EA as a movement’ at all.
At a certain point it becomes semantic, but I guess readers can decide, when you put together:
the changes in sec 11 of the main post
ideas about splitting into profession-oriented subgroups, and
shifting whether we “motivate members re social pressures” and expose junior members to risk
whether or not it counts as changing from being a “movement” to something else.
JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”? Ryan, do you have a sense of what that would concretely look like?
Well I’m not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them.
It could make sense for there to be a community focused on “effective philanthropy”, which would include OpenPhil, Longview, philanthropists, and grant evaluators. That would be as close to “impact analysis” as you would get, in my proposal.
There would be an effective policymaking community too.
And then a bevy of cause-specific research communities: evidence-based policy, AI safety research, AI governance research, global priorities research, in vitro meat, global catastrophic biorisk research, global catastrophic risk analysis, global health and development, and so on.
Lab heads and organisation leaders in these research communities would still know that they ought to apply to the “effective philanthropy” orgs to fund their activities. And they would still give talks at universities to try to attract top talent. But there wouldn’t be a common brand or cultural identity, and we would frown upon the risk-increasing factors that come from the social movement aspect.
Roughly yes, with some differences:
I think the disasters would scale sublinearly
I’m also worried about Leverage and various other cults and disasters, not just FTX.
I wouldn’t think of the separate communities as “movements” per se. Rather, each cause area would have a professional network of nonprofits and companies.
Basically, why do mid-sized companies usually not spawn cults and socially harm their members the way movements like EA and the animal welfare community sometimes do? I think it’s because movements by their nature try to motivate members towards their goals using social pressures. This attracts young idealists, some of whom will be impressionable. People will try radical stuff like travelling to locations where they’re unsupported, going on intensive retreats, circling, drugs, polyamory, etc. These things benefit some people in some situations, but they can also put people in vulnerable situations. My hypothesis is that predators detect this vulnerability and then start even crazier and more cultish projects, arguably including Leverage and FTX, under the guise of advancing the movement’s goals.
Companies rarely put junior staff in such vulnerable positions. People generally know not to sleep with subordinates, and better manage conflicts of interest. They don’t usually give staff a pass for misbehaviour due to being value-aligned.
We don’t need to lose our goals, or our social network, but we could strip away a lot of the risk-increasing behaviour that “movements” engage in, and take on some of the risk-reducing “professionalising” measures that are more typical of companies.
I agree that ideas are powerful things, and that people will continue to want to follow those ideas to their conclusions, in collaboration with others. But I’m suggesting that being faithful to those ideas might mean shaping up a little and practising them somewhat differently. For the case of Christianity, it’s not like telling Christians to disavow the Holy Trinity. It’s more like noticing abuse in a branch of Christianity and thinking “we’ve got to do some things differently”. Except that EA is smaller and thousands of years younger, so we can be more ambitious in the ways we try to reform.
Scott’s already said what I believe
Yes, I had this exact quote in mind when I said in Sect 5 that “Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA.”
People would believe them, want to co-ordinate on it. Then they’d want to organise to help make their own ideas more efficient and boom, we’re just back to an EA movement all over again.
One of my main theses is supposed to be that people can and should coordinate their activities without acting like a movement.
I still want concern about reducing the suffering of non-human animals to grow, I still want humanity to expand its moral circle beyond the parochial, I still want us to find the actions individually and collectively that will lead to humanity flourishing. Apologies if I’m misinterpreting, but this sentence really seems to come out of left field from me given the rest of your post.
This feels like the same misunderstanding. Spreading EA ideas and values seems fine and good to me. It’s the collectivism, branding, identity-based reasoning, and other “movement-like” characteristics that concern me.
I think again, given the ideas of EA exist, these cause-specific communities would find themselves connected
This seems like black and white thinking to me. Of course these people will connect over their shared interests in consequentialism, RCTs, and so on. But this is different from branding and recruiting together, regulating this area as one community, hosting student chapters, etc.
A lot of the comments seem fixated on, and keen to object to, the idea of “reputational collapse”, in a way that I find hard to relate to. This wasn’t a particularly load-bearing part of my argument; it was only used to argue that the idea that EA is a particularly promising way to get people interested in x-risk has become less plausible. Which was only one of three reasons not to promote EA in order to promote x-risk. Which was only one of many strategic suggestions.
That said, I find it hard not to notice that the reputation of, and enthusiasm for, EA has changed, to a degree that must affect recruitment via EA to AI safety. If you’re surrounded by EAs, it feels obvious. Trajan had a funereal atmosphere for weeks. Some were depressed for months. In news articles and on the forum, a cascade of PR disasters took up much of the airtime from Q4 2022 to Q1 2023. There’s been nothing like it in my 15 years around this community. The polling would have to have been pretty extraordinary to convince me that I’ve somehow misperceived what is really a pretty clear social reality.
The polling had some interesting findings, but not necessarily in a good way. The widely touted figure was that people’s recalled satisfaction dropped “only” 0.5 on a ten-point scale. But most people rate their satisfaction around 7 most of the time, so this looks like an effect size of Cohen’s d=0.4 or so. And this is in the more enthusiastic sample who were willing to keep answering the EA survey even after these disasters. Scanning over the next few questions, you then see that 55%+ of respondents now have some form of concern about the EA community’s meta organisations, and likewise the community and its norms, much more than the 25% who had some concerns with the philosophy. Moreover, 39% agree in some way that they want to see the community look very different, and the same number say they are less likely to associate with EA. And 31% substantially lost trust in EA public figures or leadership. Those who were more engaged were in most ways more concerned, which would fit with the selection effect hypothesis (those of the less engaged EAs who became disaffected simply left, and didn’t respond to the survey). I find it really hard to understand those who would want to regard these results as “pretty compelling evidence” that EA has not suffered a major hit that would affect its viability as a way of recruiting to AIS.
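As a back-of-envelope check of that d≈0.4 figure: Cohen’s d is just the mean drop divided by the standard deviation, and the SD used below (~1.25, plausible for a ten-point scale where most answers cluster around 7) is my assumption rather than a number from the survey:

```python
# Rough effect-size check: Cohen's d = mean difference / SD.
mean_drop = 0.5      # reported drop in recalled satisfaction (10-point scale)
assumed_sd = 1.25    # assumed SD for a scale clustered around 7 (not from the survey)
cohens_d = mean_drop / assumed_sd
print(f"Cohen's d ≈ {cohens_d:.2f}")   # ≈ 0.40
```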
The polling of people outside the EA community is least convincing to me for a variety of reasons. Firstly, these people currently know least, and many of them will hear more in the future, such as when SBF’s trial happens, the Michael Lewis book is released, or some of the nine film/video adaptations come out. Importantly, if any of them become interested in EA, they are likely to hear about such things, and come to resemble the first cohort to a greater extent. But already, ~5% of them mention FTX in the interview, and ~1% of them mention it in the context of the meaning of EA or how they heard about it. In other words, the “scenario where promoting EA could go badly” is something that a community-builder would likely experience at least once. And those who know about FTX have a much more negative view (d=1.5, with high uncertainty). So although this is the more positive of the two batches of polling, I wouldn’t necessarily gloss it as “there’s no big problem”.
I’m sorry you feel that way. I’m a bit confused about the distinction, unless by ‘EA Movement’ you mean ‘EA Community’
I mean that I’ve lost enthusiasm for the community/movement element, at least on a gut level. I’ve no objection to people donating and living a consequentialist-leaning philosophy; in fact, I’m in favour of that, so long as they’re applying the ideas carefully.
1 - It’s because Sam was publicly claiming that the trading platform and the traders were completely firewalled from one another, and had no special info, as would normally (e.g. in the US) be legally required to make trading fair, but which is impossible if the CEOs are dating.
2 - I’m not objecting to the spending. It was clear at the time that he was promoting an image of frugality that wasn’t accurate. One example here, but there are many more.
3 - A lot of different Alameda people warned some people at the time of the split. For a variety of reasons, I believe that those who were more involved would have been warned commensurately more than I was (someone who was barely involved).
4 - Perhaps, but it is negligent not to follow this rule, when you’re donating money from an offshore crypto exchange.
5 - I’m just going off recollection, but I believe he was under serious-sounding US govt investigation.
See here. Among people who know EA as well as I do, many (perhaps 25%) are about as pessimistic as me, and some of the remainder have conflicts of interest, or have left.
I agree the primary role of EAs here was as victims, and that presumably only a couple of EAs intentionally conspired with Sam. But I wouldn’t write it off as just social naivete; I think there was also some negligence in how we boosted him, e.g.:
Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn’t disclose this.
Some EAs knew that Sam and FTX weren’t behaving frugally, which would undermine his public image, but also didn’t disclose.
Despite warnings from early-Alameda people, FTX received financial and other support from EAs.
EAs granted money from FTX’s foundation before it had been firewalled in a foundation bank account.
EA leaders invited him to important meetings, IIRC, even once he was under govt investigation.
It might be that naive consequentialist thinking, a strand of EA’s cultural DNA, played a role here, too. In general I think it would be fruitful to think about ways that our ambitions, attitudes, or practices might have made us negligent, not just ways that we might have been too trusting.
It’s a disappointing outcome—it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.
But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn’t control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.