I’ve heard multiple reports of people being denied jobs around AI policy because of their history in EA. I’ve also seen a lot of animosity against EA from top organizations I think are important—like A16Z, Founders Fund (Thiel), OpenAI, etc. I’d expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.
This is very frustrating to me.
First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaborations and cooperation—EAs not being welcome at many orgs makes this very difficult.
Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, then you’re much less likely to do work here.
And a lighter third—it’s just really not fun to have a significant stigma associated with you. This means that many of the people I respect the most, and think are doing some of the most valuable work out there, will just have a much tougher time in life.
Who’s at fault here? I think the first big issue is that resistance gets created against all interesting and powerful groups. There are similar stigmas against people across the political spectrum, for example, at least to certain crowds. A big part of “talking about morality and important issues, while having something non-obvious to say” is being hated by a bunch of people. In this vein, arguably we should be aiming for a world where EA matters enough to wind up with an even larger stigma.
But a lot clearly has to do with the decisions made by what seems like a few EAs. FTX hurt the most. I think the OpenAI board situation resulted in a lot of EA-paranoia, arguably with very little upside. More recently, I think that certain EA actions in AI policy are getting a lot of flak.
There was a brief window, pre-FTX-fail, where there was a very positive EA media push. I’ve seen almost nothing since. I think that “EA marketing” has been highly neglected, and that doesn’t seem to be changing.
Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative. When this arm upsets people, all of EA gets blamed.
My guess is that there are many other changes to make here too.
CEA is the obvious group to hypothetically be in charge of the EA parts of this. In practice, it seems like CEA has been very busy with post-FTX messes and leadership changes.
So I think CEA, as it becomes stable, could do a lot of good work making EA marketing work somehow. And I hope that the AI safety governance crowd can get better at not pissing off people. And hopefully, other EAs can figure out other ways to make things better and not worse.
If the above doesn’t happen, honestly, it could be worth it for EAs themselves to try to self-fund or coordinate efforts on this. The issue isn’t just one of “hurting long-term utility”, it’s one that just directly hurts EAs—so it could make a lot of sense for them to coordinate on improvements, even just in their personal interests.
On the positive front, I know it’s early days, but GWWC have really impressed me with their well-produced, friendly yet honest public-facing work this year—maybe we can pick up on that momentum?
Also, EA for Christians is holding a British conference this year, where Rory Stewart and the Archbishop of Canterbury (the biggest shot in the Anglican church) are headlining, which is a great collaboration with high-profile, well-respected mainstream Christian / Christian-adjacent figures.
I think in general their public-facing presentation and marketing seem a cut above any other EA org—happy to be proven wrong by other orgs which are doing a great job too. What I love is how they present their messages with such positivity, while still packing a real punch and not watering down their message. Check out their webpage and blog to see their work.
A few concrete examples:
- This great video, “How rich are you really?”
- Nice rebranding of the “Giving What We Can pledge” to the snappier and clearer “10% Pledge”
- The diamond symbol as a simple yet strong sign of people taking the pledge, both on the forum here and on LinkedIn
- An amazing LinkedIn push, with lots of people displaying the diamond and explaining why they took the pledge. Many posts have been received really positively on my wall.
(Jumping in for our busy comms/exec team) Understanding the status of the EA brand and working to improve it is a top priority for CEA :) We hope to share more work on this in future.
I wrote a (downvoted) post recently about how we should be warning AI Safety talent about the personal-branding risks of going into labs (I think there are other reasons not to join labs, but this one is worth considering).
I think people are still underweighting how much the public are going to hate labs in 1-3 years.
I think from an advocacy standpoint it is worth testing that message, but based on how it is being received on the EAF, it might just bounce off people.
My instinct as to why people don’t find it a compelling argument:
- They don’t have short timelines like me, and therefore chuck it out completely
- Are struggling to imagine a hostile public response to 15% unemployment rates
- Copium
At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. To be clear, this was an argument I provided framed in the context of movements like PauseAI: a more politicized, less politically averse coalition movement that includes at least one arm of AI safety as one of its constituent communities/movements, distinct from EA.
>They don’t have short timelines like me, and therefore chuck it out completely
Among the most involved participants in PauseAI, the rate of short-timeline estimates is presumably comparable to the rate of such estimates among effective altruists.
>Are struggling to imagine a hostile public response to 15% unemployment rates
Those in PauseAI and similar movements don’t.
>Copium
While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others—those who have been picking up the slack effective altruists have dropped in the last couple of years—are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity has deserved better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point that it can muster living up to the promise of at least trying to safeguard humanity. That includes both many former effective altruists and some who still are effective altruists. I consider there to still be that kind of ‘hope’ on a technical level, though on a gut level I don’t have faith in EA. I definitely don’t blame those who have any faith left in EA, let alone those who see hope in it.
Much of the difference here is the mindset towards ‘people’, and how they’re modeled, between those still firmly planted in EA but somehow with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual among general trends is barely relevant.) The last couple of years have proven that effective altruists direly underestimated the public, and the latter group of people didn’t. While many here on the EA Forum may not agree that much—or even most—of what movements like PauseAI are doing is as effective as it could or should be, those movements at least haven’t succumbed to a plague of doomerism beyond what can seemingly even be justified.
To quote former effective altruist Kerry Vaughan, in a message addressed to those who still are effective altruists: “now is not the time for moral cowardice.” There are some effective altruists who heeded that sort of call when it was being made. There are others who weren’t effective altruists who heeded it too, when they saw most effective altruists had lost the will to even try picking up the ball again after they dropped it a couple times. New alliances between emotionally determined effective altruists and rationalists, and thousands of other people the EA community always underestimated, might from now on be carrying the team that is the global project of AI risk reduction—from narrow/near-term AI, to AGI/ASI.
EA can still change, though either it has to go beyond self-reflection and just change already, or get used to no longer being team captain of AI Safety.
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was the deciding factor? Is this an informal ‘everyone knows’ thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don’t share any personal information, but I think it’s important for those in the community to be as aware as possible of where and why this happens if it is happening because of EA affiliation/history of people here.
I think that certain EA actions in AI policy are getting a lot of flak.
On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs.
See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
(Go to 1 hour, 11min in, for the key part. I tried linking to the timestamp, but couldn’t get it to work in this editor after a few minutes of attempts)
Organized, yes. And so this starts with a mailing list. In the nineties there was a transhumanist mailing list called the Extropians. And these Extropians (I might have gotten the name wrong, Extropia or something like that) believe in the singularity. So the singularity is a moment in time where AI is progressing so fast, or technology in general is progressing so fast, that you can’t predict what happens. It’s self-evolving and it just. All bets are off. We’re entering a new world where you.
[00:25:27]
Just can’t predict it, where technology can’t.
[00:25:29]
Be controlled, technology can’t be controlled. It’s going to remake everything. And those people believe that’s a good thing, because the world now sucks so much, and we are imperfect and unethical and all sorts of irrational, whatever. And so they really wanted the singularity to happen. And there’s this young guy on this list, his name’s Eliezer Yudkowsky, and he claims he can write this AI, and he would write really long essays about how to build this AI. Suspiciously, he never really publishes code, and it’s all just prose about how he’s going to be able to build AI. Anyways, he’s able to fundraise. They started this thing called the Singularity Institute. A lot of people were excited about the future and kind of invested in him. Peter Thiel, most famously. And he spent a few years trying to build an AI (again, never published code, never published any real progress) and then came out of it saying that not only can’t you build AI, but if you build it, it will kill everyone. So he switched from being this optimist, the singularity is great, to actually, AI will for sure kill everyone. And then he was like, okay, the reason I made this mistake is because I was irrational.
[00:26:49]
And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called LessWrong, and LessWrong walks you through steps to becoming more rational. Look at your biases, examine yourself, sit down, meditate on all the irrational decisions you’ve made and try to correct them. And then they start this thing called the Center for Advanced Rationality or something like that, CFAR. And they’re giving seminars about rationality, but.
[00:27:18]
A seminar about rationality, what’s that like?
[00:27:22]
I’ve never been to one, but my guess would be they talk about the biases, whatever. But they also have weird things, where they have this almost struggle-session-like thing called debugging. A lot of people wrote blog posts about how that was demeaning and it caused psychosis in some people. In 2017, in that community, there was collective psychosis. A lot of people were kind of going crazy. And this is all written about on the Internet. Debugging.
[00:27:48]
So that would be kind of your classic cult technique where you have to strip yourself bare, like auditing and Scientology or. It’s very common, yes.
[00:27:57]
Yeah.
[00:27:59]
It’s a constant in cults.
[00:28:00]
Yes.
[00:28:01]
Is that what you’re describing?
[00:28:02]
Yeah, I mean, that’s what I read in these accounts. They will sit down and they will, like, audit your mind and tell you where you’re wrong and all of that. And it caused people huge distress; young guys all the time talk about how going into that community caused them huge distress. And there were, like, offshoots of this community where there were suicides, there were murders, there was a lot of really dark and deep shit. And the other thing is, they kind of teach you about rationality, and they recruit you to AI risk. Because if you’re rational: you’re a group, we’re all rational now, we learned the art of rationality, and we agree that AI is going to kill everyone. Therefore, everyone outside of this group is wrong, and we have to protect them. AI is going to kill everyone. But also they believe other things. Like, they believe that polyamory is rational, and everyone that.
[00:28:57]
Polyamory?
[00:28:57]
Yeah, you can have sex with multiple partners, essentially, but they think that’s.
[00:29:03]
I mean, I think it’s certainly a natural desire, if you’re a man, to sleep with more and different women, for sure. But it’s rational in what sense? Like, you’ve never met a happy polyamorous couple long-term, and I’ve known a lot of them. Not a single one.
[00:29:21]
So it might be self-serving, you think, to recruit more impressionable.
[00:29:27]
People into it. And their hot girlfriends?
[00:29:29]
Yes.
[00:29:30]
Right. So that’s rational.
[00:29:34]
Yeah, supposedly. And so they, you know, convince each other of all this cult-like behavior. And the crazy thing is this group ends up being super influential, because they recruit a lot of people that are interested in AI. And the AI labs and the people who were starting these companies were reading all this stuff. So Elon famously read a lot of Nick Bostrom, kind of an adjacent figure to the rationalist community. He was part of the original mailing list. I think he would call himself part of the rationalist community. But he wrote a book about AI and how AI is going to kill everyone, essentially. I think he moderated his views more recently, but originally he was one of the people kind of sounding the alarm. And the founding of OpenAI was based on a lot of these fears. Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so this group of people. I don’t think everyone at OpenAI really believed that, but some of the original founding story was that, and they were recruiting from that community so much.
[00:30:46]
So when Sam Altman got fired recently, he was fired by someone from that community, someone who started with effective altruism, which is another offshoot from that community, really. And so the AI labs are intermarried in a lot of ways with this community, and they kind of borrowed a lot of their talking points. By the way, a lot of these companies are great companies now, and I think they’re cleaning house.
[00:31:17]
But there is, I mean, I’ll just use the term. It sounds like a cult to me. Yeah, I mean, it has the hallmarks of it in your description. And can we just push a little deeper on what they believe? You say they are transhumanists.
[00:31:31]
Yes.
[00:31:31]
What is that?
[00:31:32]
Well, I think they’re just unsatisfied with human nature, unsatisfied with the current ways we’re constructed, and that we’re irrational, we’re unethical. And so they long for the world where we can become more rational, more ethical, by transforming ourselves, either by merging with AI via chips or what have you, changing our bodies and fixing fundamental issues that they perceive with humans via modifications and merging with machines.
[00:32:11]
It’s just so interesting because. And so shallow and silly. Like, a lot of those people I have known are not that smart, actually. Because the best things. I mean, reason is important, and was, in my view, given to us by God, and it’s really important. And being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational.
[00:32:35]
I believe so, too.
[00:32:36]
There is no rational justification for giving something you need to another person.
[00:32:41]
Yes.
[00:32:42]
For spending an inordinate amount of time helping someone, for loving someone. Those are all irrational. Now, banging someone’s hot girlfriend, I guess that’s rational. But that’s kind of the lowest impulse that we have, actually.
[00:32:53]
Well, wait till you hear about effective altruism. So they think our natural impulses that you just talked about are indeed irrational. And there’s a guy, his name is Peter Singer, a philosopher from Australia.
[00:33:05]
The infanticide guy.
[00:33:07]
Yes.
[00:33:07]
He’s so ethical. He’s for killing children.
[00:33:09]
Yeah. I mean, so their philosophy is utilitarian. Utilitarianism is the idea that you can calculate ethics, and when you start to apply it, you get into really weird territory. There are all these thought experiments. Like, you know, you have two people at the hospital requiring organs, and a third person comes in for a regular checkup; without his organs the other two will die. Ethically, you’re supposed to kill that guy, take his organs, and put them into the other two. And so it gets. I don’t think people believe that, per se. I mean, but there are so many problems with that. There’s another belief that they have.
[00:33:57]
But can I say that belief or that conclusion grows out of the core belief, which is that you’re God. Like, a normal person realizes, sure, it would help more people if I killed that person and gave his organs to a number of people. Like, that’s just a math question. True, but I’m not allowed to do that because I didn’t create life. I don’t have the power. I’m not allowed to make decisions like that because I’m just a silly human being who can’t see the future and is not omnipotent because I’m not God. I feel like all of these conclusions stem from the misconception that people are gods.
[00:34:33]
Yes.
[00:34:34]
Does that sound right?
[00:34:34]
No, I agree. I mean, a lot of it. I think, at root, they’re just fundamentally unsatisfied with humans, and maybe perhaps hate humans.
[00:34:50]
Well, they’re deeply disappointed.
[00:34:52]
Yes.
[00:34:53]
I think that’s such a. I’ve never heard anyone say that so well, that they’re disappointed with human nature, they’re disappointed with the human condition, they’re disappointed with people’s flaws. And I feel like that’s the. I mean, on one level, of course, we should be better, but we used to call that judgment, which we’re not allowed to do, by the way; that’s just super judgy. Actually, what they’re saying is, you know, you suck, and it’s just a short hop from there to, you should be killed, I think. I mean, that’s a total lack of love. Whereas a normal person, a loving person, says, you kind of suck. I kind of suck, too. But I love you anyway, and you love me anyway, and I’m grateful for your love. Right? That’s right.
[00:35:35]
That’s right. Well, they’ll say, you suck. Join our rationality community. Have sex with us. So.
[00:35:43]
But can I just clarify? These aren’t just like, you know, support staff at these companies? Like, are there?
[00:35:50]
So, you know, you’ve heard about SBF and FTX, of course.
[00:35:52]
Yeah.
[00:35:52]
They had what’s called a polycule.
[00:35:54]
Yeah.
[00:35:55]
Right. They were all having sex with each other.
[00:35:58]
Given. Now, I just want to be super catty and shallow, but given some of the people they were having sex with, that was not rational. No rational person would do that. Come on now.
[00:36:08]
Yeah, that’s true. Yeah. Well, so, you know. Yeah. What’s even more disturbing, there’s another ethical component to their philosophy called longtermism, and this comes from the effective altruist branch of rationality. Longtermism. What they think is, in the future, if we make the right steps, there are going to be a trillion humans, a trillion minds. They might not be humans, they might be AIs, but there are going to be a trillion minds who can experience utility, who can experience good things, fun things, whatever. If you’re a utilitarian, you have to put a lot of weight on that, and maybe you discount it, sort of like discounted cash flows. But you still have to posit that if there are trillions, perhaps many more, people in the future, you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety, because they think that AI, because they’re rational. They arrived, and we can talk about their arguments in a second, they arrived at the conclusion that AI is going to kill everyone.
[00:37:24]
Therefore, effective altruists and the rationalist community, all these branches, are all kind of focused on AI safety, because that’s the most important thing: we want a trillion people in the future to be great. But when you’re assigning value that high, it’s sort of a form of Pascal’s wager. You can justify anything, including terrorism, including doing really bad things, if you’re really convinced that AI is going to kill everyone and the future holds so much value, more value than any living human today has. You might justify really doing anything. And so built into that, it’s a.
[00:38:15]
Dangerous framework, but it’s the same framework of every genocidal movement from at least the French Revolution to the present: a glorious future justifies a bloody present.
[00:38:28]
Yes.
[00:38:30]
And look, I’m not accusing them of genocidal intent, by the way. I don’t know them, but those ideas lead very quickly to the camps.
[00:38:37]
I feel kind of weird just talking about people, because generally I like to talk about ideas, but if they were just some silly Berkeley cult or whatever, and they didn’t have any real impact on the world, I wouldn’t care about them. But what’s happening is that they were able to convince a lot of billionaires of these ideas. I think Elon maybe changed his mind, but at some point he was convinced of these ideas. I don’t know if he gave them money; I think there was a story at some point, in the Wall Street Journal, that he was thinking about it. But a lot of other billionaires gave them money, and now they’re organized, and they’re in DC lobbying for AI regulation. They’re behind the AI regulation in California, and actually profiting from it. There was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI. And as part of the bill, it says that you have to get certified by a third party. So there are aspects of it that are kind of. Let’s profit from it.
[00:39:45]
By the way, this is all alleged, based on this article; I don’t know for sure. I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let’s call them, and they’re very well organized. And a lot of them still have connections to the big AI labs, and some of them work there, and they would want to create a situation where there’s no competition in AI: regulatory capture, per se. I’m not saying that these are the direct motivations; all of them are true believers. But you might infiltrate this group and direct it in a way that benefits these corporations.
[00:40:32]
Yeah, well, I’m from DC, so I’ve seen a lot of instances where my bank account aligns with my beliefs. Thank heaven. Just kind of happens. It winds up that way. It’s funny. Climate is the perfect example. There’s never one climate solution that makes the person who proposes it poorer or less powerful.
To be fair to the CEO of Replit here, much of that transcript is essentially true, if mildly embellished. Many of the events and outcomes associated with EA or adjacent communities during their histories that should be most concerning to anyone (other than the FTX-related events, and for reasons beyond just PR concerns) can be, and have been, well substantiated.
My guess is this is obvious, but the “debugging” stuff seems as far as I can tell completely made up.
I don’t know of any story in which “debugging” was used in any kind of collective way. There was some Leverage-research adjacent stuff that kind of had some attributes like this, “CT-charting”, which maybe is what it refers to, but that sure would be the wrong word, and I also don’t think I’ve ever heard of any psychoses or anything related to that.
The only in-person thing I’ve ever associated with “debugging” is when at CFAR workshops people were encouraged to create a “bugs-list”, which was just a random list of problems in your life, and then throughout the workshop people paired with other people where they could choose any problem of their choosing, and work with their pairing partner on fixing it. No “auditing” or anything like that.
I haven’t read the whole transcript in-detail, but this section makes me skeptical of describing much of that transcript as “essentially true”.
I have personally heard several CFAR employees and contractors use the word “debugging” to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.
In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.
For what it’s worth, I was reminded of Jessica Taylor’s account of collective debugging and psychoses as I read that part of the transcript. (Rather than trying to quote pieces of Jessica’s account, I think it’s probably best that I just link to the whole thing as well as Scott Alexander’s response.)
I presume this account is their source for the debugging stuff, wherein an ex-member of the rationalist Leverage institute described their experiences. They described the institute as having “debugging culture”, described as follows:
In the larger rationalist and adjacent community, I think it’s just a catch-all term for mental or cognitive practices aimed at deliberate self-improvement.
At Leverage, it was both more specific and more broad. In a debugging session, you’d be led through a series of questions or attentional instructions with goals like working through introspective blocks, processing traumatic memories, discovering the roots of internal conflict, “back-chaining” through your impulses to the deeper motivations at play, figuring out the roots of particular powerlessness-inducing beliefs, mapping out the structure of your beliefs, or explicating irrationalities.
and:
1. 2–6hr long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a “demon” which had infiltrated our psyches from one of the rival groups, its nature and effects, and get it out of our systems using debugging tools.
The podcast statements seem to be an embellished retelling of the contents of that blog post (and maybe the allegations made by Scott Alexander in the comments of this post). I don’t think describing them as “completely made up” is accurate.
Leverage was an EA-aligned organization that was also part of the rationality community (or at least ‘rationalist-adjacent’) about a decade ago or more. For Leverage to claim the mantle of either EA or the rationality community was always contentious. From the side of EA, largely the CEA, and from the side of the rationality community, largely CFAR, there were efforts to shove Leverage out of both within the space of a couple of years. Both EA and CFAR thus couldn’t have then, and couldn’t now, say or do more to disown and disavow Leverage’s practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage, in its own way.
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with—to put it bluntly—the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage and the rest of the institutional polycule that is EA/rationality was extremely mutual.
In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others from that period circa 2018 to 2022, can scarcely be attributed to either the rationality or EA communities. That’s a consensus EA, Leverage, and the rationality community all agree on—one of the few things they still agree on at all.
From the side of EA, largely the CEA, and from the side of the rationality community, largely CFAR, there were efforts to shove Leverage out of both within the space of a couple of years. Both EA and CFAR thus couldn’t have then, and couldn’t now, say or do more to disown and disavow Leverage’s practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with—to put it bluntly—the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.
At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”
This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very ‘in Leverage’s orbit’.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced.
Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA… It seemed like unscientific, crackpot psychology… it felt extremely cultish… The experience left me feeling humiliated and manipulated.”
I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on Less Wrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.
The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.
Quick point—I think the relationship between CEA and Leverage was pretty complicated during a lot of this period.
There was typically a large segment of EAs who were suspicious of Leverage, ever since their founding. But Leverage did collaborate with EAs on some specific things early on (like the first EA Summit). It felt like an uncomfortable alliance type situation. If you go back on the forum / Lesswrong, you can read artifacts.
I think the period of 2018 or so was unusual. This was a period where a few powerful people at CEA (Kerry, Larissa) were unusually pro-Leverage and got to power fairly quickly (Tara left, somewhat suddenly). I think there was a lot of tension around this decision, and when they left (I think this period lasted around 1 year), I think CEA became much less collaborative with Leverage.
One way to square this a bit is that CEA was just not very powerful for a long time (arguably, its periods of “having real ability/agency to do new things” have been very limited). There were periods where Leverage had more employees (I’m pretty sure). The fact that CEA went through so many different leaders, each with different stances and strategies, makes it more confusing to look back on.
I would really love for a decent journalist to do a long story on this history, I think it’s pretty interesting.
Huh, yeah, that sure refers to those as “debugging”. I’ve never really heard Leverage people use those words, but Leverage 1.0 was a quite insular and weird place towards the end of its existence, so I must have missed it.
I think it’s kind of reasonable to use Leverage as evidence that people in the EA and Rationality community are kind of crazy and have indeed updated on the quotes being more grounded (though I also feel frustration with people equivocating between EA, Rationality and Leverage).
(Relatedly, I don’t particularly love you calling Leverage “rationalist” especially in a context where I kind of get the sense you are trying to contrast it with “EA”. Leverage has historically been much more connected to the EA community, and indeed had almost successfully taken over CEA leadership in ~2019, though IDK, I also don’t want to be too policing with language here)
I think it might describe how some people experienced internal double cruxing. I wouldn’t be that surprised if some people also found the ’debugging” frame in general to give too much agency to others relative to themselves, I feel like I’ve heard that discussed.
Based on the things titotal said, seems like it very likely refers to some Leverage stuff, which I feel a bit bad about seeing equivocated with the rest of the ecosystem, but also seems kind of fair. And the Zoe Curzi post sure uses the term “debugging” for those sessions (while also clarifying that the rest of the rationality community doesn’t use the term that way, but they sure seemed to)
I wouldn’t and didn’t describe that section of the transcript, as a whole, as essentially true. I said much of it is. As the CEO might’ve learned from Tucker Carlson, who in turned learned from FOX News, we should seek to be ‘fair and balanced.’
As to the debugging part, that’s an exaggeration that must have come out the other side of a game of broken telephone on the internet. It seems that on the other side of that telephone line would’ve been some criticisms or callouts I’ve read years ago of some activities happening in or around CFAR. I don’t recollect them in super-duper precise detail right now, nor do I have the time today to spend an hour or more digging them up on the internet
For the perhaps wrongheaded practices that were introduced into CFAR workshops for a period of time other than the ones from Leverage Research, I believe the others were some introduced by Valentine (e.g., ‘againstness,’ etc.). As far as I’m aware, at least as it was applied at one time, some past iterations of Connection Theory bore at least a superficial resemblance to some aspects of ‘auditing’ as practiced by Scientologists.
As to perhaps even riskier practices, I mean they happened not “in” but “around” CFAR in the sense of not officially happening under the auspices of CFAR, or being formally condoned by them, though they occurred within the CFAR alumni community and the Bay Area rationality community. It’s murky, though there was conduct in the lives of private individuals that CFAR informally enabled or emboldened, and could’ve/should’ve done more to prevent. For the record, I’m aware CFAR has effectively admitted those past mistakes, so I don’t want to belabor any point of moral culpability beyond what has been drawn out to death on LessWrong years ago.
Anyway, activities that occurred among rationalists in the social network that in CFAR’s orbit, that arguably arose to the level of triggering behaviour comparable in extremity to psychosis, include ‘dark arts’ rationality, and some of the edgier experiments of post-rationalists. That includes some memes spread and behaviours induced in some rationalists by Michael Vassar, Brent Dill, etc.
To be fair, I’m aware much of that was a result not of spooky, pseudo-rationality techniques, but some unwitting rationalists being effectively bullied into taking wildly mind-altering drugs, as guinea pigs in some uncontrolled DIY experiment. While responsibility for these latter outcomes may not be as attributable to CFAR, they can be fairly attributed to some past mistakes of the rationality community, albeit on a vague, semi-collective level.
I think it’s worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech ppl likely have higher influence in DC, so that’s not to say they’re irrelevant, but I don’t think they are representative of silicon valley as a whole
I think Garry Tan is more left-wing, but I’m not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists.
I think that the right-wing techies are often the loudest, but there are also lefties in this camp too.
(Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)
Garry Tan is the head of YCombinator, which is basically the most important/influential tech incubator out there. Around 8 years back, relations were much better, and 80k and CEA actually went through YCombinator.
I’d flag that Garry specifically is kind of wacky on Twitter, compared to previous heads of YC. So I definitely am not saying it’s “EA’s fault”—I’m just flagging that there is a stigma here.
I personally would be much more hesitant to apply to YC knowing this, and I’d expect YC would be less inclined to bring in AI safety folk and likely EAs.
Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative.
My impression is that the current EA AI policy arm isn’t having much active dialogue with the VC community and the like. I see Twitter spats that look pretty ugly, I suspect that this relationship could be improved on with more work.
At a higher level, I suspect that there could be a fair bit of policy work that both EAs and many of these VCs and others would be more okay with than what is currently being pushed. My impression is that we should be focused on narrow subsets of risks that matter a lot to EAs, but don’t matter much to others, so we can essentially trade and come out better than we are now.
My impression is that we should be focused on narrow subsets of risks that matter a lot to EAs, but don’t matter much to others, so we can essentially trade and come out better than we are now.
That seems like the wrong play to me. We need to be focused on achieving good outcomes and not being popular.
My personal take is that there are a bunch of better trade-offs between the two that we could be making. I think that the narrow subset of risks is where most of the value is, so from that standpoint, that could be a good trade-off.
Also, I suspect that the current EA AI policy arm could find ways to be more diplomatic and cooperative. When this arm upsets people, all of EA gets blamed.
My guess is that there are many other changes to make here too.
CEA is the obvious group to hypothetically be in charge of the EA parts of this. In practice, it seems like CEA has been very busy with post-FTX messes and leadership changes.
So I think CEA, as it becomes stable, could do a lot of good work making EA marketing work somehow. And I hope that the AI safety governance crowd can get better at not pissing off people. And hopefully, other EAs can figure out other ways to make things better and not worse.
If the above doesn’t happen, honestly, it could be worth it for EAs themselves to try to self-fund or coordinate efforts on this. The issue isn’t just one of “hurting long-term utility”, it’s one that just directly hurts EAs—so it could make a lot of sense for them to coordinate on improvements, even just in their personal interests.
On the positive front, I know it’s early days, but GWWC have really impressed me with their well-produced, friendly yet honest public-facing stuff this year—maybe we can pick up on that momentum?
Also EA for Christians is holding a British conference this year where Rory Stewart and the Archbishop of Canterbury (biggest shot in the Anglican church) are headlining which is a great collaboration with high profile and well respected mainstream Christian / Christian-adjacent figures.
Any examples you wish to highlight?
I think in general their public facing presentation and marketing seems a cut above any other EA org—happy to be proven wrong by other orgs which are doing a great job too. What I love is how they present their messages with such positivity, while still packing a real punch and not watering down their message. Check out their web-page and blog to see their work.
A few concrete examples
- This great video, “How rich are you really?”
- Nice rebranding of the “Giving What We Can pledge” to the snappier and clearer “10% Pledge”
- The diamond symbol as a simple yet strong sign of people taking the pledge, both on the forum here and on LinkedIn
- An amazing LinkedIn push, with lots of people posting the diamond and explaining why they took the pledge. Many posts have been received really positively on my wall.
That’s just what I’ve noticed.
(Jumping in for our busy comms/exec team) Understanding the status of the EA brand and working to improve it is a top priority for CEA :) We hope to share more work on this in future.
Thanks, good to hear! Looking forward to seeing progress here.
I wrote a downvoted post recently about how we should be warning AI safety talent against going into labs for personal branding reasons (I think there are other reasons not to join labs, but this one is worth considering).
I think people are still underweighting how much the public are going to hate labs in 1-3 years.
I was telling organizers with PauseAI like Holly Elmore they should be emphasizing this more several months ago.
I think from an advocacy standpoint it is worth testing that message, but based on how it is being received on the EAF, it might just bounce off people.
My instinct as to why people don’t find it a compelling argument:
- They don’t have short timelines like me, and therefore chuck it out completely
- They struggle to imagine a hostile public response to 15% unemployment rates
- Copium
At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. I mentioned this was an argument I’d framed in the context of movements like PauseAI—a more politicized, less politically averse coalition movement that includes at least one arm of AI safety as a constituent community, distinct from EA.
>They don’t have short timelines like me, and therefore chuck it out completely
Among the most involved participants in PauseAI, estimates of short timelines are presumably about as common as they are among effective altruists.
>Are struggling to imagine a hostile public response to 15% unemployment rates
Those in PauseAI and similar movements don’t.
>Copium
While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others, who have been picking up the slack effective altruists have dropped in the last couple of years, are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity has deserved better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point that it can muster living up to the promise of at least trying to safeguard humanity. That includes both many former effective altruists and those who still are effective altruists. I consider there to still be that kind of ‘hope’ on a technical level, though on a gut level I don’t have faith in EA. I definitely don’t blame those who have any faith left in EA, let alone those who see hope in it.
Much of the difference here is the mindset towards ‘people’, and how they’re modeled, between those still firmly planted in EA but somehow with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual among general trends is barely relevant.) The last couple of years have proven that effective altruists direly underestimated the public, and the latter group of people didn’t. While many here on the EA Forum may not agree that much, or even most, of what movements like PauseAI are doing is as effective as it could or should be, they at least haven’t succumbed to a plague of doomerism beyond what can seemingly even be justified.
To quote former effective altruist Kerry Vaughan, in a message addressed to those who still are effective altruists: “now is not the time for moral cowardice.” There are some effective altruists who heeded that sort of call when it was being made. There are others who weren’t effective altruists who heeded it too, when they saw most effective altruists had lost the will to even try picking up the ball again after dropping it a couple of times. New alliances between emotionally determined effective altruists and rationalists, and thousands of other people the EA community always underestimated, might from now on be carrying the team that is the global project of AI risk reduction—from narrow/near-term AI to AGI/ASI.
EA can still change, though either it has to go beyond self-reflection and just change already, or get used to no longer being team captain of AI Safety.
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was the deciding factor? Is this an informal ‘everyone knows’ thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don’t share any personal information, but I think it’s important for those in the community to be as aware as possible of where and why this happens if it is happening because of EA affiliation/history of people here.
(Feel free to DM me Ozzie if that’s easier)
I’m thinking of around 5 cases. I think in around 2-3 of them they were told; for the others, it was strongly inferred.
Would you be happy to expand on these points?
On Twitter, a lot of VCs and techies have ranted heavily about how much they dislike EAs.
See this segment from Marc Andreessen, where he talks about the dangers of Eliezer and EA. Marc seems incredibly paranoid about the EA crowd now.
(Go to 1 hour, 11min in, for the key part. I tried linking to the timestamp, but couldn’t get it to work in this editor after a few minutes of attempts)
I also came across this transcript recently, from Amjad Masad, CEO of Replit, on Tucker Carlson:
https://www.happyscribe.com/public/the-tucker-carlson-show/amjad-masad-the-cults-of-silicon-valley-woke-ai-and-tech-billionaires-turning-to-trump
To be fair to the CEO of Replit here, much of that transcript is essentially true, if mildly embellished. Many of the events or outcomes associated with EA or adjacent communities over their histories, the ones that should be most concerning to anyone (other than the FTX-related events, and for reasons beyond just PR concerns), can be and have been well-substantiated.
My guess is this is obvious, but the “debugging” stuff seems as far as I can tell completely made up.
I don’t know of any story in which “debugging” was used in any kind of collective way. There was some Leverage-research adjacent stuff that kind of had some attributes like this, “CT-charting”, which maybe is what it refers to, but that sure would be the wrong word, and I also don’t think I’ve ever heard of any psychoses or anything related to that.
The only in-person thing I’ve ever associated with “debugging” is when at CFAR workshops people were encouraged to create a “bugs-list”, which was just a random list of problems in your life, and then throughout the workshop people paired with other people where they could choose any problem of their choosing, and work with their pairing partner on fixing it. No “auditing” or anything like that.
I haven’t read the whole transcript in-detail, but this section makes me skeptical of describing much of that transcript as “essentially true”.
I have personally heard several CFAR employees and contractors use the word “debugging” to describe all psychological practices, including psychological practices done in large groups of community members. These group sessions were fairly common.
In that section of the transcript, the only part that looks false to me is the implication that there was widespread pressure to engage in these group psychology practices, rather than it just being an option that was around. I have heard from people in CFAR who were put under strong personal and professional pressure to engage in *one-on-one* psychological practices which they did not want to do, but these cases were all within the inner ring and AFAIK not widespread. I never heard any stories of people put under pressure to engage in *group* psychological practices they did not want to do.
For what it’s worth, I was reminded of Jessica Taylor’s account of collective debugging and psychoses as I read that part of the transcript. (Rather than trying to quote pieces of Jessica’s account, I think it’s probably best that I just link to the whole thing as well as Scott Alexander’s response.)
I presume this account is their source for the debugging stuff, wherein an ex-member of the rationalist Leverage institute described their experiences. They described the institute as having “debugging culture”, described as follows:
and:
The podcast statements seem to be an embellished retelling of the contents of that blog post (and maybe the allegations made by Scott Alexander in the comments of that post). I don’t think describing them as “completely made up” is accurate.
Leverage was an EA-aligned organization that was also part of the rationality community (or at least ‘rationalist-adjacent’) about a decade ago or more. For Leverage to be affiliated with the mantles of either EA or the rationality community was always contentious. From the side of EA (largely CEA) and the side of the rationality community (largely CFAR), Leverage faced efforts to shove it out of both within the short order of a couple of years. Both EA and CFAR thus couldn’t have then, and couldn’t now, say or do more to disown and disavow Leverage’s practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever. They have. To be clear, so has Leverage in its own way.
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities, with—to put it bluntly—the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside. In time, Leverage came to take that in stride, as the break-up between Leverage and the rest of the institutional polycule that is EA/rationality was extremely mutual.
In short, the course of events, and the practices at Leverage that led to them, as presented by Zoe Curzi and others a few years ago, from that time circa 2018 to 2022, can scarcely be attributed to either the rationality or EA communities. That’s a consensus EA, Leverage, and the rationality community agree on—one of the few things they still agree on at all.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.
At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”
This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very ‘in Leverage’s orbit’.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced.
Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA… It seemed like unscientific, crackpot psychology… it felt extremely cultish… The experience left me feeling humiliated and manipulated.”
I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on Less Wrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.
The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.
Quick point—I think the relationship between CEA and Leverage was pretty complicated during a lot of this period.
There was always a large segment of EAs who were suspicious of Leverage, ever since its founding. But Leverage did collaborate with EAs on some specific things early on (like the first EA Summit). It felt like an uncomfortable-alliance type of situation. If you go back through the Forum / LessWrong, you can read the artifacts.
I think the period of 2018 or so was unusual. This was a period where a few powerful people at CEA (Kerry, Larissa) were unusually pro-Leverage and came to power fairly quickly (Tara left somewhat suddenly). I think there was a lot of tension around this, and when they left (I think this period lasted around a year), CEA became much less collaborative with Leverage.
One way to square this a bit is that CEA was just not very powerful for a long time (arguably, its periods of “having real ability/agency to do new things” have been very limited). There were periods where Leverage had more employees (I’m pretty sure). The fact that CEA went through so many different leaders, each with different stances and strategies, makes it more confusing to look back on.
I would really love for a decent journalist to do a long story on this history, I think it’s pretty interesting.
Huh, yeah, that sure refers to those as “debugging”. I’ve never really heard Leverage people use those words, but Leverage 1.0 was a quite insular and weird place towards the end of its existence, so I must have missed it.
I think it’s kind of reasonable to use Leverage as evidence that people in the EA and Rationality community are kind of crazy and have indeed updated on the quotes being more grounded (though I also feel frustration with people equivocating between EA, Rationality and Leverage).
(Relatedly, I don’t particularly love you calling Leverage “rationalist” especially in a context where I kind of get the sense you are trying to contrast it with “EA”. Leverage has historically been much more connected to the EA community, and indeed had almost successfully taken over CEA leadership in ~2019, though IDK, I also don’t want to be too policing with language here)
I think it might describe how some people experienced internal double cruxing. I wouldn’t be that surprised if some people also found the “debugging” frame in general to give too much agency to others relative to themselves; I feel like I’ve heard that discussed.
Based on the things titotal said, seems like it very likely refers to some Leverage stuff, which I feel a bit bad about seeing equivocated with the rest of the ecosystem, but also seems kind of fair. And the Zoe Curzi post sure uses the term “debugging” for those sessions (while also clarifying that the rest of the rationality community doesn’t use the term that way, but they sure seemed to)
I wouldn’t and didn’t describe that section of the transcript, as a whole, as essentially true. I said much of it is. As the CEO might’ve learned from Tucker Carlson, who in turn learned from FOX News, we should seek to be ‘fair and balanced.’
As to the debugging part, that’s an exaggeration that must have come out the other side of a game of broken telephone on the internet. On the other side of that telephone line would’ve been some criticisms or callouts I read years ago of some activities happening in or around CFAR. I don’t recollect them in super-duper precise detail right now, nor do I have the time today to spend an hour or more digging them up on the internet.
As for the perhaps wrongheaded practices introduced into CFAR workshops for a period of time, other than the ones from Leverage Research, I believe the rest were introduced by Valentine (e.g., ‘againstness,’ etc.). As far as I’m aware, at least as it was applied at one time, some past iterations of Connection Theory bore at least a superficial resemblance to some aspects of ‘auditing’ as practiced by Scientologists.
As to perhaps even riskier practices, I mean they happened not “in” but “around” CFAR in the sense of not officially happening under the auspices of CFAR, or being formally condoned by them, though they occurred within the CFAR alumni community and the Bay Area rationality community. It’s murky, though there was conduct in the lives of private individuals that CFAR informally enabled or emboldened, and could’ve/should’ve done more to prevent. For the record, I’m aware CFAR has effectively admitted those past mistakes, so I don’t want to belabor any point of moral culpability beyond what has been drawn out to death on LessWrong years ago.
Anyway, activities that occurred among rationalists in the social network in CFAR's orbit, and that arguably rose to the level of triggering behaviour comparable in extremity to psychosis, include 'dark arts' rationality and some of the edgier experiments of post-rationalists. That includes some memes spread, and behaviours induced, in some rationalists by Michael Vassar, Brent Dill, etc.
To be fair, I'm aware much of that resulted not from spooky pseudo-rationality techniques but from some unwitting rationalists being effectively bullied into taking wildly mind-altering drugs, as guinea pigs in an uncontrolled DIY experiment. While responsibility for these latter outcomes may not be as attributable to CFAR, it can fairly be attributed to past mistakes of the rationality community, albeit on a vague, semi-collective level.
I think it's worth noting that the two examples you point to are right-wing, which the vast majority of Silicon Valley is not. Right-wing tech people likely have more influence in DC, so that's not to say they're irrelevant, but I don't think they are representative of Silicon Valley as a whole.
I think Garry Tan is more left-wing, but I'm not sure. A lot of the e/acc community fights with EA, and my impression is that many of them are leftists.
I think that the right-wing techies are often the loudest, but there are also lefties in this camp too.
(Honestly though, the right-wing techies and left-wing techies often share many of the same policy ideas. But they seem to disagree on Trump and a few other narrow things. Many of the recent Trump-aligned techies used to be more left-coded.)
Random Tweet from today: https://x.com/garrytan/status/1820997176136495167
Garry Tan is the head of Y Combinator, which is basically the most important and influential tech incubator out there. Around 8 years back, relations were much better, and 80,000 Hours and CEA actually went through Y Combinator.
I’d flag that Garry specifically is kind of wacky on Twitter, compared to previous heads of YC. So I definitely am not saying it’s “EA’s fault”—I’m just flagging that there is a stigma here.
I personally would be much more hesitant to apply to YC knowing this, and I'd expect YC would be less inclined to bring in AI safety folks and likely EAs.
I find it very difficult psychologically to take someone seriously if they use the word ‘decels’.
Want to say that I called this ~9 months ago.[1]
I will reiterate that clashes of ideas/worldviews[2] are not settled by sitting them out and doing nothing, since they can be waged unilaterally.
Especially if you look at the various other QTs of this video across that side of Twitter.
Or ‘memetic wars’, YMMV
My impression is that the current EA AI policy arm isn't having much active dialogue with the VC community and the like. I see Twitter spats that look pretty ugly, and I suspect this relationship could be improved with more work.
At a higher level, I suspect there could be a fair bit of policy work that both EAs and many of these VCs and others would be more okay with than what is currently being pushed. My impression is that we should focus on the narrow subset of risks that matter a lot to EAs but don't matter much to others, so we can essentially trade and come out better than we are now.
That seems like the wrong play to me. We need to be focused on achieving good outcomes and not being popular.
My personal take is that there are better trade-offs between the two that we could be making. I think the narrow subset of risks is where most of the value is, so from that standpoint, this could be a good trade.