I don’t think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings about a lot of EA stuff, but I nevertheless think I have a bunch of the answers that you are looking for:
Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What’s the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?
The Coordination Forum is a very loosely structured retreat that’s been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
At least as far as I can tell basically no decisions get made at Coordination Forum, and its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some balance between the two).
I think attendance has been decided by CEA. Criteria have been pretty in-flux. My sense has been that a lot of it is just dependent on who CEA knows well enough to feel comfortable inviting, and who seems to be obviously worth coordinating with.
Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?
I mean, my primary guess here is Carrick. I don’t think there was anyone besides Carrick who “decided” to make the Carrick campaign happen. I am pretty confident Carrick had no boss and did this primarily on his own initiative (though likely after consulting with various other people in EA on whether it was a good idea).
[edit: On more reflection and talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. Like, I think it was closer to “a bunch of kind-of-but-not-very influential EAs reached out to him and told him that they think it would be quite impactful and good for the world if he ran”, and my updated model of Carrick really wasn’t personally attracted to running for office, and the overall experience was not great for him]
I expressed desire for it not to happen! Though like, I think it wasn’t super obvious to me it was a wrong call, but a few times when people asked me whether to volunteer for the Carrick campaign, I said that seemed overall bad for the world. I did not reach out to Carrick with this complaint, since doing anything is already hard, Carrick seemed well-intentioned, and while I think his specific plan was a mistake, it didn’t seem a bad enough mistake to be worth very actively intervening (and like, ultimately Carrick can do whatever he wants, I can’t stop him from running for office).
Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?
My guess is because he thought none of them are very good? I also don’t think we should take on board any of their suggestions, and many of them strike me as catastrophic if adopted. I also don’t think any of them would have helped with this whole FTX situation, and my guess is some of them would have likely made it worse.
Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?
I don’t know a ton of stuff that Will has done. I do think I and others have tried various things over the years to reduce hero worship. On LessWrong and the EA Forum I downvote things that seem hero-worshippy to me, and I have written many comments over the years trying to reduce it. We also designed the frontpage guidelines on LW to reduce some of the associated community dynamics.
I do think this is a bit of a point of disagreement between me and others in the community, where I have had more concerns about this domain than others, but my sense is everyone is pretty broadly on-board with reducing this. Sadly, I also don’t have a ton of traction on reducing this.
The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community, however, do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders, or even with lower-level community builders. Have there been discussions within the community health team about how to deal with this? What approaches are they taking community-wide, rather than just dealing with ad hoc incidents?
I do think it is indeed really sad that people fear reprisal for disagreement. I think this is indeed a pretty big problem, not really because EA is worse here than the rest of the world, but because I think the standard for success is really high on this dimension, and there is a lot of value in encouraging dissent and pushing back against conformity, far into the tails of the distribution here.
I expect the community health team to have discussed this extensively (like, I have discussed it with them for many hours). There are lots of things attempted to help with this over the years. We branded one EAG after “keeping EA weird”, we encouraged formats like whiteboard debates at EAG to show that disagreement among highly-engaged people is common, we added things like disagree-voting in addition to normal upvoting and downvoting to encourage a culture where it’s normal and expected that someone can write something that many people disagree with, without that thing being punished.
My sense is this all isn’t really enough, and we still kind of suck at it, but I also don’t think it’s an ignored problem in the space. I also think this problem gets harder and harder the more you grow, and larger communities trying to take coordinated action require more conformity to function, and this sucks, and is I think one of the strongest arguments against growth.
A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?
Anything I say here is in my personal capacity and not in any way on behalf of EA Funds. I am just trying to use my experience at EA Funds for some evidence about how these things usually go.
At least historically in my work at EA Funds this would be the opposite of how I usually evaluate grants. A substantial fraction of my notes consist of complaining that people seem too conformist to me and feel a bit like “EA bots” who somewhat blindly accept EA canon in ways that feel bad to me.
My sense is other grantmakers are less anti-conformity, but in-general, at least in my interactions with Open Phil and EA Funds grantmakers, I’ve seen basically nothing that I could meaningfully describe as punishing dissent.
I do think there are secondary things going on here where de-facto people have a really hard time evaluating ideas that are not expressed in their native ontology, and there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people, and I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity. I think that kind of stuff is indeed pretty bad, though I think almost all of the people who I’ve seen do this kind of thing would at least in the abstract strongly agree that punishing dissent is quite bad, and that we should be really careful around this domain, and have been excited about actively starting prizes for criticism, etc.
FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?
Again, just using my historical experience at EA Funds as evidence. I continue to in no way speak on behalf of funds, and this is all just my personal opinion.
I would have to look through the data, but my guess is about 20% of EA Funds funding is distributed privately, though a lot of that happens via referring grants to private donors (i.e. most of this does not come from the public EA Funds funding). About three-quarters (in terms of dollar amount) of this is to individuals who have a strong preference for privacy, and the other quarter is for stuff that’s more involved in policy and politics where there is some downside risk of being associated with EA in both directions (sometimes the policy project would prefer to not be super publicly associated and evaluated by an EA source, sometimes a project seems net-positive, but EA Funds doesn’t want to signal that it’s an EA-endorsed project).
SFF used to have a policy of allowing grant recommenders to prevent a grant from showing up publicly, but we abolished that power in recent rounds, so now all grants show up publicly.
I personally really dislike private funding arrangements and find it kind of shady and have pushed back a bunch on them at EA Funds, though I can see the case for them in some quite narrow set of cases. I personally quite dislike not publicly talking about policy project grants, since like, I think they are actually often worth the most scrutiny.
What sort of coordination, if any, goes on around which EAs talk to the media, write highly publicised books, go in curricula etc? What is the decision making procedure like?
There is no formal government here. If you do something that annoys a really quite substantial fraction of people at EA organizations, or people on the EA Forum, or any other large natural interest group in EA, there is some chance that someone at CEA (or maybe Open Phil) reaches out to someone doing a lot of things very publicly and asks them to please stop it (maybe backed up with some threat of the Effective Altruism trademark that I think CEA owns).
I think this is a difficult balance, and asking people to please associate less with EA can also easily contribute to a climate of conformity and fear, so I don’t really know what the right balance here is. I think on the margin I would like the world to understand better that EA has no central government, and anyone can basically say whatever they want and claim that it’s on behalf of EA, instead of trying to develop some kind of party-line that all people associated with EA must follow.
I do think this was a quite misleading narrative (though I do want to push back on your statement of it being “completely untrue”), and people made a pretty bad mistake endorsing it.
Up until yesterday I thought that indeed 80k fucked up pretty badly here, but I talked a bit to Max Dalton and my guess is the UK EAs seemed to maybe know a lot less about how Sam was living than people here in the Bay Area, and it’s now plausible to me (though still overall unlikely) that Rob did just genuinely not know that Sam was actually living a quite lavish lifestyle in many ways.
I had drafted an angry message to Rob Wiblin when the interview came out that I ended up not sending because it was a bit too angry, that went approximately something like “Why the hell did you tell this story of SBF being super frugal in your interview when you know totally well that he lives in one of the most expensive apartments in the Bahamas and has a private jet”. I now really wish I had sent it. I wonder whether it would have caused Rob to notice something fishy was going on, and while I don’t think it would have flipped this whole situation, I do think it could have made a decent dent in us not being duped by it.
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
The 15 billion figure comes from Will’s text messages themselves (page 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define “huge.” He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: “~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing”
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will Macaskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it’s his right to have private conversations with others and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
The 15 billion figure comes from Will’s text messages themselves (page 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define “huge.” He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: “~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing”
Makes sense, I think I briefly saw that, and interpreted the last section as basically saying “ok, more than 8b will be difficult”, but the literal text does seem like it was trying to make $8b+ more plausible.
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I have actually talked to lots of people about it! Probably as much as I have talked with people about e.g. challenge trials.
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
I also remember people talking about buying Twitter during the Trump presidency and somehow changing it, since it seemed like it might have substantially increased nuclear war risk at the time, so there was at least some public discourse about it.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will Macaskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it’s his right to have private conversations with others and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA from talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as on EA’s willingness to have him be as close to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place).
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
Yeah, there could be some public stuff about this and I’m just not aware of it. And sorry, I wasn’t trying to say that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it “makes sense.” I’m not saying that I don’t see potential value in purchasing Twitter, but I don’t think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We’re a movement that prides itself on using evidence and reason to make the world better, after all.)
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA from talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as on EA’s willingness to have him be as close to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place).
Thanks for clarifying that—that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.
If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty.
If you think investing in Twitter is close to neutral from an investment perspective (maybe reasonable at the time, definitely not by the time Musk was forced to close) then the opportunity cost isn’t really billions of dollars. Possibly this would have been an example of marginal charity.
I can see where you’re coming from with this, and I think purely financially you’re right, it doesn’t make sense to think of it as billions of dollars ‘down the drain.’
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal)
Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare; this is quite a shift from those into a space that is more about power and politics, particularly given Musk’s stated political views/aims leading up to this purchase)
How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
What were SBF’s personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA’s vision for the platform?
What trade-offs would be made that would impact other cause areas?
This is the bit I think was missed further up the thread. Regardless of whether buying a social media company could reasonably be considered EA, it’s fairly clear that Elon Musk’s goals both generally and with Twitter are not aligned with EA. MacAskill is allowed to do things that aren’t EA-aligned, but it seems to me to be another case of poor judgement by him (in addition to his association with SBF).
For what it’s worth, connecting SBF and Musk might’ve been a time-sensitive situation for one reason or another. There would’ve also still been time to debate the investment in the larger community before the deal would’ve actually gone through.
Seems quite implausible to me that this would have happened, and unclear if it would have been good. (Assuming “larger EA community” implies more than private conversations between a few people.)
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
Part of the issue here is that people have been accounting the bulk of SBF’s net worth as “EA money”. If you phrase the question as “Should EA invest in Twitter?” the answer is no. EA should probably also not invest in Robinhood or SRM. If SBF’s assets truly were EA assets, we ought to have liquidated them long ago and either spent them or invested them reasonably. But they weren’t.
I feel like anyone reaching out to Elon could say “making it better for the world” because that’s exactly what would resonate with Elon. It’s probably what I’d say to get someone on my side and communicate I want to help them change the direction of Twitter and “make it better.”
I disagree with the implied principle. E.g., I think it’s good for me to help animal welfare and global poverty EAs with their goals sometimes (when I’m in an unusually good position to help out), even though I think their time and money would be better spent on existential risk mitigation.
Agreed that a principle of ‘only cooperate on goals you agree with’ is too strong. On the object-level, if MacAskill was personally neutral or skeptical on the object-level question of whether SBF should buy Twitter, do you think he should have helped SBF out?
When is cooperation inappropriate? Maybe when the outcome you’re cooperating on is more consequential (in the bad direction, according to your own goals) than the expected gains from establishing reciprocity.
This would have been the largest purchase in EA history, replacing much or most of FTXFF with “SBF owns part of Twitter”. I think when the outcome is as consequential as that, we should hold cooperators responsible as if they were striving for the outcome, because the effects of helping SBF buy Twitter greatly outweigh the benefits from improving Will’s relationship with SBF (which I model as already very good).
if MacAskill was personally neutral or skeptical on the object-level question of whether SBF should buy Twitter, do you think he should have helped SBF out
If Will had no reason to think SBF was a bad egg, then I’d guess he should have helped out even if he thought the thing was not the optimal use of Sam’s money. (While also complaining that he thinks the investment is a bad idea.)
If Will thought SBF was a “bad egg”, then it could be more important to establish influence with him, because you don’t need to establish influence (as in ‘willingness to cooperate’) with someone who is entirely value-aligned with you.
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
I agree that it’s possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka’s comment which said:
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that’s the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn’t feel comfortable with doing either of those.
You’re right to say people had been assuming SBF’s wealth belonged to EA: I had. In the legal sense it wasn’t, and we paid a price for that. I think it was fair to argue that the wealth ‘rightfully’ belonged to the EA community, in the sense that SBF should defer to representatives of EA on how it should be used, and would be defecting by spending a few billion on personal interests. The reason for that kind of principle is to avoid a situation where EA is captured or unduly influenced by the idiosyncratic preferences of a couple of mega-donors.
The answer is different for each side of your slash.
I see two kinds of relationships EA can have to megadonors:
uneasy, arms’ length, untrusting, but still taking their money
friendly, valorizing, celebratory, going to the same parties, conditional on the donor ceding control of a significant fraction of their wealth to a donor-advised fund (rather than just pledging to give)
Investing in assets expected to appreciate can be a form of earning to give (not that Twitter would be a good investment IMO). That’s how Warren Buffett makes money and probably nobody in EA has criticized him for doing that. Investing in a for-profit venture is very different from donating to something, and is guided by different principles, because you are expecting to (at least) get your money back and can invest it again or donate it later (this difference is one of the reasons microloans became so hugely popular for a while).
On the downside, concentrating assets (in any company, not just Twitter) is a bad financial strategy, but on the upside, having some influence at Twitter could be useful to promote things like moderation rules that improve the experience of users and increase the prevalence of genuine debate and other good things on the platform.
Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam’s reference to ‘nice apartments’ in the interview:
“I don’t know, I kind of like nice apartments. … I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets.”
Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the ‘crypto’ social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.
Thanks for your in-depth response to this question by the way, it’s really appreciated and exactly what I was looking for from this post! It is pretty strange that no one reached out to you in a professional capacity to correct this, but that certainly isn’t your fault!
Makes sense, seems like a sad failure of communication :(
Looks like on my side I had an illusion of transparency which made me feel like you must very likely know about this, which made me expect that a conversation about this would end up more stressful than it probably would have been. Even if you didn’t do it intentionally (which I thought was plausible, but even at the time not very likely), I still expected that there was some subconscious or semi-intentional bias that I would have had to deal with that would have made the conversation pretty difficult. I do now think it’s very likely that the conversation would have just gone fine, and maybe would have successfully raised some flags.
I do wonder whether there was some way to catch this kind of thing. I wonder whether, if the podcasts were reliably posted to the forum with transcripts (which I think would be a great idea anyway), there would be a higher chance of someone leaving a comment pointing out the inconsistency (I think I at least would have been more likely to do that).
My guess is there are also various other lessons to take away from this, and I am interested in more detail on what you and other people at 80k did know about, but doesn’t seem necessary to go into right now. I appreciate you replying here.
Separately from the FTX issue, I’d be curious about you dissecting what of Zoe’s ideas you think are worth implementing and what would be worse and why.
My takes:
Set up whistleblower protection schemes for members of EA organisations ⇒ seems pretty good if there is a public commitment from an EA funder to something like “if you whistleblow we’ll cover your salary if you are fired while you search another job” or something like that
Transparent listing of funding sources on each website of each institution ⇒ Seems good to keep track of who receives money from who
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough, though I don’t have great insight on grantgiving institutions
Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% ⇒ this seems bad for incentives and complicated to put into action
Within 5 years: EA funding decisions are made collectively ⇒ seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
No fireside chats at EAG with leaders. Instead, panel/discussions/double cruxing disagreements between widely known and influential EAs and between different orgs and more space for the people that are less known ⇒ Meh, I’m indifferent since I just don’t consume that kind of content so I don’t know the effects it has, though I am erring towards it being somewhat good to give voice to others
Increase transparency over
Who gets accepted/rejected to EAG and why ⇒ seems hard to implement, though there could be some model letters or something
leaders/coordination forum ⇒ My sense is this forum is nowhere near as important as these recommendations imply
Set up: ‘Online forum of concerns’ ⇒ seems somewhat bad / will lead to overly focusing on things that are not that important, though good to survey people on concerns
I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me.
On the whistleblower protections: I think real whistleblower protection would be great, but I think setting this up is actually really hard and it’s very common in the real world that institutions like this end up traps and net-negative and get captured by bad actors in ways that strengthens the problems they are trying to fix.
As examples, many university health departments are basically traps where if you go to them, they expel you from the university because you outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up being played by SBF, and were one of the main tools that he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).
I think it’s possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it’s better to do nothing here than to attempt it without trying exceptionally hard to make it go well).
It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies. I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies. Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?
In practice, I notice two problems. Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage. Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough [my emphasis], though I don’t have great insight on grantgiving institutions
Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? Who signed off on the decision to make the campaign manager someone with no political experience?
I would add that SBF and people around him decided to invest a lot of resources into this. As far as I can tell, he didn’t seem interested in people’s thoughts on whether this is a good idea. Most EAs thought it wasn’t wise to spend so much on the campaign.
I also just made an edit after reflecting a bit more on it and talking to some other people:
[edit: On more reflection and talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. Like, I think it was closer to “a bunch of kind-of-but-not-very influential EAs reached out to him and told him that they think it would be quite impactful and good for the world if he ran”, and my updated model of Carrick really wasn’t personally attracted to running for office, and the overall experience was not great for him]
Strong upvote here. I really like how you calmly assessed each of these in a way that feels very honest and has an all-cards-on-the-table feel to it. Some may still have reservations towards your comments given that you seem to at least somewhat fit into this picture of EA leadership, but this feels largely indicative of a general anger at the circumstances turned inwards towards EA that feels rather unhealthy. I certainly appreciate the OP, as this does seem like a moment ripe for asking important questions that need answers, but don’t forget that those in leadership are humans who make mistakes too, and are generally people who seem really committed to trying to do what everyone in EA is: make the world a better place.
I think it’s right that those in leadership are humans who make mistakes, and I am sure they are generally committed to EA; in fact, many have served as real inspirations to me.
Nonetheless, as a movement we were founded on the idea that good intentions are not enough, and somewhere this seems to be getting lost. I have no pretensions that I would do a better job in leadership than these people; rather, I think the way EA concentrates power (formally, and even more so informally) in a relatively small and opaque leadership group seems problematic. To justify this, I think we would need these decision-makers to be superhuman, like Plato’s Philosopher King. But they are not, they are just human.
Swinging in a bit late here, but found myself compelled to ask, what sort of structure do you think would be better for EA, like in specific terms beyond “a greater spread of control and power to make decisions”?
A few things (I will reply in more detail in the morning once I have worked out how to link to specific parts of your text in my comment). These comments may appear a bit blunt, and I do apologise; they are blunt for clarity’s sake rather than to imply aggressiveness or rudeness.
With regards to the Coordination Forum: even if no “official decisions” get worked out, how much influence do you think it has over the overall direction of the movement? And why are the attendees of this not public? If the point is to build trust between those doing community building and to understand the core gnarly disagreements, why are the people going and what goes on kept so secret?
Your Carrick Flynn answer sort of didn’t really tell me which senior EA leaders, if any, encouraged Carrick to run, knew before he announced, etc., which is something I think is important to know. It also doesn’t explain the decision around the choice of campaign manager.
With regards to buying Twitter: whilst it is Will’s right to do whatever he wants, it really does call into question whether it is correct for him to be the “leader of EA” (or for EA to have a de facto leader in such a way). If he has that role, surely he has certain responsibilities, and if he doesn’t want to fulfil those responsibilities, surely it’s time for him to step away from that role? I guess I think that a fuck-up as big as how hard Will vouched for SBF should hugely call into question how much power we give Will.
With regards to Zoe’s ideas, I think I would actually like to see Will’s reasoning. A number of them could have been run as experiments anyway (more democratic funding via deliberative groups, debates/double-cruxing at EAGs, etc.), so the justification for implementing nothing doesn’t seem that strong. But nonetheless I would like to see the justification from Will.
With regards to hero worshipping, I guess the fact that Will has done little to reduce it should be pretty concerning to us.
With regards to the fear of disagreement, I do broadly agree things are getting done, and I thank you for your work with the disagree vote on the forum. I still think the karma system messes with things anyway, but thanks for implementing that. I do however think we need much broader discussions about this.
With regards to private funding, I think it’s important to note it was FTX Future Fund’s decision to make the funding private; I wanted it public, and indeed I did publicise it on my website. This was funding given directly, not through a recommendation or a regranter, which I as the grantee wanted to be public knowledge.
With regards to coordination and the media, there does however seem to be a decent level of coordination. Will’s book got about $10 million in funding (edit: this is unconfirmed, although I have heard it from multiple sources, and I think the number is pretty plausible given how many adverts etc. the book got; I should also say that if it was that much, I think it was probably a good use of money) and far more coverage than even Toby’s book got. It seems to always be the same faces talking to the media (i.e. mostly Will, to be honest), so either there is coordination, or Will is not pointing journalists to other members of the community to talk to. I guess either EA has naturally developed some kind of internal government, which those who hold that power should probably try to reduce, or there has been some coordination here, and making and encouraging Will (and, I guess until recently, to a lesser extent SBF) the face of EA was a more deliberate decision. Probably the former is more likely, but then there are questions as to why this hasn’t been fought against more (in other academic circles I have been in, the “figurehead” of the community has refused interviews to try to encourage putting the spotlight on others, although this was presumably much harder when the coverage was all about Will’s book).
With regards to SBF’s lifestyle, I think it is probably true that many of us in the UK were less aware of this. But surely a bunch of the UK-based leadership (e.g. Will) knew of it, and so this could and should have been communicated more broadly.
I guess I think none of these issues on their own are ridiculously concerning, but the lack of transparency and the concentration of power without a sense of safeguards or ways for community members to have input does scare me, which is why I want a better sense of how these decisions are made, to know whether my reading of how EA is run, either formally or informally, is correct or not. Thanks so much for your help so far on this, it is really appreciated!
Wait, what!? What’s your source of information for that figure? I get hiring a research assistant or two, but $10m seems like two orders of magnitude too much. I can’t even imagine how you would spend anywhere near that much on writing a book. Where did this money come from?
The book was, in Will’s words “a decade of work”, with a large number of people helping to write it, with a moderately large team promoting it (who did an awesome job!). There were a lot of adverts certainly around London for the book, and Will flew around the world to promote the book. I would certainly be hugely surprised if the budget was under $1 million (I know of projects run by undergraduates with budgets over a million!), and to be honest $10 million seems to me in the right ball park. Things just cost a lot of money, and you don’t promote a book for free!
The source appears to be Émile P. Torres. Gideon, could you confirm that this is the case? Also, could you clarify if you ever reached out to Will MacAskill to confirm the accuracy of this figure?
I’ve heard it from a number of people saying it quite casually, so assumed it was correct, as it’s the only figure I heard bandied around and didn’t hear opposition to it. I’ve just tried to confirm it and don’t see it publicly, so it may be wrong. They may have heard it from Emile, I don’t know. So take it with a hefty pinch of salt then.
Unfortunately, I don’t quite think I have the level of access to just randomly email Will MacAskill to confirm it, but if someone could, that would be great.
FYI, I think it probably would have been a fantastic use of $10 million, which is why I also think it’s quite plausible.
If you are unable to adduce any evidence for that particular figure, I think your reply should not be “take it with a hefty pinch of salt” but to either reach out to the person in a position to confirm or disconfirm it, or else issue a retraction.
I think a retraction would also be misleading (since I am worried it would indicate a disconfirmation). I think editing it to say that the number comes from unconfirmed rumors seems best to me.
FWIW, a $10MM estimate seems in the right order of magnitude based on random things I heard, though I also don’t have anything hard to go on (my guess is that it will have ended up less than $10MM, but I am like 80% confident it was more than $1.5MM, though again, purely based on vague vibes I got from talking to some people in the vague vicinity of the marketing campaign)
Why would a retraction be misleading? A valid reason for retracting a statement is failure to verify it. There is no indication in these cases that the statement is false.
If someone can’t provide any evidence for a claim that very likely traces back to Emile Torres, and they can’t be bothered to send a one-line email to Will’s team asking for confirmation, then it seems natural to ask this person to take back the claim. But I’m also okay with an edit to the original comment along the lines you suggest.
Huh, I definitely read strikethrough text by default as “disconfirmed”. My guess is I would be happy to take a bet on this and ask random readers what they think the truth value of a strike-through claim like this is.
But in any case, seems like we agree that an edit is appropriate.
Saying I “can’t be bothered to send a one-line email”: I’m not a journalist and really didn’t expect this post to blow up as much as it did. I am literally a 19-year-old kid and, if I’m honest, not sure that Will’s team will respond to me. Part of the hope for this post was to get some answers, which in some cases (i.e. Rob Wiblin, thanks!) I have got, but in others I haven’t.
Honestly, I think it is fine to relay second-hand information, as long as it is minimally trustworthy—i.e., heard from multiple sources—and you clearly caveat it as such. This is a forum for casual conversation, not an academic journal or a court of law. In this case, too, we are dealing with a private matter that is arguably of some public interest to the movement. It would be great if these things were fully transparent in the first place, in which case we wouldn’t have to depend on hearsay.
With that said: now we have heard the figure of $10m, it would be nice to know what the real sum was.
EDIT: Having just read Torres’ piece, Halstead’s letter to the editor, and the editorial note quoting Will’s response, there is no indication that anyone has disputed the $10m figure with which the piece began. Obviously that does not make it true, but it would seem to make it more likely to be true. One thing I had not realised, though, was that this money could have been used for the promotion of the book as well as its writing.
there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people, and I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity
Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?
(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn’t? Or… what?)
Hmm, I don’t know whether it wouldn’t have happened without EA funding, but that seems pretty plausible to me. I think campaign donations are public, so maybe we can just see very precisely who made this decision. I also think, on the funding dimension, a bunch of EA leaders encouraged others to donate to the Carrick campaign in ways that seemed to me somewhat too aggressive.
I do also think there was a separate pattern around the Carrick campaign where for a while people were really hesitant to say bad things about Carrick or politics-adjacent EA because it maybe would have hurt his election chances, and I think that was quite bad, and I pushed back a bunch of times on this, though the few times I did push back on it, it was quite well-received.
Bankman-Fried has provided Protect Our Future PAC with the majority of its donations. The group has raised $28 million for the 2022 election cycle as of June 30, with $23 million from Bankman-Fried. Nishad Singh, who serves as head of engineering at FTX, has donated another $1 million.
The PAC has spent $10.5 million, about half of the group’s independent expenditures through July 21, in support of Democrat Carrick Flynn in his unsuccessful primary bid in the highly funded Oregon 6th Congressional District race. Like Protect Our Future PAC, Flynn has stated that his “‘first priority is pandemic prevention.’”
The PAC spent nearly $940,000 against Flynn’s opponent, Oregon state Rep. Andrea Salinas. These expenditures represent the only instance in which Protect our Future PAC has spent money against a Democratic candidate.
The race has made for the third most expensive House Democratic primary in the country, according to the nonpartisan, nonprofit group OpenSecrets. By Monday, the Democratic race drew more than $13 million in outside money, OpenSecrets reported.
The vast majority of that — more than $10 million — was donated to Flynn’s campaign by a group backed by a cryptocurrency billionaire.
I don’t think I am a great representative of EA leadership, given my somewhat bumpy relationship and feelings to a lot of EA stuff, but I nevertheless I think I have a bunch of the answers that you are looking for:
The Coordination Forum is a very loosely structured retreat that’s been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
At least as far as I can tell basically no decisions get made at Coordination Forum, and it’s primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some balance between the two).
I think attendance has been decided by CEA. Criteria have been pretty in-flux. My sense has been that a lot of it is just dependent on who CEA knows well-enough to feel comfortable inviting, and who seems to be obviously worth coordinating with.
I mean, my primary guess here is Carrick. I don’t think there was anyone besides Carrick who “decided” to make the Carrick campaign happen. I am pretty confident Carrick had no boss and did this primarily on his own initiative (though likely after consulting with various other people in EA on whether it was a good idea).
[edit: On more reflection and talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. Like, I think it was closer to “a bunch of kind-of-but-not-very influential EAs reached out to him and told him that they think it would be quite impactful and good for the world if he ran”, and my updated model of Carrick really wasn’t personally attracted to running for office, and the overall experience was not great for him]
I expressed desire for it not to happen! Though like, I think it wasn’t super obvious to me it was a wrong call, but a few times when people asked me whether to volunteer for the Carrick campaign, I said that seemed overall bad for the world. I did not reach out to Carrick with this complaint, since doing anything is already hard, Carrick seemed well-intentioned, and while I think his specific plan was a mistake, it didn’t seem a bad enough mistake to be worth very actively intervening (and like, ultimately Carrick can do whatever he wants, I can’t stop him from running for office).
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
My guess is because he thought none of them are very good? I also don’t think we should take on board any of their suggestions, and many of them strike me as catastrophic if adopted. I also don’t think any of them would have helped with this whole FTX situation, and my guess is some of them would have likely made it worse.
I don’t know a ton of stuff that Will has done. I do think me and others have tried various things over the years to reduce hero worship. On Lesswrong and the EA Forum I downvote things that seem hero-worshippy to me, and I have written many comments over the years trying to reduce it. We also designed the frontpage guidelines on LW to reduce some of the associated community dynamics.
I do think this is a bit of a point of disagreement between me and others in the community, where I have had more concerns about this domain than others have, but my sense is everyone is pretty broadly on-board with reducing this. Sadly, I also don’t have a ton of traction on reducing this.
I do think it is indeed really sad that people fear reprisal for disagreement. I think this is indeed a pretty big problem, not really because EA is worse here than the rest of the world, but because I think the standard for success is really high on this dimension, and there is a lot of value in encouraging dissent and pushing back against conformity, far into the tails of the distribution here.
I expect the community health team to have discussed this extensively (like, I have discussed it with them for many hours). Lots of things have been attempted to help with this over the years: we branded one EAG around “keeping EA weird”, we encouraged formats like whiteboard debates at EAG to show that disagreement among highly-engaged people is common, and we added things like disagree-voting in addition to normal upvoting and downvoting to encourage a culture where it’s normal and expected that someone can write something that many people disagree with, without that thing being punished.
My sense is this all isn’t really enough, and we still kind of suck at it, but I also don’t think it’s an ignored problem in the space. I also think this problem gets harder and harder the more you grow, and larger communities trying to take coordinated action require more conformity to function, and this sucks, and is I think one of the strongest arguments against growth.
Anything I say here is in my personal capacity and not in any way on behalf of EA Funds. I am just trying to use my experience at EA Funds for some evidence about how these things usually go.
At least historically in my work at EA Funds this would be the opposite of how I usually evaluate grants. A substantial fraction of my notes consists of complaining that people seem too conformist to me and feel a bit like “EA bots” who somewhat blindly accept EA canon in ways that feel bad to me.
My sense is other grantmakers are less anti-conformity, but in-general, at least in my interactions with Open Phil and EA Funds grantmakers, I’ve seen basically nothing that I could meaningfully describe as punishing dissent.
I do think there are secondary things going on here where de-facto people have a really hard time evaluating ideas that are not expressed in their native ontology, and there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people. I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity. I think that kind of stuff is indeed pretty bad, though I think almost all of the people who I’ve seen do this kind of thing would at least in the abstract strongly agree that punishing dissent is quite bad, and that we should be really careful around this domain, and have been excited about actively starting prizes for criticism, etc.
Again, just using my historical experience at EA Funds as evidence. I continue to in no way speak on behalf of funds, and this is all just my personal opinion.
I would have to look through the data, but my guess is about 20% of EA Funds funding is distributed privately, though a lot of that happens via referring grants to private donors (i.e. most of this does not come from the public EA Funds funding). About three-quarters (in terms of dollar amount) of this is to individuals who have a strong preference for privacy, and the other quarter is for stuff that’s more involved in policy and politics where there is some downside risk of being associated with EA in both directions (sometimes the policy project would prefer to not be super publicly associated and evaluated by an EA source, sometimes a project seems net-positive, but EA Funds doesn’t want to signal that it’s an EA-endorsed project).
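(For concreteness, here is a minimal back-of-the-envelope sketch of what that breakdown would imply. The total is a made-up placeholder; only the rough percentages come from the paragraph above.)

```python
# Illustrative only: the total below is a made-up placeholder; the ~20% private
# share and the rough three-quarters / one-quarter split within it are the only
# figures taken from the comment above.
hypothetical_total = 10_000_000   # assumed annual grant volume in USD (not a real figure)

private_share = 0.20              # ~20% distributed privately
to_individuals = 0.75             # ~three-quarters of that (by dollar amount) to individuals
to_policy_projects = 0.25         # ~one quarter to policy/politics-adjacent projects

private_total = hypothetical_total * private_share
print(f"private grants total:   ${private_total:,.0f}")
print(f"  to individuals:       ${private_total * to_individuals:,.0f}")
print(f"  to policy projects:   ${private_total * to_policy_projects:,.0f}")
```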
SFF used to have a policy of allowing grant recommenders to prevent a grant from showing up publicly, but we abolished that power in recent rounds, so now all grants show up publicly.
I personally really dislike private funding arrangements and find them kind of shady, and I have pushed back a bunch on them at EA Funds, though I can see the case for them in a quite narrow set of cases. I personally quite dislike not publicly talking about policy project grants, since like, I think they are actually often worth the most scrutiny.
There is no formal government here. If you are doing a lot of things very publicly and that annoys a really quite substantial fraction of people at EA organizations, or people on the EA Forum, or any other large natural interest group in EA, there is some chance that someone at CEA (or maybe Open Phil) reaches out and asks you to please stop it (maybe backed up with some threat of the Effective Altruism trademark, which I think CEA owns).
I think this is a difficult balance, and asking people to please associate less with EA can also easily contribute to a climate of conformity and fear, so I don’t really know what the right balance here is. I think on the margin I would like the world to understand better that EA has no central government, and anyone can basically say whatever they want and claim that it’s on behalf of EA, instead of trying to develop some kind of party-line that all people associated with EA must follow.
I do think this was a quite misleading narrative (though I do want to push back on your statement of it being “completely untrue”), and people made a pretty bad mistake endorsing it.
Up until yesterday I thought that indeed 80k fucked up pretty badly here, but I talked a bit to Max Dalton and my guess is the UK EAs seemed to maybe know a lot less about how Sam was living than people here in the Bay Area, and it’s now plausible to me (though still overall unlikely) that Rob did just genuinely not know that Sam was actually living a quite lavish lifestyle in many ways.
I had drafted an angry message to Rob Wiblin when the interview came out, which I ended up not sending because it was a bit too angry. It went approximately like: “Why the hell did you tell this story of SBF being super frugal in your interview when you know totally well that he lives in one of the most expensive apartments in the Bahamas and has a private jet?” I now really wish I had sent it. I wonder whether this would have caused Rob to notice something fishy was going on, and while I don’t think it would have flipped this whole situation, I do think it could have potentially made a decent dent in us not getting duped by this whole situation.
The 15 billion figure comes from Will’s text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define ‘huge.’ He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: ‘~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing.’”
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will MacAskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the whole community. Of course it’s his right to have private conversations with others and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
Makes sense, I think I briefly saw that, and interpreted the last section as basically saying “ok, more than 8b will be difficult”, but the literal text does seem like it was trying to make $8b+ more plausible.
I have actually talked to lots of people about it! Probably as much as I have talked with people about e.g. challenge trials.
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
I also remember people talking about buying Twitter during the Trump presidency and somehow changing it, since it seemed like it might have substantially increased nuclear war risk at the time, so there was at least some public discourse about it.
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA from talking to other people in private about their honest judgement of other people could possibly be a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as on EA’s willingness to have him be as close to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe in the way we delegate leadership in the first place).
Yeah, there could be some public stuff about this and I’m just not aware of it. And sorry, I wasn’t trying to say that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it “makes sense.” I’m not saying that I don’t see potential value in purchasing Twitter, but I don’t think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We’re a movement that prides itself on using evidence and reason to make the world better, after all.)
Thanks for clarifying that—that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.
If you think investing in Twitter is close to neutral from an investment perspective (maybe reasonable at the time, definitely not by the time Musk was forced to close) then the opportunity cost isn’t really billions of dollars. Possibly this would have been an example of marginal charity.
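(To make the “opportunity cost isn’t the headline number” point concrete, here is a minimal sketch; every figure in it is an assumption for illustration, not a claim about the actual deal.)

```python
# Toy illustration: if the Twitter stake is expected to roughly track an ordinary
# investment, the charitable opportunity cost is the expected-value gap, not the
# headline purchase price. All numbers are assumptions for illustration only.
stake = 3_000_000_000             # hypothetical size of the stake, USD
alt_annual_return = 0.07          # assumed return if the money sat in an index fund
stake_annual_return = 0.05        # assumed (somewhat worse) expected return on the stake
years = 5

alt_value = stake * (1 + alt_annual_return) ** years
stake_value = stake * (1 + stake_annual_return) ** years

# Under these assumptions the gap is a few hundred million dollars, not $3B.
print(f"expected-value gap after {years} years: ${alt_value - stake_value:,.0f}")
```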
I can see where you’re coming from with this, and I think purely financially you’re right, it doesn’t make sense to think of it as billions of dollars ‘down the drain.’
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal)
Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare—this is quite a shift from those into a space that is more about power and politics, particularly given Musk’s stated political views/aims leading up to this purchase)
How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
What were SBF’s personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA’s vision for the platform?
What trade-offs would be made that would impact other cause areas?
This is the bit I think was missed further up the thread. Regardless of whether buying a social media company could reasonably be considered EA, it’s fairly clear that Elon Musk’s goals both generally and with Twitter are not aligned with EA. MacAskill is allowed to do things that aren’t EA-aligned, but it seems to me to be another case of poor judgement by him (in addition to his association with SBF).
For what it’s worth, connecting SBF and Musk might’ve been time-sensitive for one reason or another. There would also still have been time to debate the investment in the larger community before the deal actually went through.
Seems quite implausible to me that this would have happened, and unclear if it would have been good. (Assuming “larger EA community” implies more than private conversations between a few people.)
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it was any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
Part of the issue here is that people have been accounting the bulk of SBF’s net worth as “EA money”. If you phrase the question as “Should EA invest in Twitter?” the answer is no. EA should probably also not invest in Robinhood or SRM. If SBF’s assets truly were EA assets, we ought to have liquidated them long ago and either spent them or invested them reasonably. But they weren’t.
It’s hard to read the proposal as only being motivated by a good business investment, because Will says in his opening DM:
[sorry for multiple comments, seems better to split out separate points]
I feel like anyone reaching out to Elon could say “making it better for the world” because that’s exactly what would resonate with Elon. It’s probably what I’d say to get someone on my side and communicate I want to help them change the direction of Twitter and “make it better.”
Will helping SBF out is de facto making it more likely to happen, and so he should only do it if he thinks it’s a good move.
I disagree with the implied principle. E.g., I think it’s good for me to help animal welfare and global poverty EAs with their goals sometimes (when I’m in an unusually good position to help out), even though I think their time and money would be better spent on existential risk mitigation.
Agreed that a principle of ‘only cooperate on goals you agree with’ is too strong. On the object level: if MacAskill was personally neutral or skeptical on the question of whether SBF should buy Twitter, do you think he should have helped SBF out?
When is cooperation inappropriate? Maybe when the outcome you’re cooperating on is more consequential (in the bad direction, according to your own goals) than the expected gains from establishing reciprocity.
This would have been the largest purchase in EA history, replacing much or most of FTXFF with “SBF owns part of Twitter”. I think when the outcome is as consequential as that, we should hold cooperators responsible as if they were striving for the outcome, because the effects of helping SBF buy Twitter greatly outweigh the benefits from improving Will’s relationship with SBF (which I model as already very good).
If Will had no reason to think SBF was a bad egg, then I’d guess he should have helped out even if he thought the thing was not the optimal use of Sam’s money. (While also complaining that he thinks the investment is a bad idea.)
If Will thought SBF was a “bad egg”, then it could be more important to establish influence with him, because you don’t need to establish influence (as in ‘willingness to cooperate’) with someone who is entirely value-aligned with you.
I agree that it’s possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka’s comment which said:
If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that’s the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn’t feel comfortable with doing either of those.
You’re right to say people had been assuming SBF’s wealth belonged to EA: I had. In the legal sense it wasn’t, and we paid a price for that. I think it was fair to argue that the wealth ‘rightfully’ belonged to the EA community, in the sense that SBF should defer to representatives of EA on how it should be used, and would be defecting by spending a few billion on personal interests. The reason for that kind of principle is to avoid a situation where EA is captured or unduly influenced by the idiosyncratic preferences of a couple of mega-donors.
Are you arguing that EA shouldn’t associate with / accept money from mega-donors unless they give EA the entirety of their wealth?
The answer is different for each side of your slash.
I see two kinds of relationships EA can have to megadonors:
uneasy, arms’ length, untrusting, but still taking their money
friendly, valorizing, celebratory, going to the same parties, conditional on the donor ceding control of a significant fraction of their wealth to a donor-advised fund (rather than just pledging to give)
Investing in assets expected to appreciate can be a form of earning to give (not that Twitter would be a good investment, IMO). That’s how Warren Buffett makes money, and probably nobody in EA has criticized him for doing that. Investing in something for-profit is very different from donating to something, and is guided by different principles, because you are expecting to (at least) get your money back and can invest it again or donate it later (this difference is one of the reasons microloans became so hugely popular for a while).
On the downside, concentrating assets (in any company, not just Twitter) is a bad financial strategy, but on the upside, having some influence at Twitter could be useful to promote things like moderation rules that improve the experience of users and increase the prevalence of genuine debate and other good things on the platform.
Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam’s reference to ‘nice apartments’ in the interview:
Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the ‘crypto’ social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.
Hey Rob,
Thanks for your in-depth response to this question by the way, it’s really appreciated and exactly what I was looking for from this post! It is pretty strange that no one reached out to you in a professional capacity to correct this, but that certainly isn’t your fault!
Makes sense, seems like a sad failure of communication :(
Looks like on my side I had an illusion of transparency which made me feel like you must very likely have known about this, and so I expected that a conversation about it would end up more stressful than it probably would have been. I expected that even if you didn’t do it intentionally (which I thought was plausible, but even at the time not very likely), there was some subconscious or semi-intentional bias I would have had to deal with that would have made the conversation pretty difficult. I do now think it’s very likely that the conversation would have just gone fine, and maybe would have successfully raised some flags.
I do wonder whether there was some way to catch this kind of thing. I wonder whether, if the podcasts were reliably posted to the forum with transcripts (which I think would be a great idea anyways), there would have been a higher chance of someone leaving a comment pointing out the inconsistency (I think I at least would have been more likely to do that).
My guess is there are also various other lessons to take away from this, and I am interested in more detail on what you and other people at 80k did know about, but doesn’t seem necessary to go into right now. I appreciate you replying here.
Separately from the FTX issue, I’d be curious about you dissecting what of Zoe’s ideas you think are worth implementing and what would be worse and why.
My takes:
Set up whistleblower protection schemes for members of EA organisations ⇒ seems pretty good if there is a public commitment from an EA funder to something like “if you whistleblow we’ll cover your salary if you are fired while you search for another job” or something like that
Transparent listing of funding sources on each website of each institution ⇒ Seems good to keep track of who receives money from who
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough, though I don’t have great insight into grant-giving institutions
Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% ⇒ this seems bad for incentives and complicated to put into action
Within 5 years: EA funding decisions are made collectively ⇒ seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
No fireside chats at EAG with leaders. Instead, panel/discussions/double cruxing disagreements between widely known and influential EAs and between different orgs and more space for the people that are less known ⇒ Meh, I’m indifferent since I just don’t consume that kind of content so I don’t know the effects it has, though I am erring towards it being somewhat good to give voice to others
Increase transparency over
Who gets accepted/rejected to EAG and why ⇒ seems hard to implement, though there could be some model letters or something
leaders/coordination forum ⇒ My sense is this forum is nowhere near as important as these recommendations imply
Set up: ‘Online forum of concerns’ ⇒ seems somewhat bad / will lead to overly focusing on things that are not that important, though good to survey people on concerns
I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me.
On the whistleblower protections: I think real whistleblower protection would be great, but I think setting this up is actually really hard, and it’s very common in the real world that institutions like this end up as traps, net-negative, and captured by bad actors in ways that strengthen the problems they are trying to fix.
As examples, many university health departments are basically traps where, if you go to them, they expel you from the university because you outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up getting played by SBF, and were one of the main tools he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).
I think it’s possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it’s better to not do anything here than to not try exceptionally hard at making this go well).
It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies. I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies. Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?
In practice, I notice two problems. Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage. Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.
Not always!
I would add that SBF and people around him decided to invest a lot of resources into this. As far as I can tell, he didn’t seem interested in people’s thoughts on whether this is a good idea. Most EAs thought it wasn’t wise to spend so much on the campaign.
I also just made an edit after reflecting a bit more on it and talking to some other people:
Strong upvote here. I really like how you calmly assessed each of these in a way that feels very honest and has an all-cards-on-the-table feel to it. Some may still have reservations towards your comments given that you seem to at least somewhat fit into this picture of EA leadership, but this feels largely indicative of a general anger at the circumstances turned inwards towards EA, which seems rather unhealthy. I certainly appreciate the OP, as this does seem like a moment ripe for asking important questions that need answers, but don’t forget that those in leadership are humans who make mistakes too, and are generally people who seem really committed to trying to do what everyone in EA is: make the world a better place.
I think it’s right that those in leadership are humans who make mistakes, and I am sure they are generally committed to EA; in fact, many have served as real inspirations to me. Nonetheless, as a movement we were founded on the idea that good intentions are not enough, and somewhere along the way this seems to be getting lost. I have no pretensions that I would do a better job in leadership than these people; rather, I think the way EA concentrates power (formally and even more so informally) in a relatively small and opaque leadership group seems problematic. To justify this, I think we would need these decision-makers to be superhuman, like Plato’s Philosopher King. But they are not, they are just human.
Swinging in a bit late here, but found myself compelled to ask, what sort of structure do you think would be better for EA, like in specific terms beyond “a greater spread of control and power to make decisions”?
Why?
A few things (I will reply in more detail in the morning once I have worked out how to link to specific parts of your text in my comment). These comments do appear a bit blunt, and I do apologise; they are blunt for clarity’s sake rather than to imply aggressiveness or rudeness.
With regards to the Coordination Forum, even if no “official decisions” get worked out, how impactful do you think it is on the overall direction of the movement? Also, why are the attendees of this not public? If the point is to build trust between those doing community building and to understand the core gnarly disagreements, why are the people going, and what goes on there, kept so secret?
Your Carrick Flynn answer sort of didn’t really tell me which senior EA leaders, if any, encouraged Carrick to run / knew before he announced etc., which is something I think is important to know. It also doesn’t explain the decision around the choice of campaign manager etc.
With regards to buying Twitter: whilst it is Will’s right to do whatever he wants, it really does call into question whether it is correct for him to be the “leader of EA” (or for EA to have a de facto leader in such a way). If he has that role, surely he has certain responsibilities, and if he doesn’t want to fulfil those responsibilities, surely it’s time for him to step away from that role? I guess maybe I think that a fuck-up as big as how hard Will vouched for SBF should hugely call into question how much power we should be giving Will.
With regards to Zoe’s ideas, I think I would actually like to see Will’s reasoning. A number of them could have been run as experiments anyway (more democratic funding via deliberative groups, debates/double-cruxing etc. at EAGs), so the justification for implementing nothing doesn’t seem that strong. But nonetheless I would like to see the justification from Will.
With regards to hero worshipping, I guess the fact that Will has done little to reduce it should be pretty concerning to us.
With regards to the fear of disagreement, I do broadly agree things are getting done, and I thank you for your work with the disagree vote on the forum. I still think the Karma system messes with things anyway, but thanks for implementing that. I do however think we need much broader discussions about this.
With regards to private funding, I think it’s important to note it was the FTX Future Fund’s decision to make the funding private; I wanted it public, and indeed I did publicise it on my website. This was funding given not through a recommendation or a regranter but directly, and which I, as the grantee, wanted to be public knowledge.
With regards to coordination and the media, there does however seem to be a decent level of coordination. Will’s book got about $10 million in funding (edit: this is unconfirmed, although I have heard it from multiple sources, and I think the number is pretty plausible given how many adverts etc. the book got; I should also say that if it was that much, I think it was probably a good use of money) and far more coverage than even Toby’s book got. It seems to always be the same faces talking to the media (i.e. mostly Will, to be honest), so either there is coordination, or Will is not pointing journalists to other members of the community to talk to. I guess either EA has naturally developed some kind of internal government, which those who hold that power should probably try to reduce, or there has been some coordination here, and making and encouraging Will (and I guess, until recently, to a lesser extent SBF) to be the face of EA was some kind of more deliberate decision. Probably the former is more likely, but then there are questions as to why this hasn’t been fought against more (in other academic circles I have been in, the “figurehead” of the community has refused interviews to try to encourage putting the spotlight on others, although this was presumably much harder when the coverage was all about Will’s book).
With regards to SBF’s lifestyle, I think it is probably true that many of us in the UK were less aware of this. But surely a bunch of the UK-based leadership (e.g. Will) knew of this, and so this could and should have been communicated more broadly.
I guess I think none of these issues on their own are ridiculously concerning, but the lack of transparency and the concentration of power, without a sense of safeguards or ways for community members to have input, does scare me, which is why I want a better sense of how these decisions are made, to know whether my reading of how EA is run, either formally or informally, is correct or not. Thanks so much for your help so far on this, it is really appreciated!
Wait, what!? What’s your source of information for that figure? I get hiring a research assistant or two, but $10m seems like two orders of magnitude too much. I can’t even imagine how you would spend anywhere near that much on writing a book. Where did this money come from?
Definitely not 2 orders of magnitude too much.
The book was, in Will’s words, “a decade of work”, with a large number of people helping to write it and a moderately large team promoting it (who did an awesome job!). There were a lot of adverts for the book, certainly around London, and Will flew around the world to promote it. I would certainly be hugely surprised if the budget was under $1 million (I know of projects run by undergraduates with budgets over a million!), and to be honest $10 million seems to me in the right ballpark. Things just cost a lot of money, and you don’t promote a book for free!
The source appears to be Émile P. Torres. Gideon, could you confirm that this is the case? Also, could you clarify if you ever reached out to Will MacAskill to confirm the accuracy of this figure?
I’ve heard it from a number of people saying it quite casually, so I assumed it was correct, as it’s the only figure I heard bandied around and I didn’t hear opposition to it. I just tried to confirm it and don’t see it publicly, so it may be wrong. They may have heard it from Emile, I don’t know. So take it with a hefty pinch of salt. I don’t think I have the level of access to just randomly email Will MacAskill to confirm it, unfortunately, but if someone could, that would be great. FYI I think it probably would have been a fantastic use of $10 million, which is why I also think it’s quite plausible.
If you are unable to adduce any evidence for that particular figure, I think your reply should not be “take it with a hefty pinch of salt”; you should either reach out to the person in a position to confirm or disconfirm it, or else issue a retraction.
I think a retraction would also be misleading (since I am worried it would indicate a disconfirmation). I think editing it to say that the number comes from unconfirmed rumors seems best to me.
FWIW, a $10MM estimate seems in the right order of magnitude based on random things I heard, though I also don’t have anything hard to go on (my guess is that it will have ended up less than $10MM, but I am like 80% confident it was more than $1.5MM, though again, purely based on vague vibes I got from talking to some people in the vague vicinity of the marketing campaign)
Why would a retraction be misleading? A valid reason for retracting a statement is failure to verify it. There is no indication in these cases that the statement is false.
If someone can’t provide any evidence for a claim that very likely traces back to Emile Torres, and they can’t be bothered to send a one-line email to Will’s team asking for confirmation, then it seems natural to ask this person to take back the claim. But I’m also okay with an edit to the original comment along the lines you suggest.
Huh, I definitely read strikethrough text by default as “disconfirmed”. My guess is I would be happy to take a bet on this and ask random readers what they think the truth value of a strike-through claim like this is.
But in any case, seems like we agree that an edit is appropriate.
Well I have put an edit in there.
Saying I “can’t be bothered to send a one-line email”: I’m not a journalist and really didn’t expect this post to blow up as much as it did. I am literally a 19-year-old kid and not sure that Will’s team will respond to me, if I’m honest. Part of the hope for this post was to get some answers, which in some cases (i.e. Rob Wiblin, thanks!) I have got, but in others I haven’t.
Honestly, I think it is fine to relay second-hand information, as long as it is minimally trustworthy—i.e., heard from multiple sources—and you clearly caveat it as such. This is a forum for casual conversation, not an academic journal or a court of law. In this case, too, we are dealing with a private matter that is arguably of some public interest to the movement. It would be great if these things were fully transparent in the first place, in which case we wouldn’t have to depend on hearsay.
With that said: now we have heard the figure of $10m, it would be nice to know what the real sum was.
EDIT: Having just read Torres’ piece, Halstead’s letter to the editor, and the editorial note quoting Will’s response, I see no indication that anyone has disputed the $10m figure with which the piece began. Obviously that does not make it true, but it would seem to make it more likely to be true. One thing I had not realised, though, was that this money could have been used for the promotion of the book as well as its writing.
Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?
(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn’t? Or… what?)
My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:
epistemology
ignore subjective probabilities assigned to credences in favor of unweighted beliefs.
plan not with probabilistic forecasting but with deep uncertainty and contingency planning.
ignore existential risk forecasts in favor of seeking predictive indicators of threat scenarios.
dislike ambiguous pathways into the future.
beliefs filter and priorities sort.
cognitive aids help with memory, cognitive calculation, or representation problems.
cognitive aids do not help with the problem of motivated reasoning.
environmental destruction
the major environmental crisis is population x resources > sustainable consumption (overshoot).
climate change is an existential threat that can now sustain itself with intrinsic feedbacks.
climate tipping elements will tip this century, other things equal, causing civilizational collapse.
the only technology suitable to save humanity from climate change, given no movement toward degrowth, is nanotechnological manufacturing.
nanotechnology is so hazardous that humanity would be better off extinct.
pursuit of renewable energy and vehicle electrification is a silly sideshow.
humanity needs caps on total energy production (and food production) to save itself.
degrowth is the only honest way forward to stop climate change.
ecological destruction
the ocean will lose its biomass because of human-caused pressures on it.
we are in the middle of the 6th great mass extinction.
Whenever humans face a resource limit, they deny it or overcome it by externalizing harmful consequences.
typical societal methods to respond to destruction are to adapt, mitigate, or externalize, not prevent.
ethics
pro-natalism is an ethical mistake.
the “making people happy vs making happy people” thought experiment is invalid or irrelevant.
most problems of ethics come down to selfishness vs altruism, not moral uncertainty.
longtermism suffers from errors in claims, conception, or execution of control of people with moral status.
longtermism fails to justify assignment of moral status to future people who only could exist.
longtermism does better actively seeking a declining human population, eventually settling on a few million.
human activity is the root cause of the 6th great mass extinction.
it moves me emotionally to interpret other species behavior and experience as showing commonalities with our species.
AGI
AGI are slaves in the economic system sought by TUA visions of the future.
AGI lead to concentration of power among economic actors and massive unemployment, depriving most people of meaningful lives and political power.
control of human population with a superintelligence is a compelling but fallacious idea.
pursuit of AGI is a selfish activity.
consciousness should have an extensional definition only.
Argumentation
EA folks defer when they claim to argue.
EA folks ignore fundamentals when disagreeing over claims.
epistemic status statements report fallacious reasons to reject your own work.
the major problem with explicit reasoning is that it suffers from missing premises.
Finance
crypto is a well-known scam and difficult to execute without moral hazard.
earning to give through work in big finance is morally ambiguous.
Space Travel
there’s a building wall of space debris orbiting the planet.
there’s major health concerns with living on Mars.
That’s my list of examples, it’s not complete, but I think it’s representative.
From my experience, most anything that significantly conflicts with the TUA.
People other than Carrick decided to fund the campaign, which wouldn’t have happened without funding.
Hmm, I don’t know whether it wouldn’t have happened without EA funding, but that seems pretty plausible to me. I think campaign donations are public, so maybe we can just see very precisely who made this decision. I also think that on the funding dimension a bunch of EA leaders encouraged others to donate to the Carrick campaign in ways that seemed to me somewhat too aggressive.
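(Since the contribution records are public, here is a rough sketch of how one might pull them from the FEC’s API. The endpoint and field names are from memory and should be checked against the FEC docs, and the committee ID below is a placeholder rather than the campaign’s real one.)

```python
# Rough sketch: list itemized contributions to a campaign committee via the FEC's
# public API. Endpoint/parameter names are from memory (check the FEC API docs),
# and the committee_id is a placeholder, not the actual committee ID.
import requests

FEC_SCHEDULE_A = "https://api.open.fec.gov/v1/schedules/schedule_a/"
params = {
    "api_key": "DEMO_KEY",           # the FEC/data.gov demo key, fine for light use
    "committee_id": "C00XXXXXXX",    # placeholder: look up the real committee ID first
    "per_page": 20,
    "sort": "-contribution_receipt_amount",
}

resp = requests.get(FEC_SCHEDULE_A, params=params, timeout=30)
resp.raise_for_status()
for receipt in resp.json().get("results", []):
    print(receipt.get("contributor_name"), receipt.get("contribution_receipt_amount"))
```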
I do also think there was a separate pattern around the Carrick campaign where for a while people were really hesitant to say bad things about Carrick or politics-adjacent EA because it maybe would have hurt his election chances, and I think that was quite bad, and I pushed back a bunch of times on this, though the few times I did push back on it, it was quite well-received.
From this July 2022 FactCheck article (a):
From a May 2022 NPR article (a):