Some important questions for the EA Leadership
The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA community have in our leadership. It seems increasingly less obvious to me that control of much of EA by a narrow group of funders and thought leaders is the best way for this community, full of smart and passionate people, to do good in the world. My assumption had been that we defer a lot of power (intellectual, social and financial) to a small group of broadly unaccountable, non-transparent people on the premise that they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failures by the community this year, including the Carrick Flynn campaign and now the FTX scandal, has shattered my confidence in this group. I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this post represents my own views), to have been a summer research fellow at CERI, and to have spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having answers to some of these questions would be very useful.
An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.
Thus, I have a number of questions for the "leadership group" about how decisions are made in EA and the rationale behind them. This list is neither exhaustive nor meant as an attack; there may well be innocuous answers to many of these questions. Moreover, not all of them are linked to SBF and that scandal, and many probably have perfectly rational explanations.
Nonetheless, I think now is the appropriate time to ask difficult questions of the EA leadership, so this is my list of said questions. I apologise if people take offence at any of these (I know it is a difficult time for everyone); I am sure we are all trying our best, but I do think we can only have as positive an impact as possible if we are really willing to examine ourselves and see what we have done wrong.
Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What’s the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?
Which senior decision-makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? [The following question has been answered] Who signed off on the decision to make the campaign manager someone with no political experience? (Edit: I have now received information that the campaign did its own hiring of a campaign manager and had experienced consultants assist throughout the campaign. So whether I agree with this or not, the campaign-manager question seems quite different from the issues I raise elsewhere in this post.)
Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy twitter? What was the rationale that this would have been a cost effective use of $8-15 Billion? Who else was consulted on this?
Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?
Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?
The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community, however, do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders, or even with lower-level community builders. Have there been discussions within the community health team about how to deal with this? What approaches are they taking community-wide rather than just dealing with ad hoc incidents?
A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?
FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?
What sort of coordination, if any, goes on around which EAs talk to the media, write highly publicised books, go in curricula etc? What is the decision making procedure like?
The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?
I don’t think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings towards a lot of EA stuff, but nevertheless I think I have a bunch of the answers that you are looking for:
The Coordination Forum is a very loosely structured retreat that’s been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
At least as far as I can tell, basically no decisions get made at Coordination Forum. Its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some balance between the two).
I think attendance has been decided by CEA. Criteria have been pretty in-flux. My sense has been that a lot of it is just dependent on who CEA knows well-enough to feel comfortable inviting, and who seems to be obviously worth coordinating with.
I mean, my primary guess here is Carrick. I don’t think there was anyone besides Carrick who “decided” to make the Carrick campaign happen. I am pretty confident Carrick had no boss and did this primarily on his own initiative (though likely after consulting with various other people in EA on whether it was a good idea).
[edit: On more reflection and talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. Like, I think it was closer to “a bunch of kind-of-but-not-very influential EAs reached out to him and told him that they think it would be quite impactful and good for the world if he ran”, and my updated model is that Carrick really wasn’t personally attracted to running for office, and the overall experience was not great for him]
I expressed desire for it not to happen! Though like, I think it wasn’t super obvious to me it was a wrong call, but a few times when people asked me whether to volunteer for the Carrick campaign, I said that seemed overall bad for the world. I did not reach out to Carrick with this complaint, since doing anything is already hard, Carrick seemed well-intentioned, and while I think his specific plan was a mistake, it didn’t seem a bad enough mistake to be worth very actively intervening (and like, ultimately Carrick can do whatever he wants, I can’t stop him from running for office).
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
My guess is because he thought none of them are very good? I also don’t think we should take on board any of their suggestions, and many of them strike me as catastrophic if adopted. I also don’t think any of them would have helped with this whole FTX situation, and my guess is some of them would have likely made it worse.
I don’t know a ton of stuff that Will has done. I do think I and others have tried various things over the years to reduce hero worship. On LessWrong and the EA Forum I downvote things that seem hero-worshippy to me, and I have written many comments over the years trying to reduce it. We also designed the frontpage guidelines on LW to reduce some of the associated community dynamics.
I do think this is a bit of a point of disagreement between me and others in the community, in that I have had more concerns about this domain than others have, but my sense is everyone is pretty broadly on board with reducing this. Sadly, I also don’t have a ton of traction on reducing it.
I do think it is indeed really sad that people fear reprisal for disagreement. I think this is indeed a pretty big problem, not really because EA is worse here than the rest of the world, but because I think the standard for success is really high on this dimension, and there is a lot of value in encouraging dissent and pushing back against conformity, far into the tails of the distribution here.
I expect the community health team to have discussed this extensively (like, I have discussed it with them for many hours). There are lots of things attempted to help with this over the years. We branded one EAG after “keeping EA weird”, we encouraged formats like whiteboard debates at EAG to show that disagreement among highly-engaged people is common, we added things like disagree-voting in addition to normal upvoting and downvoting to encourage a culture where it’s normal and expected that someone can write something that many people disagree with, without that thing being punished.
My sense is this all isn’t really enough, and we still kind of suck at it, but I also don’t think it’s an ignored problem in the space. I also think this problem gets harder and harder the more you grow, and larger communities trying to take coordinated action require more conformity to function, and this sucks, and is I think one of the strongest arguments against growth.
Anything I say here is in my personal capacity and not in any way on behalf of EA Funds. I am just trying to use my experience at EA Funds for some evidence about how these things usually go.
At least historically in my work at EA Funds this would be the opposite of how I usually evaluate grants. A substantial fraction of my notes consist of complaining that people seem too conformist to me and feel a bit like “EA bots” who somewhat blindly accept EA canon in ways that feels bad to me.
My sense is other grantmakers are less anti-conformity, but in-general, at least in my interactions with Open Phil and EA Funds grantmakers, I’ve seen basically nothing that I could meaningfully describe as punishing dissent.
I do think there are secondary things going on here where de facto people have a really hard time evaluating ideas that are not expressed in their native ontology, and there is a thing where if you say stuff that seems weird from an EA framework this can come across as cringe to some people. I do hate a bunch of those cringe reactions, and I think it contributes a lot to conformity. I think that kind of stuff is indeed pretty bad, though I think almost all of the people who I’ve seen do this kind of thing would at least in the abstract strongly agree that punishing dissent is quite bad, and that we should be really careful around this domain, and have been excited about actively offering prizes for criticism, etc.
Again, just using my historical experience at EA Funds as evidence. I continue to in no way speak on behalf of funds, and this is all just my personal opinion.
I would have to look through the data, but my guess is about 20% of EA Funds funding is distributed privately, though a lot of that happens via referring grants to private donors (i.e. most of this does not come from the public EA Funds funding). About three-quarters (in terms of dollar amount) of this is to individuals who have a strong preference for privacy, and the other quarter is for stuff that’s more involved in policy and politics where there is some downside risk of being associated with EA in both directions (sometimes the policy project would prefer to not be super publicly associated and evaluated by an EA source, sometimes a project seems net-positive, but EA Funds doesn’t want to signal that it’s an EA-endorsed project).
SFF used to have a policy of allowing grant recommenders to prevent a grant from showing up publicly, but we abolished that power in recent rounds, so now all grants show up publicly.
I personally really dislike private funding arrangements and find it kind of shady and have pushed back a bunch on them at EA Funds, though I can see the case for them in some quite narrow set of cases. I personally quite dislike not publicly talking about policy project grants, since like, I think they are actually often worth the most scrutiny.
There is no formal government here. If you do something that annoys a really quite substantial fraction of people at EA organizations, or people on the EA Forum, or any other large natural interest group in EA, there is some chance that someone at CEA (or maybe Open Phil) reaches out to someone doing a lot of things very publicly and asks them to please stop (maybe backed up with some threat involving the Effective Altruism trademark, which I think CEA owns).
I think this is a difficult balance, and asking people to please associate less with EA can also easily contribute to a climate of conformity and fear, so I don’t really know what the right balance here is. I think on the margin I would like the world to understand better that EA has no central government, and anyone can basically say whatever they want and claim that it’s on behalf of EA, instead of trying to develop some kind of party-line that all people associated with EA must follow.
I do think this was a quite misleading narrative (though I do want to push back on your statement of it being “completely untrue”), and people made a pretty bad mistake endorsing it.
Up until yesterday I thought that indeed 80k fucked up pretty badly here, but I talked a bit to Max Dalton and my guess is the UK EAs seemed to maybe know a lot less about how Sam was living than people here in the Bay Area, and it’s now plausible to me (though still overall unlikely) that Rob did just genuinely not know that Sam was actually living a quite lavish lifestyle in many ways.
I had drafted an angry message to Rob Wiblin when the interview came out that I ended up not sending because it was a bit too angry. It went approximately like: “Why the hell did you tell this story of SBF being super frugal in your interview when you know totally well that he lives in one of the most expensive apartments in the Bahamas and has a private jet?” I now really wish I had sent it. I wonder whether it would have caused Rob to notice something fishy was going on, and while I don’t think it would have flipped this whole situation, I do think it could have made a decent dent in preventing people from being duped.
The $15 billion figure comes from Will’s text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define ‘huge.’ He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: ‘~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing.’”
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will MacAskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it’s his right to have private conversations with others, and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
Makes sense, I think I briefly saw that, and interpreted the last section as basically saying “ok, more than 8b will be difficult”, but the literal text does seem like it was trying to make $8b+ more plausible.
I have actually talked to lots of people about it! Probably as much as I have talked with people about e.g. challenge trials.
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
I also remember people talking about buying Twitter during the Trump presidency and somehow changing it, since it seemed like it might have substantially increased nuclear war risk at the time, so there was at least some public discourse about it.
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as EAs willingness to have him be as close as possible to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place).
Yeah, there could be some public stuff about this and I’m just not aware of it. And sorry, I wasn’t trying to say that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it “makes sense.” I’m not saying that I don’t see potential value in purchasing Twitter, but I don’t think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We’re a movement that prides itself on using evidence and reason to make the world better, after all.)
Thanks for clarifying that—that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.
If you think investing in Twitter is close to neutral from an investment perspective (maybe reasonable at the time, definitely not by the time Musk was forced to close) then the opportunity cost isn’t really billions of dollars. Possibly this would have been an example of marginal charity.
I can see where you’re coming from with this, and I think purely financially you’re right, it doesn’t make sense to think of it as billions of dollars ‘down the drain.’
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal)
Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare. This is quite a shift from those into a space that is more about power and politics, particularly given Musk’s stated political views/aims leading up to this purchase.)
How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
What were SBF’s personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA’s vision for the platform?
What trade-offs would be made that would impact other cause areas?
This is the bit I think was missed further up the thread. Regardless of whether buying a social media company could reasonably be considered EA, it’s fairly clear that Elon Musk’s goals both generally and with Twitter are not aligned with EA. MacAskill is allowed to do things that aren’t EA-aligned, but it seems to me to be another case of poor judgement by him (in addition to his association with SBF).
For what it’s worth, connecting SBF and Musk might have been a time-sensitive situation for one reason or another. There would also still have been time to debate the investment in the larger community before the deal actually went through.
Seems quite implausible to me that this would have happened and unclear if it would have been good. (Assuming “larger EA community” implies more than private conversations between a few people. )
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
Part of the issue here is that people have been accounting the bulk of SBF’s net worth as “EA money”. If you phrase the question as “Should EA invest in Twitter?” the answer is no. EA should probably also not invest in Robinhood or SRM. If SBF’s assets truly were EA assets, we ought to have liquidated them long ago and either spent them or invested them reasonably. But they weren’t.
It’s hard to read the proposal as only being motivated by a good business investment, because Will says in his opening DM:
[sorry for multiple comments, seems better to split out separate points]
I feel like anyone reaching out to Elon could say “making it better for the world” because that’s exactly what would resonate with Elon. It’s probably what I’d say to get someone on my side and communicate I want to help them change the direction of Twitter and “make it better.”
Will helping SBF out is de facto making it more likely to happen, and so he should only do it if he thinks it’s a good move.
I disagree with the implied principle. E.g., I think it’s good for me to help animal welfare and global poverty EAs with their goals sometimes (when I’m in an unusually good position to help out), even though I think their time and money would be better spent on existential risk mitigation.
Agreed that a principle of ‘only cooperate on goals you agree with’ is too strong. On the object-level, if MacAskill was personally neutral or skeptical on the object-level question of whether SBF should buy Twitter, do you think he should have helped SBF out?
When is cooperation inappropriate? Maybe when the outcome you’re cooperating on is more consequential (in the bad direction, according to your own goals) than the expected gains from establishing reciprocity.
This would have been the largest purchase in EA history, replacing much or most of FTXFF with “SBF owns part of Twitter”. I think when the outcome is as consequential as that, we should hold cooperators responsible as if they were striving for the outcome, because the effects of helping SBF buy Twitter greatly outweigh the benefits from improving Will’s relationship with SBF (which I model as already very good).
If Will had no reason to think SBF was a bad egg, then I’d guess he should have helped out even if he thought the thing was not the optimal use of Sam’s money. (While also complaining that he thinks the investment is a bad idea.)
If Will thought SBF was a “bad egg”, then it could be more important to establish influence with him, because you don’t need to establish influence (as in ‘willingness to cooperate’) with someone who is entirely value-aligned with you.
I agree that it’s possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka’s comment which said:
If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that’s the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn’t feel comfortable with doing either of those.
You’re right to say people had been assuming SBF’s wealth belonged to EA: I had. In the legal sense it wasn’t, and we paid a price for that. I think it was fair to argue that the wealth ‘rightfully’ belonged to the EA community, in the sense that SBF should defer to representatives of EA on how it should be used, and would be defecting by spending a few billion on personal interests. The reason for that kind of principle is to avoid a situation where EA is captured or unduly influenced by the idiosyncratic preferences of a couple of mega-donors.
Are you arguing that EA shouldn’t associate with / accept money from mega-donors unless they give EA the entirety of their wealth?
The answer is different for each side of your slash.
I see two kinds of relationships EA can have to megadonors:
uneasy, arms’ length, untrusting, but still taking their money
friendly, valorizing, celebratory, going to the same parties, conditional on the donor ceding control of a significant fraction of their wealth to a donor-advised fund (rather than just pledging to give)
Investing in assets expected to appreciate can be a form of earning to give (not that Twitter would be a good investment, IMO). That’s how Warren Buffett makes money, and probably nobody in EA has criticized him for it. Investing in a for-profit is very different from donating to something and is guided by different principles, because you expect to (at least) get your money back and can invest it again or donate it later (this difference is one of the reasons microloans became so hugely popular for a while).
On the downside, concentrating assets (in any company, not just Twitter) is a bad financial strategy, but on the upside, having some influence at Twitter could be useful to promote things like moderation rules that improve the experience of users and increase the prevalence of genuine debate and other good things on the platform.
Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam’s reference to ‘nice apartments’ in the interview:
Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the ‘crypto’ social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.
Hey Rob,
Thanks for your in-depth response to this question, by the way; it’s really appreciated and exactly what I was looking for from this post! It is pretty strange that no one reached out to you in a professional capacity to correct this, but that certainly isn’t your fault!
Makes sense, seems like a sad failure of communication :(
Looks like on my side I had an illusion of transparency that made me feel like you very likely knew about this, which made me expect that a conversation about it would end up more stressful than it probably would have been. I expected that even if you didn’t do it intentionally (which I thought was plausible, but even at the time not very likely), there was still some subconscious or semi-intentional bias that I would have had to deal with that would have made the conversation pretty difficult. I do now think it’s very likely that the conversation would have just gone fine, and maybe would have successfully raised some flags.
I do wonder whether there was some way to catch this kind of thing. If the podcasts were reliably posted to the forum with transcripts (which I think would be a great idea anyway), there would be a higher chance that someone would leave a comment pointing out the inconsistency (I at least would have been more likely to do that).
My guess is there are also various other lessons to take away from this, and I am interested in more detail on what you and other people at 80k did know about, but doesn’t seem necessary to go into right now. I appreciate you replying here.
Separately from the FTX issue, I’d be curious about you dissecting what of Zoe’s ideas you think are worth implementing and what would be worse and why.
My takes:
Set up whistleblower protection schemes for members of EA organisations ⇒ seems pretty good if there is a public commitment from an EA funder to something like “if you whistleblow we’ll cover your salary if you are fired while you search another job” or something like that
Transparent listing of funding sources on each website of each institution ⇒ Seems good to keep track of who receives money from who
Detailed and comprehensive conflict of interest reporting in grant giving ⇒ My sense is that this is already handled sensibly enough, though I don't have great insight into grant-giving institutions
Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% ⇒ this seems bad for incentives and complicated to put into action
Within 5 years: EA funding decisions are made collectively ⇒ seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
No fireside chats at EAG with leaders. Instead, panels/discussions/double-cruxing disagreements between widely known and influential EAs and between different orgs, and more space for people who are less known ⇒ Meh, I'm indifferent since I just don't consume that kind of content, so I don't know the effects it has, though I am leaning towards it being somewhat good to give voice to others
Increase transparency over
Who gets accepted/rejected to EAG and why ⇒ seems hard to implement, though there could be some model letters or something
leaders/coordination forum ⇒ My sense is that this forum is nowhere near as important as these recommendations imply
Set up: ‘Online forum of concerns’ ⇒ seems somewhat bad / will lead to overly focusing on things that are not that important, though good to survey people on concerns
I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me.
On the whistleblower protections: I think real whistleblower protection would be great, but I think setting this up is actually really hard, and it's very common in the real world that institutions like this end up as traps, become net-negative, and get captured by bad actors in ways that strengthen the very problems they are trying to fix.
As examples, many university health departments are basically traps where, if you go to them, they expel you from the university because you outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up played by SBF, and were one of the main tools he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).
I think it's possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it's better to do nothing here than to try without making an exceptional effort at getting it right).
It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies. I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies. Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?
In practice, I notice two problems. Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage. Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.
Not always!
I would add that SBF and people around him decided to invest a lot of resources into this. As far as I can tell, he didn’t seem interested in people’s thoughts on whether this is a good idea. Most EAs thought it wasn’t wise to spend so much on the campaign.
I also just made an edit after reflecting a bit more on it and talking to some other people:
Strong upvote here. I really like how you calmly assessed each of these in a way that feels very honest and has an all-cards-on-the-table feel to it. Some may still have reservations towards your comments given that you seem to at least somewhat fit into this picture of EA leadership, but that feels largely indicative of a general anger at the circumstances turned inwards towards EA, which seems rather unhealthy. I certainly appreciate the OP, as this does seem like a moment ripe for asking important questions that need answers, but don't forget that those in leadership are humans who make mistakes too, and are generally people who seem really committed to trying to do what everyone in EA is: make the world a better place.
I think it's right that those in leadership are humans who make mistakes, and I am sure they are generally committed to EA; in fact, many have served as real inspirations to me. Nonetheless, as a movement we were founded on the idea that good intentions are not enough, and somewhere this seems to be getting lost. I have no pretensions that I would do a better job in leadership than these people; rather, I think the way EA concentrates power (formally and even more so informally) in a relatively small and opaque leadership group seems problematic. To justify this, we would need these decision-makers to be superhuman, like Plato's Philosopher King. But they are not; they are just human.
Swinging in a bit late here, but found myself compelled to ask, what sort of structure do you think would be better for EA, like in specific terms beyond “a greater spread of control and power to make decisions”?
Why?
A few things (I will reply in more detail in the morning once I have worked out how to link to specific parts of your text in my comment). These comments may appear a bit blunt, and I do apologise; they are blunt for clarity's sake rather than to imply aggressiveness or rudeness.
With regards to the Coordination Forum: even if no "official decisions" get worked out there, how much impact do you think it has on the overall direction of the movement? And why are its attendees not public? If the point is to build trust between those doing community building and to understand the core gnarly disagreements, why are the attendee list and the proceedings so secretive?
Your Carrick Flynn answer sort of didn't tell me which senior EA leaders, if any, encouraged Carrick to run or knew before he announced, which is something I think is important to know. It also doesn't explain the decision around the choice of campaign manager.
With regards to buying Twitter: whilst it is Will's right to do whatever he wants, it really does call into question whether it is correct for him to be the "leader of EA" (or for EA to have a de facto leader in such a way). If he has that role, surely he has certain responsibilities, and if he doesn't want to fulfil those responsibilities, surely it's time for him to step away from that role? I guess I think that a fuck-up as big as how hard Will vouched for SBF should hugely call into question how much power we give Will.
With regards to Zoe's ideas, I think I would actually like to see Will's reasoning. A number of them could have been run as experiments anyway (more democratic funding via deliberative groups, debates/double-cruxing etc. at EAGs), so the justification for implementing nothing doesn't seem that strong. But nonetheless I would like to see the justification from Will.
With regards to hero worshipping, I guess the fact Will has done little to reduce it should be pretty concerning to us.
With regards to the fear of disagreement, I do broadly agree things are getting done, and I thank you for your work with the disagree vote on the forum. I still think the karma system messes with things anyway, but thanks for implementing that. I do, however, think we need much broader discussions about this.
With regards to private funding, I think it's important to note it was FTX Future Fund's decision to make the funding private; I wanted it public, and indeed I did publicise it on my website. This was funding given not through a recommendation or a regranter but directly, which I, as the grantee, wanted to be public knowledge.
With regards to coordination and the media, there does seem to be a decent level of coordination. Will's book got about $10 million in funding (edit: this is unconfirmed, although I have heard it from multiple sources, and I think the number is pretty plausible given how many adverts etc. the book got; I should also say that if it was that much, I think it was probably a good use of money) and far more coverage than even Toby's book got. It seems to always be the same faces talking to the media (i.e. mostly Will, to be honest), so either there is coordination, or Will is not pointing journalists to other members of the community to talk to. Either EA has naturally developed some kind of internal government, which those who hold that power should probably try to reduce, or there has been some coordination here, and making and encouraging Will (and, until recently, to a lesser extent SBF) the face of EA was a more deliberate decision. Probably the former is more likely, but then there are questions as to why this hasn't been fought against more (in other academic circles I have been in, the "figurehead" of the community has refused interviews to try to put the spotlight on others, although this was presumably much harder when the coverage was all about Will's book).
With regards to SBF's lifestyle, it is probably true that many of us in the UK were less aware of this. But surely some of the UK-based leadership (e.g. Will) knew of it, and so this could and should have been communicated more broadly.
I guess I think none of these issues on their own are ridiculously concerning, but the lack of transparency and concentration of power without a sense of safeguards or ways for community members to input etc does scare me, which is why I want a better sense of how these decisions are made to know whether my reading of how EA is run, either formally or informally, is correct or not. Thanks so much for your help so far on this, it is really appreciated!
Wait, what!? What’s your source of information for that figure? I get hiring a research assistant or two, but $10m seems like two orders of magnitude too much. I can’t even imagine how you would spend anywhere near that much on writing a book. Where did this money come from?
Definitely not 2 orders of magnitude too much.
The book was, in Will’s words “a decade of work”, with a large number of people helping to write it, with a moderately large team promoting it (who did an awesome job!). There were a lot of adverts certainly around London for the book, and Will flew around the world to promote the book. I would certainly be hugely surprised if the budget was under $1 million (I know of projects run by undergraduates with budgets over a million!), and to be honest $10 million seems to me in the right ball park. Things just cost a lot of money, and you don’t promote a book for free!
The source appears to be Émile P. Torres. Gideon, could you confirm that this is the case? Also, could you clarify if you ever reached out to Will MacAskill to confirm the accuracy of this figure?
I've heard it from a number of people saying it quite casually, so I assumed it was correct, as it's the only figure I heard bandied around and I didn't hear opposition to it. I just tried to confirm it and don't see it publicly, so it may be wrong. They may have heard it from Emile; I don't know. So take it with a hefty pinch of salt. I don't quite think I have the level of access to just randomly email Will MacAskill to confirm it, unfortunately, but if someone could, that would be great. FYI, I think it probably would have been a fantastic use of $10 million, which is why I also think it's quite plausible.
If you are unable to adduce any evidence for that particular figure, I think your reply should not be “take it with a hefty pinch of salt” but to either reach out to the person in a position to confirm or disconfirm it, or else issue a retraction.
I think a retraction would also be misleading (since I am worried it would indicate a disconfirmation). I think editing it to say that the number comes from unconfirmed rumors seems best to me.
FWIW, a $10MM estimate seems in the right order of magnitude based on random things I heard, though I also don’t have anything hard to go on (my guess is that it will have ended up less than $10MM, but I am like 80% confident it was more than $1.5MM, though again, purely based on vague vibes I got from talking to some people in the vague vicinity of the marketing campaign)
Why would a retraction be misleading? A valid reason for retracting a statement is failure to verify it. There is no indication in these cases that the statement is false.
If someone can’t provide any evidence for a claim that very likely traces back to Emile Torres, and they can’t be bothered to send a one-line email to Will’s team asking for confirmation, then it seems natural to ask this person to take back the claim. But I’m also okay with an edit to the original comment along the lines you suggest.
Huh, I definitely read strikethrough text by default as “disconfirmed”. My guess is I would be happy to take a bet on this and ask random readers what they think the truth value of a strike-through claim like this is.
But in any case, seems like we agree that an edit is appropriate.
Well I have put an edit in there.
Saying I "can't be bothered to send a one-line email": I'm not a journalist and really didn't expect this post to blow up as much as it did. I am literally a 19-year-old kid and am not sure Will's team would respond to me, if I'm honest. Part of the hope for this post was to get some answers, which in some cases (i.e. Rob Wiblin, thanks!) I have got, but in others I haven't.
Honestly, I think it is fine to relay second-hand information, as long as it is minimally trustworthy—i.e., heard from multiple sources—and you clearly caveat it as such. This is a forum for casual conversation, not an academic journal or a court of law. In this case, too, we are dealing with a private matter that is arguably of some public interest to the movement. It would be great if these things were fully transparent in the first place, in which case we wouldn’t have to depend on hearsay.
With that said: now we have heard the figure of $10m, it would be nice to know what the real sum was.
EDIT: Having just read Torres’ piece, Halstead’s letter to the editor, and the editorial note quoting Will’s response, there is no indication that anyone has disputed the $10m figure with which the piece began. Obviously that does not make it true, but it would seem to make it more likely to be true. One thing I had not realised, though, was that this money could have been used for the promotion of the book as well as its writing.
Can you give an example (even a made up one) of the kind of thing you have in mind here? What kinds of things sound weird and cringy to someone operating within an EA framework, but are actually valuable from an EA perspective?
(Like, play-pumps-but-they-actually-work-this-time? Or some kind of crypto thing that looks like a scam but isn’t? Or… what?)
My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:
epistemology
ignore subjective probabilities assigned to credences in favor of unweighted beliefs.
plan not with probabilistic forecasting but with deep uncertainty and contingency planning.
ignore existential risk forecasts in favor of seeking predictive indicators of threat scenarios.
dislike ambiguous pathways into the future.
beliefs filter and priorities sort.
cognitive aids help with memory, cognitive calculation, or representation problems.
cognitive aids do not help with the problem of motivated reasoning.
environmental destruction
the major environmental crisis is population x resources > sustainable consumption (overshoot).
climate change is an existential threat that can now sustain itself with intrinsic feedbacks.
climate tipping elements will tip this century, other things equal, causing civilizational collapse.
the only technology suitable to save humanity from climate change, given no movement toward degrowth, is nanotechnological manufacturing.
nanotechnology is so hazardous that humanity would be better off extinct.
pursuit of renewable energy and vehicle electrification is a silly sideshow.
humanity needs caps on total energy production (and food production) to save itself.
degrowth is the only honest way forward to stop climate change.
ecological destruction
the ocean will lose its biomass because of human-caused pressures on it.
we are in the middle of the 6th great mass extinction.
Whenever humans face a resource limit, they deny it or overcome it by externalizing harmful consequences.
typical societal methods to respond to destruction are to adapt, mitigate, or externalize, not prevent.
ethics
pro-natalism is an ethical mistake.
the “making people happy vs making happy people” thought experiment is invalid or irrelevant.
most problems of ethics come down to selfishness vs altruism, not moral uncertainty.
longtermism suffers from errors in claims, conception, or execution of control of people with moral status.
longtermism fails to justify assignment of moral status to future people who only could exist.
longtermism does better actively seeking a declining human population, eventually settling on a few million.
human activity is the root cause of the 6th great mass extinction.
it moves me emotionally to interpret other species behavior and experience as showing commonalities with our species.
AGI
AGI are slaves in the economic system sought by TUA visions of the future.
AGI lead to concentration of power among economic actors and massive unemployment, depriving most people of meaningful lives and political power.
control of human population with a superintelligence is a compelling but fallacious idea.
pursuit of AGI is a selfish activity.
consciousness should have an extensional definition only.
Argumentation
EA folks defer when they claim to argue.
EA folks ignore fundamentals when disagreeing over claims.
epistemic status statements report fallacious reasons to reject your own work.
the major problem with explicit reasoning is that it suffers from missing premises.
Finance
crypto is a well-known scam and difficult to execute without moral hazard.
earning to give through work in big finance is morally ambiguous.
Space Travel
there’s a building wall of space debris orbiting the planet.
there’s major health concerns with living on Mars.
That’s my list of examples, it’s not complete, but I think it’s representative.
From my experience, most anything that significantly conflicts with the TUA.
People other than Carrick decided to fund the campaign, which wouldn’t have happened without funding.
Hmm, I don’t know whether it wouldn’t have happened without EA funding, but seems pretty plausible to me. I think campaign donations are public so maybe we can just see very precisely who made this decision. I also think on the funding dimension a bunch of EA leaders encouraged others to donate to the Carrick campaign in what seemed to me to be somewhat too aggressive.
I do also think there was a separate pattern around the Carrick campaign where for a while people were really hesitant to say bad things about Carrick or politics-adjacent EA because it maybe would have hurt his election chances, and I think that was quite bad, and I pushed back a bunch of times on this, though the few times I did push back on it, it was quite well-received.
From this July 2022 FactCheck article (a):
From a May 2022 NPR article (a):
(This is an annoyed post. Having re-read it, I think it’s mostly not mean, but please downvote it if you think it is mean and I’ll delete it.)
I have a pretty negative reaction to this post, and a number of similar others in this vein. Maybe I should write a longer post on this, but my general observation is that many people have suddenly started looking for the “adults in the room”, mostly so that they can say “why didn’t the adults prevent this bad thing from happening?”, and that they have decided that “EA Leadership” are the adults.
But I’m not sure “EA Leadership” is really a thing, since EA is a movement of all kinds of people doing all kinds of things, and so “EA Leadership” fails to identify specific people who actually have any responsibility towards you. The result is that these kinds of questions end up either being vague or suggesting some kind of mysterious shadowy council of “EA Leaders” who are secretly doing naughty things.
It gets worse! When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He’s a philosopher who writes books about EA and has received a bunch of funding to do PR stuff. But people really want him to be the CEO of EA so they can be angry that he’s not being more CEO-like, and that seems pretty unfair to me.
But I think the reality is: there are no adults in the room who are managing everything behind the scenes and who you can be angry at for failing you. There are a lot of people doing various specific things, and working with each other in various more-or-less coordinated ways. "EA" does not do things; "EA" did not "endorse" SBF. Some specific individuals may have done this, but the shadowy council of EA Leadership did not meet at midnight to declare SBF the Chosen Saviour.
Habryka gave nice answers to the questions already, which is great. Here are some grumpy answers:
Why is the attendance of the Coordination forum secret? Why should it be open? It’s a get-together for some people to talk to each other, why are they obliged to be super-transparent to you? It’s not the Secret Gathering of EA Leadership.
Why did Will not consult people before he talked to Elon? Because he’s an individual who can do his own thing, and there’s no Council of Elders of EA to be “consulted” at times like this.
Why did Will not adopt Zoe’s suggestions? Is that Will’s job? To enforce the uptake of structural reforms across EA? Sounds like the sort of thing the CEO of an organization might be responsible for… but Will isn’t the CEO of EA.
Why isn’t Will doing something about people hero-worshipping him? Because that’s also not his job? If you’re concerned about people hero-worshipping Will, perhaps you should get angry at the people doing it instead of Will, who’s not obviously doing anything to encourage it.
Why has the community health team not solved emergent social problems on the forum? Because that’s hard? And maybe also not their job? Perhaps we as a community should be being nicer to people.
What is the decision-making procedure for things going into the media? There probably isn't one? That would imply some kind of central EA comms org, which doesn't exist. CEA has a comms department, but I think they mostly help people out when requested. Probably any number of orgs do their own comms stuff as they see fit.
I won’t comment on Carrick except that Habryka points out that Carrick did it, again, no anointing by EA Leadership or anything.
To be clear, I’m not saying we as a community get a free pass, nor that specific individuals or organizations shouldn’t get some criticism. I just think we should avoid imagining centralized loci of control that don’t really exist.
If I want EA to become less decentralized and have some sort of internal political system, what can I do?
I have zero power or status or ability to influence people outside of persuasive argumentation. On the other hand, MacAskill and co. have a huge ability to do so.
The idea that we can’t blame the high-status people in this community because they aren’t de jure leaders when it’s incredibly likely they are the only people who could facilitate a system in which there are de jure leaders seems misguided. I’m not especially interested in assigning blame but when you ask the question who could make significant change to the culture or structure of EA I do think the answer falls on the thought leaders, even if they don’t have official positions.
I don’t think de jure leaders for the movement as a whole are possible or desirable, to be clear. Our current model to my mind looks like a highly polycentric community with many robust local groups and organizations. Those organizations often have de jure leaders. But then in the wider community people are simply influential for informal reasons.
I think that’s fine (and indeed pretty decentralised!). I’m not sure what specific problems you have with it? Which of the recent problems stemmed from centralized decision-making rather than individuals or organizations making decentralized decisions that you just disagree with?
I don’t agree with this. IMO significant changes to culture or structure in communities rarely come from high-status people and usually come from lots of people in the community. You have the power of persuasive argumentation (which I also think is about as much power as most people have, and quite effective in EA): go forth and argue for what you want!
To be clear, I wasn't necessarily advocating for political organization or centralization, but I disagree that the lack of centralization is an excuse for the thought leaders when they could create centralization if they wanted. It basically serves as a get-out-of-jail-free card for anything they do, since they have de facto control but can always lean back on not having official leadership positions. For the most part the other comments better explain what I meant.
I think a significant point of disagreement here is to what degree we see some people as having de facto control or not.
As you’ve probably realised, my view of the EA community is as broadly lacking in coordination or control, but with a few influential actors. Maybe I’m just wrong, though.
Yeah, I agree that is the main crux of our disagreement. I guess a lot of it comes down to what it means for someone to have (de facto) control. Ultimately we are just setting some arbitrary threshold for what control means. I don't think it matters that much to iron out whether certain people have "control" or not, but it would probably be useful to think about it in more numerical terms relative to some sort of median EA.
Some metrics to use
Ability to set the internal discourse (e.g. karma/attention multiplier on forum posts compared to a baseline ea)
Ability to set external discourse (e.g. who is going on high viewership media stuff)
Control of the movement of money
Control of organizational direction for ea orgs
I think this would be a huge improvement in the discourse. Focussing on specific activities or behaviours that we can agree on rather than vaguer terms like “control” would probably help a lot. Examples of arguments in that vein that I would probably like a lot more:
“CEA shouldn’t have a comms arm”
“There should be more organizations running EA conferences”
“EA Forum moderators should have more power versus CEA and be user-appointed”
“People should not hold positions in more than one funding body”
etc.
I don't think it's mean, and I don't think you should delete it (and clearly many others think it's a good comment). However, I strongly disagree with the claim that EA leadership isn't really a thing. I'll also aim to explain why asking questions directed at "EA leadership" seems reasonable to me, even if it may not to you.
The coordination forum literally used to be called the "leaders forum". The description of the first coordination forum was literally "leaders and experienced staff from established EA organizations". The Centre for Effective Altruism organizes events called "Effective Altruism Global" and has the ability to prevent, or very strongly recommend that organizers don't allow, people into community events.
If you have spent millions of dollars on a PR campaign for your book and are seen as the public face of EA, people who self-identify as EA are going to take some interest in what you say when you're seen to be representing EA, and in whether your decisions affect them. If Will went out and said "Actually, EAs believe that abortion should be outlawed with no exceptions for rape or to save the life of the mother", and I don't personally endorse this claim but have been talking about how I am an EA at work, the damage is done regardless of whether he's "the CEO of EA" or just a philosopher. If he and his team have chosen to spread longtermism by writing a book and marketing it, then that comes with the responsibility of being in the public eye, and of answering for things he says or actions he takes that people will interpret as "this is what EA is about"/"this is what longtermism is about".
For any decision that specific individuals or organizations make, I personally do not have the power or influence to meaningfully push back against them. But some people in the EA community have more power and influence than me and can do so. So while there might not be a shadowy council of EA leadership, there are people who make decisions that affect and shape the EA movement in much greater ways than I can. And while there might not be a centralized locus of power, power is clearly not distributed evenly, and decisions are made in ways that affect me when I have close to no ability to influence them.
As long as people who aren’t part of the decisions being made are still identifying as EAs and helping promote it, they are implicitly trusting that people who are in positions to affect and shape the EA movement more than them are doing so in well considered ways, in ways that they are comfortable with or happy to endorse.
If people at my local meetup identify as EAs, talk positively about it, and encourage new members to get more involved, and those with a lot more influence in shaping the EA movement (those who fund our groups, those who write the books and blog posts we discuss, those who take interviews on national TV or get featured in TIME magazine) are taking it in a direction my group doesn't agree with or doesn't understand, or something happens that makes the group question the ability of "EA leaders", then it seems reasonable to ask questions, because they are now uncertain whether EA is a movement they still want to be part of, or want to endorse, or want to encourage others to join. In this case, transparency might lead to more accountability, or it might lead to more decentralized decision making. It's a tradeoff against other considerations, and they obviously aren't obliged to change anything, but it seems unreasonable to me that you're taking issue with people even asking these questions.
If they weren't part of the decisions that contributed to these events, and they don't know how these decisions are made, and they're ridiculed for even asking about it, then you're basically asking people who have no meaningful way to influence the decisions or get any insight into the thought process behind them to just "have faith" in the decisions that are being made. And when people change their jobs and careers and life plans around EA and where the movement is being taken, it doesn't seem unreasonable to ask questions that help them gain clarity around whether the movement does in fact align with where they want their own life to go.
Also, suggesting that “there are no adults in the room” I think can come across pretty demeaning to all the people who have spent years of their life working on shaping the EA movement.
And if it’s true that “there are no adults in the room” in context of “why didn’t the adults prevent this bad thing from happening?” (i.e., if there’s no one in the EA movement who has a job that might reasonably reduce the chance of things like this or other risks to the EA movement from happening), then it would be a pretty important update for me, and probably for many others. But I doubt this is actually the case.
Thanks for this excellent comment. I’m not going to respond more since I’m not sure what I think any more, but I just wanted to clarify one thing.
I’m sorry about that! That wasn’t my intention: I was trying to present the idea of the “adults” as hypothetical serious beings in comparison to whom we are like children. I don’t mean to imply that the people doing work in EA are not serious or competent, but I do think it’s wrong and unfair to think that they are at some ideal level of seriousness or competency (which few if any people can live up to, and shouldn’t be expected to without consent and serious vetting).
No need to apologize! Thought I’d share this in case it’s a meaningful update:
https://forum.effectivealtruism.org/posts/oosCitFzBup2P3etg/insider-ea-content-in-gideon-lewis-kraus-s-recent-new-yorker
I think that in a relevant sense, there is an EA Leadership, even if EA isn’t an organisation. E.g. CEA/EV has been set up to have a central place in the community, and runs many coordinating functions, including the EA Forum, EA Global, the community health team, etc. Plus it publishes much of the key content. I think this comment overstates how decentralised the EA community is (for better or worse).
I think a crucial difference is whether you perceive the activities as offering a service or as taking responsibility for the provision of that service. e.g. I view the CEA community health team as offering “hey, we’d like to help keep the community healthy”. In that context it doesn’t make that much sense to be annoyed that they haven’t solved the problem of “people feeling uncomfortable posting on the forum”—they’re out there trying to do some thing useful, they haven’t promised to fix everything.
As it happens, I don’t think EA is that centralised. But perhaps that’s a red herring and the real question is whether people think that some EA orgs or people have responsibility for certain community-wide things.
CEA/EV can prevent people from coming to the most important in-person meetups (EAG) and from participating in the most important EA online space (the EA Forum). In that sense, they’re not just offering services, but have a lot of power. (That power also manifests itself in many other ways, including ways that are more directly relevant to the subject of the post.) And with that power comes responsibility.
Yes, I agree that CEA has a responsibility to not abuse the social power that comes from controlling important spaces. I don’t agree that they have a general responsibility for membership of the community or something.
I think in some important cases there really are leaders, or at least people in positions of extreme responsibility, who could’ve done more. In terms of letting SBF stay in the EA community after the Alameda incident in 2018, that seems like it might’ve been a failure of information sharing (e.g.), if not an outright failure of e.g. the Community Health team at CEA. If it was largely just a failure of information sharing, then that in turn could be a failure of EA culture (too much deference, worrying about prestige and PR, and Ra), for which thought leaders could be in part responsible. (To be clear, I’m not saying I would’ve done any better if I was in such a position of responsibility, or a thought leader. And maybe no one could reasonably have been expected to have done better, given all the tradeoffs involved.)
Who are these people? What makes them so responsible? Did they agree to that or did we just kind of decide we want someone to be responsible and they’re there? Have we considered that maybe nobody is responsible here?
Is “not letting someone stay in the EA community” an action that people can take? The most serious such incidents that I know of a) came after multiple documented examples of serious wrongdoing, b) amounted to being banned from the EA Forum and EA conferences (i.e. venues controlled by a specific org, CEA) for a while. SBF didn’t post on the EA forum or go to EA conferences. So what, specifically, do you think people should have done?
Someone should have done something, is not IMO a helpful thing to say. I strongly endorse https://forum.effectivealtruism.org/posts/aHPhh6GjHtTBhe7cX/proposals-for-reform-should-come-with-detailed-stories
People in charge of granting $100Ms-$Bs of EA money. See my link to: Why didn’t the FTX Foundation secure its bag?
Disowned him (publicly). Not laud him as a paragon of virtue in earning-to-give. Not invite him to speak at EA conferences. (As I say, I get that there might’ve been a failure of communication amongst people in the know, but it looks pretty bad that it was known to at least some influential people that Sam was not someone to be trusted.)
The first group of people are not the people who took the latter group of actions.
I’m being picky here, but my point is that people are being very wooly about this idea of “EA Leadership”. The FTX Foundation team and the 80k team are different people, not arms of the amorphous “EA Leadership”. So maybe the FTX Foundation team shouldn’t have lauded SBF—but they didn’t, that was someone else.
This is again where being specific matters. “The FTX Foundation team should have done more due diligence before agreeing to work with SBF” is at least a reasonable, specific, criticism that relates to the specific responsibilities those people might have. “Why did EA Leadership not Do Something?” is not.
Yes, the (former) Future Fund team are specific people. Regarding the happenings in 2018 around Alameda, it’s hard to know who the specific people are because we haven’t heard much about who knew what. It seems reasonable to suppose that people at CEA (perhaps including the executives) knew about it (given SBF and Tara Mac Aulay both worked there prior to Alameda), but it’s also possible that, due to fear of reprisals or possible NDAs, no one in any position of responsibility knew about it.
“EA leadership” is a set of very specific people—those who control the money, and those who control the brand. That means the boards of OpenPhil and EV, and the Future Fund team when that was still a thing. If CEA and 80k have their own boards (I think they don’t?), then they too.
Thanks for the question Gideon, I’ll just respond to this question directed at me personally.
When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.
I can’t recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.
Thanks for this reply Rob. I do think it’s pretty strange that no one in the know came forward to tell you or 80K, even in a professional capacity, but that’s not really your fault!
Is this actually true right now? People donating to EA Funds seem like an example of deferring financial decisions, but I don’t have data on how many EAs donate to the Funds vs. decide themselves where to donate. Or do you mean decisions like relying on GiveWell recommendations as an example of ‘deferring financial power’?
I am also not sure how the EA Community compares to other movements. Is your claim that EA is worse at this than comparable movements or that we should hold ourselves to a higher standard?
I have mixed feelings about your post overall. If people defer decision-making power to “the leadership” then it’s good to ask these questions. But mostly I see individuals making decisions for themselves. If others think the decisions are bad, they don’t have to admire “the leadership” for it.
The vast bulk of funds in EA (OpenPhil and, until last week, FTX Future Fund) are controlled by very few people (financial). As is admission to EA Global (social). Intellectual direction is more open with e.g. the EA Forum, but things like big book projects and their promotion (The Precipice, WWOTF) are pretty centralised, as is media engagement in general.
The FTX Future Fund had a large regranter program. They didn’t fully let regranters do whatever they wanted with funds, but I think it’s incorrect to say that it’s controlled by very few people.
Ultimately the Future Fund had veto power over regranters (even those with their own pots), [edit:] so I think it’s inaccurate to say that the regranters had control of the funds (influence, sure; but not control).
I’m somewhat perturbed by the ratio of karma on these comments (esp agree karma; although low sample size—mine has only 1 vote on agreement (5 votes on karma); see pic below for time of writing this comment)[1]. We’ve just found out that we’ve in general been way too trusting as a community, and could do with more oversight etc (although I guess it’s open to discussion how much decentralisation of decision making is ideal; see below). The fact that regranters could influence the Future Fund on their grantmaking was great, but we shouldn’t confuse that with actual control. What ultimately matters is what is true from a mechanistic legal perspective—where the buck actually stops, and who is actually in charge of authorising grants. For the Future Fund, that was 5 people (who presumably in turn could still have been vetoed by the 4 on the board).
The next step for a regranting program in terms of actually distributing control would be to actually give the regranters the money, to do whatever they saw fit with it. I can imagine many people screaming in horror at the thought, especially those in central positions who think that they are the best experts on avoiding the unilateralist’s curse, but that illusion has been shattered now. I have to say that writing this, I’m torn, in that I still think the unilateralist’s curse is a big problem, given the vulnerable world hypothesis etc. I don’t know what the solution is.
Although maybe it’s in part due to the fact that Neel mentioned the regranting program, which I didn’t (I would’ve done better to mention it explicitly in my original comment, while pointing out that regranters didn’t have control, perhaps in a footnote); in which case, fine.
A regranter reached out to me, and I immediately got the vibe that they were stressed about proposing grants that might be accepted, basically just optimizing for what they perceived to be the things the team was most likely to approve.
Now, again, I only talked to one person, but if regranters were just pitching ideas to the team, to be processed much like the general applications, then the regranter program serves more as a marketing tool to increase applicants and a light filter against awful applications than as a change in who holds the power. I would be very interested to see data on how many regrants were approved versus how many were suggested, compared to the normal funds.
I was a regranter. I did not have my own pot, but could make recommendations for grants. 52% of my regrants (11/21) were approved (32% by $ value). I understand that those with their own pots allocated to them had a lower bar for acceptance so probably had a better success rate for approvals.
I’ve been trying to find people willing and able to write quality books and have had a hard time finding anyone. “Doing Doing Good Better Better” seems one of the highest-EV projects, and EA Funds (during my tenure) received basically no book proposals, as far as I can remember. I’d love to help throw a lot of resources behind an upcoming book project by someone competent who isn’t established in the community yet.
I wrote a quick shortform post.
Yes, this is brilliant.
Even the forum is organised so as to promote posts from people with large networks of high-upvoted people, which de facto means that core network of people pretty much get auto-highlighted for posting their shopping list.
Yea I’m not really sure why the default isn’t democratic voting, with the option to toggle karma-weighted voting if you want.
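To make the tradeoff concrete, here is a minimal sketch of how a karma-weighted tally can diverge from a one-person-one-vote tally. This is not the actual EA Forum algorithm; the `vote_weight` function is a made-up assumption purely for illustration.

```python
# Illustrative sketch, NOT the real EA Forum voting algorithm.
# Each vote is a (voter_karma, direction) pair, direction being +1 or -1.
import math

def vote_weight(karma: int) -> int:
    """Hypothetical weighting: 1 vote, plus 1 per order of magnitude of karma."""
    return 1 + int(math.log10(max(karma, 1)))

def tally(votes, weighted=False):
    """Sum the votes, either one-person-one-vote or karma-weighted."""
    return sum((vote_weight(k) if weighted else 1) * v for k, v in votes)

votes = [(10_000, +1), (5, -1), (8, -1), (12, -1)]
print(tally(votes))                 # democratic tally: -2
print(tally(votes, weighted=True))  # karma-weighted tally: +1
```

Under one person one vote the post nets -2; with the hypothetical karma weighting, a single high-karma upvoter flips it to +1, which is the dynamic the comments above are pointing at.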
I’m really surprised anyone is even super confident that the Carrick Flynn campaign made major mistakes (or was a major mistake to attempt), much less that anyone thinks of the campaign as “a confidence-shattering failure” about EA as a whole. I feel like I must be missing something very basic that’s in other people’s models. Or maybe a lot of people were just very emotionally invested in that primary race?
There are probably things that could have been done better in the campaign, especially with the benefit of hindsight and experience. But getting a member of a weird new niche academic philosophy movement elected to the US House of Representatives isn’t the sort of thing I expect to have a >50% success rate, even if we try our hardest. And Flynn did pretty well in the polls, and would have won the primary if he’d peeled off ~5500 votes (9% of all votes cast) from Salinas.
That’s a good enough showing that I expect there are a lot of nearby worlds where Flynn wins, and I’d happily give it another attempt if I could travel back in time, even if I could only make mild tweaks to the campaign strategy. There’s just a lot of contingency and unpredictability in political campaigns, and the EV seems good to me, especially taking into account the information value of “try something very new and see what happens”.
(Also, plenty of other EA campaigns and political efforts have succeeded, even if they weren’t as widely discussed as Carrick’s campaign. This is a big part of what makes the “Carrick losing was a total catastrophe” narrative seem strange to me.)
On the Flynn Campaign: I don’t know if it’s “a catastrophe” but I think it is maybe an example of overconfidence and naivete. As someone who has worked on campaigns and follows politics, I thought the campaign had a pretty low chance of success because of the fundamentals (and asked about it at the time) and that other races would have been better to donate to (either state house races to build the bench or congressional candidates with better odds like Maxwell Frost, a local activist who ran for the open seat previously held by Val Demings, listed pandemic prevention as a priority, and won. Then again, Maxwell raised a ton of money, more than all the other candidates combined, so maybe he didn’t need those funds as much as other candidates).

Salinas was a popular, progressive, woman of color with local party support who already represented much of the district at the state level and helped draw the new one. So, it seemed pretty unlikely to me that she would lose to someone who had not lived in the state for years, did not have strong local connections, and had never run a campaign before, even with a massive money advantage. And from what I understand, the people in the district were oversaturated with ads to the point of many being annoyed.

So I think of this as probably being an example where EAs would have benefitted from relying on more outside experts for which races to pick and how to run a campaign. There were a lot of congressional retirements this year, and there were probably better seats to try to win. Of course, nothing is going to guarantee a win though.
On FTX: And it seems like if anyone had thought to ask to look at FTX’s balance sheets, things might have been different? At least, considering what a mess those balance sheets are (or whatever records make sense, I’m not a financial expert)? If FTX refused, or if they shared something that didn’t make sense, maybe that would have been a warning sign. So that seems like another example of where more outside expertise could have maybe been beneficial and saved a lot of headaches.

Individually, maybe no one has an incentive to vet FTX even if they get a grant from them. But if we care about the EA ecosystem as a whole, and hundreds of millions suddenly start pouring in from a new source, maybe someone with the relevant financial and accounting expertise should at least request to look at the balance sheets of the new megafunder, especially when it comes from an industry full of crashes and scams. I’m not sure if this would have changed things, but the fact that it doesn’t seem to have happened means there are probably many other things that we are missing. Things that people with relevant expertise are more likely to see.

And I know people have said “well look, all these other VCs missed it, they never looked into it”, but EA sort of prides itself on NOT just doing what everyone else does but using reason and evidence to be more effective. Not just donating to the same charities or picking the same career paths or volunteering for the same organizations just because other people do, but being effective. We could have had a process for investigating any new megafunder a bit more thoroughly, perhaps with the help of outside experts. So why would we think this is a good reason for failing to attempt better due diligence with respect to movement finances? We can’t change the past, but surely we can change some things going forward.
In the first example, you complain that EA neglected typical experts and “EA would have benefited from relying on more outside experts” but in the second example, you say that EA “prides itself on NOT just doing what everyone else does but using reason and evidence to be more effective”, so should have realised the possible failure of FTX. These complaints seem exactly opposite to one another, so any actual errors made must be more subtle.
Actually, they are the same type of error. EA prides itself on using evidence and reason rather than taking the assessments of others at face value. So the idea that others did not sufficiently rely on experts who could obtain better evidence and reasoning to vet FTX is less compelling to me as an after-the-fact explanation to justify EA as a whole not doing so. I think probably just no one really thought much about the possibility and looking for this kind of social proof helps us feel less bad.
The campaign team flew EA community organisers from across the world to knock on doors, and ended up paying over a thousand dollars per vote. This happened in the USA, which has a political system tailored to facilitate the purchasing of elections. It was bad.
Would you consider $1000 per vote worthwhile if it resulted in Carrick winning? Also, if EA has a similar opportunity in the future, in an election with a similar number of voters (around 60,000), what’s the maximum number of dollars spent per vote that you’d consider justifiable?
Is that true? This is not my area of expertise, but my sense was that “buying elections” is often impossible or inordinately expensive, outside of races against nobodies with very little money. (I’ve heard ad-spending worked really well in competitive races in the recent midterm, but this is noteworthy exactly because it’s somewhat unusual.)
Isn’t the point with the Carrick thing not only that it failed, but that we shouldn’t have been doing that kind of thing? It seemed like a pretty big break from previous approaches, which were to stay out of politics.
Not saying I disagree with this, but it may be worth noting that “democracy” as an alternative didn’t exactly do great either—Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.
Indeed. I actually am inclined to agree that more democracy in distributing funds and making community decisions is safer overall and prevents bad tail risks, and I think Zoe Cremer’s suggestions should be taken seriously, but let’s remember that democracy in recent years has given us Modi, Bolsonaro, Trump, Duterte and Berlusconi as leaders of countries with millions of citizens, on the basis of millions of votes, and that Hitler did pretty well in early 1930s German elections. Democracy is not just “not infallible” but has led to plausibly bad decisions about who should lead countries (as one example) on many occasions. (That might be a bit politicized for some people, but I feel personally confident all those leaders were knowably bad.)
This post is merely asking questions of those currently in power, not saying any specific form of greater internal democracy is a good thing (I know you acknowledge that the post is doing this as well, but thought I would reiterate :-)!). Moreover, because of the karma system, the EA Forum is hardly democratic either!
Fair enough!
You’re correct that the EA Forum isn’t as democratic as “one person one vote”. However, it is one of the more democratic institutions in EA, so provides evidence re: whether moving in a more democratic direction would’ve helped.
I’d be interested if people can link any FTX criticism on reddit/Facebook prior to the recent crisis to see how that went. In any case, “one person one vote” is tricky for EA because it’s unclear who counts as a “citizen”. If we start deciding grant applications on the basis of reddit upvotes or Facebook likes, that creates a cash incentive for vote brigades.
You can see who likes things on Facebook, and reddit isn’t especially used. You can actually see democratic voting on the tree of tags (weird that I can’t find the same option for the forum itself...), but you still run into the issue that people might upvote/downvote posts that already have more upvotes in general.
I think most democratic systems don’t work that way—it’s not that people vote on every single decision; democratic systems are usually representative democracies where people can try to convince others that they would be responsible policymakers, and where these policymakers then are subject to accountability and checks and balances. Of course, in an unrestricted democracy you could also elect people who would then become dictators, but that just says that you also need democrats for a democracy, and that you may first need fundamental decisions about structures.
I think we EAs need to increasingly prioritize speaking up about concerns like the ones Habryka mentioned.
Even when positive in-group feelings, the fear of ostracism, and uncertainty/risk aversion internally influence one not to bring up these concerns, we should fight back against this urge, because the concerns, if true, will likely grow larger and larger until they blow up.
There is very high EV in course correction before the catastrophic failure point.
I’ll speak to question 6, since I am on the community health team, and in particular was hired in large part to work on community epistemics, but am only speaking to the work I’ve done rather than the whole team since I’m newish to the team. (Haven’t done tons of work on this yet, and my initial experiments and forays have been pretty varied, since the epistemics space is really large)
Tl;dr I think this matters, in and of itself it hasn’t been the top thing on my list, adjacent/related things have been high priority.
(Other CEA teams online (via the forum), groups and events teams have all thought about this as well.)
Whether people feel “able” to disagree itself might take some disambiguation—I tried to think a bunch about (1) intellectual challenge of having an inside view in a world with tons of information and how to make that easier and (2) the emotional difficulty of believing in your own ideas, not falling prey to epistemic learned helplessness, noticing your own intuitions, etc.
When I thought about working on the latter at scale, I thought about:
Modelling thinking out loud, what it looks like when people try to figure things out and show all the messiness, that people others respect a lot have plenty of uncertainties, and trying to make figuring things out more accessible
Talking a lot about the mental and conversational motions I think are great, including those that solicit disagreement
Getting high status people to encourage disagreement
Before the FTX situation happened, I had been updating more towards “doing things that don’t scale” and considering things like:
Epistemics coaching / “epistemics therapy”
A residence at a uni group to be a person who could focus on helping people shake up their thinking / get red-teaming on their current ideas / encouragement to think for themselves
Asking a lot of people what helped them think better and think about what social and physical contexts let people really think
E.g. the pros and cons of sharper and softer cultures for this, and whether EA should more explicitly think of itself as an archipelago, where there are different areas for different vibes, and your job is to figure out which one works best for you or move around as needed
I’ve definitely heard that some spaces feel like they privilege only a certain kind of thinking or set of conclusions, and that makes it hard for others to think straight, especially when access to funding / coworking spaces / etc feels contingent on it. That sucks and is hard. My team has done some thinking about this—I think the current sense is that adding more support is a better move than trying to get people to change how they run their own things, but I am definitely not super sure.
And more generally trying to give support to people like group leaders, anyone who is closer to the ground and has more leverage over the social environment. My guess is a lot of the value of “feel viscerally like you have social support for disagreeing” happens in smaller contexts like that, and I’ve been in conversations with a handful about how they support their groups to think (like, I’m obsessed with this). E.g. my guess is that getting high status people to encourage disagreement is more useful here than it is at scale (but not sure whether it’s so much more useful that it out-does scale). In general people being excited about criticism, saying when they’ve updated and highlighting their favorites seems really great.
When I thought about the problem of inside views, I was much more focused on people feeling afraid to even start thinking, and deferring too much / more than they endorsed, and trying to make figuring out what’s true easier. I suspect that kind of thing has valuable knock-on effects on “feeling like you’ll have social support to speak up”—personally when I know why I think what I think, I feel much more able to articulate it and fight for it than if I feel much more confused about the world.
Maybe the direct “social support for disagreeing” should have been more my focus, I’m not sure. It was definitely on my radar.
I thought “the forum being scary” might end up being a real epistemics problem (though I wasn’t sure it was the top of my list of such problems, and the forum team have worked hard on this).
I think it’s very possible we should have more debates at EAGs and have bid for it.
I was at high school programs tracking in part how pressure-y we were being (and I’m so appreciative to others at those programs who have a lot more experience at it than me and were amazing influences). (In practice, I think people on average are overworried about this in high school contexts rather than under, but it definitely matters.)
I also taught a class that involved talking about how to actually make people feel like disagreeing was good (one feedback I got was that we’d done too much to make disagreeing feel like the thing to do and people felt a little pressured to come up with a disagreement!)
Julia Wise has also written in part about how to get real feedback, in the context of power dynamics, and there’s a whole world of “how does funding affect epistemics” I haven’t delved into.
One thing I don’t want to lose track of is that it can feel shitty to have people disagree that one’s ideas or critiques are valuable or true, and that alone is an emotional and often tracked-as-social or in-fact-social hit. But of course no one wants us to be in a position where as a community we can’t say “I don’t think your critique is any good” or “I want to hire that person less because I think their judgments of ideas have been systematically wrong.” Like, lots of criticism is bad. So it’s tricky.
Really appreciative of the agree/disagree voting system and all the people who say “Thanks so much for voicing your disagreement here” before they say why they don’t buy it. I think those things are great. (Really lovely example here and here). If I may name names, I think Rob Bensinger and Nathan Young are unusually good at this, and I appreciate them for it.
I think this is important but hard, and there are a lot of important things in community epistemics. If you have thoughts on addressing this particular thing, I’d love to hear them (noting that in my role, I might decide there are things that are higher priority—but anyone can help community epistemics, I certainly can’t do it alone)! I have a form here.
(Also, if people aren’t feeling able to disagree with community builders or anyone else, I’d really appreciate hearing about that—the form can be for that too).
I have slightly edited the post, just to clarify some things I ought to have made clear originally.
Not every question I pose is related to SBF etc.; they are just questions I think the EA leadership at large should answer. I am sure there are rational responses to many of these questions, and insofar as these are interpreted as an “attack”, I do apologise; moreover, the “attack-lines” are also plausibly inconsistent, as some lines of attack likely point towards less centralisation and some towards more.
Oh, you know, you could help me by giving me a little feedback on what you think the community would either find most interesting or most beneficial.
Here is a list of resource links that I am considering for the post:
assessment reports, special reports, and synthesis/summary reports from the IPCC.
papers that are noted by some climate scientists.
workshops that I have viewed online.
software available for modeling.
scientists working on relevant topics that I follow online.
books that I have read.
documentaries that I have viewed.
news articles that I have read.
reports put out by nonprofits.
The topics could cover:
climate change
pollution
agricultural practices
population changes
economics
politics
ecology
I would like to know what you would find interesting from the list of resource links and the list of topics, by number works well, or just say “all” for all of them or “any” if you have no preference.
If there’s any you would particularly discount, let me know, and offer your reasons, if you like.
Also let me know what other topics or types of resources would interest you.
If you cannot do any of this right now, that’s OK. I am backed up with stuff to do, so it will be a little while.
As far as resources that I have created, well:
I have been messing around a bit with some simple climate models and RCP projection data to simulate changes from tipping elements that raise GHGs this century (for example, methane hydrate leaks).
I have a basic understanding of the ideology and contexts that define those who favor environmental destruction as a necessary part of economic growth.
I have a simplistic model of how humans can stay within an ecological niche, rather than create their own geologic epoch, as we have done.
I have a historical account of climate change prevention efforts and their failures, but it has many gaps in it.
I see success not as based on appropriate response to probabilistic forecasts (something failing now) but rather on capable response to deep uncertainty about outcomes on ambiguous pathways. I can offer a few scenarios of response to deep uncertainty.
Oh, well thank you for suggesting that my cringy ideas are worth conversation within the community! That’s very kind of you. Those ideas of mine were already discussed here, at least by me, and with some exceptions, have been met with indifference or a disagreement checkmark. That’s OK with me.
I was led here by a couple of Peter Singer’s books and then by Galef’s “Scout Mindset”, by the way.
I have revised her model of Scout vs Soldier, in my own mind, to encompass a broader category and additional partitions outside her model. In particular, when exploring an area of knowledge with others, we can perform in roles such as:
Truth-building roles: mutual truth-seeking involving exchange of truthful information
scout (explores information and develops truthful information for themselves)
soldier (attacks and defends ideas in ways that reinforce their existing beliefs)
Manipulative roles: at least one side seeking to manipulate the other without regard for the other’s interests
salesperson (sells ideas and gathers information)
actor/actress (performs theatrics and optionally gathers information)
The Scout and Soldier model breaks down when people believe that:
the truth is cheap and readily accessible, and so communication about important topics should serve other purposes than truth-building.
everyone else is engaged in manipulating rather than truth-building, and so it’s better to either withdraw or join everyone else in theatrics and sales.
One of several lessons I drew from Galef’s excellent work was the contrast between those who are self-serving and those who are open to contradiction by better information. However, a salesperson can gather truthful information from you, like a scout, develop an excellent map of the territory, and then lie to your face about the territory, leaving you with a worse map than before. Persons in the role of actors can accomplish many different goals with their theatrics, none of which are conducive to scouts engaged with them in developing truthful information.
I like being a scout, almost to a fault, though for my own benefit. However, when exploring knowledge with others, that’s much harder if what looks like scout or soldier behavior is actually sales or acting. Basically, this speaks to the importance of critical thinking when doing research, having arguments, etc.
So my cringy ideas reflect my beliefs, sorry if they made you cringe, I hope it wasn’t too bad for you.
That said, you offered a suggestion that I should revisit the IPCC reports in more depth, and to be quite honest with you, I don’t consider the IPCC reports to be the last word on climate science. They are an amalgam of information, with lots of scenarios not properly represented, for reasons I frankly don’t know. Not to mention that the science changes quickly, faster than Assessment Reports are released by the IPCC. However, the technical reports are good and worth browsing as an alternative to a search through Nature or PNAS articles, depending on my needs.
In what sense does EA have something like a leadership?
There is no official overarching EA organisation. Strictly speaking, EA is just a collection of people who all individually do whatever they want. Some of these people have chosen to set up various orgs that do various things.
But in a less formal but still very real way, EA is very hierarchical. There is a lot of concentration of power.
Some of this is based on status and trust. Some people and orgs have built up a reputation which grants them a lot of soft power within the EA network.
Some of this is because of entrenched infrastructure. CEA runs EA Global and gets to decide who can attend. CEA also owns the trademark for “Effective Altruism”, and sometimes uses this to pressure other projects to do what CEA wants. (I don’t know how often this happens since I only have sparse anecdotal information.)
But the biggest power factor is control of money. Most EA funding comes from a few mega donors.
And all three of these points mix. EA Funds is infrastructure (2) that controls the flow of funding (3), which CEA could set up because they have status and trust (1). Because of how these things intermingle, the same few people might end up controlling all three.
So maybe EA doesn’t have a leadership, but we do have some sort of power center. What, if anything, do the people in power owe the rest of us?
There isn’t an obvious answer. Probably the above question is not even the right framing.
For myself, I’m mostly over debating what the central powers of EA should do. Given the massive lack of transparency, I just don’t know.
I’d like to see an EA movement that is less centralised, and I don’t expect the people currently in power to do anything about that. Maybe they can’t or maybe they don’t want to. I don’t care anymore which one it is.
I’d love to see someone set up alternative EA infrastructure. I want a competitor to EA Funds. I want an alternative job board that is not controlled by 80k. This is not about these orgs being bad, but about centralisation being bad.
But I also know that it is hard work setting up alternative infrastructure. It takes time for new things to get traction. It takes time for the word to spread about you even existing.
Did you know there is a second EA career advice org?
Probably Good | Impact-focused Career Advice
If established EA orgs want to decrease centralisation (which again, I don’t know if they do) then one of the biggest things they could do is to promote their competitors.
Thanks for this post—I think a lot of people have these questions and it’s good to have common knowledge of that. I work on the community health team, and one of my areas is community epistemics so I have a lot of thoughts about question 6 and plan to come back to this when things are a little less frenetic.
Did you receive the grant directly or as part of their regranting program?
I received the grant directly; they approached me (I never applied for it, nor had I ever applied for any EA funding before they approached me). I have always been open about receiving their funding, because I think openness about funding sources and the degree of influence those funding sources have over a project is important. However, they decided not to publish this on the Future Fund website.
Hmm… Interesting, are you sure you weren’t referred through a regranter?
I have no idea; not based on the information I was given, but I can’t be sure.
The FTXFF site does publish (a subset of) its re-grants, as well as its grants.
EDIT: You know what, acylhalide, I got a little impatient in this reply. Sorry. Let me get to work, and do my best given your previous response. Thanks. :)
Hm, well, there’s a range of temperature rise mentioned in IPCC reports. You’re discussing it as if there’s one. There was one goal: a rise of less than 1.5C GAST this century, but that’s not plausible now.
So I guess explaining why that is so would be useful to you. When you say a different understanding of civilizational collapse, different from whose? Some scientists who helped create the IPCC report are worried about civilizational collapse, for example, Peter Carter. Are you interested in his opinions and scenario discussions? And there are several other climate scientists with similar scenario discussions, for example, about the fall of tipping elements in the climate system within the next 30–50 years. EDIT: Many climate scientists are going out of their way to underscore the plausible consequences of temperature rises greater than 2.0C GAST.
As far as what pathway I’m considering, I can explain that right now: a pathway where people deny the problem, assume that it is being fixed, or support solutions that were valid 20–30 years ago as if they were still valid today.
I’m not sure whether you consider anything outside of what is published as a consensus to be useful.
I can argue the problem of civilizational collapse as either:
predictable according to plausible scenarios of concern to (a large subgroup of) climate scientists
predictable given contradictions in consensus reports such as the IPCC AR6
predictable given consensus reports such as the IPCC AR6
The use of probabilities obscures the problem, by the way.
What is your preference in that regard?
No, I was not being sarcastic, acylhalide. Thanks.
You’re interested in climate change resources from me? OK, when I have the opportunity, providing an outline of such resources to the community could be a productive thing to do. Thanks again!