I lead the DeepMind mechanistic interpretability team
Neel Nanda
To me this post ignores the elephant in the room: OpenPhil still has billions of dollars left and is trying to make funding decisions relative to where they think their last dollar is. I’d be pretty surprised if having the Wytham money liquid rather than illiquid (or even having £15mn out of nowhere!) really made a difference to that estimate.
It seems reasonable to argue that they’re being too conservative, and should be funding the various things you mention in this post, but it also seems plausible to me that they’re acting correctly? More importantly, I think this is a totally separate question from whether to sell Wytham, and requires different arguments. Eg I gather that CEEALAR has several times been considered and passed over for funding before. I don’t have a ton of context for why, but that suggests to me it’s not a slam dunk re being a better use of money.
Thanks a lot for writing this up and sharing this. I have little context beyond following the story around CARE and reading this post, but based on the information I have, these seem like highly concerning allegations, and ones I would like to see more discussion around. And I think writing up plausible concerns like this clearly is a valuable public service.
Out of all these, I feel most concerned about the aspects that reflect on ACE as an organisation, rather than those that reflect the views of ACE employees. If ACE employees didn’t feel comfortable going to CARE, I think it is correct for ACE to let them withdraw. But I feel concerned about ACE as an organisation making a public statement against the conference. And I feel incredibly concerned if ACE really did downgrade the rating of Anima International as a result.
That said, I feel like I have fairly limited information about all this, and have an existing bias towards your position. I’m sad that a draft of this wasn’t run by ACE beforehand, and I’d be keen to hear their perspective. Though, given the content and your desire to remain anonymous, I can imagine it being unusually difficult to hear ACE’s thoughts before publishing.
Personally, I consider the epistemic culture of EA to be one of its most valuable aspects, and think it’s incredibly important to preserve the focus on truth-seeking, people being free to express weird and controversial ideas, etc. I think this is an important part of EA finding neglected ways to improve the world, identifying and fixing its mistakes, and keeping a focus on effectiveness. To the degree that the allegations in this post are true, and that this represents an overall trend in the movement, I find this extremely concerning, and expect this to majorly harm the movement’s ability to improve the world.
My understanding is that FTX’s business model fairly straightforwardly made sense? It was an exchange, and there are many exchanges in the world that are successful and probably not fraudulent businesses (even in crypto—Binance, Coinbase, etc). As far as I can tell, the fraud was due to supporting specific failures of Alameda due to bad decisions, but wasn’t inherent to FTX making any money at all?
I appreciate you for writing this! I don’t agree with everything you’ve written here, but I vibe a lot with the spirit. Hindsight bias and “my personal pet peeve against EA is responsible for everything bad that happened here” seem everywhere. My guess is that while there are important updates to be made, and many short term fires to put out, the actually important updates will be far easier to make in a few weeks once things feel calmer and less emotionally raw.
Strong +1, I imagine this is a very stressful time for all of them! I think they’ve all done an incredibly impressive amount of good already and wish them the best.
EDIT: I made this comment when my understanding of the situation was that FTX had experienced a liquidity crisis due to a bank run, and were going to be acquired by Binance (and customers made whole). I’m now a lot more confused about the situation, and what the appropriate emotional orientation to it/to FTX is.
I upvoted this comment, since I think it’s a correct critique of poor quality studies and adds important context, but I also wanted to flag that I broadly think Athena is a worthwhile initiative and I’m glad it’s happening! (In line with Lewis’ argument below). I think it can create bad vibes for the highest voted comment on a post about promoting diversity to be critical.
Really excited to see this initiative!
Should grantees with significant runway apply for this (ie, they lost out on money, but this mostly cut into future runway, and won’t really affect things for the next few months), or would you like to reserve this for grantees with urgent need?
Also, has OpenPhil considered guaranteeing to cover clawbacks for some/all grantees? This seems like it might be reasonably cheap in expectation, but save a lot of people from stressful distractions and unnecessary conservatism (though I imagine it also has a bunch of legal consequences and possibly significantly increases the risk of clawbacks!)
(EDIT: Added a follow-up comment after reading the original article)
A lot of this discourse feels like it’s missing the point to me. FTX was not an EA org. (Alameda was founded mostly with EAs, but then most of them left, in part because of bad governance and lack of ethics!). FTX was not beholden to EAs, and EA and EA orgs didn’t have any say in how FTX was governed. EA may have been well placed to blow the whistle on this, and maybe to say things to Sam about this, but it seems very off to say that the governance of EA orgs led to the bad governance of FTX.
(Also, Alameda was founded in ~2018, when the EA scene was very, very different and much less mature (and probably much worse governed). I expect many bad governance decisions are baked in when an org is founded)
I think the correct reference class for the governance of FTX is more like startups, esp crypto startups. I don’t see that much of a causal link between, eg, how well OpenPhil or MIRI is governed, and how FTX was governed.
To be clear, I am not arguing that EA orgs are not badly governed, I just think these should be two separate conversations. And, further, the fact that FTX was badly governed, and that this led to a disaster, is at best weak evidence that EA orgs governed in a similar way will also lead to disaster, given how different the work is (it’s harder to fuck up when you aren’t managing >$10bn of customer funds!) (Though, to be clear, I think that if I discovered an EA org being governed in the same way as FTX it would be a concerning red flag. Just that it should have been a red flag regardless of the FTX blow up!)
Thanks for writing this up! One problem with this proposal that I didn’t see flagged (but may have missed) is that if the ETG donors defer to the megadonors you don’t actually get a diversified donor base. I earn enough to be a mid-sized donor, but I would be somewhat hesitant about funding an org that I know OpenPhil has passed up on/decided to stop funding, unless I understood the reasons why and felt comfortable disagreeing with them. This is both because of fear of unilateralist curse/downside risks, and because I broadly expect them to have spent more time than me and thought harder about the problem. I think there’s a bunch of ways this is bad reasoning, grantmaker time is scarce and they may pass up on a bunch of good grants due to lack of time/information/noise, but it would definitely give me pause.
If I were giving specifically within technical AI Safety (my area of expertise), I’d feel this less strongly, but still feel it a bit, and I imagine most mid-sized donors wouldn’t have expertise in any EA cause area.
Thanks for sharing, that part updated me a lot away from Ben’s view and towards Hypatia’s view.
An aspect I found particularly interesting was that Anima International seems to do a lot of work in Eastern European countries, which tend to be much more racially homogenous, and I presume have fairly different internal politics around race to the US. And that ACE’s review emphasises concerns, not about their ability to do good work in their countries, but about their ability to participate in international spaces with other organisations.
They work in:
Denmark, Poland, Lithuania, Belarus, Estonia, Norway, Ukraine, the United Kingdom, Russia, and France
It seems even less justifiable to me to judge an organisation according to US views around racial justice, when they operate in such a different context.
EDIT: This point applies less than I thought. Looks like Connor Jackson, the person in question, is a director of their UK branch, which I’d consider much closer to the US on this topic.
My personal take is that funders owe explanations to their donors, but not necessarily to the public or to the broader EA community (though it’s nice!). In this case, since the grant wasn’t really funded by CEA, this seems totally fine (though I do agree that, if it was bought by CEA, and many EAs donated to CEA, then justifying this publicly is probably good).
If it was a large funder with private funding, like OpenPhil, it feels much less clear to me. My guess is that general transparency is pretty good, and being able to receive high-quality external feedback is high value, but I’m not convinced that high-quality external feedback happens very often (and think that, eg, this post and the surrounding comments don’t meet that bar). I find Holden’s thoughts on this fairly persuasive. And I think that needing to make all decisions externally legible with clear, long justifications seems plausibly more effort than it’s worth. Though I’m pretty in favour of the brief public grants databases they have.
My understanding is that the FTX terms of use explicitly said customer funds would not be lent out? I agree that in fractional reserve banking, it’s conventional that banks do this, but FTX was an exchange not a bank.
So is it being organised by the Center for Effective Altruism? Or is it being organised by a separate organisation, with help/sponsorship from CEA?
The FTX Future Fund had a large regranter program. They didn’t fully let regranters do whatever they wanted with funds, but I think it’s incorrect to say that it’s controlled by very few people.
Legal risk. I am assuming that you are not suggesting that any of these figureheads have done anything illegal. In which case the risk here is a reputational one: they don’t want their words dragged into legal proceedings. But that seems like a nebulous possibility, and legal cases like this can take years in any case. Surely you are not saying that they won’t address the subject of FTX or SBF over that entire span lest a lawyer quote them? Or am I misreading you somehow?
I think you’re underestimating how messy being dragged into a court case is—even if you’re totally innocent, there can be significant time, emotional, and energy costs, courts aren’t perfect at distinguishing truth from falsehood, bankruptcy cases are messy and money can be clawed back, etc etc. I think that taking this seriously is extremely reasonable, regardless of whether you’ve done something illegal!
I’m pretty confused about the argument made by this post. Pascal’s Mugging seems like a legitimately important objection to expected value based decision theory, and all of these thought experiments are basically flavours of that. This post feels like it’s just imposing scorn on that idea without making an actual argument?
I think “utilitarianism says seemingly weird shit when given large utilities and tiny probabilities” is one of the most important objections.
Is your complaint that this is an isolated demand for rigor?
I’d fairly strongly disagree with that take. I think it’s an extremely reasonable assumption that a somewhat cartoony red button someone put at the top of a website is deliberately harmless to press. Someone deliberately chose to put it there, and most features on websites are optimised for user interaction. This only looks unreasonable within the strong frame of having cultural context about Petrov Day.
If you’re ever running an event that you are not excited to be part of, something has gone wrong
This seems way too strong to me. Eg, reasonable and effective intro talks feel like they wouldn’t be much fun for me to do, yet seem likely high value
Man, I have a strong negative aesthetic reaction to the new frontpage that I struggle to articulate—the old one was just so pretty and aesthetic, in a way that feels totally lost! How hard would it be to have an option to revert to the old style?
Downvoted. I appreciate you a lot for writing this letter, and am sorry you/Will were slandered in this way! But I would like to see less of this content on the EA Forum. I think Torres has a clear history of writing very bad faith and outrage-inducing hit pieces, and think that prominently discussing these, or really paying any attention to them on the EA Forum, easily sucks in time and emotional energy with little reward. So seeing this post with a lot of comments and at 300+ karma feels sad to me!
My personal take is that the correct policy for the typical EA is to not bother reading their criticisms, given their history of quote mining and misrepresentation, and would have rather never heard about this article.
All that said, I want to reiterate that I’m very glad you wrote this letter, sorry you went through this, and that this has conveyed the useful information to take the Bulletin’s editorial standards less seriously!