I would qualify this statement by saying that it would be nice for OP to have more reasoning transparency, but it is not the most important thing and can be expensive to produce. So it would be quite reasonable for marginal additional transparency not to be the most valuable use of their staff time.
Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn’t disclose this.
Some EAs knew that Sam and FTX weren’t behaving frugally, which would undermine his public image, but also didn’t disclose.
FWIW, these examples feel hindsight-bias-y to me. They have the flavour of “we now know this information was significant, so of course at the time people should have known this and done something about it”. If I put myself in the shoes of the “some EAs” in these examples, it’s not clear to me that I would have acted differently, or what norm would have called for different action.
Suppose you are a random EA. Maybe you run an EA org. You have met Sam a few times, he seems fine. You hear that he is dating Caroline. You go “oh, that’s disappointing, probably bad for the organization, but I guess we’ll have to see what happens” and get on with your life.
It seems to me that you’re suggesting this was negligent, but I’m not sure what norm we would like to enforce here. Always publish (on the forum?) negative information about people you are at all associated with, even if it seems like it might not matter?
The case doesn’t seem much stronger to me even if you’re, say, on the FTX Foundation board. You hear something that sounds potentially bad, maybe you investigate a little, but it seems that you want a norm that there should be some kind of big public disclosure, and I’m not sure that really is something we could endorse in general.
To reuse your example, if you were the only person the perpetrator of the heist could con into lending their car to act as a getaway vehicle, then that would make P(Heist happens | Your actions) quite a bit higher than P(Heist happens | You acting differently), but you would still be primarily a mark or (minor) victim of the crime.
Yes, this is a good point. I notice that I don’t in fact feel very moved by arguments that P(FTX exists | EA exists) is higher, I think for this reason. So perhaps I shouldn’t have brought that argument up, since I don’t think it’s the crux (although I do think it’s true, it’s just over-determining the conclusion).
Only ~10k/10B people are in EA, while they represent ~1/10 of history’s worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs.
This seems wildly off to me—I think the strength of the conclusion here should make you doubt the reasoning!
I think the scale of the fraud is a random variable uncorrelated with our behaviour as a community. It seems to me like the relevant outcome is “producing someone able and willing to run a company-level fraud”; given that, whether it’s a big one or a small one just adds (an enormous amount of) noise.
How many people-able-and-willing-to-run-a-company-level-fraud does the world produce? I’m not sure, but I would say it has to be at least a dozen per year in finance alone, and more in crypto. So far EA has got 1. Is that above the base rate? Hard to say, especially if you control for the community’s demographics (socioeconomic class, education, etc.).
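To spell out the arithmetic (a rough back-of-envelope using only the figures above, so treat the numbers as illustrative): the 10^5:1 ratio comes from comparing shares, (1/10) / (10^4 / 10^10) = 10^5, and the “1/10 of history’s worst frauds” numerator is a single draw from a very heavy-tailed size distribution, so it mostly measures how big FTX was, not how prone EA is to producing fraudsters. Counting people instead: a dozen per year in finance alone, over the decade or so EA has existed, is over a hundred such people against EA’s one, and whether 1 in ~10^4 EAs beats that base rate depends on the size of the comparison population (everyone in finance and crypto), a number the quoted calculation never uses.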
I do think it’s an interesting question whether EA is prone to generate Sams at higher than the base rate. I think it’s pretty hard to tell from a single case, though.
I am one of the people who thinks that we have reacted too much to the FTX situation. I think as a community we sometimes suffer from a surfeit of agency and we should consider the degree to which we are also victims of SBF’s fraud. We got used. It’s like someone asking to borrow your car and then using it as the getaway car for a heist. Sure, you causally contributed, but your main sin was poor character judgement. And many, many people, even very sophisticated people, get taken in by charismatic financial con men.
I also think there’s too much uncritical acceptance of what SBF said about his thoughts and motivations. That makes it look like FTX wouldn’t have existed and the fraud wouldn’t have happened without EA ideas… but that’s what Sam wants you to think.
I think if SBF had never encountered EA, he would still have been working in finance and spotted the opportunity to cash in with a crypto exchange. And he would still have been put in situations where he could commit fraud to keep his enterprise alive. And he would still have done it. Perhaps RB provided cover for what he was thinking (subconsciously even), but plenty of financial fraud happens with much flimsier cover.
That is, I think P(FTX exists | SBF never encounters EA) is not much lower than P(FTX exists | SBF in EA), and similarly for P(FTX is successful) and P(SBF commits major fraud) (actually I’d find it plausible that EA involvement might even have lowered the chances of fraud).
Overall this makes me mostly feel that we should be embarrassed about the situation and not really blameworthy. I think the following people could do with some reflection:
People who met, liked, and promoted SBF. “How could I have spotted that he was a con man?”
People who bought SBF’s RB reasoning at face value. “How could I have noticed that this was a cover for bad behaviour?”
I include myself in this a bit. I remember reading the Tyler Cowen interview where he says he’d play double or nothing with the world on a coin flip and thinking “haha, how cute, he enjoys biting some bullets in decision theory, I’m sure this mostly just applies to his investing strategy”.
That’s about it. I think the people who took money from the FTXFF are pretty blameless and it’s weird to expect them to have done massive due diligence that others didn’t manage.
It seems like we could use the new reactions for some of this. At the moment they’re all positive, but there could be some negative ones. And we’d want to be able to put reactions on top-level posts (which seems good anyway).
I wonder if the focus on “narrow EA” is a reflection of short AI timelines and/or a belief that we need to make changes sooner rather than later.
It seems to me that “global EA” looks better the longer the future we have. Gains in people compound, and other countries may be much more influential in the future. If Nigeria is a global power in 50 years, then growing a community there now might be a good investment.
I don’t think this has a clear answer though, since benefits from actually solving problems sooner can compound too.
I wholeheartedly agree with this post.
I think there has been a bit of over-reaction to recent events. I don’t think the damage is that bad, and to some degree I think we’ve just been unlucky. Maybe we need to do some things differently (e.g. try to project less of an air of certainty, which many critics seem to perceive), but we should also beware the illusion of control.
That was sad to read :(
You say, in effect, “not that centralised”, but, from your description, EA seems highly centralised.
Your argument that it’s not centralised seems to be that EA is not a single legal entity.
These are two examples, but I generally didn’t feel like your reply really engaged with Will’s description of the ways in which EA is decentralised, or with his attempt to draw finer distinctions in decentralisation. It felt a bit like you just said “no, it is centralised!”.
democracy has the effect of decentralising power.
I don’t agree with this at all. IMO democracy often has the opposite effect, and many decentralised communities (e.g. the open-source community) have zero democracy. But I think making this case properly would need a full post...
If we think of centralisation just on a spectrum of ‘decision-making power’, as you define it above (how few people determine what happens to the whole), EA could hardly be more centralised!
This seems false to me. If the only kind of decision you think matters is funding decisions, then sure, those are somewhat centralised. But that’s not everything, and it’s far from clear to me why you think that’s the only thing that matters.
For example, as Will discusses in the post, even amongst the individual EA orgs:
There are many of them, and they are small
They basically all do their own strategy and planning
Sure doesn’t look like centralised decision-making to me. You could say “for any decision, OP could threaten to refuse to fund an organization unless it made the choice that OP wants, therefore actually OP has the decision-making power”. But this just doesn’t seem to me like a good description of reality: OP doesn’t behave like that, and in practice most decisions are made in a decentralised fashion.
Yet, de facto, if we think about where power, in fact, resides, it is concentrated in a very small group. If someone sets up an invite-only group called the ‘leaders’ forum’, it seems totally reasonable for people to say “ah, you guys run the show”.
This equivocates between saying that power does reside with a small group, and saying that we have created the perception that power resides with a small group. As I already argued, I think the former is false, and Will explicitly agrees with the latter and thinks we should change it.
My overall impression of your post is that you think the non-diversity of funding is bad (which I think we all agree on), but also that, for some reason, funding is the only thing that matters when it comes to whether we describe EA as centralised or not.
Whereas to me EA looks like a pretty decentralised movement that currently happens to have a dominant funder. Moreover, we’re lucky in that our funder doesn’t (AFAIK) throw their weight around too much.
I think the average EA might underestimate the extent to which being visible in EA (e.g. speaking at EAG) is seen as a burden rather than an opportunity.
Having read this, I’m still unclear on what the benefit of your restructuring of CEA would be. It’s not a decentralising move (if anything it seems like the opposite to me); it might be a legitimising move, but is lack of legitimacy an actual problem that we have?
The main other difference I can see is that it might make CEA more populist, in the sense of following the will of the movement’s members more closely. Maybe I’m as much of an instinctive technocrat as you are a democrat, but it’s far from clear to me that that would be good, nor that it solves a problem we actually have.
Yes, I think there’s a lot of sliding between “decentralised” and “democratic” even though these have pretty much nothing to do with each other.
As a pretty clear example, the open source software community is extremely decentralised but has essentially zero democracy anywhere.
I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.
IME decentralised groups are not usually more transparent; if anything the opposite, since they often have fragmented communication, much of it person-to-person.
I would love the community to be more supportive in ways that would help with that. Things I would like:
Accept that new projects may not be that great at first; encourage them to grow, and maybe even chip in as well as criticising.
Accept and even celebrate failure.
Even more incubator-style things. I love what CE does here.
A few thoughts:
The level of quality and professionalism has risen since the old days, which makes it intimidating to contribute your own half-assed thing.
Doing things does usually require time, and a lot of the early doing was done by students (and still is!). It’s much harder to be that involved when you’re older unless you do it professionally. These days we have a lot more non-students!
I think all Will’s stuff about the perceived allocation of responsibility and control has a big impact.
I’m not super convinced that the fundraising situation is tougher? It seems much easier to me than it used to be. Especially for small things, we have a decent range of funders.
I also had this reaction, and I think it was mostly just the phrasing. “Break up OP” suggests that we have the power or right to do that, which we definitely don’t. I think if the post said “OP could consider breaking itself up” it wouldn’t sound like that.
I think the proposal to have significant regrantors is a more approachable way of achieving something similar, in that it delegates control of some funds.
100% agree. I think it is almost always better to be honest, even if that makes you look weird. If you are worried about optics, “oh yeah, we say this to get people in but we don’t really believe it” looks pretty bad.