Just wanted to flag that I personally believe
- most of Cremer’s proposed institutional reforms are either bad or zero-impact; this was the case when they were proposed, and is still true after updates from FTX
- it seems clear the proposed reforms would not have prevented or influenced the FTX fiasco
- I think part of Cremer’s reaction after FTX is not epistemically virtuous; “I was a vocal critic of EA”, “there is an EA-related scandal”, “I claim to be vindicated in my criticism” is not sound reasoning when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you are willing to cooperate in being presented as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.

edit: “present yourself as” replaced with “are willing to cooperate in being presented”
I don’t think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given that the above post has significantly more substance and recommendations to it, yet this comment focuses solely on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe’s recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flack she’s taken for this has been disproportionate and sends the wrong signal to others about dissenting.
I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive. But on top of that, it’s not the type of claim that should be made lightly, or without a lot of evidence that this really is the person’s agenda:
- I think part of Cremer’s reaction after FTX is not epistemically virtuous; “I was a vocal critic of EA”—“there is an EA-related scandal”—“I claim to be vindicated in my criticism” is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you present yourself as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.
This discussion here made me curious, so I went to Zoe’s twitter to check out what she’s posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people’s retweets saying Zoe “called it”) is this tweet from last August:
EA seems to me to be unwilling to implement institutional safeguards against fuck-ups. They mostly happily rely on a self-image of being particularly well-intentioned, intelligent, precautious. That’s not good enough for an institution that prizes itself in understanding tail-risk.
That seems legitimate to me. We can debate whether institutional safeguards would have been the best action against FTX in particular, but the more general point of “EAs have a blind spot around tail risks due to an elated self-image of the movement” seems to have gotten a “+1” with the FTX collapse (and EAs not having seen it coming despite some concerning signs).
There’s also a tweet by a journalist that she retweeted:
3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized.
That particular wording sounds suspiciously like it was tailored to the events with hindsight, in which case retweeting without caveats is potentially slightly suboptimal. But knowing what we know now, I’d indeed be worried about a hypothetical world where FTX hadn’t collapsed! Where Sam’s vision of things and his attitude to risks gets to have such a huge degree of influence within EA. (That said, it’s not like we can just wish money into existence from diversified sources of funding – in Zoe’s document, I saw very little discussion of the costs of cutting down on “centralized” funding.)
In any case, I agree with Jan’s point that it would be a massive overreaction to now consider all of Zoe’s criticisms vindicated. In fact, without a more detailed analysis, I think it would even be premature to say that she got some important details exactly right (especially when it comes to suggestions for change).
Even so, I think it’s important to concede that Zoe gets significant credit here at least directionally, and that’s an argument for people to (re-)engage with her suggestions if they haven’t already done so or if there’s a chance they may have been a bit closed off to them the last time.
(My own view remains skeptical, though, as I explained here.)
3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized
I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it’s crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)
That said, I was a regrantor, so I am biased. And even aside from the tremendous damage caused by the foundation needing to back out, and the possibility of clawbacks, the fact that at least some of the money being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.
This seems right within longtermism, but, AFAIK, the vast majority of FTX’s grantmaking was longtermist. This decision to focus on longtermism seemed very centralized and might otherwise have shaped the direction and composition of EA disproportionately towards longtermism.
If FTX’s decentralised model had been proven successful for long-termism, I suspect it would have influenced the way funding was handled for other cause areas as well.
In case my wording was confusing, I meant that a community shift towards longtermism seems to have been decided by a small number of individuals (FTX founders). I’m not talking about centralization within causes, but centralization in deciding prioritization between causes.
Also, I’m skeptical that global health and poverty or animal welfare would shift towards very decentralized regranting without a massive increase in available funding first, because
some of the large cost-effective charities that get funded are still funding-constrained, and so the bars to beat seem better defined, and
there already are similar experiments on a smaller scale through the EA Funds.
Yeah, I got that, I was just mentioning an effect that might have partially offset it.
I agree that a small number of individuals decided that the funds should focus on longtermism, although this is partially offset by how the EA movement was shifting in that direction anyway.
Yes, that seems correct.
I think you lack part of the context, where Zoe seems to claim to the media that the suggested reforms would have helped:
- this Economist piece, mentioning Zoe about 19 times
- WP
- this New Yorker piece, with Zoe explaining “My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying … “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
- this twitter thread
Zoe explaining “My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying … “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
To be fair, this seems like a reasonable statement on Zoe’s part:
If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says “likely” which is obviously not particularly specific, but this would fit my definition of likely.
If EA had diversified our portfolio to be less reliant on a few central donors, this would also (quite obviously) have meant the crash had less impact on EA overall, so this also seems true.
Basically, as other comments have stated, you do little to actually say why these proposed reforms are, as you initially said, bad or would have no impact. I think if you’re going to make a statement like:
“it seems clear proposed reforms would not have prevented or influenced the FTX fiasco”
You need to actually provide some evidence or reasoning for this, as clearly lots of people don’t believe it’s clear. Additionally, it feels unfair to call Zoe “not epistemically virtuous” when you’re making quite bold claims without any reasoning laid out, and then saying it would be too time-intensive to explain your thinking.
For example, you say here that you’re concerned about what democratisation actually looks like, which is a fair point and a useful object-level argument, but this seems more like a question of implementation rather than a sign that the actual idea is necessarily bad.
If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says “likely” which is obviously not particularly specific, but this would fit my definition of likely.
Why do you think so? Whistleblowers inside FTX would have been protected under US law, and US institutions like the SEC offer them multi-million-dollar bounties. Why would an EA scheme create a stronger incentive?
Also: even if the possible whistleblowers inside FTX were EAs, whistleblowing about fraud at FTX directed not toward authorities like the SEC, but toward some EA org scheme, would have been a particularly bad idea. The EA scheme would not be equipped to deal with this and would need to basically immediately forward it to the authorities, leading to an immediate FTX collapse. The main difference would be putting EAs at the centre of the happenings.
If EA had diversified our portfolio to be less reliant on a few central donors, this would also (quite obviously) have meant the crash had less impact on EA overall, so this also seems true.
I think the ‘diversified our portfolio’ frame is subtly misleading, because it’s usually associated with investments or holdings, but here it is applied to donors. You can’t diversify donations the same way. Also: assume you recruit donors uniformly, no matter how wealthy they are. Most of the wealth will still be with the wealthiest minority, basically because of how the wealth distribution looks. An attempt to diversify the donation portfolio toward smaller donors … would look like GWWC?
The only real option for having much less FTX money in EA was to not accept that much FTX funding. That was a tough call at the time, in part because the FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying away from OP.
The only real option for having much less FTX money in EA was to not accept that much FTX funding. That was a tough call at the time, in part because the FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying away from OP.
And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept “unapproved” funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they want to take the money or not. I don’t see how Cremer’s proposal could be effective without a blacklist to enforce community will against anyone who chose to take the money anyway, and that gives whoever maintains the blacklist great power (which is contrary to Cremer’s stated aims).
The reality, perhaps unfortunate, is that charities need donors more than donors need specific charities or movements.
Also: assume you recruit donors uniformly, no matter how wealthy they are. Most of the wealth will still be with the wealthiest minority, basically because of how the wealth distribution looks. An attempt to diversify the donation portfolio toward smaller donors … would look like GWWC?
It depends on how you define the wealthiest minority, but if you mean billionaires, the majority of philanthropy is not from billionaires. EA has been unusually successful with billionaires. That means if EA mean-reverts, perhaps by going mainstream, the majority of EA funding will not be from billionaires. CEA deprioritized GWWC for several years; I think if they had continued to prioritize it, funding would have gotten at least somewhat more diversified. Also, I find that when talking with midcareer professionals, it’s much easier to mention donations than switching their career. So I think that more emphasis on donations from people of modest means could help EA diversify with respect to age.
If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says “likely” which is obviously not particularly specific, but this would fit my definition of likely.
Why do you believe this? To me, FTX fits more in the reference class of financial firms than EA orgs, and I don’t see how EA whistleblower protections would have helped FTX employees whistleblow (I believe that most FTX employees were not EAs, for example). And it seems much more likely to me that an FTX employee would be able to whistle-blow than an EA at a non-FTX org.
Also, my current best guess is that only the top 4 at FTX/Alameda knew about the fraud, and I have not come across anyone who seems like they might have been a whistleblower (I’d love to be corrected on this though!)
I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer’s proposed institutional reforms, many of which seem to me obviously worth doing … Also, insofar as she’d be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
I think it’s fine for a comment to engage with just a part of the original post. Also, if a post advocates for giving someone substantial power, it seems fair to comment on the media presence of that person.
Have you seen any actual detailed analysis of how the proposal would have influenced FTX? I did not. I’m sceptical of the helpfulness. For example, with whistleblower protections:
- Many EA orgs have whistleblower protection. Empirically, it seems to have had zero impact on FTX, and the damage to the orgs seems independent of it.
- There are already laws and incentives for reporting wire fraud. If there was someone in the know within FTX considering whistleblowing, then, if I understand the SEC and CFTC comments correctly, they would have been eligible for both protection and a bounty in the millions of dollars, and possibly avoided other bad things happening to them, such as going to jail. Why would some EA bounty create a stronger incentive?
- My impression is the original whistleblowing protection proposal was implicitly directed toward “EA charities”, not “companies of EA funders”.
But I think the amount of flack she’s taken for this has been disproportionate and sends the wrong signal to others about dissenting.
Can you link to something specific? I haven’t found any specific critical post or comment mentioning her on the forum since Nov.
In contrast, after a Google News search, I think the opposite is closer to reality: media coverage of Zoe’s criticism is uncritically positive, and the one taking flak is MacAskill. While I’m sometimes critical of Will, the narrative that he is at fault for not implementing Zoe’s proposals seems completely unfair to me.
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
- Set up whistleblower protection schemes for members of EA organisations
- Transparent listing of funding sources on each website of each institution
- Detailed and comprehensive conflict of interest reporting in grant giving
I’ll note that many EA orgs already have whistleblower protection policies in place and that there are also various whistleblowing protection laws in many jurisdictions (including the US and the UK) which I assume any EA affiliated organization or employee would have to follow.
I can’t speak to orgs, but the scope of legal protection for whistleblowing by US private employees is quite narrow; I think people are calling for something much more robust. Also, I believe those protections often only cover an organization’s actions against current employees, not non-employer actions like blacklisting the whistleblower from receiving grants or trashing them to potential future employers.
Unfortunately not in detail; it’s a lot of work to go through the whole list and comment on every proposal. My claim is not ‘every item on the list is wrong’ but ‘the list is wrong on average’, so commenting on three items does not resolve the possible disagreement.
To discuss something object-level, let’s look at the first one:
‘Whistleblower protection schemes’ sound like a good proposal on paper, but the devil is in the details:
1. Actually, at least in the EU and UK, whistleblowers pointing out things like fraud or other illegal activity are protected by the law. The protection offered by the law is probably stronger than an internal org policy in some cases, and does not apply in other cases. Also, in some countries there are regulations on what whistleblower protections you should have in place; I assume orgs follow these where they apply.
2. Many orgs where it makes sense have some policies/systems in this direction, but not necessarily under the name of ‘whistleblower protection’.
3. The majority of EA orgs are quite small. If you have a team of e.g. four people, a whistleblower protection scheme does not work the same way as in an org with four hundred people. In my view, what often makes more sense is having external contacts for all sorts of issues, e.g. the community health team.
4. Overall, I think the worst situation is often when you have a system which seemingly does something but actually does not. For example, a campus mental health support system which is not actually qualified to help with mental health problems, but keeps track of who reached out to it, is probably worse than nothing.
My bottom line is something like: a ‘whistleblower protection scheme’ may be good to implement in some cases, and some orgs have them, but it is too bureaucratic in other cases. A blanket policy requiring every org to have a formal scheme, no matter its size or circumstances, seems bad.
The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through for example threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities.
The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g. being fired by their employers in some situations.
The US SEC whistleblowing program on the other hand incentivizes whistleblowing by providing financial awards, some 10-30% of sanctions collected, for information that leads to significant findings. This policy, for the US, has a quickly estimated return of 5-10x through first order effects, and possibly many times that in second order effects through stopping fraud and reducing the expected value of fraud in general. The SEC gives several awards each month. A report about the program is available here for those interested.
Whistleblower protections tend to be more bureaucratic and are already covered by US and EU legislation to such an extent that improving them seems difficult. Whistleblower incentive mechanisms meanwhile seem much more worthwhile to investigate, because such a mechanism could be operated by a small centralized function without adding any new bureaucracy to existing organisations. I suspect that even a minimal whistleblower incentive* mechanism would reduce risks and increase trust within the EA diaspora by increasing the probability that we become aware of risky situations before they snowball into larger crises.
(*incentives here might not mean financial awards like in the SEC program, but something like helping the whistleblower find a new job, or taking the responsibility for investigating the information further instead of expecting the whistleblower to do it. I’d guess that most whistleblowing reports in EA, if any, would involve junior workers who are afraid of losing their income or status in the community, or simply do not have the energy, network, or skills to address the issue directly themselves.)
“It seems clear proposed reforms would not have prevented or influenced the FTX fiasco” doesn’t really engage with the original poster’s argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer’s proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.
Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that’s not really relevant to the discussion of the original post.
In my reading, the OP updated toward the position “it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail” based on FTX causing large economic damage. One of the conjectures based on this is “Implement Carla Zoe Cremer’s Recommendations”.
I’m mostly arguing against the position that ‘the update of probability mass on EA community building being negative due to FTX evidence is a strong reason to implement Carla Zoe Cremer’s Recommendations’
For comparison: I held the position that effective altruist community-building activities could be net-negative in impact before FTX, and did not update much on the FTX evidence. In my view, the main reason for plausible negativity is that EA seems much better at “finding places of high leverage”, where you can influence the trajectory of the world a lot, than at figuring out what to actually do in those places. In my view, interventions against the risk include an emphasis on epistemics, pushing against local consequentialist reasoning, and pushing against free-floating “community building” where people not working on the object level try mostly to bring in a lot of new people.
Personally, I think implementing Zoe Cremer’s recommendations as a whole either does not impact the largest real risks, or would make the negative outcomes more likely. Repeated themes in the recommendations are ‘introduce bureaucracy’ and ‘decide democratically’. I don’t think bureaucracies are wise, and with ‘democratizing’ things the big question is ‘who is the demos?’.
I strongly downvoted this for not making any of the reasoning transparent and thus contributing little to the discussion beyond stating that “Jan believes this”.
This could sometimes be reasonable for the purpose of deferring to authority, but that is riskier in this case because Jan has severe conflicts of interest due to being employed by a core EA organisation and being a stakeholder in for example a ~$4.7 million grant to buy a chateau.
When the discussion is roughly at the level of ‘seems to me obviously worth doing’, it seems to me fine to state dissent of the form ‘often seems bad or not working to me’.
Stating an opinion is not an ‘appeal to authority’. I think in many cases it’s useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and often, and a forum where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.
I’m curious in which direction you think the supposed ‘conflict of interest’ points:
I’m employed at the same institution (FHI) where Zoe works, and we were part of the same RSP program (although in different cohorts). This mostly creates an incentive not to criticize Zoe’s ideas publicly, and would preclude me from e.g. reviewing Zoe’s papers, because of favourable bias.
Also … I think being a stakeholder in a grant to buy a cheap and cost-saving events venue has little to do with the topics in question; it mostly creates an incentive to stay silent, because by engaging critically with the topic you increase the risk that someone will summon an angry twitter mob to attack you.
Overall … it’s probably worth noticing that people like you, strongly downvoting my comment (now at karma 5, yours at 12), are the side actually trying to silence the critic here, while agreement with “it is surprising that some of Carla Zoe Cremer’s reforms haven’t been implemented” or vague criticisms of “EA leadership” are what’s in vogue on the EA forum now.
I don’t think (almost) anyone is trying to silence you here; the agreevotes on your top comment are pretty high and I’d expect a silencing campaign to target both. That suggests to me that the votes are likely due to what some perceive as an uncharitable tone toward Zoe, or possibly a belief that having the then-top comment be one that focuses heavily on Zoe’s self-portrayal in the media risks derailing discussion of the original poster’s main points (Zoe’s potential involvement being a subpoint to a subpoint).
Just wanted to flag that I personally believe
- most of Cremer’s proposed institutional reforms are either bad or zero impact, this was the case when proposed, and is still true after updates from FTX
- it seems clear proposed reforms would not have prevented or influenced the FTX fiasco—
I think part of Cremer’s reaction after FTX is not epistemically virtuous; “I was a vocal critic of EA”—“there is an EA-related scandal”—“I claim to be vindicated in my criticism” is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you
present yourself asare willing to cooperate in being presented as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.edit:
present yourself asreplaced with are willing to cooperate in being presentedI don’t think this is a fair comment, and aspects of it reads more of a personal attack rather than an attack of ideas. This feels especially the case given the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe’s recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flack she’s taken for this has been disproportionate and sends the wrong signal to others about dissenting.
I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive. But on top of that, it’s not the type that should be made lightly or without a lot of evidence that that is the person’s agenda (bold for emphasis):
This discussion here made me curious, so I went to Zoe’s twitter to check out what she’s posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people’s retweets saying Zoe “called it”) is this tweet from last August:
That seems legitimate to me. (We can debate whether institutional safeguards would have been the best action against FTX in particular, but the more general point of “EAs have a blind spot around tail risks due to an elated self-image of the movement” seems to have gotten a “+1″ score with the FTX collapse (and EAs not having seen it coming despite some concerning signs).
There’s also a tweet by a journalist that she retweeted:
That particular wording sounds suspiciously like it was tailored to the events with hindsight, in which case retweeting without caveats is potentially slightly suboptimal. But knowing what we know now, I’d indeed be worried about a hypothetical world where FTX hadn’t collapsed! Where Sam’s vision of things and his attitude to risks gets to have such a huge degree of influence within EA. (That said, it’s not like we can just wish money into existence from diversified sources of funding – in Zoe’s document, I saw very little discussion of the costs of cutting down on “centralized” funding.)
In any case, I agree with Jan’s point that it would be a massive overreaction to now consider all of Zoe’s criticisms vindicated. In fact, without a more detailed analysis, I think it would even be premature to say that she got some important details exactly right (especially when it comes to suggestions for change).
Even so, I think it’s important to concede that Zoe gets significant credit here at least directionally, and that’s an argument for people to (re-)engage with her suggestions if they haven’t already done so or if there’s a chance they may have been a bit closed off to them the last time.
(My own view remains skeptical, though, as I explained here.)
I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it’s crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)
That said, first, I was a regrantor, so I am biased, and even aside from the tremendous damage caused by the foundation needing to back out and the possibility of clawbacks, the fact that at least some of the money which was being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.
This seems right within longtermism, but, AFAIK, the vast majority of FTX’s grantmaking was longtermist. This decision to focus on longtermism seemed very centralized and might otherwise have shaped the direction and composition of EA disproportionately towards longtermism.
If FTX’s decentralised model had proven successful for longtermism, I suspect it would have influenced the way funding was handled for other cause areas as well.
In case my wording was confusing, I meant that a community shift towards longtermism seems to have been decided by a small number of individuals (FTX founders). I’m not talking about centralization within causes, but centralization in deciding prioritization between causes.
Also, I’m skeptical that global health and poverty or animal welfare would shift towards very decentralized regranting without a massive increase in available funding first, because
some of the large cost-effective charities that get funded are still funding-constrained, and so the bars to beat seem better defined, and
there already are similar experiments on a smaller scale through the EA Funds.
Yeah, I got that, I was just mentioning an effect that might have partially offset it.
I agree that a small number of individuals decided that the funds should focus on longtermism, although this is partially offset by the fact that the EA movement was shifting in that direction anyway.
Yes, that seems correct.
I think you lack part of the context, where Zoe seems to claim to the media that the suggested reforms would have helped:
- this Economist piece, mentioning Zoe about 19 times
- WP
- this New Yorker piece, with Zoe explaining “My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying … “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
- this twitter thread
To be fair, this seems like a reasonable statement on Zoe’s part:
If we had incentivised whistle-blowers to come forward about shady things happening at FTX, would we have known about the FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says “likely”, which is obviously not particularly specific, but this would fit my definition of likely.
If EA had diversified our portfolio to be less reliant on a few central donors, this would also (quite obviously) have meant the crash had less impact on EA overall, so this also seems true.
Basically, as other comments have stated, you do little to actually say why these proposed reforms are, as you initially said, bad or would have no impact. I think if you’re going to make a statement like:
You need to actually provide some evidence or reasoning for this, as clearly lots of people don’t believe it’s clear. Additionally, it feels unfair to call Zoe “not epistemically virtuous” when you’re making quite bold claims without any reasoning laid out, and then saying it would be too time-intensive to explain your thinking.
For example, you say here that you’re concerned about what democratisation actually looks like, which is a fair point and a useful object-level argument, but this seems more like a question of implementation than a sign that the idea itself is necessarily bad.
Why do you think so? Whistleblowers inside FTX would have been protected under US law, and US institutions like the SEC offer them multi-million-dollar bounties. Why would an EA scheme create a stronger incentive?
Also: even if the possible whistleblowers inside FTX were EAs, whistleblowing about fraud at FTX directed not toward authorities like the SEC, but toward some EA org scheme, would have been a particularly bad idea. The EA scheme would not be equipped to deal with this and would need to basically immediately forward it to the authorities, leading to an immediate FTX collapse. The main difference would be putting EAs at the centre of the happenings?
I think the ‘diversified our portfolio’ frame is subtly misleading, because it’s usually associated with investments or holdings, but here it is applied to ‘donors’. You can’t diversify donations the same way. Also: assume you recruit donors uniformly, no matter how wealthy they are. Most of the wealth will still be with the wealthiest minority, basically because of what the wealth distribution looks like. An attempt to diversify the donation portfolio toward smaller donors … would look like GWWC?
The only real option for having much less FTX money in EA was to not accept that much FTX funding. That was a tough call at the time, in part because the FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying from OP.
And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept “unapproved” funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they want to take the money or not. I don’t see how Cremer’s proposal could be effective without a blacklist to enforce community will against anyone who chose to take the money anyway, and that gives whoever maintains the blacklist great power (which is contrary to Cremer’s stated aims).
The reality, perhaps unfortunate, is that charities need donors more than donors need specific charities or movements.
It depends on how you define the wealthiest minority, but if you mean billionaires, the majority of philanthropy is not from billionaires. EA has been unusually successful with billionaires. That means if EA mean-reverts, perhaps by going mainstream, the majority of EA funding will not be from billionaires. CEA deprioritized GWWC for several years; I think if they had continued to prioritize it, funding would have become at least somewhat more diversified. Also, when talking with midcareer professionals, I find it much easier to bring up donations than career switching. So I think more emphasis on donations from people of modest means could help EA diversify with respect to age.
Why do you believe this? To me, FTX fits more in the reference class of financial firms than EA orgs, and I don’t see how EA whistleblower protections would have helped FTX employees whistleblow (I believe that most FTX employees were not EAs, for example). And it seems much more likely to me that an FTX employee would be able to whistle-blow than an EA at a non-FTX org.
Also, my current best guess is that only the top 4 at FTX/Alameda knew about the fraud, and I have not come across anyone who seems like they might have been a whistleblower (I’d love to be corrected on this though!)
I was reacting mostly to this part of the post
I think it’s fine for a comment to engage with just a part of the original post. Also, if a post advocates for giving someone substantial power, it seems fair to comment on that person’s media presence.
Overall, it seems to me you are advocating a double standard / selective demand for rigour.
Post-FTX discussion of Zoe’s proposals seems mostly on the level ‘Implement Carla Zoe Cremer’s Recommendations’ or ‘very annoyed this all had to happen before a rethink, given that 10 months earlier, I sat in his office proposing whistleblower protections, transparency over funding sources, bottom-up control over risky donations’ or similar high level supportive comments, never going into details of the proposals, and without any realistic analysis of what would have happened. I expressed the opposite sentiment, clearly marking it as my belief.
Have you seen any actual in-detail analysis of how the proposals would have influenced FTX? I have not. I’m sceptical of their helpfulness; for example, with whistleblower protections...
- Many EA orgs have whistleblower protections. Empirically, they seem to have had zero impact on FTX, and the damage to the orgs seems independent of them.
- There are already laws and incentives for reporting wire fraud. If someone in the know within FTX had been considering whistleblowing, then, if I understand the SEC and CFTC comments correctly, they would have been eligible for both protection and a bounty in the millions of dollars, and possibly avoided other bad outcomes for themselves, such as going to jail. Why would some EA bounty create a stronger incentive?
- My impression is the original whistleblowing protection proposal was implicitly directed toward “EA charities”, not “companies of EA funders”.
Can you link to something specific? I haven’t found any specific critical post or comment mentioning her on the forum since Nov.
In contrast, after a Google News search, I think the opposite is closer to reality: media coverage of Zoe’s criticism is uncritically positive, and the one taking flak is MacAskill. While I’m sometimes critical of Will, the narrative that he is at fault for not implementing Zoe’s proposals seems completely unfair to me.
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
I’ll note that many EA orgs already have whistleblower protection policies in place and that there are also various whistleblowing protection laws in many jurisdictions (including the US and the UK) which I assume any EA affiliated organization or employee would have to follow.
I can’t speak to orgs, but the scope of legal whistleblowing protection for US private employees is quite narrow; I think people are calling for something much more robust. Also, I believe those protections often only cover an organization’s actions against current employees, not non-employer actions like blacklisting the whistleblower from receiving grants or trashing them to potential future employers.
Unfortunately not in detail; it’s a lot of work to go through the whole list and comment on every proposal. My claim is not ‘every item on the list is wrong’, but ‘the list is wrong on average’, so commenting on three items does not resolve the possible disagreement.
To discuss something object-level, let’s look at the first one
‘Whistleblower protection schemes’ sound like a good proposal on paper, but the devil is in the details:
1. Actually, at least in the EU and UK, whistleblowers pointing out things like fraud or other illegal activity are protected by law. The protection offered by the law is probably stronger than an internal org policy in some cases, and does not apply in others. Also, some countries have regulations about what whistleblower protections you must have in place; I assume orgs follow them where they apply.
2. Many orgs where it makes sense have some policies/systems in this direction, though not necessarily under the name ‘whistleblower protection’.
3. The majority of EA orgs are quite small. If you have a team of, say, four people, I don’t think a whistleblower protection scheme works the same way as in an org with four hundred people. In my view, what often makes more sense is having external contacts for all sorts of issues, e.g. the community health team.
4. Overall, I think the worst situation is often when you have a system which seemingly does something but actually does not. For example: a campus mental health support system which is not actually qualified to help with mental health problems, but keeps track of who reached out to it, is probably worse than nothing.
My bottom line is something like … a ‘whistleblower protection scheme’ may be good to implement in some cases, and some orgs have them. But it is too bureaucratic in other cases. A blanket policy requiring every org to have a formal scheme, no matter its size or circumstances, seems bad.
The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through for example threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities.
The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g. being fired by their employers in some situations.
The US SEC whistleblowing program on the other hand incentivizes whistleblowing by providing financial awards, some 10-30% of sanctions collected, for information that leads to significant findings. This policy, for the US, has a quickly estimated return of 5-10x through first order effects, and possibly many times that in second order effects through stopping fraud and reducing the expected value of fraud in general. The SEC gives several awards each month. A report about the program is available here for those interested.
Whistleblower protections tend to be more bureaucratic and are already covered by US and EU legislation to such an extent that improving them seems difficult. Whistleblower incentive mechanisms meanwhile seem much more worthwhile to investigate, because such a mechanism could be operated by a small centralized function without adding any new bureaucracy to existing organisations. I suspect that even a minimal whistleblower incentive* mechanism would reduce risks and increase trust within the EA diaspora by increasing the probability that we become aware of risky situations before they snowball into larger crises.
(*incentives here might not mean financial awards like in the SEC program, but something like helping the whistleblower find a new job, or taking the responsibility for investigating the information further instead of expecting the whistleblower to do it. I’d guess that most whistleblowing reports in EA, if any, would involve junior workers who are afraid of losing their income or status in the community, or simply do not have the energy, network, or skills to address the issue directly themselves.)
“It seems clear proposed reforms would not have prevented or influenced the FTX fiasco” doesn’t really engage with the original poster’s argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer’s proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.
Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that’s not really relevant to the discussion of the original post.
I’m not confident what the whole argument is.
In my reading, the OP updated toward the position “it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail” based on FTX causing large economic damage. One of the conjectures based on this is “Implement Carla Zoe Cremer’s Recommendations”.
I’m mostly arguing against the position that ‘the update of probability mass on EA community building being negative due to FTX evidence is a strong reason to implement Carla Zoe Cremer’s Recommendations’
For comparison: I held the position that effective altruist community-building activities could be net-negative in impact before FTX and did not update much on the FTX evidence. In my view, the main reason for plausible negativity is EA seems much better at “finding places of high leverage” where you can influence the trajectory of the world a lot, than in figuring out what to actually do in those places. In my view, interventions against the risk include emphasis on epistemics, pushing against local consequentialist reasoning, and pushing against free-floating “community building” where people not working on the object level try mostly to bring in a lot of new people.
Personally, I think implementing Zoe Cremer’s Recommendations as a whole either does not impact the largest real risks, or would make the negative outcomes more likely. Repeated themes in the recommendations are ‘introduce bureaucracy’ and ‘decide democratically’. I don’t think bureaucracies are wise, and in ‘democratizing’ things the big question is ‘who is the demos?’.
I strongly downvoted this for not making any of the reasoning transparent and thus contributing little to the discussion beyond stating that “Jan believes this”.
This could sometimes be reasonable for the purpose of deferring to authority, but that is riskier in this case because Jan has severe conflicts of interest due to being employed by a core EA organisation and being a stakeholder in for example a ~$4.7 million grant to buy a chateau.
When the discussion is roughly at the level of ‘seems to me obviously worth doing’, it seems fine to me to state dissent of the form ‘often seems bad or not working to me’.
Stating an opinion is not an ‘appeal to authority’. I think in many cases it’s useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and more often, and one where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.
I’m curious in which direction you think the supposed ‘conflict of interest’ points:
I’m employed at the same institution (FHI) where Zoe works, and we were part of the same RSP program (although in different cohorts). This mostly creates an incentive not to criticize Zoe’s ideas publicly, and would preclude me from e.g. reviewing Zoe’s papers, because of possible favourable bias.
Also … I think that while being a stakeholder in a grant to buy a cheap, cost-saving events venue doesn’t have much to do with the topics in question, it mostly creates an incentive to stay silent, because by engaging critically with the topic you increase the risk that someone will summon an angry twitter mob to attack you.
Overall … it’s probably worth noticing that people like you, strongly downvoting my comment (now at karma 5, yours at 12), are the side actually trying to silence the critic here, while agreement with “it is surprising that some of Carla Zoe Cremer’s reforms haven’t been implemented” or vague criticisms of “EA leadership” are what’s in vogue on the EA Forum now.
I don’t think (almost) anyone is trying to silence you here; the agreevotes on your top comment are pretty high and I’d expect a silencing campaign to target both. That suggests to me that the votes are likely due to what some perceive as an uncharitable tone toward Zoe, or possibly a belief that having the then-top comment be one that focuses heavily on Zoe’s self-portrayal in the media risks derailing discussion of the original poster’s main points (Zoe’s potential involvement being a subpoint to a subpoint).
I disagree with Jan here entirely, but also with you.
First of all, I don’t see what the problem is with commenting one’s opinion; “Reasoning transparency” is a thing that’s only sometimes appropriate.
Second, I wouldn’t call FHI a “core EA organisation” and I frankly don’t see the conflict of interest at all.