I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially.
As an extreme illustrative example, really just intended to get the intuition across: imagine that some substantial fraction of EA leaders were involved in large-scale fraud and continued to plan to commit it (which, to be clear, I don’t have any evidence of). Of course those individuals would update a lot on FTX, but probably along the dimensions of “here are the ways Sam got caught, here is what I really need to avoid doing to not get caught myself”.
It would be very surprising if a crisis like FTX would not cause at least moderately high scores on a question like the one you chart above. The key thing that I would want to see is evidence that the leadership has updated in a direction that will likely prevent future harm, and does not push people further into deceptive relationships with the world.
The concrete list of changes below helps, though as far as I can tell practically none of them have actually been implemented (and the concrete numbers you cite for people who mention them seem quite low, given that 50+ people were at the coordination forum).
Briefly going through them:
“Shore up governance, diversify funding sources, build more robust whistleblower systems, and have more decentralized systems in order to be less reliant on key organizations/people.”
I don’t think much of any funding diversification has occurred (though I do think achieving that is hard). There are no whistleblower systems in place at any major EA orgs as far as I know, and my sense is we are more reliant on a smaller number of people in leadership than we were before (as more people decided to step back due to the conflict and stress that leadership roles have entailed over the past months).
“Create crisis response teams, do crisis scenario planning, have playbooks for crisis communication, and empower leaders to coordinate crisis response.”
I don’t think any such crisis response teams have been created or any crisis scenario planning has been done, at least to my knowledge. I don’t know what people mean by “crisis communication”, though IMO it’s clear that the issue with FTX was not one of EA comms. If people mean “do investigations into bad things that have happened and communicate the results in a credible and verifiable manner”, then I think it’s clear nothing of that sort has occurred for FTX, and it seems like we also rolled extremely low on crisis communication in the OpenAI board crisis.
“Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles.”
I don’t think any such removals have happened, and my sense is tolerance of bad behavior of the type that seems to me most responsible for FTX has gone up (in particular, heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes).
“Invest more in communications for crises, improve early warning/information sharing, and platform diverse voices as EA representatives.”
I don’t think there are any initiatives for that kind of early information sharing. My sense is the rumor mill has gotten less functional rather than more, as the environment in which people act has become more adversarial, though it’s not super clear. But it seems like there are no serious efforts in this space.
“Adopt lower default trust, consult experts sooner, and avoid groupthink and overconfidence in leaders.”
I think this has probably happened implicitly, which I do think is good.
“Recognize the effect of stress on behavior, and be aware of problems with unilateral action and the tendency not to solve collective action problems.”
This one is kind of vague. I don’t know of anything we’ve done that helps here, and I think the OpenAI board situation is at least one point of evidence that people in EA leadership still fall short on this dimension.
“Value integrity and humility, promote diverse virtues rather than specific people, and update strongly against naive consequentialism.”
My sense is that integrity (trying to make sure that your de-facto actions and professed virtues line up, that you are generally open and honest, and that you are willing to stand up for your beliefs) has overall gotten a lot worse, as people have re-emphasized the importance of good PR and optics in the wake of FTX.
Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.
Overall, I don’t think the coordination forum survey is much evidence about good things happening here, and the things that people did want to see have not seen much movement since the coordination forum.
I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems.
I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns
Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower policies that apply to all the projects under their umbrella (CEA, 80,000 Hours, etc.). This is just a normal thing for nonprofits; the IRS asks whether you have one even though it doesn’t strictly require it, and you can look up on a nonprofit’s 990 whether it has such a policy.
Additionally, UK law, state law in many US states, and lots of other countries provide some legal protections for whistleblowers. Legal protection varies by state in the US, but is relatively strong in California.
Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from negative effects of bringing a problem to light. But that seems to be the case in all industries, including in the nonprofit field in general, not something unusual about EA.
I’m not aware of any EA organizations that provide financial rewards for whistleblowers, which seem like they’d be very tricky to administer without creating incentives you don’t want. The main example of financial rewards that I’m aware of is that the US government provides large financial rewards to whistleblowers whose evidence leads to the conviction of some fraud cases.
“Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from negative effects of bringing a problem to light. But that seems to be the case in all industries, including in the nonprofit field in general, not something unusual about EA.”
I think that is correct as far as it goes, but I suspect that the list of things you generally won’t get protection from (from your linked post) is significantly more painful in practice in EA than in most industries.
For example, although individuals dependent on small grants are probably particularly vulnerable to retaliation in ~all industries, that’s practically a much bigger hole in EA than elsewhere. The general unavailability of protection for disclosures about entities you don’t work for is more stifling in fields with a patchwork of mostly small-to-midsize orgs than in (say) the aerospace industry. Funding centralization could make retaliation easier to pull off.
So while the scope of coverage might be similar on paper in EA, it seems reasonably possible that the extent of protection as applied is unusually weak in EA.
“I’m not aware of any EA organizations that provide financial rewards for whistleblowers, which seem like they’d be very tricky to administer without creating incentives you don’t want.”
Agree, although those incentive problems could potentially be mitigated by limiting compensation to losses (e.g., loss of job, grant opportunity, an estimate of lost reputation) incurred due to good-faith whistleblowing activity that met specified criteria.
My understanding is that UK law and state law whistleblower protections are extremely weak and only cover knowledge of literal and usually substantial crimes (including in California). I don’t think any legally-mandated whistleblower protections make much of a difference for the kind of thing that EAs are likely to encounter.
I checked the state of the law in the FTX case, and unless someone knew specifically of clear fraud going on, they would not have been protected, which seems like it makes these protections mostly useless for the things we care about. They also wouldn’t cover e.g. capabilities companies being reckless or violating commitments they made, unless they break some clear law, and even then protections are pretty limited. So I can’t really think of any case, except the most extreme, in which at least the US state protections come into play.
I was not aware of any CEA or 80k whistleblower systems. If they have some, that seems good! Is there any place that has more details on them? (you also didn’t mention them in the article you linked, which I had read recently, so I wasn’t aware of them)
Also, for the record, organizational whistleblower protections seem not that important to me. For example, I care more about having norms against libel suits and other litigious behavior, though the norms for that seem mostly gone, so I expect substantially less whistleblowing of that type in the future. I mostly covered them because I was comprehensively covering the list of things people submitted to the Coordination Forum.
“They also wouldn’t cover e.g. capabilities companies being reckless or violating commitments they made, unless they break some clear law, and even then protections are pretty limited.”
An alternative take on this (I haven’t researched this topic myself): https://forum.effectivealtruism.org/posts/LttenWwmRn8LHoDgL/josh-jacobson-s-quick-takes?commentId=ZA2N2LNqQteD5dE4g
“Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles.”
I don’t think any such removals have happened, and my sense is tolerance of bad behavior of the type that seems to me most responsible for FTX has gone up (in particular, heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes).
I’d like to single out this part of your comment for extra discussion. On the Sam Harris podcast, Will MacAskill named leadership turnover as his main example of post-FTX systemic change; I’d love to know why you and Will seem to be saying opposite things here.
I’d also love to hear from more people whether they agree or disagree with Oliver on these two points:
Was “heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes” one of the EA behaviors that was most responsible for FTX?
Has this behavior increased in EA post-FTX?
So, I think it’s clear that a lot of leadership turnover has happened. However, my sense is that the kind of leadership turnover that has occurred is anti-correlated with what I would consider good. Most importantly, it seems to me that the people in EA leadership who I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn’t live up to their ethical standards, or because they burned out trying to effect change and this recent period has been very stressful (or burned out for other reasons, unrelated to trying to effect change).
Below is a concrete list of leadership transitions I know have occurred, along with judgments of specific individuals. I want to be clear that these are my personal judgments, and I expect lots of people will disagree with me here:
Max Dalton left CEA. My sense is that, despite my many disagreements with him, he was still the best CEO that CEA has had historically, and he seemed to have a genuinely strong interest in acting in high-integrity ways. My understanding is that the FTX stuff burned him out (as well as some of the Owen stuff, though the FTX stuff seemed more important).
He was replaced by Zack, who seems to think that this WaPo piece is a good way to start tackling FTX-related issues (more of my thoughts on that here). Also, in contrast to leadership claims that funding and ideological diversity is important, he is an ex-Open Philanthropy employee with pretty strong ties to the organization.
My sense is that most people in EA leadership would agree with me that Max stepping down and being replaced by Zack is a bad sign for post-FTX EA Reform (but also, my sense is many would think that Zack will do better on other dimensions that others consider more important).
Becca Kagan left the EV board. Given that she did so explicitly because of concerns that people were not taking FTX seriously enough, this seems like an obvious move in a bad direction.
Will MacAskill and Nick Beckstead left the EV board. I do think these are reasonable moves given their historical affiliation with FTX, though my sense is this was mostly overdetermined by the legal constraints, basic COI principles making it very difficult for them to act as board members, and the bad optics of keeping them on the board. But this one does seem real.
Claire Zabel left as head of Open Phil’s capacity-building team. Claire seemed to me to also be among the people at Open Phil with the strongest interest in integrity. I have strong disagreements with the actions her team has taken since FTX, but I have trouble seeing this as a positive development.
Holden stepped back as CEO of Open Philanthropy, replaced by Alexander Berger. This also seems to me like a mostly negative development on the dimension of post-FTX reform. I have disagreements with Holden here, but my sense is he has thought much more about honesty and integrity than Alexander has, and Alexander’s takes on Wytham don’t fill me with that much hope.
Owen was relieved of a lot of his duties and banned from a lot of EA stuff. I think the process followed here was kind of reasonable, but my sense is Owen is one of the people in EA leadership most thoughtful about integrity and honesty, so on this specific dimension it seems like a step backwards (though there having been any kind of investigation that was followed up on is a mild positive sign).
Shakeel left CEA as Head of Comms. I don’t think this has much to do with FTX, though I do think Shakeel did really mess up post-FTX communications at CEA and I view this as a mildly good sign.
I think these are all the major leadership changes I can think of right now. There are very likely more I am forgetting. At least the ones I have here seem to me unlikely to help much with making EA into less of the kind of thing that would cause future things like FTX, though my guess is some people disagree with me on this.
Edit: Also seems like Nicole Ross is stepping down from the EV board. This also seems quite sad to me, she seemed like the person left on the EV board with the strongest moral compass on the relevant dimension. I don’t know the two people who are joining (Patrick Gruban and Johnstuart Winchell), so can’t speak to them, but on the surface having someone from EA Germany seems good.
Given that it appears EVF will soon be sent off to the scrapping yards for disassembly, changes in EVF board composition, for better or worse, may be less salient than they would have been in 2022 or even much of 2023.
So “a lot of leadership turnover has happened” may not be quite as high-magnitude as it would have been had those changes occurred in years past. Furthermore, some of these changes seem less connected to FTX than others, so it’s not clear to me how much turnover has happened as a fairly direct result of FTX. The most related change was Will & Nick leaving the EVF board, but I strongly suspect there was little practical choice there, so it is only weak evidence of some sort of internal change in direction.
All that is to say that I am not sure how much the nominal extent of leadership turnover suggests EA is turning over a new leadership leaf or something.
“Most importantly, it seems to me that the people in EA leadership who I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn’t live up to their ethical standards, or because they burned out trying to effect change and this recent period has been very stressful”
Who on your list matches this description? Maybe Becca if you think she’s thoughtful on these issues? But isn’t that one at most?
Becca, Nicole and Max all stand out as people who I think burned out trying to make things go better around FTX stuff.
Also, Claire leaving her position worsened my expectations of how much Open Phil will do things that seem bad. Alexander also seems substantially worse than Holden on this dimension. I think Holden was on the way out anyway, but my sense was Claire found the FTX-adjacent work very stressful and that played a role in her leaving (I don’t think she agrees with me on many of these issues, but I nevertheless trusted her decision-making more than others in the space).
What are you referring to when you say “naive consequentialism”?[1] Because I’m not sure it’s what others reading this might take it to mean.
Like you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects that it’s had, such as reinforcing negative views of EA etc. Can both the purchase and the sale be a case of NC? Are they the same kind of thing?
So I’m not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I’m not quite sure I am either.
[1] Both here and in this other example, for instance.
The issue is that there are degrees of naiveness. Oliver’s view, as I understand it, is that there are at least three positions:
Maximally Naive: Buy nice event venues, because we need more places to host events.
Moderately Naive: Don’t buy nice event venues, because it’s more valuable to convince people that we’re frugal and humble than it is valuable to host events.
Non-Naive: Buy nice event venues, because we need more places to host events, and the value of signaling frugality and humility is in any case lower than the value of signaling that we’re willing to do weird and unpopular things when the first-order effects are clearly positive. Indeed, trying to look frugal here may even cause more harm than benefit, since:
(a) it nudges EA toward being a home for empty virtue-signalers instead of people trying to actually help others, and
(b) it nudges EA toward being a home for manipulative people who are obsessed with controlling others’ perceptions of EA, as opposed to EA being a home for honest, open, and cooperative souls who prize doing good and causing others to have accurate models over having a good reputation.
Optimizing too hard for reputation can get you into hot water, because you’ve hit the sour spot of being too naive to recognize that many others can see what you’re doing and discount your signals accordingly, but not naive enough to just blithely do the obvious right thing without overthinking it.
There are obviously cases where reputation matters for impact, but many people fall into the trap of fixating on reputation when they lack the skill to take into account enough higher-order effects.
(Of course, the above isn’t the only reason people might disagree on the utility of event venues. If you think EA is mainly bottlenecked on research and ideas, then you’ll want to gather people together to solve problems and share their thoughts. If you instead think EA’s big bottleneck is that we aren’t drawing in enough people to donate to GiveWell top charities, then you should think events are a lot less useful, unless maybe it’s a very large event targeted at drawing in new people to donate.)
I think this captures some of what I mean, though my model is also that the “Maximally naive” view is not very stable, in that if you are being “maximally naive” you do often end up just lying to people (because the predictable benefits from lying to people outweigh the predictable costs in that moment).
I do think being “maximally naive” combined with strong norms against deception and in favor of honesty can work, though in general people want good reasons for following norms, and arguing for honesty requires some non-naive reasoning.
‘Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.’
This gives me the same feeling as Rebecca’s original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing.
I don’t particularly feel like my knowledge here is confidential, it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn’t feel that load-bearing to me.
This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0