I haven’t heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I’ll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.
Obviously this argument may not be as compelling as what an actual proponent of the opposing view would say, and I’d guess I’m missing at least one key consideration here, so treat this as a mere conversation-starter.
Hypothetical EA: Why isn’t EV’s 2023 investigation enough? You want us to investigate; well, we investigated.
Me: That investigation looked only at legal risk to EV. Everything I’ve read (and everything I’ve heard privately) suggests that it wasn’t at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.
I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there was sharp disagreement, within EA leadership, long before the FTX implosion, about how we should handle SBF.
Hypothetical EA: Look, I know there’s been no capital-I “Investigation”, but plenty of established EAs have poked around at dinner parties and learned a lot of the messy complicated details of what happened. My own informal poking around has convinced me that no EAs outside FTX leadership did anything super evil or Machiavellian. The worst you can say is that they muddled along and had miscommunications and brain farts like any big disorganized group of humans, and were a bit naively over-trusting.
Me: Maybe! But scattered dinner conversation with random friends and colleagues, with minimal follow-up or cross-checking of facts, isn’t the best medium for getting an unbiased picture of what happened. People skew the truth, withhold info, pass the blame ball around. And you like your friends, so you’re eager to latch on to whatever story shows they did an OK job.
Perhaps your story is true, but we shouldn’t be scared of checking, applying the same level of rigor we readily apply to everything else we’re doing.
The utility of this doesn’t require that any EAs be Evil. A postmortem is plenty useful in a world where we were “too trusting” or were otherwise subject to biases in how we thought, or how we shared information and made group decisions — so we can learn from our mistakes and do better next time.
And if we’ve historically been “too trusting”, it seems doubly foolish to err on the side of trusting every individual, institution, and process involved in the EA-SBF debacle, and write them a preemptive waiver for all the errors we’re studiously avoiding checking whether they’ve made.
Hypothetical EA: Look, there’s just no reason to use SBF in particular for your social experiment in radical honesty and perfect transparency. It was to some extent a matter of luck that SBF succeeded as well as he did, and that he therefore had an opportunity to cause so much harm. If there were systemic biases in EA that caused us to err here, then those same biases should show up in tons of other cases too.
The only reason to single out the SBF case in particular and give it 1000x more attention than everything else is that it’s the most newsworthy EA error.
But the main effect of this is to inflate and distort minor missteps random EA decision-makers made, bolstered by the public’s hindsight bias and cancel culture and by journalists’ axe-grinding, so that the smallest misjudgments an EA makes look like horrific unforgivable sins.
SBF is no more useful for learning about EA’s causal dynamics than any other case (and in fact SBF is an unusually bad place to try to learn generalizable lessons, because the sky-high stakes will cause people to withhold key evidence and/or bend the truth toward social desirability); it’s only useful as a bludgeon, if you came into all this already sure that EA is deeply corrupt (or that particular individuals or orgs are), and you want to summon a mob to punish those people and drive them from the community.
(Or, alternatively, if you’re sad about EA’s bad reputation and you want to find scapegoats: find the specific Bad EAs and drive them out, to prove to the world that you’re a Good EA and that EA-writ-large is now pure.)
Me: I find that argument somewhat compelling, but I still think an investigation would make sense.
First, extreme cases can often illustrate important causal dynamics that are harder to see in normal cases. E.g., if EA has a problem like “we fudge the truth too much”, this might be hard to detect in low-stakes cases where people have less incentive to lie. People’s behavior when push comes to shove is important, given the huge impact EA is trying to have on the world; and SBF is one huge instance where push came to shove and our character was really tested.
And, yes, some people may withhold information more because of the high stakes. But others will be much more willing to spend time on this question because they recognize it as important. If nothing else, SBF is a Schelling point for us all to direct our eyes at the same thing simultaneously, and see if we can converge on some new truths about the world.
Second, and moving away from abstractions to talk about the specifics of this case: My understanding is that a bunch of EAs tried to warn the community that SBF was extremely shady, and a bunch of other EAs apparently didn’t believe the warnings, or didn’t want those warnings widely shared even though they believed them.
“SBF is extremely shady” isn’t knowledge that FTX was committing financial fraud, and shouting “SBF is extremely shady” from the hills wouldn’t necessarily have prevented the fraud from happening. But there’s some probability it might have been the tipping point at various important junctures, as potential employees and funders and customers weighed their options. And even if it wouldn’t have helped at all in this case, it’s good to share that kind of information in case it helps the next time around.
I think it would be directly useful to know what happened to those warnings about SBF, so we can do better next time. And I think it would also help restore a lot of trust in EA (and a lot of internal ability for EAs to coordinate with each other) if people knew what happened — if we knew which thought leaders or orgs did better or worse, how processes failed, how people plan to do better next time.
I recognize that this will be harder in some ways with journalists and Twitter users breathing down your necks. And I recognize that some people may suffer unfair scrutiny and criticism because they were in the wrong place at the wrong time. To some extent I just think we need to eat that cost; when you’re playing chess with the world and making massively impactful decisions, that comes with some extra responsibility to take a rare bit of unfair flak for the sake of being able to fact-find and orient at all about what happened. Hopefully the fact that some time has passed, and that we’re looking at a wide variety of people and orgs rather than a specific singled-out individual, will mitigate this problem.
If FTX had been a total bolt from the blue, that would be one thing. But apparently there were rather a lot of EAs who thought SBF was untrustworthy and evil, and had lots of evidence on hand to cite, at the exact same time 80K and Will and others were using their megaphones to broadcast that SBF was an awesome EA hero. I don’t know that 80K or Will in particular are the ones who fucked up here, but it seems like somebody fucked up in order for this perception gap to exist and go undiscussed.
I understand people having disagreements about someone’s character. Hindsight bias is a thing, and I’m sure people had reasons at the time to be skeptical of some of the bad rumors about SBF. But I tend to think those disagreements should be things that are argued about rather than kept secret. Especially if the secret conversations empirically have not resulted in the best outcomes.
Hypothetical EA: I dunno, this whole “we need a public airing out of our micro-sins in order to restore trust” thing sounds an awful lot like the exact “you’re looking for scapegoats” thing I was warning about.
You’re fixated on this idea that EAs did something Wrong and need to be chastised and corrected, like we’re perpetrators alongside SBF. On the contrary, I claim that the non-FTX EAs who interacted the most with Sam should mostly be thought of as additional victims of Sam: people who were manipulated and mistreated, who often saw their livelihoods threatened as a result and their life’s work badly damaged or destroyed.
The policies you’re calling for amount to singling out and re-victimizing many of Sam’s primary victims, in the name of pleasant-sounding abstractions like Accountability — abstractions that have little actual consequentialist value in this case, just a veneer of “that sounds nice on paper”.
Me: It’s unfortunately hard for me to assess the consequentialist value in this case, because no investigation has taken place. I’ve gestured at some questions I have above, but I’m missing most of the pieces about what actually happened, and some of the unknown unknowns here might turn out to swamp the importance of what I know about. It’s not clear to me that you know much more than me, either. Rather than pitting your speculation against mine, I’d rather do some actual inquiry.
Hypothetical EA: I think we already know enough, including from the legal investigation into Sam Bankman-Fried and who was involved in his conspiracy, to make a good guess that re-victimizing random EAs is not a useful way for this movement to spend its time and energy. The world has many huge problems that need fixing, and it’s not as though EA’s critics are going to suddenly conclude that EAs are Good After All if we spill all of our dirty laundry. What will actually happen is that they’ll cherry-pick and distort the worst-sounding tidbits, while ignoring all the parts you hoped would be “trust-restoring”.
Me: Some EA critics will do that, sure. But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded. They’ll also be reassured to know that we know what happened, vs. blinding ourselves to the facts and to any lessons they might contain.
Or maybe they’ll be horrified because the details are actually awful (ethically, not legally). Part of being honest is taking on the risk that this could happen too. That’s just not avoidable. If we’re not the sort of community that would share bad stuff if it were true, then people are forced to be that much more worried that we’re in fact hiding a bunch of bad stuff.
Hypothetical EA: I just don’t think there’s much crucial information EA leaders are missing, given their informal poking around. You can doubt that, but I don’t think a formal investigation would help much, since people who don’t want to speak now will (if anything) probably be even more tight-lipped in the face of what looks like a witch-hunt.
You say that EAs have a responsibility to jump through a bunch of transparency hoops. But whether or not you agree with my “EAs are victims” frame: EAs don’t owe the community their lives. If you’re someone who made personal sacrifices to try to make the world a better place, that doesn’t somehow come with a gotcha clause where you now have incurred a huge additional responsibility that we’d never impose on ordinary private citizens, to dump your personal life into the public Internet.
Me: I don’t necessarily disagree with that, as stated. But I think particular EAs are signing up for some extra responsibility, e.g., when they become EA leaders and ask for a lot of trust on the part of their community.
I wouldn’t necessarily describe that responsibility as “huge”, because I don’t actually think a basic investigation into the SBF thing is that unusual or onerous.
I don’t see myself as proposing anything all that radical here. I’m even open to the idea that we might want to redact some names and events in the public recounting of what happened, to protect the innocent. I don’t see anything weird about that; what strikes me as puzzling is the complete absence of any basic fact-finding effort (beyond the narrow-scope EV legal inquiry).
And what strikes me as doubly puzzling is that there hasn’t even been a public statement that CEA and others aren’t planning to look into this at all, nor has there been any public argument for that policy — whence this dialogue. It’s as though EAs are just hoping we’ll quietly forget about this pretty major omission, so they don’t have to say anything potentially controversial. That I don’t really respect; if you think this investigation is a bad idea, do the EA thing and make your case!
Hypothetical EA: Well, hopefully my arguments have given you some clues about the (non-nefarious) reasons why EAs might want to quietly let this thing die, rather than giving a big public argument for letting it die. In addition to the obvious fact that folks are just very busy, and more time spent on this means less time spent on a hundred other things.
Me: And hopefully my arguments have helped remind some folks that things are sometimes worth doing even when they’re hard.
All the arguments in the world don’t erase the fact that at the end of the day, we have a choice between taking risks for the sake of righting our wrongs and helping people understand what happened, versus hiding from the light of day and quietly hoping that no one calls us out for retreating from our idealistic-sounding principles.
We have a choice between following the path of least resistance into ever-murkier, ever-more-confusing, ever-less-trusting waters; or taking a bold stand and doing whatever we can to give EAs and non-EAs alike real insight into what happened, and a real capacity to adjust course if and only if some course-changing is warranted.
There are certainly times when the boring, practical, un-virtuous-sounding option really is the right option. I don’t think this is one of those times; I think we need to be better than that this one time, or we risk losing by a thousand cuts some extremely precious things that used to be central to what made EA EA.
… And if you disagree with me about all that, well, tell me why I’m wrong.
I think I agree with Hypothetical EA that we basically know the broad picture:
1. Probably nobody was actually complicit or knew there was fraud; and
2. Various people made bad judgement calls and/or didn’t listen to useful rumours about Sam.
I guess I’m just… satisfied with that? You say:
“But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded.”
… why? None of this seems that important to me? Most of it seems like a matter for the person/org in question to reflect on and improve. Why is it important for “plenty of people” to learn this stuff, given we already know the broad picture above?
I would sum up my personal position as:
We got taken for a ride, so we should take the general lesson to be more cautious of charismatic people with low scruples, especially ones bearing large sums of money.
If you or your org were specifically taken for a ride you should reflect on why that happened to you and why you didn’t listen to the people who did spot what was going on.
EA is compelling insofar as it is about genuinely making the world a better place, i.e. we care about the actual consequences. Just because there are probably no specific people/processes to blame doesn’t mean we should be satisfied with how things are.
There is now decent evidence that EA might cause considerable harm in the world, so we should be strongly motivated to figure out how to change that. Maybe EA’s failures are just the cost of ambition and agency, and come along with the good it does, but I think that’s both untrue and worryingly defeatist.
I care about the end result of all of this, and the fact that we’re okay with some serious Ls happening (and not being willing to fix the root cause of those errors) is concerning.
Random idea:
Maybe we should—after the question of whether to investigate has been discussed in more detail—organize a community-wide vote on whether there should be an investigation?
It’s easy to vote for something you don’t have to pay for. If we do anything like this, an additional fundraiser to pay for it might be appropriate.
Knowing what people think is useful, especially if it’s a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)
Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn’t want to assume that something’s a good idea just because most EAs agree with it; I’d rather focus on the arguments for and against.
“Just focus on the arguments” isn’t a decision-making algorithm, but I think informal processes like “just talk about it and individually do what makes sense” perform better than rigid algorithms in cases like this.
If we want something more formal, I tend to prefer approaches like “delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments” or “subsidize a prediction market to resolve the question” over “just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are”.
The question of a community-wide vote on whether there should be such an investigation may at this point be moot. I have personally offered to begin conducting significant parts of such an investigation myself. Since I made that initial comment, I’ve read several more comments providing arguments against the need for, or desirability of, such an investigation. Having found them unconvincing, I now intend to privately contact several individuals—some in and around the EA movement, others outside of it or no longer participating in the EA community—to pursue that end. A community-wide vote, or some proxy like even dozens of effective altruists trying to talk me out of it, would be unlikely to convince me not to proceed.
People, the downvote button is not a disagree button. That’s not really what it should be used for.
Thanks
Maybe quite a few people don’t like random ideas being shared on the Forum?
I disagree, and in this case I don’t think the forum team should have a say in the matter. Each user has their own interpretation of the upvote/downvote button and that’s ok. Personally I don’t use it as “I disagree” but rather as “this comment shouldn’t have been written”, but there’s certainly a correlation. For instance, I both disagree-voted and downvoted your comment (since I dislike the attempt to police this).