Hey—I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:
The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred that I not publish posts on backwards-looking topics around FTX while the investigation is still in progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.
I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known that it would be delayed this long, I wouldn’t have waited on it before talking about other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful on some of the other issues that are currently active topics of discussion.
Briefly, though, and as I indicated before: I had no idea that Sam and others were misusing customer funds. Since November I’ve thought a lot about whether there were signs of this I really should have spotted, but even in hindsight I don’t think I had reason to suspect that that was happening.
Looking back, I wish I’d been far less trusting of Sam and those who’ve pleaded guilty. Looking forward, I’m going to be less likely to infer that, just because someone has sincere-seeming signals of being highly morally motivated, like being vegan or demonstrating credible plans to give away most of their wealth, they will have moral integrity in other ways, too.
I’m also more wary, now, of having such a high-trust culture within EA, especially as EA grows. This thought favours robust governance mechanisms even more than before (“trust but verify”), so that across EA we can have faith in organisations and institutions, rather than heavily relying on character judgement about the leaders of those organisations.
EA has grown enormously the last few years; in many ways it feels like an adolescent, in the process of learning how to deal with its newfound role in the world. I’m grateful that we’re in a moment of opportunity for us to think more about how to improve ourselves, including both how we work and how we think and talk about effective altruism.
As part of that broader set of reflections (especially around the issue of (de)centralisation in EA), I’m making some changes to how I operate, which I describe, along with some of the other changes happening across EA, in my post on decision-making and decentralisation here. First, I plan to distance myself from the idea that I’m “the face of” or “the spokesperson for” EA; this isn’t how I think of myself, and I don’t think that description reflects reality, but I’m sometimes portrayed that way. I think moving in the direction of clarity on this will better reflect reality and be healthier for both me and the movement.
Second, I plan to step down from the board of Effective Ventures UK once it has more capacity and has recruited more trustees. I found it tough to come to this decision: I’ve been on the board of EV UK (formerly CEA) for 11 years now, and I care deeply, in a very personal way, about the projects housed under EV UK, especially CEA, 80,000 Hours, and Giving What We Can. But I think it’s for the best, and when I do step down I’ll know that EV will be in good hands.
Over the next year, I’ll continue to do learning and research on global priorities and cause prioritisation, especially in light of the astonishing (and terrifying) developments in AI over the last year. And I’ll continue to advocate for EA and related ideas: for example, in September, WWOTF will come out in paperback in the US and UK, and will come out in Spanish, German, and Finnish that month, too. Given all that’s happened in the world in the last few years — including a major pandemic, war in Europe, rapid AI advances, and an increase in extreme poverty rates — it’s more important than ever to direct people, funding and clear thinking towards the world’s most important issues. I’m excited to continue to help make that happen.
I’m curious about ways you think to mitigate against being seen as the face of/spokesperson for EA
Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that—that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)
The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight correlation between what I say to journalists / writers and what they write about. Having others give opening / closing talks at EAG also seems like an easy win.
The ideal is that we build up a roster of EA-aligned public figures. I’ve been spending some time on that this year, providing even more advice / encouragement to potential public figures than before, and connecting them to my network. The last year has made it more challenging though, as there are larger costs to being an EA-promoting public figure than there were before, so it’s a less attractive prospect; at the same time, a lot of people are now focusing on AI in particular. But there are a number of people who I think could be excellent in this position.
I talk a bit more about some of the challenges in “Will MacAskill should not be the face of EA”:
“First, building up a solid roster of EA public figures will take a while—many years, at least. For example, suppose that someone decides to become a public figure and goes down a book-writing path. Writing a book typically takes a couple of years or more, then there’s a year between finishing the manuscript and publication. And people’s first books are rarely huge hits—someone’s public profile tends to build slowly over time. There are also just a few things that are hard and take time to replicate, like conventional status indicators (being a professor at a prestigious university).
Second, I don’t think we’re ever going to be able to get away from a dynamic where a handful of public figures are far more well-known than all others. Amount of public attention (as measured by, e.g. twitter followers) follows a power law. So if we try to produce a lot of potential public figures, just via the underlying dynamics we’ll probably get a situation where the most-well-known person is a lot more well-known than the next most-well-known person, and so on.
(The same dynamic is why it’s more or less inevitable that much or most funding in EA will come from a small handful of donors. Wealth follows a fat-tailed distribution; the people who are most able to donate will be able to donate far more than most people. Even for GiveWell, which is clearly aimed at “retail” donors, 51% of their funds raised came from a single donor — Open Philanthropy.)
It’s also pretty chance-y who gets the most attention at any one time. WWOTF ended up getting a lot more media attention than The Precipice; this could easily have been the other way around. It certainly didn’t have anything to do with the intrinsic quality of the two books; whereas out-of-control factors like The Precipice being published right after COVID made a significant difference. Toby also got a truly enormous amount of media attention in both 2009 and 2010 (I think he was the most-read news story on the BBC both times); if that had happened now, he’d have a much larger public profile than he currently does.
All this is to say: progress here will take some time. A major success story for this plan would be that, in five years’ time, there are a couple more well-known EA figureheads in addition to what we have now. That said, there are still things we can do to make progress on this in the near term: having other people speak with the media when they can; having other people give the opening talks at EAGs; and showcasing EAs who already have public platforms, like Toby Ord, or Natalie Cargill, who has an excellent TED talk coming out this year.”
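As a purely illustrative aside on the power-law and fat-tail points above, here is a minimal sketch showing how a heavy-tailed "attention" distribution tends to put an outsized share of the total on the single most-followed figure. The 1,000 hypothetical figures and the Pareto shape parameter of 1.2 are arbitrary assumptions for illustration, not estimates from the post.

```python
import random

# Minimal sketch: draw "attention" for 1,000 hypothetical public figures
# from a Pareto (power-law) distribution and check what share of the
# total the single most-followed person ends up with. The shape
# parameter 1.2 is an arbitrary illustrative choice, not an estimate.
random.seed(0)
attention = [random.paretovariate(1.2) for _ in range(1_000)]
top_share = max(attention) / sum(attention)
print(f"Share of all attention held by the top figure: {top_share:.0%}")
# Heavy tails routinely concentrate a large fraction of the total on one
# draw, which mirrors the "one face dominates" dynamic for public figures
# and the "one big donor" dynamic for fat-tailed wealth.
```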
CEA distributes books at scale, right? It seems like offering a wider range of books could boost the name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but it is very easy to implement.
I haven’t kept up with recent books, but back in 2015 I preferred Nick Cooney’s intro to EA book to both yours and Peter Singer’s, and thought it was a shame it got a fraction of the attention.
Presumably it’s easier to sell your own book than someone else’s? I assume CEA is able to get a much better rate on The Precipice and What We Owe The Future than How To Be Great At Doing Good or The Most Good You Can Do. The Life You Can Save (the org) even bought the rights to The Life You Can Save (the book) to make it easier to distribute.
[Edit: This may have been a factor too/instead:
“In my personal case, all of the proceeds from the book — all of the advances and royalties — are going to organizations focused on making sure we have a long-term future.”—Toby
“All proceeds from What We Owe The Future are donated to the Longtermism Fund”—Will
I can’t find anything similar for Peter’s or Nick’s books.]
It will always be easier to promote nearby, highly popular people than farther-away, lesser-known people. One person being the “face” is the natural outcome of that dynamic. If you want a diverse field, you need to promote other people even when it’s more effort in the short run.
Agreed, sorry, I should have been clearer: I was aiming to offer reasons for why Nick Cooney’s book may have gotten a fraction of the attention to date (and, to a lesser extent, pushing back a bit on the idea that it would be “very easy to implement”).
Have you thought about not doing interviews?
The rest of us can help by telling others that Will MacAskill is seeking to divest himself of this reputation whenever we see or hear someone talking about him as if he still wants to be that person (not that he ever did, as evidenced by his statement above and by sentiments I’ve seen him express in years past).
I’m glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can’t wait to read what you have to say about AI!
Thanks! And I agree re comparative advantage!
I’m confused by the disagree-votes on Malde’s comment, since it makes sense to me. Can anyone who disagreed explain their reasoning?
I’m much more confused by the single strong (-4) downvote on yours at the time of writing. (And no agree/disagree votes.)
By the way, I can only see one (strong, −7) disagree-vote on Malde’s.
Some quick thoughts:
Thanks for your work; it’s my sense that you work really, really hard and have done so for a long time. Thank you.
Thanks for the emotional effort. I guess that at times your role within EA is pretty sad, tiring, and stressful. I’m sad if that’s the case for you.
I sense you screwed up in trusting SBF and in no one being on top of where the money was moving in FTXFF. It was an error. Seems worth calling an L an L (a loss a loss). This has caused a little harm to me personally, and I forgive you. That sounds fatuous but feels important to say. I’m pretty confident that if our roles were reversed I would have screwed it up much worse. I think many people would have, given how long it took the people with billions of dollars at stake to figure this out.
I don’t know if the decision to step down is the right one. I acknowledge my prior is that you should, but there is a lot of relevant information that we don’t have. I will say that you are almost uniquely skilled at the job, and I generally guess that people who were pretty good but made some big errors are better than people who were generally bad or unskilled. I leave that up to you, but it seems worth saying.
I sense, on balance, that the thing that most confuses/concerns me is this line from the Time article:
“Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,” Bouscal recalls. “I remember my impression being that Will was taking a pretty hostile stance here and that he was just believing Sam’s side of the story, which made no sense to me.””
While other things may have been bigger errors, this one seems the most “out of character” or “bad norms-y”. And I know Naia well enough that this moves me a lot, even though it seems so out of character for you (maybe 30% that this is a broadly accurate account). This causes me consternation. I don’t understand it, and I think that if it happened it was really bad, and behaviour like it should not happen from any powerful EAs (or any EAs, frankly).
Again, I find this very hard to think about, and my priors are that there should be consequences for this—perhaps it does imply you aren’t suitable for some of these roles. But I’m pretty open-minded to the idea that that isn’t the case—we just know your flaws where we don’t know most other people’s. The Copenhagen Theory of Leadership.
I don’t really know how to integrate all these things into a coherent view of the world, so I will just leave them here jagged and hungry for explanation.
Thanks for your work. I am optimistic about the future, and I wish you well in whatever you go on to do.
I think it’s important to consider that the nature of being on the EVF board over the next few years is likely to be very different than it was pre-FTX. No matter the result of the Charity Commission (CC) inquiry, EVF needs to consider itself on the CC’s radar for the next few years, and that means extra demands on trustees to handle corporate-governance work. It sounds like a number of projects will spin off (which I think should happen), and the associated logistics will be a major source of board involvement. Plus there’s all the FTX fallout, from which Will is recused anyway.
So there are a number of reasons someone might decide to step down, including that the post-FTX role just takes too much of their time, or that they don’t have a comparative advantage in light of the new expected composition of the board’s workload.
This is an ancillary point, but IMO it would be very unfair to focus too much on what Will personally did or did not know about FTX. There were plenty of opportunities for people with far less personal involvement to partially figure this out, and some did so before the exchange’s failure.
My own major red flag about FTX, for instance, was the employment of Dan Friedberg as their chief regulatory officer: a known liar and fraud-enabler from his involvement with the UltimateBet superusing scandal. Friedberg’s executive role at FTX was public record, while the tapes that confirmed the degree of his involvement in the thefts at UltimateBet were leaked in 2013 and were widely publicized in the poker community. Some prominent EAs are even former professional poker players (Igor Kurganov and Liv Boeree).
Even just a few months before FTX’s failure, enormous red flags were emerging everywhere. Due to the bankruptcy proceedings of the crypto lender Voyager, it became public knowledge in July 2022 that Alameda Research owed them $377 million at the time of bankruptcy. The obvious conclusion was that, like Voyager’s other outstanding debtor, Three Arrows Capital, Alameda was insolvent (which we now know was in fact the case). All this was public record and easy to find if you paid a bit of attention to crypto (i.e. it was reported in the crypto press), which surely many EAs did at the time.
tl;dr This idea that only a small caste of elite EAs with access to privileged information about FTX could have made some good educated guesses about the potential risks does not stack up: there was plenty in the public record, and I suggest that EA collectively was very happy to look the other way as long as the money was flowing and SBF seemed like a nice EA vegan boy. No doubt Will & other elite EAs deserve some blame, but it would be very easy to try to pin too much on them.
Given how badly and broadly FTX was missed by a variety of actors, it’s hard to assign much relative blame to anyone absent circumstances that distinguish their potential blame above the baseline:
Some people had access to significant non-public information that should have increased their assessment of the risk posed by FTX, above and beyond the publicly-available information.
Some people had a particularized duty to conduct due diligence and form an assessment of FTX’s risk (or to supervise someone to whom this duty was delegated). This duty would accrue from, e.g., a senior leadership role in an organization receiving large amounts of FTX funding. In other words, it was some people’s job to think about FTX risk.
Your average EA met neither of these criteria. Moreover, I think these two criteria—special knowledge and special responsibility—are multiplicative (i.e., the potential blame for someone meeting both criteria is much higher than for those who met only one).
Some people had access to significant non-public information that should have increased their assessment of the risk posed by FTX
Plausible. Also plausible that they had access to information that decreased their assessment. Perhaps the extra information they had even suggested they should lower their assessment overall. Or perhaps they didn’t have access to any significant or relevant extra information.
It was some people’s job to think about FTX risk
Agreed. But I think Benjamin_Todd offers a good reflection on this:
I’m unconvinced that there should have been much more scenario / risk planning. I think it was already obvious that FTX might fall 90% in a crypto bear market (e.g. here) – and if that was all that happened, things would probably be OK. What surprised people was the alleged fraud and that everything was so entangled it would all go to zero at once, and I’m skeptical additional risk surveying exercises would have ended up with a significant credence on these (unless a bunch of other things were different). There were already some risk surveying attempts and they didn’t get there (e.g. in early 2022, Metaculus had a 1% chance of FTX making any default on customer funds over the year with ~40 forecasters).
I read almost all of the comments on the original EA Forum post linking to the Time article in question. If I recall correctly, Will made a quick comment that he would respond to these kinds of details when he would be at liberty to do so. (Edit: he made that point even more clearly in this shortform post he wrote a few months ago. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=ACDPftuESqkJP9RxP)
I assume he will address these concerns you’ve mentioned here at the same time he provides a fuller retrospective on the FTX collapse and its fallout.