Like Buck and Toby, I think this is a great piece of legislation and think that it’s well worth the time to send a letter to Governor Newsom. I’d love to see the community rallying together and helping to make this bill a reality!
William_MacAskill
On talking about this publicly
A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.”
Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of merit to it). I can’t remember EV’s view, though in general it was extremely cautious about communication at that time. I also got mixed comments on whether my Forum posts were even helpful; I haven’t re-read them recently, but I was in a pretty bad headspace at the time. Advisors said that by January things would be clearer. That didn’t seem like that long to wait, and I felt very aware of how little I knew.
The “time at which it’s ok to speak”, according to my advisors, kept getting pushed back. But by March I felt comfortable, personally, about speaking publicly. I had a blog post ready to go, but by this point the Mintz investigation (that is, the investigation that EV had commissioned) had gotten going. Mintz were very opposed to me speaking publicly. I think they said something to the effect that my draft was right on the line: if I posted it, they would consider resigning from running the investigation. They thought the integrity of the investigation would be compromised if I posted, because my public statements might have tainted other witnesses in the investigation, or had a bearing on what they said to the investigators. EV generally wanted to follow Mintz’s view on this, but couldn’t share legal advice with me, so it was hard for me to develop my own sense of the costs and benefits of communicating.
By December, the Mintz report was fully finished and the bankruptcy settlement was completed. I was travelling (vacation and work) over December and January, and aimed to record podcasts on FTX in February. That got delayed by a month because of Sam Harris’s schedule, so they got recorded in March.
It’s still the case that talking about this feels like walking through a minefield. There’s still a real risk of causing unjustified and unfair lawsuits against me or other people or organisations, which, even if frivolous, can impose major financial costs and lasting reputational damage. Other relevant people also don’t want to talk about the topic, even if just for their own sanity, and I don’t want to force their hand. In my own case, thinking and talking about this topic feels like fingering an open wound, so I’m sympathetic to their decision.
Elon Musk
Stuart Buck asks:
“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk’s purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”
Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way of “spending” money. Even prior to Musk being interested in acquiring Twitter, Sam mentioned he thought that Twitter was under-monetised; my impression was that that view was pretty widely-held in the tech world. Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.
I didn’t have strong views on whether either of these opinions was true. My aim was just to introduce the two of them, and let them have a conversation and take it from there.
On “ingratiating”: Musk has pledged to give away at least half his wealth; given his net worth in 2022, that would amount to over $100B. There was a period of time when it looked like he was going to get serious about that commitment, and ramp up his giving significantly. Whether that money was donated well or poorly would be of enormous importance to the world, and that’s why I was in touch with him.
How I publicly talked about Sam
Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him. Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his reputation.
Some people have claimed that I deliberately misrepresented Sam’s lifestyle. In a number of places, I said that Sam planned to give away 99% of his wealth, and in this post, in the context of discussing why I think honest signalling is good, I said, “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call”. These statements represented what I believed at the time. Sam said, on multiple occasions, that he was planning to give away around 99% of his wealth, and the overall picture I had of him was highly consistent with that, so the Corolla seemed like an honest signal of his giving plans.
It’s true that the apartment complex where FTX employees, including Sam, lived, and which I visited, was extremely high-end. But, generally, Sam seemed uninterested in luxury or indulgence, especially for someone worth $20 billion at the time. As I saw it, he would usually cook dinner for himself. He was still a vegan, and I never saw him consume a non-vegan product. He dressed shabbily. He never expressed interest in luxuries. As far as I could gather, he never took a vacation, and rarely even took a full weekend off. On time off he would play chess or video games, or occasionally padel. I never saw him drink alcohol or do illegal drugs.
The only purchase that I knew of that seemed equivocal was the penthouse. But that was shared with 9 other flatmates, with the living room doubling as an office space, and was used to host company dinners. I did ask Nishad why they were living in such luxury accommodation: he said that it was nicer than they’d ideally like, but that they were supply constrained in the Bahamas. They wanted somewhere attractive enough to make employees move from the US, with good security and a campus feel, and he said that Albany was pretty much their only option. This seemed credible to me at the time, especially given how strange and cramped their offices were. And even if it was a pure indulgence, the cost to Sam of 1/10th of a $30M penthouse was ~0.01% of his wealth — so, compatible with giving away 99% of what he made.
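For concreteness, the rough arithmetic behind that figure, assuming the ~$20B net worth and an even ten-way split of the $30M purchase price mentioned above:

$$\frac{\$30\text{M}/10}{\$20\text{B}} \;=\; \frac{\$3\text{M}}{\$20{,}000\text{M}} \;=\; 0.00015 \;\approx\; 0.015\%$$

which is the same order of magnitude as the ~0.01% quoted above.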
After the collapse happened, though, I re-listened to Sam’s appearance on the 80,000 Hours podcast, where he commented that he likes nice apartments, which suggests that there was more self-interest at play than Nishad had made out. And, of course, I don’t know what I didn’t see; I was deceived about many things, so perhaps Sam and others lied about their personal spending, too.
What I heard from former Alameda people
A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.
In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept their offer; he refused and they left.
I wasn’t involved in the dispute; I heard about it only afterwards. There were claims being made on both sides and I didn’t have a view about who was more in the right, though I was more in touch with people who had left or reduced their investment. That included the investor who was most closely involved in the dispute, who I regarded as the most reliable source.
It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and unwilling to accept a lower return, preferring instead to double down. In hindsight, this was absolutely a foreshadowing of what was to come. At the time, I believed the view, held by those that left, that Alameda had been a folly project that was going to fail.[1]
As of late 2021, the early Alameda split made me aware that Sam might be difficult to work with. But there are a number of reasons why it didn’t make me think I shouldn’t advise his foundation, or that he might be engaging in fraud.
The main investor who was involved in the 2018 dispute and negotiations — and who I regarded as largely “on the side” of those who left (though since the collapse they’ve emphasised to me they didn’t regard themselves as “taking sides”) — continued to invest in Alameda, though at a lower amount, after the dispute. This made me think that what was at issue, in the dispute, was whether the company was being well-run and would be profitable, not whether Sam was someone one shouldn’t work with.
The view of those that left was that Alameda was going to fail. When, instead, it and FTX were enormously successful, and had received funding from leading investors like BlackRock and Sequoia, this suggested that those earlier views had been mistaken, or that Sam had learned lessons and matured over the intervening years. I thought this view was held by a number of people who’d left Alameda; since the collapse I checked with several of those who left, who have confirmed that was their view.[2]
This picture was supported by actions taken by people who’d previously worked at Alameda. Over the course of 2022, former Alameda employees, investors or advisors with former grievances against Sam did things like: advise Future Fund, work as a Future Fund regranter, accept a grant from Future Fund, congratulate Nick on his new position, trade on FTX, or even hold a significant fraction of their net worth on FTX. People who left early Alameda, including very core people, were asked for advice prior to working for FTX Foundation by people who had offers to work there; as far as I know, none of them advised against working for Sam.
I was also in contact with a few former Alameda people over 2022: as far as I remember, none of them raised concerns to me. And shortly after the collapse, one of the most core people who left early Alameda, probably the person with the most animosity towards Sam, messaged me to say that they were as surprised as anyone, that they thought it was reasonable to regard the early Alameda split as a typical cofounder fallout, and that even they had come to think that Alameda and FTX had overcome their early issues, and so they had started to trade on FTX.[3][4]
I wish I’d been able to clear this up as soon as the TIME article was released, and I’m sorry that this means there’s been such a long period of people having question marks about this. Part of the failure was that, at the time, I thought I’d be able to talk publicly about this just a few weeks later, but that moment kept getting delayed.
- ^
Sam was on the board of CEA US at the time (early 2018). Around that time, after the dispute, I asked the investor that I was in touch with whether Sam should be removed from the board, and the investor said there was no need. A CEA employee (who wasn’t connected to Alameda) brought up the idea that Sam should transition off the board, because he didn’t help improve diversity of the board, didn’t provide unique skills or experience, and that CEA now employed former Alameda employees who were unhappy with him. Over the course of the year that followed, Sam was also becoming busier and less available. In mid-2019, we decided to start to reform the board, and Sam agreed to step down.
- ^
In addition, one former Alameda employee, who I was not particularly in touch with, made the following comment in March 2023. It was a comment on a private googledoc (written by someone other than me), but they gave me permission to share:
“If you’d asked me about Sam six months ago I probably would have said something like “He plays hardball and is kind of miserable to work under if you want to be treated as an equal, but not obviously more so than other successful business people.” (Think Elon Musk, etc.)
“Personally, I’m not willing to be an asshole in order to be successful, but he’s the one with the billions and he comprehensively won on our biggest concrete disagreements so shrug. Maybe he reformed, or maybe this is how you have to be.”
As far as I was concerned that impression was mostly relevant to people considering working with or for Sam directly, and I shared it pretty freely when that came up.
Saying anything more negative still feels like it would have been a tremendous failure to update after reality turned out not at all like I thought it would when I left Alameda in 2018 (I thought Alameda would blow up and that FTX was a bad idea which played to none of our strengths).
Basically I think this and other sections [of the googledoc] are acting like people had current knowledge of bad behaviour which they feared sharing, as opposed to historical knowledge of bad behaviour which tended to be accompanied by doomy predictions that seemed to have been comprehensively proven false. Certainly I had just conceded epistemic defeat on this issue.”
- ^
They also thought, though, that the FTX collapse should warrant serious reflection about the culture in EA.
- ^
On an older draft of this comment (which was substantively similar) I asked several people who left Alameda in 2018 (or reduced their investment) to check the above six paragraphs, and they told me they thought the paragraphs were accurate.
Lessons and updates
The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.
In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.
There are four categories of lessons and updates:
Undoing updates made because of FTX
Appreciating the new world we’re in
Assessing what changes we could make in EA to make catastrophes like this less likely to happen again
Assessing what changes we could make such that EA could handle crises better in the future
On the first two points, the post from Ben Todd is good, though I don’t agree with all of what he says. In my view, the most important lessons when it comes to the first two points, which also have bearing on the third and fourth, are:
Against “EA exceptionalism”: without evidence to the contrary, we should assume that people in EA are about average (given their demographics) on traits that don’t relate to EA. Sadly, that includes things like likelihood to commit crimes. We should be especially cautious to avoid a halo effect — assuming that because someone is good in some ways, like being dedicated to helping others, then they are good in other ways, too, like having integrity.
Looking back, there was a crazy halo effect around Sam, and I’m sure that will have influenced how I saw him. Before advising Future Fund, I remember asking a successful crypto investor — not connected to EA — what they thought of him. Their reply was: “He is a god.”
In my own case, I think I’ve been too trusting of people, and in general too unwilling to countenance the idea that someone might be a bad actor, or be deceiving me. Given what we know now, it was obviously a mistake to trust Sam and the others, but I think I’ve been too trusting in other instances in my life, too. I think in particular that I’ve been too quick to assume that, because someone indicates they’re part of the EA team, they are thereby trustworthy and honest. I think that fully improving on this trait will take a long time for me, and I’m going to bear this in mind when deciding which roles I take on in the future.
Presenting EA in the context of the whole of morality.
EA is compatible with very many different moral worldviews, and this ecumenicism was a core reason for why EA was defined as it was. But people have often conflated EA with naive utilitarianism: that promoting wellbeing is the *only* thing that matters.
Even on pure utilitarian grounds, you should take seriously the wisdom enshrined in common-sense moral norms, and be extremely sceptical if your reasoning leads you to depart wildly from them. There are very strong consequentialist reasons for acting with integrity and for being cooperative with people with other moral views.
But, what’s more, utilitarianism is just one plausible moral view among many, and we shouldn’t be at all confident in it. Taking moral uncertainty into account means taking seriously the consequences of your actions, but it also means respecting common-sense moral prohibitions.[1]
I could have done better in how I’ve communicated on this score. In the past, I’ve emphasised the distinctive aspects of EA, treated the conflation with naive utilitarianism as a confusion that people have, and the response to it as an afterthought, rather than something built into the core of talking about the ideas. I plan to change that, going forward — emphasising more the whole of morality, rather than just the most distinctive contributions that EA makes (namely, that we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests).
Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before.
Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive? Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
Being willing to fight for EA qua EA.
FTX has given people an enormous stick to hit EA with, and means that a lot of people have wanted to disassociate from EA. This will result in less work going towards the most important problems in the world today—yet another of the harms that Sam and the others caused.
But it means we’ll need, more than ever, for people who believe that the ideas are true and important to be willing to stick up for them, even in the face of criticism that’s often unfair and uncharitable, and sometimes downright mean.
On the third point — how to reduce the chance of future catastrophes — the key thing, in my view, is to pay attention to people’s local incentives when trying to predict their behaviour, in particular looking at the governance regime they are in. Some of my concrete lessons, here, are:
You can’t trust VCs or the financial media to detect fraud.[2] (Indeed, you shouldn’t even expect VCs to be particularly good at detecting fraud, as it’s often not in their self-interest to do so; I found Jeff Kaufman’s post on this very helpful).
The base rates of fraud are surprisingly high (here and here).
We should expect the base rate to be higher in poorly-regulated industries.
The idea that a company is run by “good people” isn’t sufficient to counterbalance that.
In general, people who commit white collar crimes often have good reputations before the crime; this is one of the main lessons from Eugene Soltes’s book Why They Do It.
In the case of FTX: the fraud was committed by Caroline, Gary and Nishad, as well as Sam. Though some people had misgivings about Sam, I haven’t heard the same about the others. In Nishad’s case in particular, comments I’ve heard about his character are universally that he seemed kind, thoughtful and honest. Yet, that wasn’t enough.
(This is all particularly on my mind when thinking about the future behaviour of AI companies, though recent events also show how hard it is to get governance right so that it’s genuinely a check on power.)
In the case of FTX, if there had been better aggregation of people’s opinions on Sam that might have helped a bit, though as I note in another comment there was a widespread error in thinking that the 2018 misgivings were wrong or that he’d matured. But what would have helped a lot more, in my view, was knowing how poorly-governed the company was — there wasn’t a functional board, or a risk department, or a CFO.
On how to respond better to crises in the future…. I think there’s a lot. I currently have no formal responsibilities over any community organisations, and do limited informal advising, too,[3] so I’ll primarily let Zach (once he’s back from vacation) or others comment in more depth on lessons learned from this, as well as changes that are being made, and planned to be made, across the EA community as a whole.
But one of the biggest lessons, for me, is decentralisation, and ensuring that people and organisations to a greater extent have clear separation in their roles and activities than they have had in the past. I wrote about this more here. (Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism. This is because there are gains from “public goods” in the movement that won’t be provided by default, and because I think Zach is going to be a strong CEO who can plausibly pull it off.)
In my own case, at the point of time of the FTX collapse, I was:
On the board of EV
An advisor to Future Fund
The most well-known advocate of EA
But once FTX collapsed, these roles interfered with each other. In particular, being on the board of EV and an advisor to Future Fund majorly impacted my ability to defend EA in the aftermath of the collapse and to help the movement try to make sense of what had happened. In retrospect, I wish I’d started building up a larger board for EV (then CEA), and transitioned out of that role, as early as 2017 or 2018; this would have made the movement as a whole more robust.
Looking forward, I’m going to stay off boards for a while, and focus on research, writing and advocacy.
- ^
I give my high-level take on what generally follows from taking moral uncertainty seriously, here: “In general, and very roughly speaking, I believe that maximizing expected choiceworthiness under moral uncertainty entails something similar to a value-pluralist consequentialism-plus-side-constraints view, with heavy emphasis on consequences that impact the long-run future of the human race.”
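For readers unfamiliar with the formalism, one standard way of writing the maximise-expected-choiceworthiness rule (the notation here is chosen for illustration, not quoted from the passage above) is:

$$\mathrm{EC}(A) \;=\; \sum_i C(T_i)\cdot \mathrm{CW}_i(A)$$

where $C(T_i)$ is one’s credence in moral theory $T_i$ and $\mathrm{CW}_i(A)$ is the choiceworthiness that $T_i$ assigns to option $A$; the rule says to pick the option with the highest $\mathrm{EC}$.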
- ^
There’s a knock against prediction markets, here, too. A Metaculus forecast, in March of 2022 (the end of the period when one could make forecasts on this question), gave a 1.3% chance of FTX making any default on customer funds over the year. The probability that the Metaculus forecasters would have put on the claim that FTX would default on very large numbers of customer funds, as a result of misconduct, would presumably have been lower.
- ^
More generally, I’m trying to emphasise that I am not the “leader” of the EA movement, and, indeed, that I don’t think that the EA movement is the sort of thing that should have a leader. I’m still in favour of EA having advocates (and, hopefully, very many advocates, including people who hopefully get a lot more well-known than I am), and I plan to continue to advocate for EA, but I see that as a very different role.
Personal reflections on FTX
Hi Yarrow (and others on this thread) - this topic comes up on the Clearer Thinking podcast, which comes out tomorrow. As Emma Richter mentions, the Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris’s wasn’t; it was up to him what topics he wanted to focus on.
Thanks! Didn’t know you’re sceptical of AI x-risk. I wonder if there’s a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.
Thanks so much for those links, I hadn’t seen them!
(So much AI-related stuff coming out every day, it’s so hard to keep on top of everything!)
This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role.
Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next decade, where what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occur over the course of just months or years.
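As a purely illustrative sketch of the “centuries compressed into years” claim (the growth rates and horizon below are assumed placeholders of mine, not figures from the modelling linked above):

```python
import math

# Assumed placeholder numbers, not taken from the economic modelling cited above.
bau_rate = 0.02    # business-as-usual annual growth in technological capability
fast_rate = 0.30   # assumed annual growth during an AI-driven acceleration
bau_years = 200    # two centuries of business-as-usual progress

# Total progress factor accumulated over two business-as-usual centuries...
progress_factor = (1 + bau_rate) ** bau_years
# ...and how long the accelerated trajectory takes to cover the same ground.
years_needed = math.log(progress_factor) / math.log(1 + fast_rate)

print(f"{bau_years} years of progress in ~{years_needed:.0f} years")  # ~15 years
```

Higher assumed growth rates compress the timeline further, which is where the “months” end of the range comes from.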
Most effort to date, from those worried by an intelligence explosion, has been on ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes.
But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. I call these decisions Grand Challenges. Over the course of an explosion in technological capability, we will have to address many Grand Challenges in a short space of time including, potentially: what rights to give digital beings; how to govern the development of many new weapons of mass destruction; who gets control over an automated military; how to deal with fast-reproducing human or AI citizens; how to maintain good reasoning and decision-making even despite powerful persuasion technology and greatly-improved ability to ideologically indoctrinate others; and how to govern the race for space resources.
As a comparison, we could imagine if explosive growth had occurred in Europe in the 11th century, and that all the intellectual and technological advances that took a thousand years in our actual history occurred over the course of just a few years. It’s hard to see how decision-making would go well under those conditions.
The governance of explosive growth seems to me to be of comparable importance to AI alignment, not dramatically less tractable, and currently much more neglected. The marginal cost-effectiveness of work in this area therefore seems to be even higher than that of marginal work on AI alignment. It is, however, still very pre-paradigmatic: it’s hard to know what’s most important in this area, what things would be desirable to push on, or even what good research looks like.
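Spelled out, that reasoning follows the usual importance, tractability, neglectedness decomposition (this is the standard 80,000 Hours-style factorisation, not a formula from this post):

$$\frac{\text{good done}}{\text{extra resources}} \;=\; \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}} \times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

With comparable importance and tractability, much greater neglectedness implies higher marginal cost-effectiveness.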
I’ll talk more about all this in my EAG: Bay Area talk, “New Frontiers in Effective Altruism.” I’m far from the only person to highlight these issues, though. For example, Holden Karnofsky has an excellent blog post on issues beyond misalignment; Lukas Finnveden has a great post on similar themes here and an extensive and in-depth series on potential projects here. More generally, I think there’s a lot of excitement about work in this broad area that isn’t yet being represented in places like the Forum. I’d be keen for more people to start learning about and thinking about these issues.
Over the last year, I’ve done a little bit of exploratory research into some of these areas; over the next six months, I plan to continue this in a focused way, with an eye toward making this a multi-year focus. In particular, I’m interested in the rights of digital beings, the governance of space resources, and, above all, the “meta” challenge of ensuring that we have good deliberative processes through the period of explosive growth. (One can think of work on the meta challenge as fleshing out somewhat realistic proposals that could take us in the direction of the “long reflection”.) By working on good deliberative processes, we could thereby improve decision-making on all the Grand Challenges we will face. This work could help with AI safety, too: if we can guarantee power-sharing after the development of superintelligence, that decreases the incentive for competitors to race and cut corners on safety.
I’m not sure yet what output this would ultimately lead to, if I decide to continue work on this beyond the next six months. Plausibly there could be many possible books, policy papers, or research institutes on these issues, and I’d be excited to help make happen whichever of these seem highest-impact after further investigation.
Beyond this work, I’ll continue to provide support for individuals and organisations in EA (such as via fundraising, advice, advocacy and passing on opportunities) in an 80/20 way; most likely, I’ll just literally allocate 20% of my time to this, and spend the remaining 80% on the ethics and governance issues I list above. I expect not to be very involved with organisational decision-making (for example by being on boards of EA organisations) in the medium term, in order to stay focused and play to my comparative advantage.
I’m looking for a joint research and executive assistant to help with the work outlined above. The role involves research tasks such as providing feedback on drafts, conducting literature reviews and small research projects, as well as administrative tasks like processing emails, scheduling, and travel booking. The role could also turn into a more senior role, depending on experience and performance.
Example projects that a research assistant could help with include:
A literature review on the drivers of moral progress.
A “literature review” focused on reading through LessWrong, the EA Forum, and other blogs, and finding the best work there related to the fragility of value thesis.
Case studies on: What exactly happened to result in the creation of the UN, and the precise nature of the UN Charter? What can we learn from it? Similarly for the Kyoto Protocol, the Nuclear Non-Proliferation Treaty, and the Montreal Protocol.
Short original research projects, such as:
Figuring out what a good operationalisation of transformative AI would be, for the purpose of creating an early tripwire to alert the world of an imminent intelligence explosion.
Taking some particular neglected Grand Challenge, and fleshing out the reasons why this Grand Challenge might or might not be a big deal.
Supposing that the US wanted to make an agreement to share power and respect other countries’ sovereignty in the event that it develops superintelligence, figuring out how we could legibly guarantee future compliance with that agreement, such that the commitment is credible to other countries.
The deadline for applications is February the 11th. If this seems interesting, please apply!
I’m really excited about Zach coming on board as CEA’s new CEO!
Though I haven’t worked with him a ton, the interactions I have had with him have been systematically positive: he’s been consistently professional, mission-focused and inspiring. He helped lead EV US well through what was a difficult time, and I’m really looking forward to seeing what CEA achieves under his leadership!
Thank you so much for your work with EV over the last year, Howie! It was enormously helpful to have someone so well-trusted, with such excellent judgment, in this position. I’m sure you’ll have an enormous positive impact at Open Phil.
And welcome, Rob—I think it’s fantastic news that you’ve taken the role!
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are. My time this year has been spread over a number of different things, including fundraising, helping out other EA-adjacent public figures, supporting GPI, CEA and 80,000 Hours, writing additions to What We Owe The Future, and helping with the print textbook version of utilitarianism.net that’s coming out next year. It’s also personally been the toughest year of my life; my mental health has been at its worst in over a decade, and I’ve been trying to deal with that, too.
At the moment, I’m doing three main things:
- Some public engagement, in particular around the WWOTF paperback and foreign language book launches and at EAGxBerlin. This has been and will be lower-key than the media around WWOTF last year, and more focused on in-person events; I’m also more focused on fundraising than I was before.
- Research into “trajectory changes”: ways of increasing the wellbeing of future generations other than ‘standard’ existential risk mitigation strategies, in particular issues that arise even if we solve AI alignment, like digital sentience and the long reflection. I’m also doing some learning to try to get to grips with how to update properly on the latest developments in AI, in particular with respect to the probability of an intelligence explosion in the next decade and how hard we should expect AI alignment to be.
- Gathering information for what I should focus on next. In the medium term, I still plan to be a public proponent of EA-as-an-idea, both because I think it plays to my comparative advantage and because I’m worried about people neglecting “EA qua EA”. If anything, all the crises faced by EA and by the world in the last year have reminded me of just how deeply I believe in EA as a project, and how the message of taking a thoughtful, humble, and scientific approach to doing good is more important than ever. The precise options I’m considering are still quite wide-ranging, including: a podcast and/or YouTube show and/or substack; a book on effective giving; a book on evidence-based living; or deeper research into the ethics and governance questions that arise even if we solve AI alignment. I hope to decide on that by the end of the year.
(My personal views only, and like Nick I’ve been recused from a lot of board work since November.)
Thank you, Nick, for all your work on the Boards over the last eleven years. You helped steward the organisations into existence, and were central to helping them flourish and grow. I’ve always been impressed by your work ethic, your willingness to listen and learn, and your ability to provide feedback that was incisive, helpful, and kind.

Because you’ve been less in the limelight than me or Toby, I think many people don’t know just how crucial a role you played in EA’s early days. Though you joined shortly after launch, given all your work on it I think you were essentially a third cofounder of Giving What We Can; you led its research for many years, and helped build vital bridges with GiveWell and later Open Philanthropy. I remember that when you launched Giving What We Can: Rutgers, you organised a talk with I think over 500 people. It must still be one of the most well-attended talks that we’ve ever had within EA, and helped the idea of local groups get off the ground.
The EA movement wouldn’t have been the same without your service. It’s been an honour to have worked with you.
Hey,
I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worry about my own beliefs is that I’d have very different views if I’d found myself in a different social environment. It’s simply very hard to have a group of people who are trying both to figure out what’s correct and to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from “the vibe”. The second half of Holden Karnofsky’s recent 80k blog post is indicative. Open Phil splits their funding across quite a number of cause areas, and I expect that to continue. Most of the researchers at GPI are pretty sceptical of AI x-risk. Even among people who are really worried about TAI in the next decade, there’s normally significant support (whether driven by worldview diversification or just normal human psychology) for neartermist or other non-AI causes. That’s certainly true of me. I think longtermism is highly non-obvious, and focusing on near-term AI risk even more so; beyond that, I think a healthy EA movement should be highly intellectually diverse and exploratory.
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it’s more homegrown.
If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question “how can we do the most good?” and a portfolio of possible answers to that question, rather than one increasingly common answer — “AI”.
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it’s no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree.
This was an extremely helpful post, thank you!
This isn’t answering the question you ask (sorry), but one possible response to this line of criticism is for some people within EA / longtermism to more clearly state what vision of the future they are aiming towards. Because this tends not to happen, critics can attribute to people particular visions that they don’t hold. In particular, critics of WWOTF often thought that I was trying to push for some particular narrow vision of the future, whereas really the primary goal, in my mind at least, is to keep our options open as much as possible, and make moral progress in order to figure out what sort of future we should try to create.
Here are a couple of suggestions for positive visions. These are what I’d answer if asked: “What vision of the future are you aiming towards?”:
“Procedural visions”
(Name options: Viatopia—representing the idea of a waypoint, and of keeping multiple paths open—though this mixes Latin and Greek roots. Optiotopia, though this is a mouthful and also mixes Latin and Greek roots. Related ideas: existential security, the long reflection.)
These don’t specify a vision of what we ultimately want to achieve. Instead, they propose a waypoint that we’d want to reach, as a step on the path to a good future. That waypoint would involve: (i) ending all obvious grievous contemporary harms, like war, violence and unnecessary suffering; (ii) reducing existential risk down to a very low level; (iii) securing a deliberative process for humanity as a whole, so that we make sufficient moral progress before embarking on potentially-irreversible actions like space settlement.
The hope could be that almost everyone could agree on this as a desirable waypoint.
“Utopia for everyone”
(Name options: multitopia or pluritopia, though these mix Latin and Greek roots; polytopia, but this is the name of a computer game. Related idea: Paretopia.)
This vision is where a great diversity of different visions of the good are allowed to happen, and people have choice about what sort of society they want to live in. Environmentalists could preserve Earth’s ecosystems; others can build off-world societies. Liberals and libertarians can create a society where everyone is empowered to act autonomously, pursuing their own goals; lovers of knowledge can build societies devoted to figuring out the deepest truths of the universe; philosophical hedonists can create societies devoted to joy, and so on.
The key insight, here, is that there’s just a lot of available stuff in the future, and that scientific, social and moral progress will potentially enable us to produce great wealth with that stuff (if we don’t destroy the world first, or suffer value lock-in). Plausibly, if we as a global society get our act together, the large majority of moral perspectives can get most of what they want.
Like the procedural visions, spelling this vision out more could have great benefits today, via greater collaboration: if we could agree that this is what we’ll aim for, at least in part, then we could reduce the chance of some person or people with some narrow view trying to grab power for itself.
(I write a little bit about both of these ideas in a fictional short story, here.)
I’d welcome name ideas for these, especially the former. My best guesses so far are “viatopia” and “multitopia”, but I’m not wedded to them and I haven’t spent lots of time on naming. I don’t think that the -topia suffix is strictly necessary.
This is a good point, and it’s worth pointing out that increasing the value of the future conditional on survival is always good, whereas increasing the probability of survival is only good if the future is of positive value. So risk aversion reduces the value of increasing the probability of survival relative to increasing the conditional value of the future, provided we put some probability on a bad future.
Agree this is worth pointing out! I’ve a draft paper that goes into some of this stuff in more detail, and I make this argument.
Another potential argument for trying to improve the value of the future conditional on survival is that, plausibly at least, the value lost as a result of the gap between its expected level and its best possible level is greater than the value lost as a result of the gap between the expected probability of survival and the best possible probability of survival. So in that sense the problem that the expected value of the future is not as high as it could be is more “important” (in the ITN sense) than the problem that the expected probability of survival is not as high as it could be.
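A minimal formalisation of the model behind both points, using $p$ for the probability that humanity survives and $v$ for the expected value of the future conditional on survival (notation chosen here purely for illustration):

$$\mathbb{E}[\text{value of the future}] \;=\; p \cdot v$$

An increase $\Delta v$ adds $p\,\Delta v$, which is positive whenever $p > 0$; an increase $\Delta p$ adds $v\,\Delta p$, which is positive only if $v > 0$. And if the expected $v$ is a small fraction of the best achievable $v$, while $p$ is already fairly close to 1, the first gap is the larger source of lost value.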
I’m obviously sad that you’re moving on, but I trust your judgment that it’s the right decision. I’ve deeply appreciated your hard work on GWWC over these last years—it’s both a hugely impactful project from an impartial point of view and, from my own partial point of view, one that I care very strongly about. I think you’re a hard-working, morally motivated and high-integrity person and it’s always been very reassuring to me to have you at the helm. Under your leadership you transformed the organisation. So: thank you!
I really hope your next step helps you flourish and continues to give you opportunities to make the world better.