Rob, thanks for this clarification and acknowledgement of what happened with the podcast. I hope you're doing better since your last post.
One question on how I should be interpreting the statements describing your views:
So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:
Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
Returns become sublinear more quickly when you’re working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
This sublinearity becomes especially pronounced when you’re considering giving on the scale of billions rather than millions of dollars.
There are other major practical considerations that point in favour of risk-aversion as well.
———
While in the hypothetical your downside is meant to be capped at zero, in reality, ‘swinging for the fences’ with all your existing funds can mean going far below zero in impact.
So between $1 billion with certainty versus a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.
———
I regret having swept those and other complications under the rug for the sake of simplicity in a way that may well have confused some listeners to the show and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.
I just wanted to clarify whether I should interpret these as "these are my views and they were my views at the time of the SBF podcast", or "in hindsight, I agree with these views now, but didn't hold them at the time", or "I think I always believed this, but just didn't really think about it when we published the podcast", or something else?
The reason I ask is that the post makes it sound like the first interpretation, but if these were, and always have been, your views, to the point where you are saying an approach that is risk-neutral with respect to dollar returns would be "severely misguided", it seems difficult to reconcile that with the justification of publishing the relevant quote[1] "for the sake of simplicity".
If you are happy to publish claims like "you should just go with whatever has the highest expected value" and "this is the totally rational approach" for the sake of simplicity when you don't actually endorse them (or even consider them severely misguided), what does that mean about other content on 80,000 Hours? What else has been published for the sake of "simplicity" that you don't actually endorse, or consider severely misguided? I find this option hard to believe because it's not consistent with the publication/editorial standards I expect from 80,000 Hours or its Director of Research, and it's an update I'm rather hesitant to make.
Sorry if this isn't worded as politely or kindly as it could have been; I hope you'll read this request for clarification charitably. I'm aware there may be other possibilities I'm not thinking of, and I wanted to ask because I didn't want to jump to any conclusions. I'm hoping this gives you an opportunity to clarify things for me and for others who might be similarly confused.
Thanks!
Edit: Added this quote from the podcast, taken from davidc's comment below:
"But when it comes to doing good, you don't hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral."
“If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.
But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.
This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.”
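Edit 2: To make the disagreement concrete, here's a minimal sketch of the two gambles from that quote. The log-shaped utility curve is purely my own illustrative assumption, not anything stated on the show or by 80,000 Hours:

```python
import math

p, prize, certain = 0.10, 15e9, 1e9

# Risk-neutral in dollars: just compare expected dollar amounts.
print(p * prize)                  # 1.5e9 > 1e9, so "swing for the fences"

# With diminishing returns (log1p keeps the $0 outcome well-defined),
# the ranking flips: the certain $1B wins by a huge margin.
print(math.log1p(certain))        # ~20.7
print(p * math.log1p(prize))      # ~2.3
```

On any concave curve remotely like this, the certain option dominates, which is exactly the point about sublinearity at the billion-dollar scale.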
Thanks for the question, Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.
So I’m glad you’ve given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.
Basically, yes — I did hold the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable on its face: being truly risk neutral with respect to dollars at very large amounts just obviously makes no sense, and would be in direct conflict with our focus on neglected areas (e.g. IIRC, if you hold the tractability term of our problem framework constant, you get logarithmic returns to additional funding).
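To spell out that parenthetical with a toy model (a rough sketch of the intuition, assuming the next dollar's impact scales like 1/x, not an official 80k calculation):

```python
import math

def extra_impact(existing, added):
    # Toy neglectedness model: the next dollar is worth ~1/x when $x is
    # already going into a problem, so total impact integrates to log(x).
    return math.log((existing + added) / existing)

# Doubling a small field buys as much impact as doubling a huge one:
print(extra_impact(10e6, 10e6))   # $10M -> $20M: log(2) ~ 0.69
print(extra_impact(1e9, 1e9))     # $1B  -> $2B:  log(2) ~ 0.69
```

Under that curve, each successive doubling of funding buys the same bump in impact, which is why risk neutrality with respect to dollars breaks down precisely at very large scales.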
When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximizing expected $ amounts, though I appreciate that was super unclear, which is my fault.
Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff’s all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our ‘key ideas’).
They’re a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.
The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They’re probably the only thing on the site that gets away with that lack of scrutiny, and we’ll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).
Reasons for that looser practice include:
They're usually quite clearly summarising a guest's opinions rather than ours.
They have to be imprecise, as podcast RSS feeds set a 4,000-character limit for episode descriptions (admittedly we overrun that from time to time).
They’re written primarily to highlight the content of the episode so interested people can subscribe and/or listen to the episode.
Even if the blog post is oversimplified, the interview itself should hopefully provide more subtlety.
By comparison, our articles like key ideas or our AI problem profile are debated over and commented on endlessly. On this issue there's our short piece on 'How much risk to take'. Not everyone agrees with every sentence of course, but little goes out without substantial review.
We could try to make the show as polished as articles, more similar to, say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and would also sabotage the role the podcast plays in exposing people to ideas we don't share).
You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:
We hadn't explained the concepts of expected value and ambition in earning to give and other careers very much before. Many listeners won't have heard of expected value, or, if they have, won't know exactly what it means. So the main goal I had in mind was to get us off the ground floor and explain the basic case. As such, these explanations were aimed at a different audience than Effective Altruism Forum regulars, who would probably benefit more from advanced material like the interview with Alan Hájek.
The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others (there's a toy sketch of this below, after these points).
I do wish I had pointed out that this only applies if they're not taking the same correlated risks as everyone else in the field — that was a predictable mistake in my view, and something that wasn't as prominent in my mind as it ought to have been, or as it is today.
The tiny minority of people dealing with resources or careers at scales over $100 million are, by that point, mostly thinking about these issues full-time or have advisors who do, and they are likely to think up, or be told, the case for risk aversion (it should become obvious through personal experience to anyone sensible in such a situation).
I do think I made a mistake ex ante in not connecting personal and professional downside risk more into this discussion. We had mentioned it in previous episodes, and in an article I read that went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now, after the last month.
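Here's the toy sketch I mentioned above of why the advice differs by scale (same illustrative log-returns assumption as before, with made-up numbers):

```python
import math

def impact(total):
    return math.log1p(total)      # field-level returns, log-shaped

def gain_from_gambling(field, stake, p=0.10, multiple=15):
    # Expected extra impact from gambling `stake` (p chance of multiple*stake,
    # else $0) versus donating `stake` for sure, given `field` dollars that
    # everyone else is already putting into the problem.
    sure = impact(field + stake) - impact(field)
    gamble = p * (impact(field + multiple * stake) - impact(field))
    return gamble - sure

field = 1e9                            # everyone else's spending on the problem
print(gain_from_gambling(field, 1e4))  # small donor: positive, gamble is fine
print(gain_from_gambling(field, 1e9))  # billionaire: negative, play it safe
```

A donor who is a small fraction of the field sits on a locally linear stretch of the curve, so expected value in dollars is a fine guide; a donor the size of the whole field does not.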
Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it’s wrong. I’m sorry about that, and we try to keep it at reasonable levels, though with the format we have we’ll never get it to zero.
But if it were me, I wouldn't update much on the quality of the written articles, as they're produced pretty differently and by different people.
"Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero."
FWIW I've generally assumed that the content in those interviews is wrong pretty often; certainly I'd expect the average interview to contain at least one egregious mistake.
I don't think this should be too surprising: staying fully accurate for 2+ hours on interesting topics is very hard.
Rob, thanks; I appreciated this response. I have a few thoughts, but I don't want the focus on pushback to give the impression I think negatively of what you said; overall, I think it was a positive update. It's also easy for me to sit here and push back with things that may just sound like hindsight bias, but I'm erring on the side of sharing them because I'm taking you at face value re: these being views you have held for as long as you can recall.
As you allude to below, it's really hard in a podcast setting to cover every nuance and be super precise with language, and I think that's understandable. OTOH, from the 2020 EA survey: "more than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA." 80,000 Hours is one of the most public-facing EA organizations, and what it publishes will often be read as "what EA thinks". One initial reaction when this happened was something like "maybe 80,000 Hours doesn't really take that seriously enough" (the pushback Ben received online when tweeting the climate change problem profile was another example of these public-facing concerns seeming underrated, especially given the tweet was later deleted). I hope this will be considered more seriously when deciding what, if any, changes are appropriate going forward.
Another point: it seems a little weird to say the blog post gets away with less scrutiny because the interview provides more subtlety, and then not actually provide that subtlety in the interview, which is, I think, what happened here. If you can't explore the nuance during the podcast because of the format, that's understandable, but it doesn't seem reasonable to then also say that you don't cover it in the accompanying blog post because you intend for the subtlety to be covered in the podcast. It's also not as if you were deciding whether to include layers 5 and 6 of the nuance; it was whether to include a disclaimer about a view you personally find severely misguided.
One possible suggestion might be to review the transcript/blog post and add relevant caveats and disclaimers after the podcast (especially since you've already published a relevant article on this). A general disclaimer would be an even lower-cost version, though less helpful in this specific case, where you appear to have put aside your disagreement with SBF's views and actively chosen not to push back on them for the express purpose of better communication with listeners.
"The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others."
I do think harm to themselves, and possibly their dependents, is an important consideration here, even if they aren't operating on the scale of billions. Also, while I agree with the point about the tiny minority, you probably don't want the reputational risk to 80,000 Hours, or to the EA movement more broadly, to hinge on whether your listeners or guests are 'sensible'.
I agree it seems valuable to let guests talk about points of disagreement, but where you do this it seems important to make clear at some stage that you're letting them speak to showcase a different viewpoint, or at least that you aren't endorsing their message, especially if the message is a potentially harmful one. It also minimizes scenarios where you justify yourself quite reasonably, but outsiders or less charitable readers find it hard to distinguish the account you've given in this comment from a world in which you were endorsing SBF's views, followed by some combination of post-hoc rationalization and hindsight bias when things turned out poorly. (In this case, I wouldn't consider it uncharitable if people concluded, from the podcast and blog post alone, that you were in fact endorsing SBF's stated views.) I think this could be harmful not only for you, but also for 80,000 Hours and the EA movement more broadly.
Again, thanks for all your work. I'm aware it's easier for me to sit behind a pseudonym and throw critical comments over the wall than to actually do the work you have to do, but I'm doing this in the hope of contributing something constructive.