Thanks for your work. I appreciate people doing real work on the ground, and at some level I feel bad for contributing to a Forum culture where the peanut gallery consistently shouts down the people doing it. Nonetheless, here are my quick takes:
I think if someone starts from an impartial altruistic attempt to make the long-term future go well, it would be rather surprising if they landed on alternative proteins as one of the most likely ways to do so. Given the probably heavy-tailed distribution of impact, it would be even more surprising if this worked through a number of disjunctive channels that are each plausibly equally valuable. Additionally, it sure would be strange if people who started working on alternative proteins for entirely unrelated reasons somehow stumbled upon one of the best ways of making the long-term future go well. Finally, it’s kind of surprising, and perhaps suspicious, that you and/or GFI are making a claim that happens to line up well with your ideological and perhaps financial incentives.[1]
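The heavy-tail point can be made concrete with a quick toy simulation. This is only an illustrative sketch, not an empirical claim: the lognormal shape and its parameters are assumptions chosen to show how, under a heavy-tailed impact distribution, a handful of causes account for most of the total, which is why landing on a top cause by accident would be surprising.

```python
import random

random.seed(0)

# Toy model: sample "impact" for 1,000 hypothetical cause areas from a
# heavy-tailed lognormal distribution (sigma = 3 is illustrative only).
impacts = [random.lognormvariate(0, 3) for _ in range(1000)]
impacts.sort(reverse=True)

# Under heavy tails, the top handful of causes dominate total impact.
top_10_share = sum(impacts[:10]) / sum(impacts)
print(f"Share of total impact from the top 10 of 1,000 causes: {top_10_share:.0%}")
```

With these (assumed) parameters, the top 1% of causes capture well over half of the total simulated impact, so a cause chosen for unrelated reasons is overwhelmingly likely to fall far short of the best ones.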
I apologize for not engaging with this post’s object-level arguments. They might well be correct. (For what it’s worth, I personally find the “values-spreading now → people have better values during the hinge of history → some form of lock-in causes the future to go well” theory of change to be the most plausible.[2]) But I do think the evidential bar for me to believe this post’s arguments is rather high, and readers can judge for themselves whether these arguments meet such a bar.
Indeed, the one time I tried to do a deep dive into adjacent research in a similar domain, I discovered elementary errors that consistently shaded in one direction.
I do not know the latest or best treatment of this argument, but I personally like Michael Dickens’ 2015 arguments here and here.
Linch, thanks so much for your comment. I think I agree with all of it, and I was pretty confused initially because I didn’t realize there was a transcription error.
But yes, as David notes, the transcript flipped my point: I am not arguing that any of the external costs of alt proteins clock in at anywhere near a 1 in 10 or 1 in 30 X-risk.
My point is just (as noted in #3 of my synopsis) that they are “sufficiently high… to warrant attention from longtermists” who are in a position to advance more than one thing (e.g., working in government or philanthropy, where you can have a portfolio).
Examples:
- Government: Adding alt proteins to one’s portfolio will often be fairly easy—e.g., at OSTP or NSF or in a congressional office.
- Philanthropy: Some philanthropists will insist on giving with a focus on global health, biodiversity, or climate; where that happens, steering them toward alt proteins can make a lot of sense.
Apologies for the transcription error—fixed, as detailed in my next comment.
I liked your comment a lot, but I’m pretty sure you misunderstood a big part of the argument because there’s a pretty big typo in this post.
In the original recording (4:07), Friedrich argues that advancing alternative proteins should be a significant part of longtermist thinking, but not that they’re “one of the best ways of making the long-term future go well” or even “on par with AI risk or bioengineered pandemics”.
But this transcript makes it seem like he is saying the opposite in the intro:
“They [alternative proteins] should be the priority...”
I think you still bring up a lot of good points, though.
Thanks for catching this, David; I would not have noticed it myself and would have been pretty confused by Linch’s comment. I went back and edited the transcript to align with the video.
Edited per your excellent comment that the text flipped the meaning of the video:
The third point is that alternative proteins address multiple risks to long-term flourishing and they should be a priority for longtermists. I’m not going to try to convince you they should be the priority or that they’re on par with AI risk or bioengineered pandemics. But I am going to try to convince you that unless you are working for an organization that is focused on one thing, you should add alternative proteins to your portfolio if you are focused on longtermism.
Also slightly edited this bit to better capture the video:
[16:40] In Toby Ord’s book The Precipice he says, “Risks greater than 1 in 1,000 should be central global priorities.” … my goal is not going to be to convince you [that] if you’re working on unaligned AI or bioengineered pandemics (What are they − 1 in 10 and 1 in 30 existential risks) that this is something that you should do instead. But if you are working in policy… you can have a more diverse portfolio than unaligned AI and bioengineered pandemics. In fact, it’s probably to our benefit to have a more diverse portfolio. And I would contend that alternative proteins should be a part of that portfolio for all of the reasons that I’ve just described...