Thanks for writing this!
I'd like to reinforce and expand on this point. I think it pushes us towards interventions that benefit animals earlier, or that have potentially large, lasting counterfactual impacts through an AI transition. If the world, or animal welfare donors specifically, will be far wealthier in X years, then higher animal welfare and satisfying alternative proteins will be extremely cheap in relative terms in X years and we'll get them basically for free, so we should probably severely discount any potential counterfactual impacts past X years.
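As a rough illustration of the discounting intuition (a minimal sketch with made-up growth rates, not real estimates): if donor wealth grows at rate g per year, the relative cost of buying a given welfare improvement in year X falls to roughly (1 + g)^-X of today's cost, so counterfactual impact claimed past year X gets down-weighted by about that factor.

```python
# Toy sketch with made-up numbers: if donor wealth grows at rate g per year,
# the relative cost of buying a given welfare improvement in year X is roughly
# (1 + g) ** -X of today's cost, so counterfactual impact claimed past year X
# should be discounted by about that factor.

def relative_cost_remaining(growth_rate: float, years: float) -> float:
    """Fraction of today's relative cost left after `years` of wealth growth."""
    return (1 + growth_rate) ** -years

for g in (0.05, 0.20, 1.00):  # modest growth, fast growth, explosive AI-driven growth
    print(f"g = {g:.0%}: impact past 10 years discounted to ~{relative_cost_remaining(g, 10):.3f}x")
```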
I would personally focus on large payoffs within the next ~10 years and maybe work to shape space colonization to reduce s-risks, each when we're justified in believing the upsides outweigh the backfire risks, in a way that isn't very sensitive to our direct intuitions.
Great point, Michael! I agree on discounting potential counterfactual impacts of current interventions past X years, and I think short-term large payoffs are a very good way of dealing with the overall situation. In addition, I'd argue that if higher animal welfare and alternative proteins will be cheaper in X years, interventions will also be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?
To me, this suggests prioritizing (1) short-term, large-payoff interventions; (2) interventions actively seeking to navigate and benefit animals through an AI transition (depending on how optimistic you are about the tractability of doing so); (3) interventions that robustly invest in movement capacity (depending on whether you think interventions are likely to be more cost-effective in the future); and perhaps (4) interventions that seem unlikely to change through an AI transition (depending on how optimistic you are about their current cost-effectiveness and how high your credence is in their robustness).
> I'd argue that if higher animal welfare and alternative proteins will be cheaper in X years, interventions will also be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?
I agree they could be cheaper (in relative terms), but they could also be far more likely to happen without us saving and investing more on the margin. It's probably worth ensuring a decent sum of money is saved and invested for this possibility, though.
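To illustrate the trade-off I have in mind (purely hypothetical numbers, not estimates from this thread): investing grows the budget and the work gets cheaper later, but later spending is only counterfactual to the extent the outcome wouldn't have happened anyway.

```python
# Toy give-now vs. save-and-invest comparison (hypothetical numbers). Investing
# grows the budget by (1 + r) ** years and the work gets cheaper (cost falls to
# cost_decline of today's), but with probability p_happens_anyway the outcome
# occurs without us, so only the remaining probability mass is counterfactual.

def units_if_give_now(budget: float, cost_per_unit: float) -> float:
    return budget / cost_per_unit

def expected_units_if_invest(budget: float, cost_per_unit: float, r: float,
                             years: float, cost_decline: float,
                             p_happens_anyway: float) -> float:
    later_budget = budget * (1 + r) ** years
    later_cost = cost_per_unit * cost_decline
    return (1 - p_happens_anyway) * later_budget / later_cost

print(units_if_give_now(1_000_000, 100))                      # 10,000 units now
print(expected_units_if_invest(1_000_000, 100, r=0.05,
                               years=10, cost_decline=0.2,
                               p_happens_anyway=0.7))         # ~24,400 expected units later
```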
Your 4 priorities seem reasonable to me. I might aim (2), (3) and (4) primarily at potentially extremely high-payoff interventions, e.g. reducing s-risks. They should beat (1) in expectation, and we should have plausible models for how they could.