I have not contemplated Rethink Priorities' findings on cross-cause prioritization deeply, but my perhaps shallow understanding was that despite a fairly high likelihood of AI catastrophe arriving quite soon, "traditional" animal welfare still looked good in expectation. I think the point was something like this: despite quite high chances of AI catastrophe, the even higher chance (though far from 100%) of survival means that animal welfare looks very good in expectation. So while animal welfare interventions are not guaranteed to pay off, because an AI crisis might intervene, the bet is still worth taking unless you think value growth is extremely high (cubic or logistic) and that there are only a very few periods between now and "infinite time" in which x-risk will be high (only one or two such periods). I did not read your post carefully, but did you take this into account? That even if there is a 30% chance of imminent AI catastrophe, the remaining 70% chance of "success" makes animal welfare, over longer time horizons, still look good in expectation?
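To make the expected-value point concrete, here is a toy sketch. The 30%/70% split is the one from my question above; the payoff numbers are purely illustrative, not anything from Rethink Priorities' model:

```python
# Toy expected-value calculation with hypothetical payoffs.
# Probabilities from the comment above; payoff units are made up.
p_catastrophe = 0.3
p_survival = 1 - p_catastrophe  # 0.7

# Illustrative assumption: the animal welfare intervention pays off
# only in worlds where no AI catastrophe intervenes.
value_if_survival = 100.0   # hypothetical units of good
value_if_catastrophe = 0.0  # intervention never pays off

expected_value = (p_survival * value_if_survival
                  + p_catastrophe * value_if_catastrophe)
print(expected_value)  # 70.0
```

Even conceding a 30% chance that the intervention is wiped out, the intervention retains 70% of its no-catastrophe value in expectation, which is the sense in which it can still "look good" despite the risk.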