This does not undermine the central claim of your Spotlighting section or that of your overall post, which are about tractability and not magnitude/importance, but a quick comment on:
either that (a) we’re assuming these interventions don’t affect wild animals, or (b) the effects are small enough to ignore. Unfortunately, both claims seem obviously incorrect. Just to illustrate one aspect of this issue: Any AI policy that influences the timelines to AGI will affect land use, resource consumption, and mining behavior — all of which have substantial effects on wild animal populations and welfare. The net effect is unlikely to be small and we don’t know whether it is positive or negative. [Emphases are mine (Jim’s)]
Longtermist AI policy folks could say “sure, but these effects are still small in terms of moral importance” because they think the value of our actions is dominated by things that have nothing to do with wild animal welfare (or at least with WAW before AGI[1]), e.g., the welfare of digital minds or biologically enhanced humans, whom they expect to be more numerous in the far future.[2] In fact, I think that’s what most longtermists who have seriously thought about impartial cause prioritization believe.[3]
[1] And especially wild animal welfare before AGI. Maybe they think long-term WAW is what matters most (like, e.g., Bentham’s Bulldog in this post) and that reducing x-risks or making AI safe is more promising for increasing long-term WAW than current WAW work. And maybe they’re clueless about how reducing x-risks or making AI safe affects near-term WAW, but they think this is dwarfed by long-term WAW anyway. Surely some people in the Sentient Futures and Longtermism and Animals communities think this (or would think this on further reflection).
In my experience, people who hold this view endorse assigning precise probabilities (no matter what). And they kind of have to: it’s hard to defend the view without endorsing that, at least implicitly.
I’m not saying they’re right. I’m just trying to clarify what the crux is here (magnitude, not tractability) and to highlight that there may be no consensus at all that (b) is incorrect.
Yes, I totally agree that some longtermist or AI-safety-oriented types have actually thought about these things, endorse precise probabilities, and have precise probability assignments I find quite strange, like thinking it’s 80% likely that the universe will be dominated by sentient machines rather than wild animals. Then again, I expect I’d find any precise probability assignment about outcomes like this surprising; perhaps I’m just a very skeptical person.
But I think a lot of EAs I talk to have not reflected on this much and don’t realize how much the view hinges on these sorts of beliefs.
Agreed. I think we should probably have very indeterminate/imprecise beliefs about which moral patients will dominate in the far future, and this imprecision arguably breaks the Pascalian wager (which many longtermists make) in favor of assuming enhanced human-ish minds will outnumber wild animals.
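To make the decision-theoretic point concrete, here is a toy sketch (all payoffs, probabilities, and option labels below are made up purely for illustration): with a single precise credence the expected-value comparison yields a verdict, but with an imprecise credence, modeled as a set of admissible probabilities, the sign of the comparison can differ across the set, so the wager doesn’t go through.

```python
# Toy sketch (all numbers are made up) of how imprecise credences can block
# a Pascalian expected-value wager that goes through under a precise credence.

def ev_difference(p_enhanced_dominate: float) -> float:
    """Hypothetical expected value of 'prioritize AI safety' minus
    'prioritize near-term WAW', in arbitrary value units."""
    value_if_enhanced_dominate = 100.0      # assumed upside if enhanced/digital minds dominate
    value_if_wild_animals_dominate = -40.0  # assumed downside if wild animals dominate instead
    p_wild = 1.0 - p_enhanced_dominate
    return (p_enhanced_dominate * value_if_enhanced_dominate
            + p_wild * value_if_wild_animals_dominate)

# With one precise credence (e.g. the 80% figure mentioned above), the wager goes through:
print(ev_difference(0.8) > 0)  # True: the AI-safety option "wins" in expectation

# With an imprecise credence (a representor of admissible probabilities),
# the sign of the difference varies across the representor, so there is no
# determinate expected-value verdict and the wager is blocked:
representor = [0.1, 0.3, 0.5, 0.7, 0.9]
signs = {ev_difference(p) > 0 for p in representor}
print("determinate" if len(signs) == 1 else "indeterminate")  # indeterminate
```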
However, many of the longtermists who would be convinced by this might fall back on the view I describe in footnote 1 of my comment above for the scenario (whose likelihood they don’t know) in which wild animals dominate, and then the crux becomes what we can reasonably believe is good/best for long-term WAW.