How will artificial general intelligence (AGI) affect WAW? How should AI affect WAW?
AGI could be the only way we could implement complex solutions to WAW problems
How do we hedge against different takeoff scenarios?
I guess one potential premise of this point is that AGI may have enormous perceptual and physical, real-world faculties, including a deep understanding of complex natural systems and the ability to edit them. These faculties could be used to reduce wild animal suffering.
Does the below seem like a useful comment?
I think the root concern behind AI safety is overwhelming AI capability overpowering human agency and values. Maybe that capability will be enormous (omniscient, extremely powerful machines that can edit real-world ecologies). But AGI, or even “ASI”, doesn’t need any of that to be dangerous. It seems like it could just overpower or lock in humans without obtaining these competencies (it doesn’t even need to be AGI to be extremely dangerous).
(I guess this involves topics like “tractability”.) It’s unclear why humans can’t become competent and use a variety of tools, including “AI”, in prosaic ways that are pretty effective. For example, Google ads optimize for clicks pretty well, and complex rule systems are used to fly planes and drones. The extent of these systems actually seems sort of improbable until they’re developed. So it’s possible that relatively simple tools are sufficient to improve WAW, or at least that the sophistication required is orthogonal to AGI?
If the above is true, then maybe, in some deep sense, safety work on AGI/ASI is disjoint from work on WAW. So an approach to AGI focused on WAW might look very different?
I don’t know much about these areas, though, and I would be glad to be corrected so I can learn more.
It seems like it could just overpower or lock in humans without obtaining these competencies (it doesn’t even need to be AGI to be extremely dangerous).
Ideally, I think WAW work would consider all the different AI timelines. Transformative AI (TAI) that just increases our industrial capacity might be enough to seriously threaten wild animals if it makes us even more capable of shaping their lives and we haven’t developed considered values about how to look out for them.
So it’s possible that relatively simple tools are sufficient to improve WAW, or at least that the sophistication required is orthogonal to AGI?
I agree! Personally, I don’t think it’s a lack of intelligence per se holding us back from complex WAW interventions (by which I mean interventions that have to compensate for ripple effects on the ecosystem or require lots of active monitoring). I think we’re more limited by the number of monitoring measurements we can take and by our ability to deliver specific, measured interventions at specific places and times. I think we could conceivably gain this ability with hardware upgrades alone and no further improvement in algorithms.
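As a toy illustration of that claim (every name and threshold below is invented), the decision logic of a monitored intervention can be trivially simple; the binding constraint is how many sensors and dispensers exist, not the algorithm:

```python
# Hypothetical sketch: the decision logic for a "complex" WAW intervention
# can be a few lines long; what limits coverage is hardware (how many
# sensors and dispensers we have), not algorithmic sophistication.
# All names and thresholds below are invented for illustration.

SICKNESS_THRESHOLD = 0.7  # invented prevalence cutoff

def decide_interventions(sensor_readings, dispenser_capacity):
    """Pick which monitored sites get a measured intervention this cycle."""
    # Flag sites whose estimated disease prevalence exceeds the cutoff...
    flagged = [site for site, prevalence in sensor_readings.items()
               if prevalence > SICKNESS_THRESHOLD]
    # ...then treat the worst ones we can physically reach this cycle.
    flagged.sort(key=lambda site: sensor_readings[site], reverse=True)
    return flagged[:dispenser_capacity]

readings = {"site_a": 0.9, "site_b": 0.4, "site_c": 0.8}
print(decide_interventions(readings, dispenser_capacity=1))  # -> ['site_a']
```

With more sensors and more delivery capacity, the same few lines of logic cover more of the problem; no smarter algorithm is needed.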
I think we’re more limited by the number of monitoring measurements we can take and by our ability to deliver specific, measured interventions at specific places and times.
This seems a bit surprising to me, as we currently don’t even have a good understanding of biology/ecology in general, or of welfare biology in particular (which suggests we need intelligence to solve these problems).
So, did you mean that engineering capabilities (e.g. the monitoring measurements you mentioned) are more of a bottleneck to WAW than theoretical understanding (of welfare biology) is? If so, could you explain the reason?
One plausible reason I can think of: when developing WAW interventions, we could use a SpaceX-style approach, i.e. running many small-scale experiments, iterating rapidly, and learning from tight feedback loops in a trial-and-error manner. Is that what you had in mind?
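To illustrate what I mean, here is a minimal sketch of that loop (the trial and its welfare response curve are pure placeholders, not a real intervention or a real welfare metric):

```python
import random

# Purely illustrative sketch of the trial-and-error loop: run many cheap,
# small-scale experiments, measure a welfare proxy immediately, and keep
# whatever works. run_small_trial and its response curve are stand-ins.

def run_small_trial(dose):
    """Placeholder for one small, reversible field experiment."""
    # Pretend welfare peaks at dose 0.6 and then declines, plus noise.
    return -(dose - 0.6) ** 2 + random.gauss(0, 0.01)

best_dose, best_outcome = None, float("-inf")
for _ in range(20):                      # many small experiments
    dose = random.uniform(0.0, 1.0)      # vary one parameter per trial
    outcome = run_small_trial(dose)      # tight feedback: measure right away
    if outcome > best_outcome:           # keep what works, discard the rest
        best_dose, best_outcome = dose, outcome

print(f"best dose found so far: {best_dose:.2f}")
```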
This is an awesome post!
I want to learn more!