Edit: I have a lot of sympathy for the take above but I tried to write up my response around why I think lock-ins are pretty plausible.
I’m not sure right now whether the majority of the downside comes from lock-in, but I think that’s what I’m most immediately concerned about.
I assume by singularity you mean an intelligence explosion or extremely rapid economic growth. My default story for how this happens in the current paradigm involves people using AIs in existing institutions (or institutions that look pretty similar to today’s), in markets that look pretty similar to current markets, which (on my view) are unlikely to care about the moral patienthood of AIs, in much the same way that current markets fail to.
On the “markets still exist and we do things kind of like how we do now” view: I agree that in principle we’d be better positioned to make progress on problems generally if we had something like PASTA, but I feel like you need to tell a reasonable story for at least one of:
- how governance works post-TAI such that you can easily enact improvements like eliminating AI suffering
- why current markets do allow things like factory farming and slavery, but wouldn’t allow the violation of AI preferences
I’m guessing your view is that progress will be highly discontinuous and society will look extremely different post-singularity to how it does now (kind of like going from the pre-agricultural revolution to now, whereas my view is more like the pre-industrial revolution to now).
I’m not really sure where the cruxes are on this view, or how to reason about it well, but my high-level argument is that the “god-like AGI which has significant responsibility but still checks in with its operators” will still need to make trade-offs across various factors, and unless it’s doing some CEV-type thing, outcomes will be fairly dependent on the goals you give it. It’s not clear to me that the median world leader or CEO gives the AGI goals that concern the AI’s wellbeing (or its subsystems’ wellbeing), even if it’s relatively cheap to evaluate. I am more optimistic about AGI controlled by a person sampled from a culture that has already set up norms around how to orient to the moral patienthood of AI systems than one that needs to figure it out on the fly. I do feel much better about worlds where some kind of reflection process is overdetermined.
My views here are pretty fuzzy and are often influenced substantially by thought experiments like “if a random tech CEO could effectively control all the world’s scientists, have them run at 10x speed, and had $100 trillion, does factory farming still exist?”, which isn’t a very high epistemic bar to beat. (I also don’t think I’ve articulated my models very well, and I may take another stab at this later on.)
I have some tractability concerns, but my understanding is that few people are actually trying to solve the problem right now, and when few people are trying it’s pretty hard for me to get a sense of how tractable a thing is, so my priors on similarly shaped problems are doing most of the work (which leaves me feeling quite confused).