If we don't know which of the infinitely many or astronomically many possible theories about AGI are more likely to be correct than the others, how can we prepare?
Maybe alignment techniques conceived under our current, wrong theory would make otherwise benevolent and safe AGIs murderous and evil under the correct theory. Or maybe they're just inapplicable. Who knows?
Not everything being funded here even IS alignment techniques, but also, insofar as you just want a generally better understanding of AI as a domain through science, why wouldn't you learn useful stuff from applying techniques to current models? If the claim is that current models are too different from any possible AGI for this info to be useful, why do you think "do science" would help prepare for AGI at all? Assuming you do think that, which still seems unclear to me.
You might learn useful stuff about current models from research on current models, but not necessarily anything useful about AGI (except maybe in the slightest, most indirect way). For example, I don't know of anyone who thinks that, if we had invested 100x or 1,000x more in research on symbolic AI systems 30 years ago, we would know meaningfully more about AGI today. So, as you anticipated, the relevance of this research to AGI depends on an assumption about the similarity between a hypothetical future AGI and current models.
However, even if you think AGI will be similar to current models, or might be, there might be no cost to delaying research related to alignment, safety, control, preparedness, value lock-in, governance, and so on until more fundamental research progress on capabilities has been made. If in five or ten or fifteen years we understand much better how AGI will be built, then a single $1 million grant to a few researchers might produce more useful knowledge about alignment, safety, etc. than Dustin Moskovitz's entire net worth would produce today if it were spent on research into the same topics.
My argument about "doing basic science" vs. "mitigating existential risk" is that these collapse into the same thing unless you make very specific assumptions about which theory of AGI is correct. I don't think those assumptions are justifiable.
Put it this way: let's say we are concerned that, for reasons of fundamental physics, the universe might spontaneously end. But we also suspect that, if this is true, there may be something we can do to prevent it. What we want to know is (a) whether the universe is in danger in the first place, (b) if so, how soon, and (c) if so, what we can do about it.
To know any of these three things, (a), (b), or (c), we need to know which fundamental theory of physics is correct and what the fundamental physical properties of our universe are. The problem is, there are half a dozen competing versions of string theory, and within those versions the number of possible variations that could describe our universe is astronomically large: 10^500, or 10^272,000, or possibly even infinite. We don't know which variation correctly describes our universe.
Plus, a lot of physicists say string theory is a poorly conceived theory in the first place. Some offer competing theories. Some say we just don't know yet. There's no consensus. Everybody disagrees.
What does the "existential risk" framing get us? What action does it recommend? How does the precautionary principle apply? Let's say you have a $10 billion budget. How do you spend it to mitigate existential risk?
I don't see how this doesn't just loop all the way back around to basic science. Whether there's an existential risk, and if so when we need to worry about it, and what we can do about it when the time comes, are all things we can only know if we figure out the basic science. How do we figure out the basic science? By doing the basic science. So your $10 billion budget will just go to funding basic science, the same physics research that is getting funded anyway.
The space of possible theories about how the mind works includes at least six distinct contenders, plus a lot of people saying we just don't know yet, and there are probably silly but illustrative ways of framing the question that yield very large numbers.
For instance, if we think the correct theory can be summed up in just 100 bits of information, then, since each bit can be set in two ways, the number of possible theories is 2^100, or roughly 10^30.
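A minimal sketch of that count, under the (admittedly cartoonish) assumption that each distinct 100-bit string is a candidate theory:

```python
# Count the distinct 100-bit strings, treating each one as a candidate theory.
# This is an illustrative assumption from the thought experiment above, not a
# serious model of the space of theories of mind.
n_bits = 100
num_theories = 2 ** n_bits  # each additional bit doubles the number of possibilities
print(f"{num_theories:.2e} possible theories")  # ~1.27e+30
```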
Or we could imagine what would happen if we paid a very large number of experts from various relevant fields (e.g. philosophy, cognitive science, AI) a lot of money to spend a year coming up with one-to-two-page descriptions of as many original, distinct, even somewhat plausible or credible theories as they could think of. We would then group together all the submissions that were similar enough and count them as the same theory. How many distinct theories would we end up with? A handful? Dozens? Hundreds? Thousands?
I'm aware these thought experiments are ridiculous, but I'm trying to emphasize the point that the space of possible ideas seems very large. At the frontier of knowledge in a domain like the science of the mind, which largely exists in a pre-scientific or protoscientific or pre-paradigmatic state, trying to actually map out the space of theories that might possibly be correct is a daunting task. Doing that well, to a meaningful extent, ultimately amounts to actually doing the science or advancing the frontier of knowledge yourself.
What is the right way to apply the precautionary principle in this situation? I would say the precautionary principle isn't the right way to think about it. We would like to be precautionary, but we don't know enough to know how to be. We're in a situation of fundamental, wide-open uncertainty, at the frontier of knowledge, in a largely pre-scientific state of understanding about the nature of the mind and intelligence. So, we don't know how to reduce risk: our ideas on how to reduce risk might, for example, do nothing, or they might increase risk.