Strong upvoted. Thank you for clarifying your views. That's helpful. We might be getting somewhere.
With regard to AI 2027, I get the impression that a lot of people in EA and in the wider world were not initially aware that AI 2027 was an exercise in judgmental forecasting. The AI 2027 authors did not sufficiently foreground this in the presentation of their "results". I would guess there are still a lot of people in EA and outside it who think AI 2027 is something more rigorous, empirical, quantitative, and/or scientific than a judgmental forecasting exercise.
I think this was a case of some people in EA being fooled or tricked (even if that was not the authors' intention). They didn't evaluate the evidence they were looking at properly. You were quick to agree with my characterization of AI 2027 as a forecast based on subjective intuitions. However, in one previous instance on the EA Forum, I also cited nostalgebraist's eloquent post and made essentially the same argument I just made, and someone strongly disagreed. So, I think people are just getting fooled, thinking that evidence exists that really doesn't.
What does the forecasting literature say about long-term technology forecasting? I've only looked into it a little bit, but generally technology forecasting seems really inaccurate, and the questions forecasters/experts are being asked in those studies seem way easier than forecasting something like AGI. So, I'm not sure there is a credible scientific basis for the idea of AGI forecasting.
I have been saying from the beginning and I'll say once again that my forecast of the probability and timeline of AGI is just a subjective guess and there's a high level of irreducible uncertainty here. I wish that people would stop talking so much about forecasting and their subjective guesses. This eats up an inordinate portion of the conversation, despite its low epistemic value and credibility. For months, I have been trying to steer the conversation away from forecasting toward object-level technical issues.
Initially, I didn't want to give any probability, timeline, or forecast, but I realized the only way to be part of the conversation in EA is to "play the game" and say a number. I had hoped that would only be the beginning of the conversation, not the entire focus of the conversation forever.
You can't squeeze Bayesian blood from a stone of uncertainty. You can't know what you can't know by an act of sheer will. Most discussion of AGI forecasting is wasted effort and mostly pointless.
What is not pointless is understanding the object-level technical issues better. If anything helps with AGI forecasting accuracy (and that's a big "if"), this will. But it also has other important advantages, such as:
Helping us understand what risks AGI might or might not pose
Helping us understand what we might be able to do, if anything, to prepare for AGI, and what we would need to know to usefully prepare
Getting a better sense of what kinds of technical or scientific research might be promising to fund in order to advance fundamental AI capabilities
Understanding the economic impact of generative AI
Possibly helping to inform a better picture of how the human mind works
And more topics besides these.
I would consider it a worthy contribution to the discourse to play some small part in raising the overall knowledge level of people in EA about the object-level technical issues relevant to the AI frontier and to AGI. Based on track records, technology forecasting may be a mostly forlorn endeavor, but, based on track records, science certainly isn't. Focusing on the science of AI rather than on an Aristotelian approach would be a beautiful return to Enlightenment values, away from the anti-scientific/anti-Enlightenment thinking that pervades much of this discourse.
By the way, in case it's not already clear, saying there is a high level of irreducible uncertainty does not support funding whatever AGI-related research program people in EA might currently feel inclined to fund. The number of possible ways the mind could work and the number of possible paths the future could take is large, perhaps astronomically large, perhaps infinite. To arbitrarily seize on one, declare it the one, and pour millions of dollars into it is not justifiable.
I think what you are saying here is mostly reasonable, even if I am not sure how much I agree: it seems to turn on very complicated issues in the philosophy of probability/decision theory, what you should do when accurate prediction is hard, and exactly how bad predictions have to be to be valueless. Having said that, I don't think you're going to succeed in steering conversation away from forecasts if you keep writing about how unlikely it is that AGI will arrive near term. Which you have done a lot, right?
I'm genuinely not sure how much EA funding for AI-related stuff is even wasted on your view. To a first approximation, EA is what Moskovitz and Tuna fund. When I look at Coefficient's (i.e., what was previously Open Phil's) 7 most recent AI safety and governance grants, here's what I find:
1) A joint project of METR and RAND to develop new ways of assessing AI systems for risky capabilities.
2) "AI safety workshop field building" by BlueDot Impact
3) An AI governance workshop at ICML
4) "General support" for the Center for Governance of AI.
5) A "study on encoded reasoning in LLMs at the University of Maryland"
6) "Research on misalignment" here: https://www.meridiancambridge.org/labs
7) "Secure Enclaves for LLM Evaluation" here: https://openmined.org/
So is this stuff bad or good on the worldview you've just described? I have no idea, basically. None of it is forecasting; plausibly it all falls broadly under empirical research on current and very near-future models, training new researchers, or governance work, though that depends on what "research on misalignment" means. But of course, you'd only endorse it if it is good research. If you are worried about lack of academic credibility specifically, as far as I can tell 7 out of the 20 most recent grants are to academic research in universities. It does seem pretty obvious to me that significant ML research goes on at places other than universities, though, not least the frontier labs themselves.
I don't really know all the specifics of all the different projects and grants, but my general impression is that very little (if any) of the current funding makes sense or can be justified if the goal is to do something useful about AGI (as opposed to, say, making sure Claude doesn't give risky medical advice). Absent concerns about AGI, I don't know if Coefficient Giving would be funding any of this stuff.
To make it a bit concrete, there are at least five different proposed pathways to AGI, and I imagine the research Coefficient Giving funds is only relevant to one of the five, if it's even relevant to that one. But the number five is arbitrary here. The actual decision-relevant number might be a hundred, or a thousand, or a million, or infinity. It just doesn't feel meaningful or practical to try to map out the full space of possible theories of how the mind works and apply the precautionary principle against the whole possibility space. Why not just do science instead?
By word count, I think I've written significantly more about object-level technical issues relevant to AGI than directly about AGI forecasts or my subjective guesses of timelines or probabilities. The object-level technical issues are what I've tried to emphasize. Unfortunately, commenters seem fixated on surveys, forecasts, and bets, and don't seem to be as interested in the object-level technical topics. I keep trying to steer the conversation in a technical direction. But people keep wanting to steer it back toward forecasting, subjective guesses, and bets.
For example, I wrote a 2,000-word post called "Unsolved research problems on the road to AGI". There are two top-level comments. The one with the most karma proposes a bet.
My post "Frozen skills aren't general intelligence" mainly focuses on object-level technical issues, including some of the research problems discussed in the other post. You have the top comment on that post (besides SummaryBot), and your comment is about a forecasting survey.
People on the EA Forum are apparently just really into surveys, bets, and forecasts.
The forum is kind of dead generally, for one thing.
I don't really get on what grounds you are saying that the Coefficient grants are not to people to do science, apart from the governance ones. I also think you are switching back and forth between "No one knows when AGI will arrive; the best way to prepare just in case is more normal AI science" and "We know that AGI is far off, so there's no point doing normal science to prepare against AGI now, although there might be other reasons to do normal science."
If we don't know which of infinitely or astronomically many possible theories about AGI are more likely to be correct than the others, how can we prepare?
Maybe alignment techniques conceived under our current, wrong theory make otherwise benevolent and safe AGIs murderous and evil under the correct theory. Or maybe they're just inapplicable. Who knows?
Not everything being funded here even IS alignment techniques, but also, insofar as you just want a generally better understanding of AI as a domain through science, why wouldn't you learn useful stuff from applying techniques to current models? If the claim is that current models are too different from any possible AGI for this info to be useful, why do you think "do science" would help prepare for AGI at all? Assuming you do think that, which still seems unclear to me.
You might learn useful stuff about current models from research on current models, but not necessarily anything useful about AGI (except maybe in the slightest, most indirect way). For example, I don't know of anyone who thinks that, if we had invested 100x or 1,000x more into research on symbolic AI systems 30 years ago, we would know meaningfully more about AGI today. So, as you anticipated, the relevance of this research to AGI depends on an assumption about the similarity between a hypothetical future AGI and current models.
However, even if you think AGI will be similar to current models, or might be, there might be no cost to delaying research related to alignment, safety, control, preparedness, value lock-in, governance, and so on until more fundamental research progress on capabilities has been made. If in five or ten or fifteen years we understand much better how AGI will be built, then a single $1 million grant to a few researchers might produce more useful knowledge about alignment, safety, etc. than Dustin Moskovitz's entire net worth would produce today if it were spent on research into the same topics.
My argument about "doing basic science" vs. "mitigating existential risk" is that these collapse into the same thing unless you make very specific assumptions about which theory of AGI is correct. I don't think those assumptions are justifiable.
Put it this way: let's say we are concerned that, for reasons of fundamental physics, the universe might spontaneously end. But we also suspect that, if this is true, there may be something we can do to prevent it. What we want to know is a) whether the universe is in danger in the first place, b) if so, how soon, and c) if so, what we can do about it.
To know any of these three things, (a), (b), or (c), we need to know which fundamental theory of physics is correct and what the fundamental physical properties of our universe are. The problem is that there are half a dozen competing versions of string theory, and within those versions, the number of possible variations that could describe our universe is astronomically large: 10^500, or 10^272,000, or possibly even infinite. We don't know which variation correctly describes our universe.
Plus, a lot of physicists say string theory is a poorly conceived theory in the first place. Some offer competing theories. Some say we just don't know yet. There's no consensus. Everybody disagrees.
What does the "existential risk" framing get us? What action does it recommend? How does the precautionary principle apply? Let's say you have a $10 billion budget. How do you spend it to mitigate existential risk?
I don't see how this doesn't just loop all the way back around to basic science. Whether there's an existential risk, how soon we need to worry about it, and what, when the time comes, we can do about it are all things we can only know if we figure out the basic science. How do we figure out the basic science? By doing the basic science. So, your $10 billion budget will just go to funding basic science, the same physics research that is getting funded anyway.
The space of possible theories of how the mind works contains at least six candidates, plus a lot of people saying we just don't know yet, and there are probably silly but illustrative ways of framing it that yield very large numbers.
For instance, if we think the correct theory could be fully specified in just 100 bits of information, then the number of possible theories is 2^100, roughly 1.3 × 10^30.
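To spell out the arithmetic behind that figure, under the admittedly crude framing that each candidate theory is just a distinct 100-bit string: every additional bit doubles the count, so the total is 2^100 = 1,267,650,600,228,229,401,496,703,205,376, or about 1.27 × 10^30, possible theories.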
Or we could imagine what would happen if we paid a very large number of experts from various relevant fields (e.g., philosophy, cognitive science, AI) a lot of money to spend a year writing one-to-two-page descriptions of as many original, distinct, even somewhat plausible or credible theories as they could think of. We would then group together all the submissions that were similar enough and count them as the same theory. How many distinct theories would we end up with? A handful? Dozens? Hundreds? Thousands?
I'm aware these thought experiments are ridiculous, but I'm trying to emphasize the point that the space of possible ideas seems very large. At the frontier of knowledge in a domain like the science of the mind, which largely exists in a pre-scientific or protoscientific or pre-paradigmatic state, trying to actually map out the space of theories that might possibly be correct is a daunting task. Doing that well, to a meaningful extent, ultimately amounts to actually doing the science or advancing the frontier of knowledge yourself.
What is the right way to apply the precautionary principle in this situation? I would say the precautionary principle isn't the right way to think about it. We would like to be precautionary, but we don't know enough to know how to be. We're in a situation of fundamental, wide-open uncertainty, at the frontier of knowledge, in a largely pre-scientific state of understanding about the nature of the mind and intelligence. So, we don't know how to reduce risk: our ideas on how to reduce risk might do nothing, or they might increase it.