Are you presupposing that good practical reasoning involves (i) trying to picture the most-likely future, and then (ii) doing what would be best in that event (while ignoring other credible possibilities, no matter their higher stakes)?
No, of course not.
I have written about this at length before, on multiple occasions (e.g. here and here, to give just two examples). I don't expect everyone who reads one of my posts for the first time to know all that context and background (why would they?), but the amount of context and background I have to re-explain every time I make a new post is already high, because if I don't, people will just raise whatever obvious objections I haven't already anticipated and responded to in the post.
But, in short: no.
I'm just very dubious of the OP's apparent assumption that losing such a bet ought to trigger deep "soul-searching". It's just not that easy to resolve deep disagreements about what priors/epistemic practices are reasonable.
I agree, but I didn't say the AI bubble popping should settle the matter, only that I hoped it would motivate people to revisit the topic of near-term AGI with more open-mindedness and curiosity, and with much less hostility toward people with dissenting opinions, given that there are already clear, strong objections to the majority view of the EA community (some quite prominently made, as in Toby Ord's post on RL scaling) that seem to have mostly escaped serious consideration.
You don't need an external economic event to see that the made-up graphs in "Situational Awareness" are ridiculous, or that AI 2027 could not rationally convince anyone of anything unless they are already bought into the idea of near-term AGI for other reasons not discussed in AI 2027. And so on. And if the EA community hasn't noticed these glaring problems, what else hasn't it noticed?
These are examples that anyone can (hopefully) easily understand with a few minutes of consideration. Anyone can click on one of the "Situational Awareness" graphs and very quickly see that the numbers and lines are just made up, or that the y-axis either has an ill-defined unit of measurement ("effective compute", which is relative to the tasks/problems the compute is used for) or no unit of measurement at all (just "orders of magnitude", but orders of magnitude of what?), and also no numbers. Plus other ridiculous features, such as claiming that GPT-4 is an AGI.
With AI 2027, it takes more like 10-20 minutes to see that the whole thing is just based on a few guys' gut intuitions and nothing else. There are other glaring problems in EA discourse around AGI that take more time to explain, such as objections around benchmark construct validity or criterion validity. Even in cases where errors are clear, straightforward, objective, and relatively quick and simple to explain (see below), people often just ignore them when someone points them out. More complex or subtle errors will probably never be considered, even if they are consequential.
The EA community doesn't have any analogue of peer review (or it just barely does) where people play the role of rigorously scrutinizing work to catch errors and make sure it meets a certain quality threshold. Some people in the community (probably a minority, but a vocal and aggressive minority) are disdainful of academic science in general and peer review in particular, and don't think peer review or an analogue of it would actually be helpful. This makes things a little more difficult.
I recently caught two methodological errors in a survey question asked by the Forecasting Research Institute. Pointing them out was an absolutely thankless task and was deeply unpleasant. I got dismissed and downvoted, and if not for titotal's intervention one of the errors probably never would have gotten fixed. This is very discouraging.
I'm empathetic to the fact that producing research or opinion writing and getting criticized to death also feels deeply unpleasant and thankless, and I'm not entirely sure how to make both sides of that coin feel rewarded rather than punished, but surely there must be a way. I've seen it work out well before (and it's not like this is a new problem no one has dealt with before).
The FRI survey is one example, but one of many. In my observation, people in the EA community are not receptive to the sort of scrutiny that is commonplace in academic contexts. This could be anything from correcting someone's misunderstanding of the definitions of technical terms used in machine learning to pointing out that Waymo vehicles still have a human in the loop (Waymo calls it "fleet response"). The community pats itself on the back for "loving criticism". I don't think anybody really loves criticism (only rarely), and maybe the best we can hope for is to begrudgingly accept criticism. But that involves setting up a social, and maybe even institutional, process of criticism that currently doesn't exist in the EA community.
When I say "not receptive", I don't just mean that people hear the scrutiny and disagree; that's not inherently problematic, and could be what being receptive to scrutiny looks like. I mean that, for example, they downvote posts/comments and engage in personal insults or accusations (e.g. explicit accusations of "bad faith", of which there is one in the comments on this very post), or other hostile behaviour that discourages the scrutiny. Only my masochism allows me to continue posting and commenting on the EA Forum. I honestly don't know if I have the stomach to do this long-term. It's probably a bad idea to try.
The Unjournal seems like it could be a really promising project in the area of scrutiny and sober second thought. I love the idea of commissioning outside experts to review EA research. For organizations with the money to pay for it, I think this should be the default.
I'll say just a little bit more on the topic of the precautionary principle for now. I have a complex multi-part argument about this, which would take more explaining than I'll attempt here; I have covered a lot of it in previous posts and comments. The three main points I'd make in relation to the precautionary principle and AGI risk are:
Near-term AGI is highly unlikely, much less than a 0.05% chance in the next decade
We don't have enough knowledge of how AGI will be built to usefully prepare now
As knowledge of how to build AGI is gained, investment in preparing for AGI becomes vastly more useful, such that the benefits of investing resources in preparation at higher levels of knowledge totally overwhelm the benefits of investing them at lower levels of knowledge (a toy illustration of this comparison is sketched below)
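To make the shape of that third point concrete, here is a minimal toy sketch. It is not my actual model: the exponential form, the growth constant, and the knowledge levels are all assumptions chosen purely for illustration.

```python
# Toy illustration of point 3: if the usefulness of a unit of preparation
# grows steeply with knowledge of how AGI will actually be built, then effort
# spent after that knowledge exists can dwarf effort spent now.
# The functional form and all numbers below are made up for illustration only.

def prep_value(knowledge: float, growth: float = 1000.0) -> float:
    """Value of one unit of preparation at a given knowledge level (0 to 1)."""
    return growth ** knowledge

value_now = prep_value(0.1)    # little is known about how AGI will be built
value_later = prep_value(0.9)  # most of the relevant knowledge exists

print(f"value of a unit of preparation now:   {value_now:.1f}")
print(f"value of a unit of preparation later: {value_later:.1f}")
print(f"ratio (later / now):                  {value_later / value_now:.0f}x")
```

Whether the value of preparation actually behaves anything like this is exactly what is in dispute; the sketch only shows the structure of the comparison being made.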
Is this something you're willing to bet on?
In principle, of course, but how? There are various practical obstacles such as:
Are such bets legal?
How do you compel people to pay up?
Why would someone on the other side of the bet want to take it?
I don't have spare money to be throwing at Internet stunts where there's a decent chance that, e.g., someone will just abscond with my money and I'll have no recourse (or at least nothing cost-effective)
If it's a bet of the form where, if AGI isn't invented by January 1, 2036, people have to pay me a bunch of money (and vice versa), of course I'll gladly accept such bets in large sums.
I would also be willing to take bets of that form for good intermediate proxies for AGI, which would take a bit of effort to figure out, but that seems doable. The harder part is figuring out how to actually structure the bet and ensure payment (if this is even legal in the first place).
From my perspective, it's free money, and I'll gladly take free money (at least from someone wealthy enough to have money to spare; I would feel bad taking it from someone who isn't financially secure). But even though similar bets have been made before, people still don't have good solutions to the practical obstacles.
I wouldn't want to accept an arrangement that would be financially irrational (or illegal, or not legally enforceable), though; that would amount to essentially burning money to prove a point. It would be silly, and I don't have that kind of money to burn.
Also, if I were on the low-probability end of a bet, I'd be more worried about the risk of measurement or adjudicator error where measuring the outcome isn't entirely clear cut. Maybe a ruleset could be devised that is so objective, and that so well captures whether AGI exists, that this concern isn't applicable. But if there's an adjudication/error risk of (say) 2 percent and the error is equally likely on either side, it's much more salient to someone betting on (say) under 1 percent odds.
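To show why adjudication error looms so large for the low-probability side, here is a rough back-of-the-envelope calculation. The 2 percent adjudication-error risk and the under-1-percent odds on the event come from the example above; the even split of the error between false positives and false negatives is my reading of "equally likely on either side", and the specific 0.5 percent figure for the event is just an illustrative choice.

```python
# Rough sketch of the adjudication-error point. I (the "no near-term AGI"
# side) think the event itself is very unlikely, but the bet can also resolve
# the wrong way by mistake. Numbers are illustrative, taken from the example
# above; the even split of the error between directions is an assumption.

p_event = 0.005      # my odds on AGI arriving by the deadline (under 1%)
p_adj_error = 0.02   # chance the ruleset/adjudicator resolves incorrectly
p_false_positive = p_adj_error / 2   # wrongly resolved as "AGI happened"
p_false_negative = p_adj_error / 2   # wrongly resolved as "AGI didn't happen"

# I lose the bet if AGI happens and is adjudicated correctly, or if AGI
# doesn't happen but the bet is wrongly adjudicated as if it did.
p_lose = p_event * (1 - p_false_negative) + (1 - p_event) * p_false_positive
share_from_error = (1 - p_event) * p_false_positive / p_lose

print(f"chance I lose the bet:           {p_lose:.2%}")
print(f"share due to adjudication error: {share_from_error:.0%}")
```

On these assumed numbers, roughly two thirds of my chance of losing would come from mis-adjudication rather than from the event itself, which is why the concern is much more salient on the low-probability side.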