I was going to reply with something longer here, but I think Gregory Lewis' excellent comment highlights most of what I wanted to say, re: titotal does actually give an alternative suggestion in the piece.
So instead I'll counter two claims I think you make (or imply) in your comments here:
1. A shoddy toy model is better than no model at all
I mean, this seems clearly not true, if we take "model" to refer to the sort of formalised, quantified exercise similar to AI 2027. One example here might be Samuelson's infamous predictions of the Soviet Union inevitably overtaking the US in GNP.[1] This was a bad model of the world, and even if it was "better" than the available alternatives or came from a more prestigious source, it was still bad, and I think worse than no model (again, defined as a formal exercise à la AI 2027).
A second example is the infamous Growth in a Time of Debt paper, which I remember being used to win arguments and justify austerity across Europe in the 2010s, and which was rendered much less convincing after an Excel error was corrected.[2]
TL;DR: as Thane said on LessWrong, we shouldn't grade models on a curve.
2. You need to base life decisions on a toy model
This also seems clearly false, unless we're stretching "model" to mean simply "a reason/argument/justification" or defining "life decisions" narrowly as only those with enormous consequences instead of any "decision about my life".
Even in the more serious cases, the role of models is to support arguments for or against some decision, or to frame some explanation of the world. Simplification and quantification can of course be useful and powerful, but they shouldn't be the only game in town; other schools of thought are available.[3]
[2] The reproduction-paper-turned-critique is here; it feels crazy that I can't see the original data, but the "model" here seemed to be just a spreadsheet of ~20 countries where the average only counted 15 of them.
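To make this concrete, here's a minimal sketch in Python of how an averaging formula whose range silently skips five of twenty rows can flip the sign of the headline number. The growth figures are hypothetical, chosen only to illustrate the mechanism; they are not the paper's actual data.

```python
# Minimal sketch of the kind of spreadsheet error described above.
# Hypothetical growth figures, not the paper's data.

growth = [4.0, 3.5, 3.0, 2.5, 2.0,        # five rows a correct formula includes
          0.5, -1.2, 0.3, -0.8, 1.1,
          -2.0, 0.9, -0.4, 0.2, -1.5,
          0.7, -0.6, 1.3, -0.9, 0.9]      # 20 "countries" in total

correct = sum(growth) / len(growth)          # average over all 20 rows
buggy = sum(growth[5:]) / len(growth[5:])    # range mistakenly starts at row 6

print(f"correct average (20 rows): {correct:+.2f}")  # ~ +0.7
print(f"buggy average (15 rows):   {buggy:+.2f}")    # ~ -0.1
```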
[3] Such as:
- Decision Making under Deep Uncertainty
- Do The Math, Then Burn The Math and Go With Your Gut
- Make a decision based on the best explanation of the world
- Go with common-sense heuristics, since they likely encode knowledge gained from cultural evolution
> This also seems clearly false, unless we're stretching "model" to mean simply "a reason/argument/justification"

Yep, this is what I meant, sorry for the confusion. Or to phrase it another way: "I'm going off my intuition" is not a type of model which has privileged epistemic status; it's one which can be compared with something like AI 2027 (and, like you say, may be found better).