I posted a short version of this, but I think people found it unhelpful, so I’m trying to post a somewhat longer version.
I have seen a number of papers and talks broadly in the genre of “academic economy”.
My intuition based on that is that they often consist of projecting complex reality into a space of a single-digit number of real dimensions plus a bunch of differential equations.
The culture of the field often signals that solving the equations is the profound/important part, and that how you do the projection “world → 10d” is less interesting.
In my view, for practical decision-making and world-modelling it’s usually the opposite: the really hard and potentially profound part is the projection. Solving the maths is often in some sense easy, at least in comparison to the best maths humans are doing.
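To make the “projection” part concrete, here is a toy sketch of the genre (my own illustration, not the model from the paper; the functional forms and numbers are made up): the whole economy is projected onto two real numbers, capital K and technology A, plus a handful of constants, and everything after that is a differential equation a solver handles for you.

```python
# Toy illustration of the "world -> a few reals -> differential equations" genre.
# NOT the model from the paper; functional forms and parameter values are made up.
from scipy.integrate import solve_ivp

# The projection: the whole economy reduced to two state variables,
# capital K and technology A, plus a handful of constants.
s, delta, alpha, g = 0.25, 0.05, 0.33, 0.02  # savings rate, depreciation, capital share, tech growth

def dynamics(t, state):
    K, A = state
    Y = A * K**alpha          # output, with labour normalised away
    dK = s * Y - delta * K    # capital accumulation
    dA = g * A                # exogenous technological progress
    return [dK, dA]

# Solving the equations is the comparatively easy part.
sol = solve_ivp(dynamics, t_span=(0, 100), y0=[1.0, 1.0])
print(sol.y[:, -1])           # the state of the "world" after 100 periods
```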
While I overall think the enterprise is worth pursuing, people should in my view have a relatively strong prior that for any conclusion which depends on the “world → reals” projection, there could be many alternative projections leading to different conclusions. While I like the effort in this post to dig into how stable the conclusions are, in my view people who do not have cautious intuitions about the space of “academic economy” models could still easily over-update or trust the robustness too much.
If people are not sure, an easy test could be something like “try to modify the projection in some way so that the conclusions no longer hold”. This will usually not lead to an interesting or strong argument, since it’s just trying some semi-random moves in the model space, but it can lead to better intuition. (A minimal sketch of such a poke is below.)
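As an illustration of that kind of cheap poke (again on the toy model above, not on the paper’s model): change one semi-random choice in the projection, here whether technology growth is exogenous or depends weakly on the capital stock, and check whether a qualitative conclusion such as “higher savings leads to higher output at the end of the horizon” survives.

```python
# Cheap robustness poke on the toy model above (purely illustrative, not the paper's model):
# change one choice in the "projection" and see whether a qualitative conclusion flips.
from scipy.integrate import solve_ivp

def run(savings, tech_depends_on_capital):
    # The projection: the world reduced to capital K and technology A.
    delta, alpha, g = 0.05, 0.33, 0.02

    def dynamics(t, state):
        K, A = state
        Y = A * K**alpha
        dK = savings * Y - delta * K
        # One semi-random move in model space: let technology growth depend
        # (weakly) on the capital stock instead of being exogenous.
        dA = g * A * (K**0.1 if tech_depends_on_capital else 1.0)
        return [dK, dA]

    sol = solve_ivp(dynamics, (0, 100), [1.0, 1.0])
    K, A = sol.y[:, -1]
    return A * K**alpha  # output at the end of the horizon

for variant in (False, True):
    low, high = run(0.2, variant), run(0.3, variant)
    print(f"tech_depends_on_capital={variant}: higher savings -> higher output? {high > low}")
```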
I tried to do a few tests in a cheap and lazy way (e.g. what would this model tell me about running at night on a forested slope?) and my intuition was:
I agree with the cautious reading: the work in the paper represents very weak evidence for conclusions which follow only from the detailed assumptions of the model in the present post. (At the same time, it can be an excellent “academic economy” paper.)
I’m more worried about other writing about the results, such as the linked post on Phil’s blog, which in my reading signals “these results are robust” more strongly than is safe.
Harder and more valuable work is to point to something like “some of the most significant ways in which the projection fails” (aspects of reality you ignored, etc.). In this case this was done by Carl Shulman, and it’s worth discussing further.
In practice I do have some worries about a meme along the lines of ‘ah, we don’t know, but given we don’t know, speeding up progress is likely good’ (as proved in this good paper) being created in the EA memetic ecosystem. (To be clear, I don’t think the meme would reflect what Leopold or Ben believe.)
>academic economy
Do you mean “academic economics”?