Strong-upvoted, though I couldn’t decide whether to disagreevote or not. I agree with the points you list under meta-uncertainty, and with your point about naively using calibration as a proxy for forecasting ability + thinking you can bet on the end of the world by borrowing money. I disagree with your thoughts on ethics (I’m sympathetic to Zvi’s writing on EAs confusing the map for the territory).
What’s the best thing to read on “Zvi’s writing on EAs confusing the map for the territory”? Or at least something good?
I’m not sure what would be the best thing, since I don’t remember there being a particular post about this. However, he talks about it in his book review of Going Infinite, and I also like his post “Altruism is Incomplete.” Lots of people I know find his writing confusing, though, and it’s not like he’s rigorously arguing for something. When I agree with Zvi, it’s usually because I’ve had that belief in the back of my mind for a while and his pointing it out makes it more salient, rather than because I was convinced by a particular argument he was making.