So I haven’t read your whole post (apologies), but:
I also found Holden’s argument related to the “Most Important Century” rather confusing. My theory was that Holden was responding to arguments along the lines of what Robin Hanson and some other economists have made, but without having taken the time to step through and explain those arguments. So for me the experience was one of jumping into the middle of a conversation (even though I had heard some of Hanson’s arguments before, I was somewhat vague on them, and it took me some time to figure out that I needed to understand Holden’s argument in that context).
I’ve generally found Holden’s other posts to be much less confusing (although it sounds like you differ here). So from my perspective, Holden is a skilled communicator who in this instance was probably so enmeshed in a particular set of ongoing conversations that it didn’t occur to him that he needed to explain certain details.
In that case, it’s weird that the post is highlighted in the navigation on Karnofsky’s blog and described as “the core of [his] AI content”. That framing strongly implies it is a foundational argument, from one of the most influential people in EA, for why we should be concerned about AI. If so, it should either be held to fairly high standards or be replaced.