Noted! The key point I was trying to make is that I think it would be helpful for the discourse to separate 1) how one would act within a given frame and 2) why one thinks each frame is more or less likely (which is more contentious and easily gets a bit political). Since your post aims at the former, and the latter has been discussed at more length elsewhere, it would make sense to further de-emphasize the latter.
1) how one would act within a given frame and 2) why one thinks each frame is more or less likely (which is more contentious and easily gets a bit political). Since your post aims at the former
My post aims at both. It is a post about how to think about AI, and a large part of that is establishing the “right” framing.