Do I view every problem in my life and my community as analogous or bearing surprising similarity to the alignment problem?
This made me laugh.
But also, as I said at the top of the post, I actually do think the alignment problem bears surprising similarities to other things, though this is mainly because general ideas about complex systems pertain to both.
[This comment isn’t meant to signal any opinion about the rest of your post.]
Carlsmith’s report in particular is highly interdisciplinary, drawing on technical AI, economics, and philosophy, but it doesn’t make many claims that are strictly technical AI or economics claims. It’s not clear who would be best qualified to write it, but in general a philosopher doesn’t seem like a bad choice. In fact, I’d expect the average philosopher with strong quantitative skills to do better at this than the average economist, and certainly better than the average AI researcher.
Whether a more experienced philosopher should have done it is another question, but I’d imagine that even with money, Open Phil can’t summon very experienced experts to write reports for them at the drop of a hat.