I share Max’s sense that those two examples fit, while the particulars of how best to align AI wouldn’t.
It also seems worth noting that Bostrom/FHI were among the most prominent champions (though not the sole originators) of the view that “AI may be an important cause area”, and that Bostrom was the lead author of the unilateralist’s curse paper. Bostrom and FHI themselves describe a big part of what they do as macrostrategy. (I think they may also have coined the term, though I’m unsure.)
In that case, I stand corrected on the unilateralist’s curse; I thought it was more mainstream.