My philosophical axioms relevant to EA are largely utilitarian, as long as that doesn't conflict with truthfulness. To be clear, though, I am not a moral realist!
My interests are:
- forecasting
- animal welfare
- politics (unfortunately)
- intelligence research
Many EAs wanted to slow down AGI development to buy more time for alignment. Trump's tariffs have now done exactly that, accidentally and for the wrong reasons, but they did slow it down. Yet no EA seems happy about this. Given how unpopular the tariffs are, maybe people don't want to endorse them for PR reasons? But if you think AI is by far the most important issue, that should easily lead you to say the unpopular truth. Scenarios where China reaches AGI before the US have become more likely, but that was always an argument against an AI slowdown, and it didn't seem to convince many people in the past.
Thoughts?
Maybe this post belongs in an AI safety thread, but I wasn't sure exactly where.