This makes a lot of sense, thanks so much!
I think I agree with this point, but in my experience I don’t see many AI safety people using these inferentially-distant/extreme arguments in outreach. That’s just my very limited anecdata though.