Ben Garfinkel's arguments pushing back against the Bostrom-Yudkowsky view of AI.
I don't know to what extent this depends on the fact that researchers like me argue for alignment by default, but I want to note that, as I understand it, my views do not support patient longtermism. (Though I have not read e.g. Phil Trammell's paper.)
As the post notes, it's a spectrum: I would not argue that Open Phil should spend a billion dollars on AI safety this year, but neither would I argue that Open Phil should take fewer opportunities than it currently does, nor would I recommend that individuals save their money rather than donate to x-risk orgs.