However I still find myself reluctant to put AI as my priority despite knowing these things.
One way out is to simply not make AI your own, personal priority (as opposed to, say, "the wider EA community's priority", a separate question altogether). 80,000 Hours' problem profiles page, for instance, explicitly says that their list of the most pressing world problems, where AI risk features at the top, is
ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar
which is an assumption that often doesn't hold, as they themselves clarify in their problem framework:
While personal fit is not assessed in our problem profiles, it is relevant to your personal decisions. If you enter an area that you find totally demotivating, then you’ll have almost no impact.
Given the ostensible reluctance in your post, I’m not sure that you yourself should make AI safety work your top priority (although you can still e.g. donate to the Long-Term Future Fund, one of GWWC’s top recommendations in this area, and read Holden’s writing and discuss it with others, and so on, none of which require such drastic re-prioritization).
Also, since other commenters and answerers will likely supply materials in support of prioritizing AI safety, for the sake of good epistemics I think it's worth signal-boosting a good critique of it, so consider checking out Nuno Sempere's My highly personal skepticism braindump on existential risk from artificial intelligence.