First, I’m sorry you’ve had this bad experience. I’m wary of creating environments that put a lot of pressure on young people to come to particular conclusions, and I’m bothered when AI Safety recruitment takes place in more isolated environments that minimize inferential distance, because it means new people aren’t figuring it out for themselves.
I relate a lot to the feeling that AI Safety invaded as a cause without having to prove itself in many of the ways the other causes had to rigorously prove impact. No doubt it’s the highest-prestige cause, and attractive to think about (math, computer science, speculating about ginormous long-term impact) in ways that global health or animal welfare work often is not. (You can even basically work on AI capabilities at a big fancy company while getting credit from EAs for doing the most important altruism in the world! There’s nothing like that for the other causes.)
Although I have my own ideas about some bad epistemics going on with prioritizing AI Safety, I want to hear your thoughts about it spelled out more. Is it mainly the deference you’re talking about?