Yeah, this is kind of weird; I had a similar experience. A friend of mine was also interested in AI risk and served on the board of a good university's international relations publication. I tried to find someone interested in writing a paper, but no dice.
Slowing down AGI
Strongly against. This is the #1 reason that AI scientists in academia and government are adversarial toward public awareness of AI risks: they worry about losing the funding and research progress needed to combat short- and medium-term problems.
Pushing AI progress forward is extremely important to everyone working in AI, whereas slowing it down is only vaguely and uncertainly valuable to people worried about AI. So it's a poor point to pick a fight over.
Honest question: can we invest in making more Stuart Russells (i.e., safety-oriented authority figures in AI)? Can we use our connections in academia to give promising EAs big prestige-building opportunities, such as conference invites, publication opportunities, scholarships, research and teaching positions, and co-authorships? (And can we do this more in general?)
It's already a problem that AI safety researchers are cited only by other AI safety researchers and are perceived as an island community.
That said, it would of course be good for more AI safety people to enter computer science research and academia.