The second video seems really interesting to me, as someone who's into moral philosophy. The first video personally falls into "it's bad on purpose to make you click" territory, though.
If you watch from where I suggest in the link, I think it's less bad than you make out.
I skimmed from 37:00 to the end. It wasn't anything groundbreaking. There was one incorrect claim ("AI safetyists encourage work at AGI companies"), I think her apparent moral framework, which puts disproportionate weight on negative impacts on marginalised groups, is not good, and overall she comes across as someone who has only just begun thinking about AGI x-risk and so seems a bit naive on some issues. However, "bad on purpose to make you click" is very unfair.

But also: she says that hyping AGI encourages races to build AGI. I think this is true! Large language models at today's level of capability, or even somewhat higher than this, are clearly not a "winner takes all" game; it's easy to switch to a different model that suits your needs better, and I expect the most widely used systems to be the ones that work best for what people want them to do. While it makes sense that companies will compete to bring better products to market faster, it would be unusual to call this activity an "arms race". Talking about arms races makes more sense if you expect future AI systems to offer advantages much more decisive than typical "first mover" advantages, and that expectation is driven by somewhat speculative AGI discourse.
She also questions whether AI safetyists should be trusted to improve the circumstances of everyone vs their own (perhaps idiosyncratic) priorities. I think this is also a legitimate concern! MIRI were at some point apparently aiming to 1) build an AGI and 2) use this AGI to stop anyone else building an AGI (Section A, point 6). If they were successful, that would put them in a position of extraordinary power. Are they well qualified to do that? I'm doubtful (though I don't worry about it too much, because I don't think they'll succeed).
There was one incorrect claim ("AI safetyists encourage work at AGI companies")

"AI safetyists" absolutely do encourage work at AGI companies. To take one of many examples, 80,000 Hours are "AI safetyists", and their job board currently encourages work at OpenAI, DeepMind, and Anthropic, which are AGI companies.
(I haven't watched the video.)
Fair enough; she mentioned Yudkowsky before making this claim, and I had him in mind when evaluating it. (Incidentally, I wouldn't mind picking a better name for the group of people who do a lot of advocacy about AI x-risk, if you have any suggestions.)