Benjamin—thanks for a thoughtful and original post. Much of your reasoning makes sense from a strictly financial, ROI-maximizing perspective.
But I don’t follow your logic in terms of public sentiment regarding AI safety.
You wrote: 'Second, an AI crash could cause a shift in public sentiment. People who've been loudly sounding caution about AI systems could get branded as alarmists, or people who fell for another "bubble", and look pretty dumb for a while.'
I don’t see why an AI crash would turn people against AI safety concerns.
Indeed, a logical implication of our ‘Pause AI’ movement, and the public protests against AI companies, is that (1) we actually want AI companies to fail, because they’re pursuing AGI recklessly, (2) we are doing our best to help them to fail, to protect humanity, (3) we are stigmatizing people who invest in AI companies as unethical, and (4) we hope that the value of AI companies, and the Big Tech companies associated with them, plummets like a rock.
I don’t think EAs can have it both ways—profiting from investments in reckless AI companies, while also warning the public about the recklessness of those companies. There might be a certain type of narrow, short-sighted utilitarian reasoning in which such moral hypocrisy makes sense. But to most people, who are intuitive virtue ethicists and/or deontologists, investing in companies that impose extinction risk on our species, just in hopes that we can make enough money to help mitigate those extinction risks, will sound bizarre, contradictory, and delusional.
If we really want to make money, just invest like normal people in crypto when prices are low, and sell when prices are high. There’s no need to put our money into AI companies that we actually want to fail, for the sake of human survival.
I should maybe have been more cautious—how messaging will pan out is really unpredictable.
However, the basic idea is that if you’re saying “X might be a big risk!” and then X turns out to be a damp squib, it looks like you cried wolf.
If there’s a big AI crash, I expect there will be a lot of people rubbing their hands saying “wow those doomers were so wrong about AI being a big deal! so silly to worry about that!”
That said, I agree that if your messaging is just "let's end AI!", then there are some circumstances under which you could look better after a crash, especially if it looks like your efforts contributed to it, or if AI failed for the reasons you predicted or the things you were protesting about (e.g. accidents happening, causing it to get shut down).
However, if the AI crash happens for unrelated reasons (e.g. the scaling laws stop working, or it takes longer to commercialise than people hope), then I think the Pause AI people could also look silly: why did we bother slowing down the mundane utility we could get from LLMs if there was no big risk?