I’d like to contribute my two cents in the form of a meta-comment on the discussion above, particularly on the points made by @Yarrow Bouchard 🔸 and @David Mathers🔸.
What you two are doing is the very valuable job of sifting through evidence and signals about factors that could either stall or accelerate progress towards AGI, and then weighing which of that evidence deserves more credence when we think about timelines. That analysis ultimately informs very pragmatic, day-to-day decisions, such as how we should best spend our money and time to have the best shot at doing good for humanity.
I have my own views on each individual point you raised, but regardless of my opinions, I’d like to talk about the practical uses of such analyses and about the next step: namely, what do we do with all this?
My best shot at a guiding principle for action in times of uncertainty is to ask the following question:
What actions can I take so that, even if my ‘predictions’ turn out to be completely wrong, those actions are still very likely to have a positive impact on humanity, or at least not be wasted?
In light of this:
- Investing tons of money into the stocks and shares of frontier AI companies would not be a wise course of action, because if an AI bubble popped, I would have lost valuable money that I could instead have invested in other impactful causes.
- Investing in AI safety and AI security in the broad sense would be a wise course of action, because even if we turn out to be massively wrong about when AGI will come, our safety investments would not be wasted and would still deliver real benefits to society (e.g. improved democratic processes around AI policy, better cybersecurity, better biorisk security, etc.).
To illustrate this further, let’s exaggerate ad absurdum and imagine for a moment that, despite all the evidence we thought we had, human-made climate change were actually a hoax, and the planet were fine without any intervention.
Even if that were the case, the efforts made to combat climate change, such as getting people and companies to pollute less and preserving spaces for nature and biodiversity, would still not have been in vain, because in doing so we would have delivered really nice things for people and animals.
In other words, I think we should try to pick actions that address the worst-case scenario but also wouldn’t go to waste if we turned out to be massively wrong about how likely that worst-case scenario is.
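To make this principle concrete, here is a minimal sketch in Python with entirely made-up scenario probabilities and payoffs (none of these numbers come from the discussion above); it just contrasts picking the action with the best expected payoff against picking the action whose payoff stays positive even if the forecast is badly wrong:

```python
# Hypothetical scenarios and probabilities (illustrative only)
scenarios = {"AGI_soon": 0.5, "AGI_far_or_never": 0.5}

# Hypothetical payoffs of each action under each scenario (illustrative only)
payoffs = {
    "all_in_on_frontier_AI_stocks": {"AGI_soon": 40, "AGI_far_or_never": -20},
    "broad_AI_safety_and_security": {"AGI_soon": 10, "AGI_far_or_never": 4},
}

for action, by_scenario in payoffs.items():
    # Probability-weighted average payoff across scenarios
    expected = sum(scenarios[s] * v for s, v in by_scenario.items())
    # Payoff in the scenario where this action does worst
    worst = min(by_scenario.values())
    print(f"{action}: expected={expected:.1f}, worst_case={worst}")
```

With these toy numbers the stock bet looks better in expectation, but the broad safety investment is the one that "doesn't go to waste" (its worst-case payoff is still positive), which is the property the principle above is asking for.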
Thank you for your supportive comment. I think David Mathers is an exceptionally and commendably valuable contributor to the EA Forum in terms of engaging deeply with the substance of arguments around AI safety and AGI forecasting. David engages in discussions with a high level of reasoning transparency, which I deeply appreciate. It isn’t always clear to me why people who fall on the opposite side of debates around AI safety and AGI forecasting believe what they do, and talking to David has helped me understand this better. I would love to have more discussions about these topics with David, or with interlocutors like him. I feel as though there is still much work to be done in bringing the cruxes of these debates into sharp relief.
The EA Forum has a little-used “Dialogues” feature that I think has some potential. Anyone who would be interested in having a Dialogue on AGI forecasting and/or AGI safety should send me a private message.
On to the rest of your comment:
I think the current investments in AGI safety will end up being wasted. It’s a bit like paying philosophers in the 1920s to think about how to mitigate social media addiction, years before the first proper computer was built, and even before the concept of a Turing machine was formalized. There is simply too much we don’t know about how AGI might eventually be built.
Conversely, investments in narrow, prosaic “AI safety”, like making LLM chatbots less likely to give people dangerous medical advice, are modestly useful today but will have no applicability to AGI much later on. Other than sharing the name “AI” and running on computers, probably using some sort of connectionist architecture, I don’t think today’s AI systems will bear any meaningful resemblance to AGI, if it is eventually created.