I enjoyed reading your insightful reply! Thanks for sharing, Guillaume. You don’t make any arguments I strongly disagree with, and you’ve added many thoughtful suggestions with caveats. The distinction you make between the two sub-questions is useful.
I am curious, though, about what makes you view capacity building (CB) in a more positive light compared to other interventions within AI safety. As you point out, CB also has the potential to backfire. I would even argue that the downside risk of CB might be higher than that of other interventions because it increases the number of people taking the issue seriously and taking proactive action—often with limited information.
For example, while I admire many of the people working at PauseAI, I believe there are quite a few worlds in which those initially involved in setting up the group turn out to have had a net-negative impact. Even early on, there were indications that some people were okay with using violence or radical methods to stop AI (advocacy the organizers then banned). However, what happens if these tendencies resurface when “shit hits the fan”? To push back on my own thinking: it might still be a good idea to work on PauseAI because of the community-diversification argument within AI safety (see footnote 2).
I agree that other forms of CB, such as MATS, seem more robust. But even here, I can always find compelling arguments for why I should be clueless about the expected value. For instance, an increased number of AI safety researchers working on solving an alignment problem that might ultimately be unsolvable could create a false sense of security.
However, what happens if these tendencies resurface when “shit hits the fan”?
I don’t think this could be pinned on PauseAI, given that PauseAI has at no point advocated or condoned violence. Many (basically all?) political campaigns attract radical fringes; non-violent moderates aren’t responsible for them.
I am curious, though, about what makes you view capacity building (CB) in a more positive light compared to other interventions within AI safety. As you point out, CB also has the potential to backfire. I would even argue that the downside risk of CB might be higher than that of other interventions because it increases the number of people taking the issue seriously and taking proactive action—often with limited information.
Yeah, just to clarify: CB is not necessarily better than other interventions, but CB with low backfire risk could be promising. That does not necessarily mean community building, since community building can backfire depending on how it is done (for example, if it is done in a very expansive, non-careful way, it could backfire more easily). I think the PauseAI example you gave is a good example of a potentially non-robust intervention; at the very least, I would not count it as a low-backfire-risk capacity-building intervention.
One motivation for CB is to put ourselves in a better position to pursue some intervention if we end up less clueless. It might be that we don’t in fact end up less clueless, and that even after doing CB there are still no robust interventions we can pursue. In that case, the best option would be to pursue determinately good short-term interventions even after doing CB (but then we have paid the opportunity cost of the resources spent on CB rather than spending them on those interventions directly).
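To make that opportunity cost explicit, here is a toy expected-value comparison (a minimal sketch; the symbols are illustrative assumptions, not from any formal model). Let $p$ be the probability that CB actually leaves us less clueless, $V_{\text{robust}}$ the value of the robust intervention we could then pursue, $V_{\text{direct}}$ the value of the determinately good short-term intervention, and $C_{\text{CB}}$ the opportunity cost of the resources spent on CB. Doing CB first looks worthwhile roughly when

$$p \cdot V_{\text{robust}} + (1-p) \cdot V_{\text{direct}} - C_{\text{CB}} > V_{\text{direct}},$$

which simplifies to $p \cdot (V_{\text{robust}} - V_{\text{direct}}) > C_{\text{CB}}$: CB pays off only if the chance of becoming less clueless, times the gain from the better-targeted intervention, exceeds what we give up by not doing the direct good work now.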
I am still uncertain about low-backfire CB interventions (ones that are better than doing something good directly). Perhaps some way of increasing capital, or well-targeted community building, could be good examples, but it seems like an open question to me.