I agree with some of what you say, but find myself less concerned about some of the trends. This might be because I have a higher tolerance for some of the traits you argue are present, and because AI governance, where I'm mostly engaged now, may just be a much more uncertain topic area than other parts of EA, given how new it is. Also, while I identify a lot with the community and am fairly engaged (I was a community leader for two years), I don't engage much on the forum or online, so I might be missing a lot of context.
I worry about the framing of EA as not having any solutions, and about the argument that we should just focus on identifying the right paths without taking any real-world action on the hypotheses we currently have for impact. To understand things like government, and to refine community views on how to affect it and what should be affected, I think we need to engage. Engaging quickly exposes our ignorance and forces us to be beholden to the real world, not to mention that it gives us a lot of reason to interact with people outside the community.
Once a potential path to impact is identified and thought through to a reasonable extent, it seems almost necessary to take steps to implement it as the next step in determining whether it is a fruitful thing to pursue. Granted, after some time we should step back and re-evaluate, but while you are pursuing the objective it's not feasible to be second-guessing yourself constantly (a similar idea to Nate Soares' post Diving in).
That said, it seems useful for people on the outside to have a clearer view of just how uncertain things are. When I began engaging with AI governance, it took me a long time to realize just how little we know about what we should be doing. This was despite explicit statements by people like Carrick Flynn in a post on the forum saying how little we know, and a research agenda which is mainly questions about what we should do. I'm not sure what more could be done, as I think it's normal to assume people know what they're doing; for me this was only solved by engaging more deeply with the community (though I now think I have a healthier understanding of just how uncertain most topic areas are).
I guess a big part of the disagreement here might boil down to how uncertain we really are about what we are doing. I would agree a lot more with the post if I were less confident about what we should be doing in general (and again, I frame this mostly in the AI governance area, as it's what I know best). The norms you advocate are mostly about maintaining cause agnosticism and focusing on deliberation and prioritization (right?) as opposed to being more action-oriented. In my case, I'm happier with the action-prioritization balance I observe than I guess you are (though I'm of course not as familiar with how the balance looks elsewhere in the community, and I don't read the forum much).
I think I agree with all of what you say. A potentially relevant post is The Values-to-Actions Decision Chain: a lens for improving coordination.
"despite explicit statements by people like Carrick Flynn in a post on the forum saying how little we know, and a research agenda which is mainly questions about what we should do"
Just in case future readers are interested in having the links, here are the post and agenda I'm guessing you're referring to (feel free to correct me, of course!):
Personal thoughts on careers in AI policy and strategy
AI Governance: A Research Agenda
That said, I do agree we should work to mitigate some of the problems you mention. It would be good to make it clearer to people how uncertain things are, to avoid groupthink and over-homogenization. I don't think we should expect to diverge very much from how other successful movements have operated in the past, as there's not really precedent for that working, though we should strive to test things out and push the boundaries of what works. In that respect, I definitely agree we should get a better idea of how homogeneous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).