It’s not clear to me how you can exclude racists if you “explicitly don’t want agreement”. Presumably there are heuristics for knowing what’s Overton-violating and what isn’t, but you need to be specific about how you can improve on what people already do. I don’t think the idea of “applause lights” was strawmanning the top-level post at all, but I can see that your views in the comments are more detailed and careful than that.
Sorry about conflating ERS and EA; I get that it can be a different stream.
I’m not incurious about certain methodological, sociology-of-science, and metascience opportunities to improve the x-risk community. I just need to see the specifics! For example, I am incurious about Frankfurt School stuff because it tends to be rather silly, but presumably lots of people are working on peer review and career clout to fix its downsides (seems like an economics puzzle to me), and I’m very curious about that.
Firstly, I should say the vagueness, whilst frustrating, is there to both reflect and open up a discussion; it can appear ‘applause-lighty’ if one doesn’t recognise that this is the point of the statement, but I’m not quite sure how it does if you see the statement as providing a statement of intent and a justification for positions that ought to be debated, contested and refined.
On the topic of excluding racists, I think it is basically possible to do, given how strongly racism discourages a huge range of people from engaging with the community and doing good work. In my mind it’s clearly motivated by deep concerns: that the racism is unethical, and that the future which may emerge from a community where racism is so prevalent is deeply problematic. That said, I’m pretty sure it could also be justified from a purely utilitarian perspective, given the negative impact racism has on the community’s ability to function.
I think your demand for specifics here is both admirable (and I can give some that I agree with) and a little beside the point. One of the points we are making is that the community at present doesn’t allow practitioners to explore the sorts of methods we could be using, and many important concepts or assumptions that could aid us just aren’t present in this space (one example: I bet that if there had been a community with active discussions around both geoengineering and AI, the idea of simply pausing or stopping AGI development would have been explored much earlier than it was in ERS).
Now onto different conceptual bases. A few examples:

- We could use concepts from DRR like vulnerability or exposure much more; I think a vulnerability-focused account of xrisk would look very different.
- We could map the different xrisk cascades using causal mapping and find the best points of leverage within them (a rough sketch of this idea follows below).
- We could expand an ‘agents of doom’-style agenda to target the organisations that most effectively produce xrisk, and study specifically how to reduce their power to cause risk.
- Black Swan approaches to xrisk may look different again.
- We may work from assumptions, either ethical (like RCG does) or epistemic (kind of like I do), that GCR is not very separable from xrisk, so looking at cascades through critical systems may be deeply important.
- One may take STS-inspired approaches, either directly using methods from there (e.g. ANT or ethnography) or better utilising their concepts.
- As said earlier, I’d be interested in a Burkean political philosophy of xrisk.

These are just the research agendas I may be excited about, which is a very narrow slice of what is possible or optimal. The problem is that none of these research agendas have the scope and airtime to develop, because current communal structures are ill-designed to allow this to happen, and we think the move to pluralism we lay out (which this forum clearly disagrees with) could allow many very useful and exciting research agendas to form.
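To make the causal-mapping bullet slightly more concrete, here is a minimal sketch of what finding leverage points in a cascade graph could look like. The node names, the edges, and the choice of betweenness centrality as the leverage metric are all illustrative assumptions on my part, not anything the post itself proposes; a real mapping would come from expert elicitation or literature review.

```python
# A minimal sketch: represent hypothesised xrisk cascades as a directed
# graph and rank nodes by betweenness centrality as candidate leverage
# points. All nodes and edges below are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("crop failure", "food insecurity"),
    ("pandemic", "supply chain disruption"),
    ("supply chain disruption", "food insecurity"),
    ("food insecurity", "state collapse"),
    ("state collapse", "great power conflict"),
    ("great power conflict", "nuclear exchange"),
])

# Nodes that sit on many cascade pathways score highly and are
# candidate points of leverage for intervention.
leverage = nx.betweenness_centrality(G)
for node, score in sorted(leverage.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```

Under these assumptions, intermediate nodes like “food insecurity” come out on top, which matches the intuition that intervening where cascades converge buys the most risk reduction per unit of effort.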
Thanks for elaborating! Quick initials check:
DRR
RCG
STS (this one’s familiar, sorta on the tip of my tongue, but not quite there)
ANT
Apologies, my bad!
DRR = Disaster Risk Reduction
RCG = Riesgos Catastróficos Globales
STS = Science and Technology Studies / Science, Technology and Society
ANT = Actor-Network Theory