Talking about “there should be spaces to articulate hunches”, I am now taking the plunge and speed-writing a few things that have felt slightly off in EA over the last few years. They are definitely not up to the standard of thought-through critique (I’ve tried to indicate confidence levels to at least show how they might compare against each other), but I’d like to know if any of them resonate with other people here who might’ve thought about them more. The list isn’t complete, either.
Things that feel off in EA:
Lots of white men working on AI alignment (pretty sure this is not great)
Issues in the community
feeling unwelcome as a woman, and as someone from a non-maths background… why is this still a thing?? >>> which left me feeling quite disillusioned after spending a lot of my early time in the community (very sure)
the negative mental health effects of being surrounded by an optimising mindset all the time: internalising “you need to be better” rather than “you are enough (as a person)”, which maybe results in people being valued/gaining status according to how impactful they are, and thereby conflating worth-as-a-person with worth-of-your-actions? (Very sure, at least in my own experience)
How unaware are we of the place this movement occupies in the grander scheme of things? I sometimes worry about its resemblance to eschatological movements, and also about biases we pretty surely have. (Unsure about the eschatology stuff, but definitely worried about blind spots)
How would a community with a different background approach the question of how to do the most good?
What could we learn from an STS/sociology/anthropology analysis of the movement? (These fields tend to study, e.g., where a (scientific) movement comes from and what its values and assumptions are… I feel like such an outside perspective might reveal some blind spots.)
Relying so heavily on economic tools, and then thinking the solutions are unbiased? (Unsure about the extent to which this is a problem, and to what extent it's inherent in optimising for impact as such, so that you can't avoid it)
Lack of respect for established fields, like the whole risk community, and lack of communication between fields. Not using methods already established in those fields, or making use of their expertise. Not really trying to frame things in a way that would appeal to them. (I'm basing this on conversations with staff members at the Institute for Risk and Disaster Reduction in London, and with STS (science and technology studies) scholars in Edinburgh and elsewhere.) (Fairly sure)
In general, perhaps, neglecting some less “wacky” arguments, like the common-sense argument for x-risk reduction, and giving too little thought to non-utilitarian values (I feel like we're on an okay path here)
I have a weird feeling about the similarities between EA thinking and liberal and libertarian ideology. (Just a hunch)
Maybe something about taking the individual as the unit of analysis, and calculating in terms of aggregate welfare (of individuals)? (Medium sure?)
And, in turn, neglecting other values like justice, equality, etc. I'm kind of confused that there isn't more discussion of these values, at least.
Maybe neglecting goodness of process in favour of goodness of outcome? I'm unsure about this myself, but the kind of thing I'm thinking about is: how important is it that decisions involving all of humanity are made in a democratic/participatory way? I think this is only partially a belief that participation is good in itself, and mostly a hunch that it would increase the quality of the outcome… so it's more like “we should care about quality of process because we actually care about quality of outcome, but if we focus too much on outcome, the outcome will actually be worse”. (Unsure)
Random side note: I’ve been curious about the potential of citizens’ assemblies—maybe as an intervention for improving institutional decision-making? (speculative)