Ideology in EA
I think the “ideology” idea is about the normative specification of what EA considers itself to be, but there seem to be three waves of EA involved here:
the good-works wave, about cost-effectively doing the most good through charitable works
the existential-risk wave, building more slowly, about preventing existential risk
the longtermism wave, some strange evolution of the existential-risk wave, building up now
I haven’t followed the community that closely, but that seems to be the rough timeline. Correct me if I’m wrong.
From my point of view, the narrative of ideology is about the ideological influences behind the obvious biases made public in EA: free-market economics, apolitical charity, the perspective of the wealthy. EAs are visibly ideologues to the extent that they repeat or insinuate narratives commonly heard from ideologues on the right side of the US political spectrum. They tend to:
discount climate change
distrust regulation and the political left
extol or expect the free market’s products to save us (TUA, AGI, …)
be blind to social justice concerns
see the influence of money as virtuous; they trust money, in betting and in life
admire those with good betting skills and compare most decisions to bets
see corruption in government or bureaucracy but not in for-profit business organizations
emphasize individual action and the virtues of enabling individual access to resources
I see those communications made public, and I suspect they come from the influences defining the second and third waves of the EA movement rather than the first, except maybe the influence of probabilism and its Dutch bookie thought experiment. But an influx of folks from the software industry, where just about everyone sees themselves as an individual yet is treated like a replaceable widget in a factory, knows to walk a line, because they’re still well paid. There’s not a strong push toward unions, worker safety, or Luddism. Social justice, distrust of wealth, corruption of business, failures of the free market (for example, regulation-requiring errors or climate change): these are taboo topics among the people I’m thinking of, because raising them can hurt their careers. But those people will come under stress over the next 10-20 years as AI takes over, as will the rest of the research community in Effective Altruism.
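Since the Dutch bookie thought experiment does carry real content, here is a minimal sketch of the idea, with made-up numbers and a hypothetical agent (nothing here is drawn from an EA source): an agent whose credences in an outcome and its negation sum to more than 1 can be sold a pair of bets, each priced at its credence, that guarantee it a loss.

```python
# Minimal Dutch book sketch (illustrative numbers, hypothetical agent).
# The agent's credences in "rain" and "no rain" sum to 1.2, which is
# incoherent. It will pay its credence for a ticket that pays $1 if the
# named outcome occurs, so a bookie can sell it both tickets and profit
# no matter what happens.

credences = {"rain": 0.6, "no rain": 0.6}  # 0.6 + 0.6 > 1: incoherent

cost = sum(credences.values())  # agent pays $1.20 up front
payout = 1.0                    # exactly one ticket pays $1

for outcome in credences:
    print(f"{outcome}: agent nets {payout - cost:+.2f}")  # -0.20 either way
```

The same squeeze works in the other direction when credences sum to less than 1; probabilism is just the condition that no such book can be made against you.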
Despite the supposed rigor exercised by EAs in their research, the web of trust they spin across their research network is so strong that they discount most outside sources of information, and they even rely on a seniority-skewed voting system (karma) on their public research hub to tell them what counts as good information. I can see it in the climate change discussions. They are skeptical of information from outside the community; given their commitments to rationalism, their skepticism should face inward.
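To make “seniority-skewed” concrete, here is a hedged sketch of karma-weighted voting. The `vote_strength` function and its thresholds are hypothetical illustrations of the general mechanism, where accumulated karma amplifies vote weight; I am not claiming they match the forum’s actual formula.

```python
# Hypothetical karma-weighted scoring: vote strength grows with the
# voter's accumulated karma, so senior members move post scores more
# than newcomers. The thresholds are illustrative, not the forum's
# published rule.

def vote_strength(karma: int) -> int:
    if karma >= 1000:
        return 8   # long-standing member
    if karma >= 100:
        return 4   # established member
    return 1       # newcomer

# (label, voter's karma, vote direction)
votes = [("newcomer", 50, +1), ("regular", 400, +1), ("veteran", 5000, -1)]

score = sum(direction * vote_strength(karma) for _, karma, direction in votes)
print(score)  # -3: one senior downvote outweighs two junior upvotes
```

Under any weighting of this shape, what the hub surfaces as “good information” tracks the judgments of its most senior members.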
And the problem of rationalized selfishness is obvious, big-picture obvious: obvious in every lesson in every major narrative about every major ethical dilemma, inside and outside religion. The knowledge boils down to selfishness (including vices) versus altruism. Lessons from rationalism should promote a strong effort to work against self-serving rationalization (as in the Scout Mindset, but with explicit dislike of evil), to see that rationalization as stemming from selfishness, and to provide an ethical bent that works through the tension between self-serving rationalization and genuine efforts toward altruism so that, if nothing else, integrity is preserved and evil is avoided. But that never happened among EAs.
However, they did manage to get upset about the existential guilt involved in self-care, for example when they could be giving their fun dinner-out money to charity. That showed a lack of introspection and an easy surrender to conveniently uncomfortable feelings. They committed themselves to cost-effective charitable works, and to developing excellent models of uncertainty as understood through situations amenable to metaphors involving casinos, betting, cashing out, and bookies. Now, I can’t see how anyone could miss that many signals of a selfish but naive interest in altruism going wrong. Apparently, those signals have been missed. Not only that, but a lot of people who aren’t interested in the conceptual underpinnings of EA “the movement” have been attracted to the EA brand. And that’s OK, so long as all the talk about rationalism and integrity and Scout Mindset is just talk. If so, the usual business can continue. If not, if the talk is not just smoke and mirrors, the problems surface quickly, because EA confronts people with its lack of rationality, integrity, and Scout Mindset.
I took it as a predictive indicator that EAs discount critical thinking in favor of their own brand of rationalism, one that to me lacks common sense (for example, conscious “updating” is bizarrely inefficient as a cognitive effort). Their lack of interest in climate destruction was a further warning, as was the strange decision to focus ethical deliberation on an implausible future and the moral status of trillions of possibly existent future people. The EA community’s shock and surprise at the collapse of SBF and FTX has been a further indication of a lack of real-world insight and of connection to working streams of information in the real world.
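For readers unsure what “updating” refers to here: a single conscious Bayesian update is the arithmetic below, applied belief by belief. The numbers are illustrative; the point is the per-belief effort that deliberate updating demands.

```python
# One conscious Bayesian update, spelled out. Deliberately running this
# arithmetic for each everyday belief is the cognitive effort the text
# above calls inefficient. All numbers are made up for illustration.

prior = 0.50                # P(hypothesis) before seeing the evidence
p_e_given_h = 0.90          # P(evidence | hypothesis)
p_e_given_not_h = 0.20      # P(evidence | not hypothesis)

evidence_prob = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / evidence_prob

print(f"posterior = {posterior:.3f}")  # 0.818
```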
It’s very obvious where the tensions are: between the same things as usual, selfishness and vices versus altruism. BTW, I suspect that no changes will be made in how funders are chosen. Furthermore, I suspect that the denial of climate change is more than ideology; as time goes on, it will reveal itself as true fear and a backing away from fundamental ethical values. I understand that. If the situation seems hopeless, people give up their values. The situation is not hopeless, but it challenges selfish concerns, and valid ones. Maybe EAs have no stomach for true existential threats. The implication is that their work in that area is a sham or serves contrary purposes.
It’s a problem because real efforts are diluted by the ideologies involved in the EA community. Community is important because people need to socialize. A research community emphasizes research, and norms for research communities are straightforward. A values-centered community is … suspect: prone to corruption, to misunderstandings about what community entails, and to reprisals and criticism over normative values not being served by the community day-to-day. Usually, communities attract the like-minded. You would expect, or even want, homogeneity in that regard, not complain about it.
If EA is just about professionalism in providing cost-effective charitable work, that’s great! Then there’s no community involved, the values are memes and marketing, and the metrics are just those involved in charity, not the well-being of community members or their diversity.
If it’s about research products, that’s great! Then development of research methods and critical-thinking skills in the community needs improvement.
Otherwise, comfort, ease, relationships, and good times are the community requirements. Some people can find those in a diverse, values-minded community. Others can’t.
A community that’s about values is going to generate a lot of churn over things you can’t easily change. You can’t change the financial influences, the ideological influences, (most of) the public claims, and certainly not the self-serving rationalizations, all other things equal. If EA had ever gone down the path of exploring the trade-offs between selfishness and altruism with more care, it might have had hope of becoming a values-centered community. I don’t see them pulling that off at this point, if only for their lack of interest or understanding. It’s not their fault, but it is their problem.
I favor dissolution of all community-building efforts and a return to research- and charity-oriented efforts by the EA community. It’s the only thing I can see the community doing for the world at large. I don’t offer that as some sort of vote, but as a statement of opinion.