As someone who agrees EAs aren’t focused enough on systemic change, I don’t see a single “system” that EAs are ignoring. Rather, I see a general failure to use systems thinking to tackle important but hard-to-measure opportunities for intervention. That is, I have particular ideas for systemic change in particular systems (academia and research, capitalism, societal trust) that I’m working on or have worked on, but my critique is simply that EAs (at least in the mainstream movement) tend not to engage in this type of thinking at all, even though historically the biggest improvements in quality of life seem to have come from systemic change and the resulting feedback loops.
Can you give an example of a concrete argument along the lines of this “type of thinking” that was or would be ignored?
It’s hard to point to thoughts not thinked :). A few lines of research and intervention that I would expect to see pursued more in the EA community if this bias weren’t present:
1. More research and experimentation with new types of governance (at a systemic level, not just the limited existing research funding into different ways to count votes).
2. More research and funding into what creates paradigm shifts in science, changes in governance structures, etc.
3. More research into power and influence, and how they can effect large changes.
4. Much, much more work on trust and coordination failures, and how to handle them.
5. A research program around the problem of externalities and potential approaches to it.
Basically, I’d expect much more of a “5 Whys” approach that looks into the root causes of suffering in the world, rather than trying to fix individual instances of it.
An interesting counterexample might be CFAR and the rationality focus in the community, but this seems to be a rare instance, and at any rate it tries to fix a systemic problem with a decidedly non-systemic solution (there are a few others that OpenPhil has led, such as looking into changing academic research, but again, the mainstream EA community mostly just doesn’t know how to think this way).
If you just look backwards from EAs’ priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better. Arguments like this don’t really go anywhere. Especially if you are talking about “thoughts not thinked”, then this is just useless speculation.
Aside from rationality, some relevant areas of interest in EA are human enhancement to eliminate suffering (cf. David Pearce; this is absolutely as “root” as it gets, more so than any sort of political or social activism), functional decision theory to enable agents to cooperate without having to communicate, moral uncertainty to enable different moral theories to cooperate (MacAskill and I have both written to push this), the stuff Scott Alexander has written about ‘Moloch’, and value spreading (EA growth, but also general advocacy for rationality, animals, or other issues).
By the way, I’ve had no problem incorporating goals for better governance when quantitatively scoring political candidates. You can probably say that I happen to underestimate or overestimate their importance, but the idea that it’s inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean, it’s pretty easy to just come up with guesstimates if nothing else.
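For concreteness, here’s a minimal sketch of what that kind of scoring can look like; the criteria, weights, and scores below are invented for illustration, not my actual model:

```python
# Hypothetical weighted scoring of a political candidate, with a
# "better governance" goal included alongside conventional criteria.
# All criteria, weights, and scores are invented for illustration.

CRITERIA_WEIGHTS = {
    "economic_policy": 0.30,
    "foreign_policy": 0.25,
    "long_run_governance_reform": 0.20,  # the "systemic" goal
    "animal_welfare": 0.15,
    "biosecurity": 0.10,
}

def score_candidate(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each from -10 to 10)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

candidate_a = {
    "economic_policy": 4.0,
    "foreign_policy": -2.0,
    "long_run_governance_reform": 8.0,  # e.g. backs electoral reform
    "animal_welfare": 1.0,
    "biosecurity": 3.0,
}
print(score_candidate(candidate_a))  # 2.75
```

Whether 0.20 is the right weight for governance is debatable, but including it at all is clearly not hard.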
(at a systemic level, not just the limited existing research funding into different ways to count votes).
What’s systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies for instance.
tries to fix a systemic problem with a decidedly non-systemic solution
What would a “systemic solution” look like? Conquering the world? I don’t see what you are getting at here.
I feel like you are implicitly including “big” as part of your definition of “systemic”, and that inherently and unreasonably excludes any feasible goals for small projects.
there are a few others that OpenPhil has led, such as looking into changing academic research
Well they’re not going to change all of it. They’re going to have to try something small, and hopefully get it to catch on elsewhere.
If you just look backwards from EAs’ priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better.
Maybe, but I didn’t say that I’d expect to see lots of projects trying to fix these issues, just that I’d expect to see more research into them, which is obviously the first step in determining the correct interventions.
Arguments like this don’t really go anywhere. Especially if you are talking about “thoughts not thinked”, then this is just useless speculation.
What would count as useful speculation if you think that EAs’ cause prioritization mechanisms are biased?
What’s systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies for instance.
Voting mechanisms can be systemic if they’re approached that way. For instance, working backwards from the two-party system in the US, figuring out what causes it to happen, and recommending mechanisms that fix that.
are human enhancement to eliminate suffering
This is another great example of EA bucking the trend, but I don’t see it as a mainstream EA cause.
functional decision theory to enable agents to cooperate without having to communicate, moral uncertainty to enable different moral theories to cooperate
These are certainly examples of root-cause thinking, but to be truly systems thinking they have to take the next step and ask how we can shift the current system to these new foundations.
You can probably say that I happen to underestimate or overestimate their importance, but the idea that it’s inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean, it’s pretty easy to just come up with guesstimates if nothing else.
The EA methodology systematically underestimates systemic changes and hand-waves away the modelling of them. Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flow-through effects, and note that your response here didn’t even mention those as problems.
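To illustrate the difference, here’s a toy sketch with invented parameters (not anyone’s actual model): a straight-line model counts only the direct effect, while a feedback-aware model has to represent the loop converging over many rounds, something a one-directional graph of point estimates has no natural cell for:

```python
import random

# Toy cost-effectiveness model of a governance intervention.
# Every parameter here is an invented illustration.

def direct_benefit() -> float:
    """One Monte Carlo sample of the intervention's first-order benefit."""
    return random.lognormvariate(0.0, 0.5)  # arbitrary units

def benefit_without_feedback() -> float:
    # The usual style: count only the direct, first-order effect.
    return direct_benefit()

def benefit_with_feedback(max_loop_strength: float = 0.8) -> float:
    # Reinforcing loop: better governance begets better governance.
    # If each round of improvement yields r times the previous round,
    # the rounds form a geometric series summing to direct / (1 - r).
    r = random.uniform(0.0, max_loop_strength)  # uncertain loop strength
    return direct_benefit() / (1.0 - r)

n = 10_000
no_fb = sum(benefit_without_feedback() for _ in range(n)) / n
with_fb = sum(benefit_with_feedback() for _ in range(n)) / n
print(f"mean without feedback: {no_fb:.2f}")
print(f"mean with feedback:    {with_fb:.2f}")  # roughly twice as large
```

The particular numbers don’t matter; the point is that the loop strength r and the convergence assumption have to be modelled at all, and in most cost-effectiveness estimates they simply aren’t.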
What would a “systemic solution” look like?
Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.
Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.
I feel like you are implicitly including “big” as part of your definition of “systemic”
I’m including systems thinking as part of my definition. This often leads to “big” interventions, because systems are resilient and often stuck in local attractors, but oftentimes the interventions can be small yet targeted to cause large feedback loops and flow-through effects. However, the latter is only possible through either dumb luck or skillful systems thinking.
Well they’re not going to change all of it. They’re going to have to try something small, and hopefully get it to catch on elsewhere.
They “have to” do that? Why? Certainly that’s one way to intervene in the system. There are many others as well.
“Hopefully” getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc., and use systems thinking to maximize their chances of getting it to catch on elsewhere.
I will note that I notice that I’m feeling very adversarial in this conversation, rather than truth seeking. For that reason I’m not going to participate further.
Maybe, but I didn’t say that I’d expect to see lots of projects trying to fix these issues, just that I’d expect to see more research into them, which is obviously the first step in determining the correct interventions.
But you were talking about supposed deficiencies in EA modeling. Now you’re talking about the decision of which things to research and model in the first place. You’re shifting goalposts.
Voting mechanisms can be systemic if they’re approached that way. For instance, working backwards from the two-party system in the US, figuring out what causes it to happen, and recommending mechanisms that fix that.
That’s no more systemic than any other way of deciding how to improve voting. Changing voting mechanisms is basically working backwards from the problem of suboptimal politicians in the US, figuring out what system causes this to happen, and recommending mechanisms that fix that. Whether the “figuring out” is guided more by empirical observations or by social choice theory doesn’t change the matter.
What would count as useful speculation if you think that EAs’ cause prioritization mechanisms are biased?
Well, you can point out arguments that people are ignoring or rejecting for bad reasons, but that requires more concrete ideas instead of speculation. Maybe the lesson here is to dabble less in “speculation” and spend more time trying to make concrete progress. Show us! What’s a good cause we’ve missed?
This is another great example of EA bucking the trend, but I don’t see it as a mainstream EA cause.
Yes, because right now the only good way to approach it is to pretty much “get better at biology”—there is not enough fundamental knowledge on cognition to make dedicated progress on this specific topic. So EAs’ decisions are rational.
By the way, no other groups of “systems thinkers” are picking up on paradise engineering either.
These are certainly examples of root-cause thinking, but to be truly systems thinking they have to take the next step and ask how we can shift the current system to these new foundations.
Like, uh, building institutions and advocacy for responsible AI design, and keeping them closely networked with the EA community, and spreading the idea of functional decision theory as a component of desirable AI design, with papers about FDT cooperation being published by multiple EA groups that focus on AI (MIRI and FRI)?
Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flow-through effects
Lol. I included “feedback loops” in arithmetic in a Word document. I had governance listed at 5%, equal to the sum of the other long-run policy issues, but due to the feedback loop of better governance begetting better governance, I decided to increase it to 10%. Done.
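Spelled out, the whole adjustment is one multiplication (the 5% baseline and the doubling are the figures from that document; the code framing is just illustration):

```python
# Governance started at 5%, equal in weight to the sum of the other
# long-run policy issues. Better governance begets better governance,
# so as a judgment call its weight is doubled for the feedback loop.
base_weight = 0.05
feedback_factor = 2.0
adjusted_weight = base_weight * feedback_factor
print(adjusted_weight)  # 0.1
```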
Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.
Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.
Right. Let’s build kibbutzim where children are conditioned to make rational decisions. Sounds super tractable to me! Those silly EAs have been missing this low-hanging fruit the entire time.
Also, it’s not even clear how this definition of “systemic” fits with your earlier claim that systemic solutions are wrongly treated as less amenable to EA methodology than non-systemic ones. The concrete thing you’ve said is that EA models are worse at flow-through effects and feedback loops, which, even if true (dubious), seems to apply equally well to non-systemic solutions.
I’m including systems thinking as part of my definition. This often leads to “big” interventions, but oftentimes the interventions can be small yet targeted to cause large feedback loops and flow-through effects.
Except apparently you aren’t including poverty relief, which has large feedback loops and flow-through effects; and apparently you aren’t including animal advocacy, which has the same; and apparently you aren’t including EA movement growth, which has the same; and apparently you aren’t including promoting the construction of safe AGI, which has the same; and so on for everything else that EA does.
This looks very no-true-Scotsman-like.
They “have to” do that? Why?
Because they only have a hundred million dollars or so, and, uh, they don’t have the ability to coerce the general population? Come on.
“Hopefully” getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc., and use systems thinking to maximize their chances of getting it to catch on elsewhere
This is pedantry. Saying “hopefully” doesn’t imply that they’re not going to select the option with the highest cause for hope. It merely implies that they don’t have control over how these things actually play out.
Strongly upvoted. This highlights the difference between criticisms that EA doesn’t focus enough on systemic change which come from a distinctly left-wing perspective, and others which are based on empirical or ethical disagreements as opposed to political ones. This is a distinction I should have made clear in the OP, and I didn’t. Thanks for the clarification.