If you just look backwards from EAs’ priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better. Arguments like this don’t really go anywhere. Especially if you are talking about “thoughts not thought”, then this is just useless speculation.
Aside from rationality, some relevant areas of interest in EA are human enhancement to eliminate suffering (cf. David Pearce; this is absolutely as “root” as it gets, more so than any sort of political or social activism); functional decision theory to enable agents to cooperate without having to communicate (see the toy sketch below); moral uncertainty to enable different moral theories to cooperate (MacAskill and I have both written to push this); the stuff Scott Alexander has written about ‘Moloch’; and value spreading (EA growth, but also general advocacy for rationality, animals, or other issues).
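To make the FDT point concrete: the standard toy case is a prisoner’s dilemma against an exact copy of yourself. A minimal sketch, with arbitrary payoff numbers and deliberately crude agent models:

```python
# Toy "twin" prisoner's dilemma. Payoffs are for the row player
# (arbitrary numbers); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_choice() -> str:
    # A CDT-style agent treats the twin's move b as causally fixed;
    # against any fixed b, defecting pays at least as much, so it defects.
    return "D" if all(PAYOFF[("D", b)] >= PAYOFF[("C", b)] for b in "CD") else "C"

def fdt_choice() -> str:
    # An FDT-style agent knows the twin runs the same procedure, so the
    # only reachable outcomes are (C, C) and (D, D); it picks the better one.
    return max("CD", key=lambda a: PAYOFF[(a, a)])

for name, act in [("CDT", cdt_choice()), ("FDT", fdt_choice())]:
    # Both players run the same procedure, so both end up playing `act`.
    print(f"{name} agents each play {act} and each get {PAYOFF[(act, act)]}")
```

The FDT agents end up cooperating, and doing better, without ever communicating; that is the whole point.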
By the way, I’ve had no problem incorporating goals for better governance when quantitatively scoring political candidates. You can probably say that I happen to underestimate or overestimate their importance, but the idea that it’s inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean, it’s pretty easy to just come up with guesstimates if nothing else.
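To illustrate what I mean by “pretty easy”: the shape of such a scoring model is just a weighted sum. A minimal sketch; every weight and score below is an invented placeholder, not my actual numbers:

```python
# Weighted-sum candidate scoring. Cause weights sum to 1; candidates
# score from -1 (harmful) to +1 (helpful) on each cause.
weights = {"poverty": 0.4, "animal welfare": 0.3,
           "long-run policy": 0.2, "governance": 0.1}

candidates = {
    "Candidate A": {"poverty": 0.6, "animal welfare": 0.1,
                    "long-run policy": 0.2, "governance": 0.8},
    "Candidate B": {"poverty": 0.7, "animal welfare": 0.4,
                    "long-run policy": 0.5, "governance": -0.2},
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity-check the weights

for name, scores in candidates.items():
    total = sum(weights[cause] * scores[cause] for cause in weights)
    print(f"{name}: {total:.2f}")
```

Including governance is just one more row; the hard part is arguing about the weight, not expressing it.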
(on a systemic level, not just including the limited research funding into different ways to count votes).
What’s systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies, for instance.
tries to fix a systemic problem with a decidedly non-systemic solution
What would a “systemic solution” look like? Conquering the world? I don’t see what you are getting at here.
I feel like you are implicitly including “big” as part of your definition of “systemic”, and that inherently and unreasonably excludes any feasible goals for small projects.
there are a few others that OpenPhil has led, such as looking into changing academic research
Well they’re not going to change all of it. They’re going to have to try something small, and hopefully get it to catch on elsewhere.
If you just look backwards from EAs’ priorities, then you have no good reason to claim that EAs are doing things wrong. Maybe such systemic causes actually are worse, and other causes actually are better.
Maybe, but I didn’t say that I’d expect to see lots of projects trying to fix these issues, just that I’d expect to see more research into them, which is obviously the first step toward determining correct interventions.
Arguments like this don’t really go anywhere. Especially if you are talking about “thoughts not thought”, then this is just useless speculation.
What would count as useful speculation if you think that EAs’ cause prioritization mechanisms are biased?
What’s systemic if not voting mechanisms? Voting seems like a very root part of the government system, more so than economic and social policies, for instance.
Voting mechanisms can be systemic if they’re approached that way. For instance, working backwards from the two-party system in the US, figuring out what causes this to happen, and recommending mechanisms that fix that.
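As a toy version of that exercise (nine invented voters, and a crude “approve your top two” stand-in for real approval voting):

```python
from collections import Counter

# Nine invented voters ranked over three candidates; A and B split a
# similar five-voter bloc that ranks C last.
ballots = (["A", "B", "C"],) * 3 + (["B", "A", "C"],) * 2 + (["C", "A", "B"],) * 4

# Plurality counts only first choices: C wins 4-3-2 even though a
# majority prefers both A and B to C (classic vote-splitting).
plurality = Counter(ballot[0] for ballot in ballots)

# Approval stand-in: each voter approves their top two choices.
# The split bloc's support now aggregates, and A wins 9-5-4.
approval = Counter(c for ballot in ballots for c in ballot[:2])

print("plurality:", plurality.most_common())
print("approval: ", approval.most_common())
```

The systemic move is diagnosing vote-splitting as the mechanism behind the two-party equilibrium, then recommending a counting rule that removes it.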
are human enhancement to eliminate suffering
This is another great example of EA bucking the trend, but I don’t see it as a mainstream EA cause.
functional decision theory to enable agents to cooperate without having to communicate; moral uncertainty to enable different moral theories to cooperate
These are certainly examples of root-cause thinking, but to be truly systems thinking they have to take the next step and ask how we can shift the current system onto these new foundations.
You can probably say that I happen to underestimate or overestimate their importance, but the idea that it’s inherently difficult to include them with EA methodology just seems clearly false, having done it. I mean, it’s pretty easy to just come up with guesstimates if nothing else.
EA methodology systematically underestimates systemic changes and hand-waves away the modelling of them. Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flow-through effects; note that your response here didn’t even mention those as problems.
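Concretely: a loop in which impact builds the capacity for future impact has no single-pass representation in an acyclic model (as far as I know, Guesstimate only evaluates acyclic ones), so you end up unrolling it over time by hand. A sketch, with all numbers invented:

```python
import random

# Unrolling a feedback loop over discrete periods: each period's impact
# raises the effectiveness of the next period's work.
def total_impact(periods: int = 10, feedback: float = 0.3) -> float:
    effectiveness, total = 1.0, 0.0
    for _ in range(periods):
        gain = effectiveness * random.uniform(0.8, 1.2)  # noisy direct impact
        total += gain
        effectiveness += feedback * gain  # the loop: impact begets capacity
    return total

# Monte Carlo over the whole unrolled loop, since each period depends on
# the previous one and no single closed-form node captures that.
samples = [total_impact() for _ in range(10_000)]
print("mean total impact:", sum(samples) / len(samples))
```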
What would a “systemic solution” look like?
Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.
Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.
I feel like you are implicitly including “big” as part of your definition of “systemic”
I’m including systems thinking as part of my definition. This often leads to “big” interventions, because systems are resilient and often stuck in local attractors, but the interventions can sometimes be small yet targeted to cause large feedback loops and flow-through effects. The latter, however, is only possible through either dumb luck or skillful systems thinking.
Well they’re not going to change all of it. They’re going to have to try something small, and hopefully get it to catch on elsewhere.
They “have to” do that? Why? Certainly that’s one way to intervene in the system. There are many others as well.
“Hopefully” getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc., and use systems thinking to maximize their chances of getting it to catch on elsewhere.
I notice that I’m feeling very adversarial in this conversation, rather than truth-seeking. For that reason I’m not going to participate further.
Maybe, but I didn’t say that I’d expect to see lots of projects trying to fix these issues, just that I’d expect to see more research into them, which is obviously the first step toward determining correct interventions.
But you were talking about supposed deficiencies in EA modeling. Now you’re talking about the decision of which things to research and model in the first place. You’re shifting goalposts.
Voting mechanisms can be systemic if they’re approached that way. For instance, working backwards from the two-party system in the US, figuring out what causes this to happen, and recommending mechanisms that fix that.
That’s no more systemic than any other way of deciding how to improve voting. Changing voting mechanisms is basically working backwards from the problem of suboptimal politicians in the US, figuring out what system causes this to happen, and recommending mechanisms that fix that. Whether “figuring out” is more guided by empirical observations or by social choice theory doesn’t change the matter.
What would count as useful speculation if you think that EAs’ cause prioritization mechanisms are biased?
Well, you can point out arguments that people are ignoring or rejecting for bad reasons, but that requires concrete ideas rather than mere speculation. Maybe the lesson here is to dabble less in “speculation” and spend more time trying to make concrete progress. Show us! What’s a good cause we’ve missed?
This is another great example of EA bucking the trend, but I don’t see it as a mainstream EA cause.
Yes, because right now the only good way to approach it is to pretty much “get better at biology”—there is not enough fundamental knowledge on cognition to make dedicated progress on this specific topic. So EAs’ decisions are rational.
By the way, no other groups of “systems thinkers” are picking up on paradise engineering either.
These are certainly examples of root-cause thinking, but to be truly systems thinking they have to take the next step and ask how we can shift the current system onto these new foundations.
Like, uh, building institutions and advocacy for responsible AI design, and keeping them closely networked with the EA community, and spreading the idea of functional decision theory as a component of desirable AI design, with papers about FDT cooperation being published by multiple EA groups that focus on AI (MIRI and FRI)?
Consider, for instance, how hard it is to incorporate a feedback loop into a Guesstimate model, not to mention flow-through effects
Lol. I included “feedback loops” in arithmetic in a Word document. I had governance listed at 5%, equal to the sum of other long-run policy issues, but due to the feedback loop of better governance begetting better governance, I decided to increase it to 10%. Done.
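(If anyone wants the arithmetic behind that adjustment spelled out: one way to read the doubling is as a steady-state feedback multiplier, where each unit of direct impact $d$ begets $f$ further units of the same kind, $0 \le f < 1$:

$$\text{total} = d\,(1 + f + f^2 + \cdots) = \frac{d}{1-f}, \qquad \frac{1}{1-f} = 2 \iff f = \tfrac{1}{2}.$$

So bumping the weight from 5% to 10% amounts to assuming each unit of governance impact begets about half a unit more. Whether that figure is right is arguable; whether the method can express it is not.)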
Non-systemic solution: Seeing that people are irrational, then creating an organization that teaches people to be rational.
Systemic solution: Seeing that people are irrational, asking what about the system creates irrational people, and then creating an organization that looks to change that.
Right. Let’s build kibbutzim where children are conditioned to make rational decisions. Sounds super tractable to me! Those silly EAs have been missing this low-hanging fruit the entire time.
Also, it’s not even clear how this definition of “systemic” fits with your earlier claim that EA methodology incorrectly treats systemic solutions as less amenable to its methods than non-systemic ones. The concrete thing you’ve said is that EA models are worse at flow-through effects and feedback loops, which, even if true (dubious), seems to apply equally well to non-systemic solutions.
I’m including systems thinking as part of my definition. This often leads to “big” interventions, but the interventions can sometimes be small yet targeted to cause large feedback loops and flow-through effects.
Except apparently you aren’t including poverty relief, which has large feedback loops and flow-through effects; and apparently you aren’t including animal advocacy, which has the same; and apparently you aren’t including EA movement growth, which has the same; and apparently you aren’t including promoting the construction of safe AGI, which has the same; and so on for everything else that EA does.
This looks very no-true-Scotsman-like.
They “have to” do that? Why?
Because they only have a hundred million dollars or so, and, uh, they don’t have the ability to coerce the general population? Come on.
“Hopefully” getting it to catch on elsewhere also seems silly. Perhaps they could try to look into ways to model the network effects, influence and power structures, etc., and use systems thinking to maximize their chances of getting it to catch on elsewhere.
This is pedantry. Saying “hopefully” doesn’t imply that they’re not going to select the option that gives the most cause for hope. It merely implies that they don’t have control over how these things actually play out.