Note to casual readers: the content of this is not what the title makes it sound like. He’s not saying that rationalists are doomed to ultimately lie to and cheat each other, just that here are some reasons why it’s been hard.
From the recent Sarah Constantin post:
Wouldn’t a pretty plausible course of action be “accumulate as much power and resources as possible, so you can do even more good”?
Taken to an extreme, this would look indistinguishable from the actions of someone who just wants to acquire as much power as possible for its own sake. Actually building Utopia is always something to get around to later; for now you have to build up your strength, so that the future utopia will be even better.
Lying and hurting people in order to gain power can never be bad, because you are always aiming at the greater good down the road, so anything that makes you more powerful should promote the Good, right?
Obviously, this is a terrible failure mode.
I don’t buy this logic. Obviously there’s a huge difference between taking power and then putting effort into positive activities, and taking power and never giving it up at all. Suppose that tomorrow we all found out that a major corporation was the front for a shady utilitarian network that had accumulated enough power and capital to fill all current EA funding gaps, or something like that. Since at some point you actually do accomplish good, the two are clearly not indistinguishable.
I mean, you can keep kicking things back and say “why not secretly acquire MORE power today and wait till tomorrow?”, so that you never get around to doing any good, but there are obvious empirical limits to that, and besides, it’s a problem of decision theory that shows up in all kinds of contexts and doesn’t have much to do with gaining power in particular.
In practical terms, people (not EAs) who try to gain power with promises of making things nicer in the future are often either corrupt or corruptible, so we do have that to worry about. But that’s not sufficient to show that the basic strategy doesn’t work.
...
{epistemic status: extremely low confidence}
The way I see a lot of these organizational problems, where organizations seem to have controversial standards and practices, is that core people are getting just a little too hung up on EA This and EA That and Community This and Community That. In reality, what you should do is take pride in your organization, those few people and resources under your control or to your left and right, and make it as strong as possible. Not by cheating to get money or anything like that, but by fundamentally adhering to good principles of leadership, and really taking pride in it (without thinking about overall consequences all the time). If you do that, you probably won’t have these kinds of problems, which seem to be fairly common whenever the organization itself is made subservient to some higher ideal (e.g. cryonics organizations, political activism, religions). I haven’t been inside these EA organizations, so I don’t know how they work, but I do know how good leadership works in other places, and that’s what seems to be different. It probably sounds obvious that everyone in an EA organization should run it as well as they can, but whenever I hear about these occasional issues I get the sense that it’s important to just sit and meditate on that basic point instead of always talking about the big blurry community.
To succeed at our goals:
I’d agree with all that. It all seems pretty reasonable.
I think that the main point here isn’t that the strategy of building power and then doing good never works, so much as that someone claiming this is their plan isn’t actually strong evidence that they’re going to follow through, and that it encourages you to be slightly more evil than you have to be.
I’ve heard other people argue that that strategy literally doesn’t work, making a claim roughly along the lines of “if you achieved power by maximizing influence in the conventional way, you wind up in an institutional context which makes pivoting to do good difficult”. I’m not sure how broadly this applies, but it seems to me to be worth considering. For instance, if you become a congressperson by playing normal party politics, it seems to be genuinely difficult to implement reform and policy that is far outside of the political Overton window.
I think that the main point here isn’t that the strategy of building power and then doing good never works, so much as that someone claiming this is their plan isn’t actually strong evidence that they’re going to follow through,
True. But if we already know each other and trust each other’s intentions, then it’s different. Most of us, as altruists, have already done extremely costly things without any clear personal gain.
and that it encourages you to be slightly more evil than you have to be.
Maybe, but this is the kind of common folk wisdom for which you should demand more directly applicable psychological evidence, instead of assuming that it’s actually true to a significant degree, especially among the atypical subset of the population that is core to EA. Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people’s thinking.
But if we already know each other and trust each other’s intentions, then it’s different. Most of us, as altruists, have already done extremely costly things without any clear personal gain.
That signals altruism, not effectiveness. My main concern is that the EA movement will not be able to maintain the epistemic standards necessary to discover and execute on abnormally effective ways of doing good, not primarily that people won’t donate at all. In this light, concerns about core metrics of the EA movement are very relevant. I think the main risk is compromising standards to grow faster rather than people turning out to have been “evil” all along, and I think that growth at the expense of rigor is mostly bad.
Being at all intellectually dishonest is much worse for an intellectual movement’s prospects than it is for normal groups.
instead of assuming that it’s actually true to a significant degree
The OP cites particular cases where she thinks this accusation is true. I’m not worried that this is merely likely in the future; I’m worried that it’s already happening.
Plus, it can be defeated/mitigated, just like other kinds of biases and flaws in people’s thinking.
I agree, but I think the ways of dealing with these issues that are likely to work involve more credible signals of actually dealing with them, rather than just saying that they should be solvable.
I think the main risk is compromising standards to grow faster rather than people turning out to have been “evil” all along, and I think that growth at the expense of rigor is mostly bad.
Okay, so there’s some optimal balance to be struck (there are always ways to be more rigorous and less growth-oriented, all the way to a very unreasonable extreme). We’re trying to find the right point, and we can err on either side if we’re not careful. I agree that dishonesty is very bad, but I’m a bit worried that if we treat every error on one side as a large controversy, we’ll miss the occasions where we err on the other side and then drift too far, because we get really strong and socially damning feedback on one side and nothing on the other.
The OP cites particular cases where she thinks this accusation is true. I’m not worried that this is merely likely in the future; I’m worried that it’s already happening.
To be perfectly blunt, it’s a blog post with some anecdotes. That’s fine for saying that there’s a problem worth investigating, but not for drawing conclusions about particular causal mechanisms. We have no idea how these people’s motivations changed (maybe they had the exact same plans before coming into their positions; maybe they become more fair and careful the more experience and power they get).
Anyway, the reason I said that was just to defend the idea that obtaining power can be good overall, not to claim that there are no such problems associated with it.