I don’t see how this is a criticism of transhumanism. Transhumanism is about far-future technologies that will actually change the human condition in ways unlike current technologies. See David Pearce’s essays, e.g. The Hedonistic Imperative. The experiences of recreational drugs provide evidence against your brand of skepticism, rather than for it. Surely you didn’t believe that transhumanism is merely about building better kinds of consumer goods, did you?
The Status Games:
This is a vague complaint, and I don’t see how there’s any more “status gaining” in EA than in any other movement or sector of society. You might know better than I do, but many of us don’t see this as a problem, so it would be more helpful to explain how bad the problem is and what its consequences are.
Reasoning by Analogy:
This again is vague and seems to require reading a number of prior blog posts to understand. Honestly, I have a hard time understanding what you are even trying to say, and I haven’t yet read the posts you’ve linked, but it would be helpful to at least summarize the point. I understand you’re used to using these kinds of metaphors and analogies, but not all of us can follow the jargon, even those of us who are very much involved in effective altruism.
Babies with a Detonator:
If I understand you correctly… you’re complaining that EAs strongly believe certain things (utilitarianism, veganism) to be true? Since when is it a problem to hold beliefs about these things? Surely, EAs are not badly dogmatic or irrational about any of these things, and have reasons for believing them. Could you perhaps justify your complaint? The way you’re phrasing this, you’re only saying “X bothers me” but you haven’t given us a good picture of what needs to be changed and why.
The Size of the Problem:
This doesn’t provide any rational reason for doubt or skepticism. Also, the blog posts you linked don’t seem relevant to this.
The Complexity of The Solution:
Well, sure, but that’s not much of a reason to not do anything. If anything, it’s a reason to be more concerned about finding solutions, as long as the problems are tractable to some extent (which they are).
The Nature of the Solution:
Well, yes. No one said it would be easy. But again, that’s not a reason to give up or not care.
How Large an Uncertainty:
There’s been plenty of philosophical work on what suffering is and why it’s bad. Honestly, that’s not a hugely controversial or difficult topic in philosophy, so I’m not sure why you’re bringing it out as a topic of uncertainty. There is more philosophical dispute regarding the nature of a good life, but it’s not something that really stands in the way of most transhuman goals. Furthermore, as I’ve pointed out already, uncertainty in general doesn’t provide any reason to not care about or not try to change things.
Macrostrategy is Hard:
This seems to be basically the same thing you’ve said earlier about uncertain and difficult problems, and my response will therefore be the same.
Probabilistic Reasoning = Reasoning by Analogy:
I’m not sure exactly what problems you’re referring to; forgive me for not understanding all the jargon. It would be helpful if you actually discussed what sorts of problems there are in EA decision making and communities. You’ve merely given a description of some kind of problem, with no substantiation of how widespread the error is or what its consequences are.
Excessive Trust in Institutions:
I’m quite sure that EAs in general do more due diligence on the projects they fund than you give them credit for. Moreover, institutions also accomplish much more long-term work than individual projects, people, and prizes. I’m afraid I can’t really answer this any better, because it depends on specific examples of institutions being effective or not, and it’s not apparent that the institutions supported by EA are in general less effective than whatever alternatives you may have in mind.
Delusional Optimism:
It’s not at all obvious that this is present, and you’ve given no reason to believe that it is. I haven’t seen any of this.
Convergence of opinions may strengthen separation within EA:
This is highly speculative and neglects countless countervailing possibilities.
Honestly, I’d love to be able to discuss and work through the concerns you raise, but this is such an unspecific and thin set of complaints that I’m not really sure what to make of it. I don’t know if you intended the linked blog posts to back up your ideas, but they don’t seem to make things any clearer.
I’ll bite:
1) Transhumanism: The evidence is for the paucity of our knowledge.
2) Status: People are being valued not for the expected value they produce, but for the position they occupy.
3) Analogy: Jargon from Musk, meaning copying and tweaking someone else’s idea instead of thinking of a rocket, for instance, from the ground up; follow the chef and cook link.
4) Detonator: The key phrase was “cling to”; they stick with the one they had to begin with, demonstrating a lack of malleability.
5) Size: The size gives reason to doubt the value of action because, to the extent you are moved by other forces (ethical or internal), the opportunity cost rises.
6) Nature: Same as five.
7) Uncertainty: Same here: more uncertainty, more opportunity cost.
8) Macrostrategy: As with the items before, if you value anything but aggregative consequentialism indifferent to risk, the opportunity cost rises.
9) Probabilistic reasoning: No short summary; you’d have to search for reference class tennis, reference class, Bayes’ theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; I said this was a direct reflection of my own thinking.
10) Trust in Institutions: link.
11) Delusional: Some cling to ideas, some to heroes, some to optimistic expectations; none of them are letting the truth destroy what can be destroyed by it.
12) I’d be curious to hear the countervailing possibilities. Many people who are examining the movement going forward seem to agree this is a crucial issue.
Re. 10), it’s worth saying that in the previous post Paul Christiano and I noted that it wasn’t at all obvious to us why you thought individuals were generally cheaper than institutions, with tax treatment versus administrative overhead leading to an unclear and context-specific conclusion. You never replied, so I still don’t know why you think this, and I would definitely be curious to find out.
I will write that post once I am financially secure with some institutional attachment. I think it is too important for me to write while I expect to receive funding as an individual, and I don’t want people to think, “he’s saying that because he is not financed by an institution.” Also see this.
1) Transhumanism: The evidence is for the paucity of our knowledge.
Transhumanists don’t claim that we already know exactly how to improve the human condition, just that we can figure it out.
2) Status: People are being valued not for the expected value they produce, but for the position they occupy.
I’m not sure what kind of valuing you’re referring to, but it doesn’t sound like the same thing as the status seeking you first talked about, which is a question of behavior. Really, I don’t see where the problem is at all: we don’t have any need to “value” people as if we were sacrificing them, and it sounds like you’re just getting worked up about who’s getting the most praise and adoration! Praise and blame cannot, and never will, be meted out on the basis of pure contribution. They will always be subject to various human considerations, and we’ll always have to accept that.
3) Analogy: Jargon from Musk, meaning copying and tweaking someone else’s idea instead of thinking of a rocket, for instance, from the ground up; follow the chef and cook link.
There’s always a balance to be had between too many and too few original ideas. I see plenty of EAs who try to follow their own ideas instead of contributing to what we already know. Again, can you provide any evidence that this is an actual problem which is playing out in the movement, rather than your own personal opinion?
4) Detonator: The key phrase was “cling to”; they stick with the one they had to begin with, demonstrating a lack of malleability.
So you expect people to be changing their moral opinions more? Why? Would you complain that civil rights activists cling too much to beliefs in equality? How is this a problem with negative ramifications for the movement, and why does it give you grounds to be “skeptical”? People should be rational, but they shouldn’t change their opinions for the sake of changing them. Sometimes they just have good reasons to believe that they are right.
5) Size: The size gives reason to doubt the value of action because, to the extent you are moved by other forces (ethical or internal), the opportunity cost rises.
I’m sorry but I don’t understand how this backs up your point. It seems to actually contradict what you were saying, because if the problems are large then there is a huge cost to ignoring them.
6) Nature: Same as five.
I don’t even think this makes sense. Opportunity cost and the complexity of the problem are orthogonal.
7) Uncertainty: Same here, more uncertainty, more opportunity cost.
No… opportunity cost is a function of expected value, not certainty.
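To make this concrete with a purely hypothetical example (the options and payoffs below are invented for illustration, not anything from your post): suppose option A pays 10 with certainty, while option B pays 30 with probability 0.5 and nothing otherwise.

$$\mathrm{EV}(A) = 1.0 \times 10 = 10, \qquad \mathrm{EV}(B) = 0.5 \times 30 + 0.5 \times 0 = 15$$

The opportunity cost of choosing the certain option A is $\mathrm{EV}(B) - \mathrm{EV}(A) = 5$, and that figure changes only if B’s expected value changes; B being more or less uncertain matters only through its effect on the expected value (setting aside risk aversion).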
8) Macrostrategy: As with the items before, if you value anything but aggregative consequentialism indifferent to risk, the opportunity cost rises.
I can’t tell what it is you would value that forces this dilemma.
9) Probabilistic reasoning: No short summary; you’d have to search for reference class tennis, reference class, Bayes’ theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; I said this was a direct reflection of my own thinking.
If you can’t figure out how to explain your idea quickly and simply, that should serve as a warning to double check your assumptions and understanding.
10) Trust in Institutions: link.
All I see is that you believe individual EA workers can be more productive than when they work in an institution, but this depends on all sorts of things. You’re not going to find someone who will take $25,000 to fight global poverty as well as an institution would with the same money, and you’re not going to recruit lone individuals to do AI safety research. You need an actual organization to be accountable for producing results. Honestly, I can’t even imagine what this system would look like for many of the biggest EA cause areas. I also think the whole premise looks flawed: if a worker is willing to work for $25,000 when employed by an individual, then they will be willing to work for $25,000 when employed by an institution.
11) Delusional: Some cling to ideas, some to heroes, some to optimistic expectations; none of them are letting the truth destroy what can be destroyed by it.
Okay, but like I said before—I can’t take assumption after assumption on faith. We seem to be a pretty reasonable group of people as far as I can tell.
12) I’d be curious to hear the countervailing possibilities.
How about new people entering the EA movement? How about organizations branching out to new areas? How about people continuing to network and form social connections across the movement? How about EAs having a positive and cooperative mindset? Finally, I don’t see any reason to expect ill outcomes to actually take place, because I can’t imagine what they would even be; a dystopian scenario where nonprofits are at each other’s throats doesn’t strike me as feasible or likely.
Furthermore, GWWC was always about poverty, and 80k was always about career selection—there doesn’t seem to have been any of the congealing you mentioned.
~
Honestly, I don’t know what to tell you. You seem to want a movement that is not only filled with perfectly rational people, but also aligns with all your values and ways of thinking. And that’s just not reasonable to expect of any movement. There are things about the EA movement that I would like to change, but they don’t make me stop caring about the movement, and I don’t stay involved merely out of dissatisfaction with the alternatives. When there’s something I dislike, I just see what I can do to fix it, because that’s the most rational and constructive thing I can do. You’ve clearly got a lot of experience with some of these issues, which is great and makes you a valuable resource, but the best way to leverage that is to start a meaningful conversation with others rather than expecting us to simply agree with everything you say.
I think we are falling prey to the transparency fallacy (https://en.wikipedia.org/wiki/Illusion_of_transparency), the double transparency fallacy (http://lesswrong.com/lw/ki/double_illusion_of_transparency/), and that there are large inferential gaps in our conversation in both directions.
We could try to close the gaps by writing to one another here, but then both of us would end up sometimes taking a defensive stance, which could hinder the progress of the discussion. My suggestion is that we do one of these:
1) We talk via Skype or Hangouts to understand each other’s mind.
2) We wait organically for the inferential gaps to be filled and for both of us to grow as rationalists, and assume that we will converge more in the future.
3) The third alternative—something I didn’t think about, but you think might be a good idea.
9) I ended up replying on LessWrong, explaining how.