1) Transhumanism: The evidence points to the paucity of our knowledge.
Transhumanists don’t claim that we already know exactly how to improve the human condition, just that we can figure it out.
2) Status: People are being valued not for the expected value they produce, but for the position they occupy.
I’m not sure what kind of valuing you’re referring to, but it doesn’t sound like the same thing as the status seeking you first talked about, which is a question of behavior. Really, I don’t see where the problem is at all. We don’t have any need to “value” people as if we were sacrificing them; it sounds like you’re just getting worked up about who’s getting the most praise and adoration! Praise and blame cannot, and never will be, meted out on the basis of pure contribution. They will always be subject to various human considerations, and we’ll always have to accept that.
3) Analogy: Jargon from Musk, meaning copying and tweaking someone else’s idea instead of thinking it through from the ground up (a rocket, for instance); follow the chef and cook link.
There’s always a balance to be had between too many and too few original ideas. I see plenty of EAs who try to follow their own ideas instead of contributing to what we already know. Again, can you provide any evidence that this is an actual problem which is playing out in the movement, rather than your own personal opinion?
4) Detonator: The key words were “cling to”: they stick with the one they had to begin with, demonstrating a lack of malleability.
So you expect people to be changing their moral opinions more? Why? Would you complain that civil rights activists cling too much to beliefs in equality? How is this a problem with negative ramifications for the movement, and why does it give you grounds to be “skeptical”? People should be rational, but they shouldn’t change their opinions for the sake of changing them. Sometimes they just have good reasons to believe that they are right.
5) Size: The size gives reason to doubt the value of action because, to the extent that you are moved by other forces (ethical or internal), the opportunity cost rises.
I’m sorry but I don’t understand how this backs up your point. It seems to actually contradict what you were saying, because if the problems are large then there is a huge cost to ignoring them.
6) Nature: Same as five.
I don’t even think this makes sense. Opportunity cost and the complexity of the issue are orthogonal.
7) Uncertainty: Same here; more uncertainty, more opportunity cost.
No… opportunity cost is a function of expected value, not certainty.
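To make that concrete, here is a toy sketch with made-up numbers; it is only an illustration of risk-neutral expected-value reasoning, not a claim about any particular cause:

```python
# Toy illustration with made-up numbers: two options with different levels of
# certainty but the same expected value.

def expected_value(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

certain_option = [(1.0, 100)]                # guaranteed 100 units of good
uncertain_option = [(0.1, 1000), (0.9, 0)]   # 10% chance of 1000, else nothing

ev_certain = expected_value(certain_option)      # 100.0
ev_uncertain = expected_value(uncertain_option)  # 100.0

# The opportunity cost of picking one option over the other is the gap in
# expected value (here zero), no matter how uncertain each option is.
print(ev_certain, ev_uncertain, ev_certain - ev_uncertain)
```

Uncertainty only changes the picture if you add something like risk aversion on top, and that is a separate assumption.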
8) Macrostrategy: As with the items before, if you value anything other than aggregative consequentialism indifferent to risk, the opportunity cost rises.
I can’t tell what it is you would value that forces this dilemma.
9) Probabilistic reasoning: No short summary; you’d have to search for reference class tennis, reference classes, Bayes’ theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; as I said, this was a direct reflection of my own thinking.
If you can’t figure out how to explain your idea quickly and simply, that should serve as a warning to double check your assumptions and understanding.
10) Trust in Institutions: link.
All I see is that you believe individual EA workers can be more productive than when they work in an institution, but this depends on all sorts of things. You’re not going to find someone who will take $25,000 to fight global poverty as well as an institution will with the same money, and you’re not going to recruit lone individuals to do AI safety research. You need an actual organization to be accountable for producing results. Honestly, I can’t even imagine what this system would look like for many of the biggest EA cause areas. I also think the whole premise looks flawed: if a worker is willing to work for $25,000 when employed by an individual, then they will be willing to work for $25,000 when employed by an institution.
11) Delusional: Some cling to ideas, some to heroes, some to optimistic expectations; none of them are letting the truth destroy what can be destroyed by it.
Okay, but like I said before—I can’t take assumption after assumption on faith. We seem to be a pretty reasonable group of people as far as I can tell.
12) I’d be curious to hear the countervailing possibilities.
How about new people entering the EA movement? How about organizations branching out to new areas? How about people continuing to network and form social connections across the movement? How about EAs having a positive and cooperative mindset? Finally, I don’t see any reason to expect ill outcomes to actually take place, because I can’t imagine what they would even be. I have a hard time seeing a dystopian scenario in which nonprofits are at each other’s throats as either feasible or likely.
Furthermore, GWWC was always about poverty, and 80k was always about career selection—there doesn’t seem to have been any of the congealing you mentioned.
~
Honestly, I don’t know what to tell you. You seem to want a movement that is not only filled with perfectly rational people, but also aligns with all your values and ways of thinking. And that’s just not reasonable to expect of any movement. There are things about the EA movement that I would like to change, but they don’t make me stop caring about the movement, and I don’t stay involved merely out of dissatisfaction with the alternatives. When there’s something I dislike, I just see what I can do to fix it, because that’s the most rational and constructive thing I can do. You’ve clearly got a lot of experience with some of these issues, which is great and a valuable resource, but the best way to leverage it is to start a meaningful conversation with others rather than expecting us to simply agree with everything you say.
I think we are falling prey to the transparency fallacy (https://en.wikipedia.org/wiki/Illusion_of_transparency), the double transparency fallacy (http://lesswrong.com/lw/ki/double_illusion_of_transparency/), and that there are large inferential gaps in our conversation in both directions.
We could try to close the gaps by writing to one another here, but then both of us would sometimes end up taking a defensive stance, which could hinder the discussion’s progress. My suggestion is that we do one of the following:
1) We talk via Skype or Hangouts to understand each other’s minds.
2) We wait for the inferential gaps to be filled organically and for both of us to grow as rationalists, and assume that we will converge more in the future.
3) A third alternative: something I didn’t think of, but that you think might be a good idea.