I. It might be worth reflecting on how large a part of this seems tied to something like “climbing the EA social ladder”.
E.g., just from the first part, emphasis mine:
Coming to Berkeley and, e.g., running into someone impressive at an office space already establishes a certain level of trust since they know you aren’t some random person (you’ve come through all the filters from being a random EA to being at the office space). If you’re in Berkeley for a while you can also build up more signals that you are worth people’s time. E.g., be involved in EA projects, hang around cool EAs.
Replace “EA” with some other environment with prestige gradients, and you have something like a highly generic social-climbing guide: seek cool kids, hang around them, go to exclusive parties, get good at signalling.
II. This isn’t to say this is bad. Climbing the ladder to some extent could be instrumentally useful, or even necessary, for the ability to do some interesting things, sometimes.
III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.
I don’t think this has any clear bottom line; I do agree that for many people who care about EA topics it’s useful to come to the Bay from time to time. Compared to the original post, I would mainly suggest also consulting virtue ethics and thinking about what sort of person you are changing yourself into: whether you, for example, most want to become “a highly cool and well networked EA” or, e.g., to “do things which need to be done”, which are different goals.
(Strongly upvoted because I think this is a clean explanation of an underrated point at the current stage, particularly among younger EAs.)
Yeah, it would probably be good if people redirected this energy toward climbing ladders in government, the civil service, the military, or important and powerful corporate institutions. But I guess these ladders underpay you in terms of social credit/inner-ringing within EA. Should we praise people aiming for 15-year-to-high-impact careers more?

To support your point, Holden signal-boosted this in his aptitudes over paths post:
Basic profile: advancing into some high-leverage role in government (or some other institution such as the World Bank), from which you can help the larger institution make decisions that are good for the long-run future of the world.
Essentially any career that ends up in an influential position in some government (including executive, judicial, and legislative positions) could qualify here (though of course some are more likely to be relevant than others).
Examples:
Richard Danzig (former Secretary of the Navy, author of Technology Roulette); multiple people who are pursuing degrees in security studies at Georgetown and aiming for (or already heading into) government roles.
...
On track?
As a first pass, the answer to “How on track are you?” seems reasonably approximated by “How quickly and impressively is your career advancing, by the standards of the institution?” People with more experience (and advancement) at the institution will often be able to help you get a clear idea of how this is going (and I generally think it’s important to have good enough relationships with some such people to get honest input from them—this is an additional indicator for whether you’re “on track”).

We should praise the class of worker in general but leave the individuals alone.
But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.
This feels like a surprisingly generic counterargument, after the (interesting) point about ladder climbing. “This could have opportunity costs” could be written under every piece of advice for how to spend time.
In fact, it applies less to this post than to most advice on how to spend time, since the OP claimed that the environment caused them to work harder.
(A hidden cost that’s more tied to ladder climbing is Chana’s point that some of this can be at least somewhat zero-sum.)
I agree with you: being “a highly cool and well networked EA” and “do things which need to be done” are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, being “a highly cool and well networked EA” and “do things which need to be done” are pretty similar. If I weren’t so sociable and network-y, I’d probably still be running my EA reading group with ~6 participants, which is nice but not “doing things which need to be done”. For technical alignment researchers, this is probably less the case, though still much more than I would’ve expected.
This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, being “a highly cool and well networked EA” and “do things which need to be done” are pretty similar.
Even though these two goals may lead to similar instrumental actions (e.g., doing important work), I think they grow different motivational structures inside you. I recently wrote:
It’s not that my actions were wrong, it’s that I did them for the wrong reasons, and that really does matter. Under my model, the cognitive causes (e.g. I want to be like EY) of externally visible actions (study math) are very important, because I think that the responsible cognition gets reinforced into my future action-generators.
For example, since I wanted to be like Eliezer Yudkowsky, I learned math; since I learned math, I got praised on LessWrong; since I got praised, my social-reward circuitry activated; since the social-reward circuitry activated, credit assignment activates and strengthens all of the preceding thoughts which I just listed, therefore making me more of the kind of person who does things because he wants to be like EY.
I can write a similar story for doing things because they are predicted to make me more respected. Therefore, over time, I became more of the kind of person who cares about being respected, and not so much about succeeding at alignment or truly becoming stronger.
Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn’t get done should be putting more effort into recruiting outside of the Bay.)
I feel like that’s a good argument for why hanging around the cool, smart people can be good for “skilling up”. But a lot of the value of meeting cool, smart people seems to come from developing good models! And surely it’s possible to build good models of, e.g., community building or AI safety by doing self-directed study and occasionally reaching out with specific questions as they arise. I think it’s important to split the value of meeting cool, smart people into A) networking and social signalling, and B) building better models. And maybe we should be focusing on B.
Yeah, I also got a strong sense of this from reading this post. It reminded me of a short piece by C. S. Lewis called “The Inner Ring”, which I highly recommend. Here is a sentence from it that I think sums it up pretty well:
In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?
I think the most costly hidden impact is the perception of gatekeeping that comes with a system like this. Gatekeeping happens in two ways: first, those who are less able to travel, for reasons such as having to provide for their family or simply being homesick, are put at a disadvantage. Second, those who are less able to schmooze (fun word!) and climb the ladder are also put at a disadvantage.
I agree that this is a problem, but I am not sure whether the cost of solving it (i.e., replacing the system) would be too high. Much like grades at undergraduate institutions: whether or not one agrees with their ethics, they are a fairly accurate predictor of how one might do in graduate school, because the two settings are so similar in nature. Setting aside the argument about whether grades should be used in either, what I am trying to say is that the social ladder within EA exists because the skills required to climb it are skills that are valued within EA. Thus, I do not think we need to care so much about the system, because it is actually solving the efficiency problem addressed above.
You brought up the opportunity cost specifically; the essay above said that there are always a million projects going on and not enough people to staff them. I think this opportunity cost is apt for weeding out the people who aren’t that serious about an idea or who just aren’t yet skilled enough. Furthermore, even if this weren’t the case, I do think that EA people are pretty productive when motivated enough. From experience I can say (I could be wrong on this in general, but it holds for me at least) that all you really need is to know one well-connected EA in order to have access to 100 more — and even then you can get access to many more at online or in-person events. You may call this time-consuming schmoozing, but if approached impactfully and effectively (qualities EA wants), I maintain that it could be done in one weekend.
As for the negative perception it creates: I think it would only really arise for people who are already in EA, because otherwise people just wouldn’t see the culture at hand. Once they do see it, a negative perception might still form, but by that point I would hope they weigh the ideals of EA and conclude that this may be the most effective culture, as we have been discussing in this thread.
Please let me know if there is something that I missed!
I think competence-sorted social strata are incredibly important for aligning higher-competence people with actually doing what’s good and seeking what’s actually true. If you use the simple model that people will behave according to what they predict other people will judge as good behaviour,[1] and that what they will judge as good behaviour (at least in EA, ideally) is that which they predict will help others the most… then people who are really good at figuring out which behaviours actually help others the most will only be motivated to do those things insofar as they’re surrounded by judgers who are also able to see that those behaviours are good.
So if you have no competence-sorted social strata, the most competent people will be surrounded by people of much lower competence, which in turn means that the high-competence people won’t be motivated to use their competence in order to figure out ways of being good that they know their judgers can’t see. On this oversimplified model, you only really start to harvest the social benefit from extreme competence once you have all the extremely competent people frequently mingling with each other.
This is why I personally am in favour of EAs trying more (on the current margin) to hang out with people similar to them, and worrying less (on the current margin) about groupthink. We’re already sufficiently paranoid about groupthink, and a few bubbles getting stuck on crazy is worth the few bubbles bootstrapping themselves to greater heights.

[1] I think it goes one meta-level up from this, but let’s not needlessly complicate things; this level is predictive enough.