I do have a sense that if the Blackpool hotel scaled to a population of 60 for example, this would give information that couldn’t at all be captured by having 3 distant 20-person hotels. Said information is probably? more interesting to me for social network effect reasons, but I’m not very experienced with EA meetups and stuff, so I might overrate the value of that.
Agrippa
> Safer from nuclear war and pandemics…
Thus undermining a powerful incentive to help reduce x-risk!
If the best projects already have enough money, and are hiring significantly fewer people than the total number of potential full-time EAs, it's possible that funding the next "tier" of worse-than-best projects is worthwhile. And it's not clear that we have the money to do that.
When people say all of the top orgs have enough money, my interpretation is that I can't really create any value at all by donating to them. That is, donor A can create 0 utils by donating $1 to Org Z, because doing so doesn't actually allow Org Z to scale in a meaningful way.
If I also can’t work at Org Z, then donating to Org Y looks like my next best option.
Reducing EA job search waste
I was aware of the possibility of relevant competition law, but didn't mention it because I'm just not that familiar with it. My assumption was that it would not be the same for non-profits, but that could be untrue. I am not very excited about coordination between employers in any case.
Independent of legal worries, one probably doesn’t need to look at resumes to gauge applicant pool—most orgs have team pages, and so one can look at bios.
This is a good point.
Thanks for the post by Kelsey. My thought is that we shouldn't expect organizations to worry too much about whether the feedback is constructive or even easy to understand, which seems to be the bulk of the work Kelsey is describing. On the one hand, it's bad if EA orgs alienate applicants via the mechanisms Kelsey describes; on the other hand, I do still think that something is better than nothing, given sufficient maturity. Nonetheless I take your point seriously.
A few years ago I had very different priorities, pursuing them was not making me happy, and I guess at some point I correctly realized that I’d be much happier focusing on altruism instead.
I really crave creative stimulation and I see my engagement with altruism as a creative pursuit. I have many creative interests but I expect my interest in altruism to be more stable over time and more socially reinforced, so it gets priority. (That’s my narrative, at least)
I do experience a sense of disgust / antipathy towards what I see as complacency, or failing to engage the world creatively. This isn’t necessarily related to altruism but usually I see disinterest in altruism as symptomatic of the kind of complacency that I loathe.
I really enjoy maximizing / trying to be good at something, so that’s a big part of where focus on effectiveness comes in. Strategizing is the fun part of altruism for me.
I am very sensitive to the idea of being manipulated or pressed into conformity, and disgusted when I view people as falling victim to this. For this reason I am skeptical of many memes about what altruistic behavior should look like, and prefer a “rational” approach that is harder to manipulate.
I do experience feelings of injustice with respect to stuff like immigration policy, but I don’t prioritize responding to those feelings, maybe because I see them as easy to manipulate or whatever.
There is virtually no sensation of empathy involved. There is also no sensation of guilt, but there is a sense of frustration when I feel that I am failing to actualize my values.
Edit: After reading some other comments, I’ll add that I guess I do feel good about being nice to people close to me, and altruism does generate a similar feeling. I’m hesitant to call this empathy because it’s not true that I feel bad about the suffering of distant people, I just feel good about helping.
Anecdote re: ruthlessness:
During my recent undergrad, I was often openly critical of the cost effectiveness of various initiatives being pushed in my community. I think anyone who has been similarly ruthless is probably familiar with the surprising amount of pushback and alienation that comes from doing this. I think I may have convinced some small portion of people. I ended up deciding that I should focus on circumventing defensiveness by proactively promoting what I thought were good ideas and not criticizing other people’s stupid ideas, which essentially amounts to being very nice.
I wonder how well a good ruthlessness strategy about public contexts generalizes to private contexts and vice versa.
I have never experienced Imposter Syndrome and have a strong sense that I never would under any circumstances. I clearly have psychological characteristics that would prevent me from experiencing Imposter Syndrome; for example, I seem to have low priors about other people's competence almost always, for better or worse.
I also model myself as having philosophical antibodies against it. But I can’t tell the extent to which these antibodies are actually impactful vs. my personality.
For example: I would argue that if I’m surprised at how competent people think I am, and I strongly think they are wrong, then this means I am good at seeming competent, which is valuable. So this should only boost my view of my capabilities.
Another example: If I’m trying to decide whether I belong in a set of people based on a competence threshold, I should always compare myself to the least competent person in the set. The most competent people aren’t relevant at all, but people with Imposter Syndrome seem to focus on them to the exclusion of the least competent people.
Do people who experience Imposter Syndrome also possess these beliefs, and it just doesn’t matter? Or is this stuff useful to reflect on?
--
You don’t have to have the same skills as them, and it’s very unlikely that you will. You’re probably better at some things than they are … Even if part of what you learn during this experience is “Whoah, this particular type of work is not for me,” that’s a useful thing to learn and will help you move toward whatever your comparative advantage is.
I have never seen writing on Imposter Syndrome that acknowledges the possibility that you really are less competent and may have no comparative advantages at all.
Let’s imagine this possibility is true… So what?
If I am not engaging in direct work… I’ve scored a position that is more challenging and lucrative than I would have if people knew how incompetent I am, and there’s little or no moral cost to the mixup. Score!
If I am engaging in direct work… the fact that I am the least competent person in the room does not necessarily mean that I shouldn’t be in the room. I might still be doing the most impactful thing I can be!
If I am working at a competitive direct work position, maybe I think that I’m blocking somebody more competent from taking the position. This seems like the ONLY case where I should actually worry about seeming more competent than I am. Even in this case, I should be comparing myself to the people who couldn’t get my job, not my colleagues!
I have identified relevant factors (nature of work, competitiveness) that should attenuate distress due to Imposter Syndrome, but as far as I can tell, these factors don’t attenuate the distress for people with Imposter Syndrome. Would it be useful for people to imagine their worst fears are true, and evaluate how bad that would really be?
I’m interested in feedback.
How much should conflicting desires to be locally kind and globally good affect our choices about living in EA bubbles, where our locally kind choices might multiply the effectiveness of effective people? I had previously felt it was a strong reason to live in an EA bubble, but perhaps this was due to stupid reasons.
Those stupid reasons: In my previous non-EA group living arrangement, I felt frustrated by the conflict between being locally helpful and globally effective. But then when I got to the EA Hotel, I felt this conflict was resolved yet still wasn’t very locally kind or helpful, so maybe the salience of this conflict only ever existed as a justification for being lazy.
I’m curious to know how other people have experienced the transition to and from EA bubbles with respect to this tension.
I work at a company that (as of recently) allows children to operate online clubs based on their interests. I shared this article (along with http://paulgraham.com/nerds.html ) with my team and my boss. So far it has been warmly received. I was reflecting with my boss about what your post meant to me and wrote the following:
---
Explaining fully my intent in sharing would probably require as much effort and eloquence as the essay itself took to write, or more :p.
I am alienated by how little skepticism is commonly directed at parents, teachers, and schooling. So I have these skeptical beliefs and they are so disconnected from the dominant version of reality, that it’s hard to even share (or continue to think) these beliefs.
Like it is hard to make people even understand just how much injustice I believe I personally faced and how damaging this was, much less how much injustice I think most US children face (which is very significantly more). And that drives me insane, that I cannot make this part of myself visible.
This article does a good job of articulating “yes, it really was very bad, and yes everything I’m describing is common.” Even if you do not believe these claims, it is at the very least clear that the author does.
And for me that alone is a relief.
---
Do you think that you would be less altruistic today if you had not had these experiences?
As a non-member of the AMA I apologize if this is unsolicited advice:
To me it seems likely that you can help the psychedelics movement philanthropically as an engineer. Depending on your current earnings, it's very plausible that by the time you become a practicing psychiatrist, you could instead have donated $1M by continuing to earn to give as a software engineer. My guess would be that you can do more for the psychedelics movement with $1M than with one additional psychiatrist.
I mention this because I think this is a common blindspot when people talk about going back to school.
Very clear to me that this is a huge issue among my personal EA network.
I think calibrating people is step 1 of mitigating the hurt feelings, probably more important than feedback and certainly much cheaper.
My sour grapes:
I previously contacted Rob Wiblin and suggested that 80k publish some stats on the various orgs on the 80k jobs board that would help people calibrate their odds of getting the job. I pointed out that this is quite relevant to assess neglectedness and tractability. He responded by asking if I had tried to contact the orgs myself and suggested I do so, which I consider a dismissal of my IMO uncontroversially good suggestion.
I also posted this: https://forum.effectivealtruism.org/posts/6Dqb8Fkh2AnhzbAAM/reducing-ea-job-search-waste and felt the community was mostly uninterested in the problem. I am glad that your post is getting more traction.
He cares for animals too, and he and his wife are vegan (and not in an asshole-like way).
:-\
Why do I keep meeting so many damned capabilities researchers and AI salespeople?
I thought that we agreed capabilities research was really bad. I thought we agreed that increasing the amount of economic activity in capabilities was really bad. To me it seems like the single worst thing that I could even do!
This really seems like a pretty consensus view among EA orthodoxy. So why do I keep meeting so many people who, as far as I can tell, are doing the single worst thing that it's even in their power to do? If there is any legal thing, short of sexual misconduct, that could get you kicked out of EA spaces, wouldn't it be this? I'm not even talking about people who maintain that safety/alignment research requires advancing capabilities or might do so. I'm just talking about people who do regular OpenAI or OpenAI-competitor shit.
If you’re supposed to be high status in EA for doing good, aren’t you supposed to be low status if you do the exact opposite? It honestly makes me feel like I’m going insane. Do EA community norms really demand that I’m supposed to act like something is normal and okay even though we all seem to believe that it really isn’t okay at all?
And yes I think there is a strong argument for ostracization. It seems like you would ostracize somebody for being a nuclear arms industry lobbyist. This seems worse. It’s not behaviorally clear that these people care about anything except mild fun and status incentives, so IDK why in the community we would at all align fun and status with doing the most evil thing you can do.
Of course it does seem like 80k is somewhat to blame here since they continue to promote regular-ass jobs at OpenAI in the jobs board as far as I know. Not very clear to me why they do this.
Maybe I'm just off here about the consensus and nobody cares about what I understand to be the Yudkowsky line. In which case I'd have to ask why people think it's cool to do capabilities work without even a putative safety payoff. IDK, I'd just expect at least some social controversy over this crap lol.
Like, if at least 20% of the community thinks mundane capabilities work is actually really terrible (and at least 20% does seem to think this, to me), you would think that there would be pretty live debate over the topic? Seems pressing and relevant?
Maybe the phrase I'm looking for is "missing moods" or something. It would be one thing if there was a big fight, everyone drew lines in the sand, and then agreed to get along. But nothing like that happened; I just talked to somebody tonight about their work selling AI and basically got a shrug in response to any ethical questions. So I'm going crazy.
I am a current short term visitor. Re: 1)
A) It’s much easier to filter for “genuine” applicants in this case vs. direct grants. The number of people who would like to have free money is very large, and many of them aren’t EAs. The number of people who would like to live in a hotel in Blackpool with a bunch of EAs is much smaller, and people in this set are much more likely to be genuine EAs.
B) I am interested in living at the hotel because having a place to work around other people is important (even necessary) for me to be productive, and it helps A LOT if we share values and interests. A grant wouldn’t do anything for me. I’m sure my position is common.
Re: 2) I had a lot of fun explaining the hotel to customs.