Big fan of your blog. Some quick counterpoints/counterarguments (that are not meant to be decisive):
Re: …because it makes people upset.
As others noted, the point of EA(G) is to have the maximum impact on moral patients overall, not to be welcoming to individual EAs or make EAs happy. I think it’s not impossible that aiming for community happiness is a better proxy goal than aiming for impact directly (e.g.), but I think it’d be quite surprising (at least to me) and should be argued for more explicitly.
More to the point, I’m not convinced that having open EAGs will actually make people happier. To some degree, I read the exclusivity as just one salient thing for people to complain/be upset about, and I expect that as long as we have rejections for reasons other than commitment, people will be similarly upset. With open EAGs, I expect the goalposts will move and people will instead be upset about:
Getting rejected from directly important things like grants or jobs
Networking “tiers” within EAG
More illegible signals of status
Probably even EA Forum karma, downvotes, other social media stuff, etc.
I think it genuinely makes sense to be upset about these things. And it’s unfortunate, and I do think people’s emotions matter. But ultimately we’re here to reduce existential risk or end global poverty or stop factory farming or do other important work. Not primarily to make each other happy, especially during work hours
(with maybe a few exceptions, like if you’re an EA therapist or something).
Re: …because you can’t identify promising people.
I think you’re just wrong about the object-level point. Both you and Kelsey (and, I suspect, future analogues) were successful and high-potential in ways that are highly legible to EA-types.
However, I think your argument can basically be preserved by arguing either a) that people who are legibly impressive in non-EA ways (à la Austin’s point) will be rejected, and that being legibly impressive in non-EA ways is a better predictor of future counterfactual impact than being impressive in EA ways, or b) that people who aren’t legibly impressive will often end up having a huge impact later, so the selection processes are just really bad at discernment. So I think #1 isn’t my true rejection.
My more central rejection is, like, man, all selection processes are imperfect. That doesn’t mean we shouldn’t have selection processes at all. Otherwise the same argument applies to jobs and grants, and I think it’s probably wrong for us to give grants and jobs without discernment.
More generally, I wish there were an attempt to grapple with both the costs and the benefits, rather than just looking at the costs.
...because people will refuse to apply out of scrupulosity.
Yes, this is unfortunate. But again, I would not guess that great people self-selecting out of something is particularly common.
(I think I might be unusually blind to this type of thing, however. For example, I have nearly zero imposter syndrome, and I’m given to understand that imposter syndrome is a common thing in EA.)
I also think this can be ameliorated through better messaging.
...because of Goodhart’s Law
Oh man, I should write a longer post about this at some point, but in general I think EAs and rationalists are too prone to invoke “Goodhart’s Law” as a magic curse that suggests something is maximally bad in the limit, without considering more empirically how useful or bad something is in practice.
And there are a bunch of advantages to having people aim towards something (even if imperfect); in general I think we’re better off with more BOTECs and more evaluations, rather than fewer.
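(As a toy illustration of the kind of BOTEC I have in mind, here’s a minimal sketch with entirely made-up numbers; none of these figures come from the post or from any real EAG data, they’re just placeholders to show the shape of the calculation.)

```python
# Toy BOTEC: rough net value of one EAG attendee slot under two hypothetical
# admission regimes. All numbers below are invented for illustration only.

def net_value_per_slot(p_high_impact: float, value_high: float,
                       value_typical: float, cost_per_slot: float) -> float:
    """Expected counterfactual value of one attendee slot, minus its cost."""
    expected_value = p_high_impact * value_high + (1 - p_high_impact) * value_typical
    return expected_value - cost_per_slot

# Hypothetical inputs (not real data): selective admissions assumed to find
# high-impact attendees more often than fully open admissions.
selective = net_value_per_slot(p_high_impact=0.10, value_high=10_000,
                               value_typical=500, cost_per_slot=700)
open_adm = net_value_per_slot(p_high_impact=0.03, value_high=10_000,
                              value_typical=500, cost_per_slot=700)

print(f"Selective admissions: ~${selective:,.0f} net value per slot")
print(f"Open admissions:      ~${open_adm:,.0f} net value per slot")
```

The point isn’t that these numbers are right (they aren’t, they’re made up); it’s that writing even a crude version of this down makes the disagreement legible and checkable, which is the advantage I’m gesturing at.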
(weak-moderate strength argument) “If you only let people whose work fits the current paradigm sit at the table, you’re guaranteed not to get these.” To some degree I think there’s a bit of an internal contradiction here: to the degree that our legible systems do a poor job of incentivizing independent thinkers, this is a somewhat self-correcting problem. We’d expect the best independent thinkers to be less affected/damaged by our norms.
...because man does not live by networking alone
I actually don’t have a clear understanding of this point, so I don’t feel ready to argue against it.
I do think some EAs are looking for EA events/community mostly from a “vibes” angle, where they find being effectively altruistic very hard and want events that help with maintaining altruistic commitment. I do have a lot of sympathy for this view, and I agree it is quite important.
...because you can have your cake and eat it too.
I suspect this will not solve most of the emotional problems people have (for reasons I mentioned in that section), nor the Goodhart’s Law concern, nor the “can’t identify promising people” problem. Though I agree there’s a decent chance it can significantly ameliorate the “man doesn’t live by networking alone” problem and the “people will refuse to apply out of scrupulosity” problem.
In general, I thought this post was interesting and talked about a serious potential issue/mistake in our community, and I’m glad that this conversation has been started/re-ignited. I also appreciate the distinct sections, which make it easier to argue against specific points. However, in some ways I find the post one-sided, overly simplistic, and too rhetorical. I ended up neither upvoting nor downvoting this post as a result.
But ultimately we’re here to reduce existential risk or end global poverty or stop factory farming or do other important work. Not primarily to make each other happy, especially during work hours
You raise many good points, but I would like to respond to (not necessarily contradict) this sentiment. Of course you are right, those are the goals of the EA community. But by calling this whole thing a community, we cannot help but create certain implicit expectations. Namely, that I will not be treated simply as a means to an end, i.e. assessed and valued only by how promising I am, how much my counterfactual impact could be, or how much I could help an EA org. That’s just being treated as an employee, which is fine for most people, as long as the employer does not call the whole enterprise a community.
Rather, it vaguely seems to me that people expect communities to reward and value their engaged members, and to consider the wellbeing of the members to be important in itself (and not just so that the members can be, e.g., more productive).
I am not saying this fostering of community should happen in every EA context, or even at EA Globals (maybe a more local context would be more fitting). I am simply saying that if every actor just bluntly considers impact, and community involvement is rewarded nowhere, then people are likely, and also somewhat justified, to feel bitter about the whole community thing.
Both you and Kelsey (and, I suspect, future analogues) were successful and high-potential in ways that are highly legible to EA-types.
I’m curious, how was Scott Alexander “successful and high-potential in ways that are highly legible to EA-types” in his early 20s? I wouldn’t be surprised, at all, if he was, but I’m just curious because I have little idea of what he was like back then. As far as I know, he started posting on LessWrong in 2009, at the age of 24 (and started Slate Star Codex four years later). I’m not sure if that is what you are counting as “early 20s,” or if you are referring to his earlier work on LiveJournal, or perhaps on another platform that I’m not aware of. I’ve read very few (perhaps none) of his pre-2009 LJ posts, so I don’t know how notable they were.
Oh hmm I might just be wrong here. Some quick points:
I didn’t know Scott’s exact age, and thought he was younger.
In particular, I thought this was written when he was younger (EDIT: than 25), but couldn’t figure out exactly when.
EA has more infrastructure/ability to discover great bloggers (or would-be bloggers) who are interested in EA-ish issues than it previously had.
I think it’s easier to be recognized as an EA blogger than it was 5-10 years ago, though probably harder to “make it big” (since more of the low-hanging fruit in EA blogging has been plucked).
I think I wrote that piece in 2010 (based on the timestamp on the version I have saved, though I’m not 100% sure that’s the earliest draft). I would have been 25-26 then. I agree that’s the first EA-relevant thing I wrote.
See https://web.archive.org/web/20131230140344/http://squid314.livejournal.com/243765.html. (Also, I think the webpages you link to are from no later than 2008, and clustered up to November 2008.)
(The dead-child thing was almost certainly written in 2008.) (Edit: see https://web.archive.org/web/20131230140344/http://squid314.livejournal.com/243765.html.)
Thanks for finding this. Assuming he wrote this around the time that it was posted, he’d have been 24.
Maybe I’m just ignorant here, but where’s Scott in that link?
The quoted excerpt from the post, and the original “Dead Child Currency” post in general, were written by Scott.
The missing info for me was that Scott had yet another alias, as Denise kindly replied. I think the lesson learned is “If you have a good reason not to reveal your identity, at least stick to just one alias”.
Wouldn’t be the lesson I’d take here, but probably not that important! :)
Yvain is Scott’s old LW name.
Question: when do you think you will make a post about Goodhart’s Law?
Sorry, the “should” is more like a normative “I wish I could do this” rather than a prediction or promise.
Maybe a 15% chance I’ll do this in the next two months?