Gleb, I’m going to pick on you a bit, but I’m just using you as an example of a broader trend:
for Christ’s sake, there are too many redundant meta-organizations
ok I finally got that off my chest.
Off the top of my head here are some EA meta-organizations:
-Centre for Effective Altruism: including EA Global, EA Outreach, EA Ventures
-Giving What We Can
-The Life You Can Save
-The A-Factor (if you don’t know, don’t ask, we’re not going down that road again)
-some site that was probably a scam but I don’t even know anymore because I’m really not surprised when I see another Wordpress EA “organization” pop up
Here are some rationality/life-hacking/soylent-standing desk-pomodoro organizations:
-LessWrong (not exactly an organization, but still they were the OGs in the game...)
-CFAR
-SelfSpark
-whatever is on the top of Malcolm Ocean’s linkedin profile right now
I am not saying all of these organizations are completely redundant or useless or bad. (Only some are...) But we pretty much have our bases covered now as far as EA and rationality go.
Especially with “rationality”, agh. All these organizations rely almost exclusively on Kahneman’s theories, which are certainly useful, but it’s naive (and frankly cultish) to act like you’re going to save the world with them. Human behavior and society are complex, and, believe it or not, there are other theories of rationality (such as revealed preference theory). If you want to make people more “rational” for EA purposes, you should have a very specific goal in mind. Who do you want to make more rational? In what contexts? How will you change incentives to make that happen? And most importantly, rational with what values?
I think there’s currently a perceived glamour to Silicon Valley startup culture, and it’s pushing a lot of people to do startups with thin ideas. I’m not pretending to be immune to this: I still would love to be an entrepreneur. But there’s this sense that if we just soylent and standing-desk and pomodoro enough, a good startup will just HAPPEN. In reality, successful startups (ignoring the current bubble) typically come from someone spending a fair amount of time in a field, gaining technical expertise, and finally finding a specific problem that hasn’t been solved or isn’t being addressed effectively.
Sorry to be a jerk. It’s not a terrible idea, just a thin one in an already saturated market.
I still agree with most of this comment as a general trend I’ve noticed in EA… but I don’t think this was the right context for it. It feels too much like punching down, since Gleb is a relatively new EA and clearly means well, he was just in the wrong place at the wrong time.
Gleb, please continue with EA and don’t get discouraged. Lord knows I was an idiot as a new EA.
Lila, would you mind if I asked a mod to get rid of your post above? Feel free to make a similar point on a GWWC blog post. But as you seem to agree, this seems like it could be pretty off-putting for new people to the forum, given that it’s directed at someone quite new to the community and who clearly means very well.
As an aside—it seems like a shame that any kind of metaphorical punching would be happening. We should be holding each other to account and continuously challenging each other to be more effective, but surely we shouldn’t be ridiculing or fighting one another?
Thanks for your skepticism, and your encouragement!
The Theory of Change does lay out quite clearly who we want to help become more rational—the mass audience. LW, CFAR, etc. don’t aim at the mass audience. Here’s an example of how we’re aiming at the mass audience in political contexts.
Here’s some information about our EA work and its impact. Hope this is helpful, and I’m curious about your feedback :-) Always trying to improve.
Yeah, your comment was correct and needed, but where it’s truly needed is in punching up (which here obviously means calling out MIRI, CFAR, and CEA). That’s what I try to do. Otherwise newer and smaller “orgs” like Gleb’s get criticized for being redundant, while CEA gets a free pass for being one of the first movers and then claiming the EA movement that sprang up as its fiefdom and pass to limitless funding. Leave Gleb alone and fight the real battles.
Oh and good on you for being less of an insensitive (but truth telling!) ahole than you often are. ;-)
I appreciate your perspective, but I think there’s a lot of space for charity entrepreneurship. See my response to Lila above, and let me know your thoughts :-)
I’d enjoy reading your reasons for this in a top-level forum post. I expect others would too, and there are certainly plenty who think as you do who could participate in the comment-thread discussion your post could trigger.
I thought a lot of this was already covered in this post and the links there: http://effective-altruism.com/ea/q6/new_project_announcement_charity_entrepreneurship/ It seemed to be positively received without much commentary, so I’m not sure others would have a lot to say.