I think the “passive impact” framing encourages us too much to start lots of things and delegate/automate them. I prefer “maximize (active or passive) impact (e.g. by building a massively scalable organization)”. This includes the strategy “build a really excellent org and obsessively keep working on it until it’s amazing”, which doesn’t pattern-match “passive impact” and seems superior to me because a lot of the impact is often unlocked in the tail-end scenarios.
You might argue that excellent orgs often rely on a great deal of delegation and automation, and I would wholeheartedly agree with that. But I think the “passive impact” framing tends to encourage a thinking pattern that’s less like “building massively scalable systems” and more like “quickly automate something”, and I think that’s worse.
Yeah, it’s an interesting question whether, all else being equal, it’s better to set up many passive impact streams or build one very amazing and large organization.
I think it all depends on the particulars. Some factors are:
What’s your personal fit? This is a really important factor. Some people love the idea of staying at one organization for ten years, deeply optimizing all of it, and scaling it massively. Others have an existential crisis just thinking of that scenario. Passive impact is a better strategy if you like things when they’re small and have a startup vibe, or if you find it hard to stay interested in the same thing for years on end.
What sort of passive impact are you setting up? Obsessively optimizing an amazing organization and then working hard to replace yourself with a stellar person, so that it continues to run as an amazing org without you, probably beats starting and staying at the same org. On the other hand, digital automation tends to decay a lot more without at least somebody staying on to maintain the project, so on average that would be beaten by optimizing a single org.
Possible caveat.
If the ‘passive impact’ is of the ‘convince someone else to do it’ form, obviously we need some people willing to actually do the active things.
I think we don’t want too much of a culture where
‘Person A, who convinces other people to do X, gets the credit’, and
‘Person B, who actually does X, gets less credit’.
This would make it hard to motivate people to be the person-B actual doer of the thing X.
Another possible caveat: there is some deadweight loss in the time spent convincing another person to do X, perhaps after multiple attempts. If A and B are actually equally qualified to do X and have roughly equal EA-impact opportunity costs (roughly equal value of time), then it may be better for A to just do X rather than to have A spend 10 hours trying to convince B to do X (or trying B, ..., F before finally convincing G to do X).
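A minimal sketch of the break-even here (my notation, not from the comment above): let $v$ be the shared value of an hour of A’s or B’s time, $c$ the hours A spends persuading, and $h$ the hours X itself takes. Then

$$\text{cost}_{\text{delegate}} = (c + h)\,v \qquad \text{vs.} \qquad \text{cost}_{\text{just do it}} = h\,v,$$

so with equal values of time, delegation carries a deadweight loss of $c\,v$, and it only pays off if B’s opportunity cost is actually lower than A’s, or if persuading B has side benefits (e.g. it frees A up for higher-leverage work).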
On that note, anyone want to start EA CB radio and give me a large share of the credit :).
Definitely! It’s a specific instance of a potential meta-trap (another piece here about the idea).
The big questions are:
1. What ratio of meta to direct work should there be in the community?
2. How do we allocate credit?
Both are far beyond the scope of this post, but very important to discuss!
Yeah, I think I was channeling Peter’s post here.
A couple comments.
First, I think there’s something akin to creating a pyramid scheme for EA by leaning too heavily on this idea, e.g. “earn to give, or better yet get 3 friends to earn to give and you don’t need to donate yourself because you had so much indirect impact!”. I think david_reinstein’s comment is in the same vein and good.
Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)
I don’t think the active/passive distinction is real (or at least not real enough to be useful). I think it just looks that way to people who only earn money by directly trading their labor for it. So-called passive income still requires work (otherwise money would just earn you more money with zero effort), just less of it. And that’s the key. Thus I think it’s better to talk about leverage rather than active/passive.
To say a bit more, trading labor for money/impact by default has 1:1 leverage, i.e. you get a linear return on your labor. For example, literally handing out malaria nets, literally serving food to the destitute, etc. Then there’s work that gets a bit of leverage but is still linear: maybe you can leverage your knowledge, network, etc. to have 1:n leverage. This might be working as a researcher, doing work for an EA meta-org, etc. Then there are opportunities for non-linear leverage, where each unit of work gets quadratic or exponential returns. In the realm of money and “passive” income this is stuff like investing in or starting a company (I know, not what people usually think of as “passive” income). In EA this might be defining a new field, starting a new EA org, etc.
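To put that distinction in symbols (my formalization, not the commenter’s): write $w$ for units of work put in and $R(w)$ for the return. Then

$$R(w) = w \;\;(\text{1:1}), \qquad R(w) = n\,w \;\;(1{:}n), \qquad R(w) = w^2 \;\text{or}\; e^{kw} \;\;(\text{non-linear}),$$

and the key difference is the marginal return $R'(w)$: constant in the two linear cases, but growing with $w$ in the non-linear ones, which is why each additional unit of work there is worth more than the last.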
Note though that we rely on people having impact in all these different ways for the economy/ecosystem to function. Yes, 1:1 leverage work would ideally be automated, but sometimes it can’t be, and then it’s a bottleneck and we need someone to do it. If you squeeze out too much of this type of work you get something like a high-income/impact trap: no one can be bothered to do important work because it isn’t high leverage enough!
So I think people should try to have as much leverage as they can, but we also need to be careful about how we promote leverage, especially in EA, where there are fewer economic feedback systems to help the ecosystem self-regulate. Otherwise we may end up with no one willing to do the essential, low-leverage work.
One of the best passive impact examples I know is Eneasz Brodski’s recording of HPMoR. (Also, can we retroactively reward this?)