There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it’s a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:
EA and the availability of lots of funding for it are relatively new — there’s just not that much time for “market inefficiencies” to have been filled.
The number of people in EA who are able to get funding for, and excited to start, new projects, is really small relative to the number of people doing this in the wider world.
I don’t see the connection between EMH and EA projects. Can you elaborate on how those two intersect?
EMH says that we shouldn’t expect great opportunities to make money to just be “lying around” ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer “why didn’t anyone do this before?” (Of course, this is a simplification; EMH isn’t really one coherent view.)
One might also think that there aren’t great EA projects just “lying around” ready for anyone to do. This would be an “EMH for EA.” But I think it’s not true.
I had to use Wikipedia to get a concise definition of EMH, rather than rely on my memory:

The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information. A direct implication is that it is impossible to “beat the market” consistently on a risk-adjusted basis since market prices should only react to new information. [1]

This appears to me to apply exclusively to financial (securities) markets, and I think we would be taking it (too) far out of its original context in trying to use it to answer questions about whether great EA projects exist. In that sense, I completely agree with you that:

it’s a poor way to model the situation that will lead you to make systematically wrong judgments
In the real (non-financial) world, there are plenty of opportunities to make money, which is one reason entrepreneurs exist and are valuable. Are you aware of people using EMH to suggest we should not expect to find good philanthropic opportunities?
[1] https://en.wikipedia.org/wiki/Efficient-market_hypothesis
What I’m talking about tends to be more of an informal thing which I’m using “EMH” as a handle for. I’m talking about a mindset where, when you think of something that could be an impactful project, your next thought is “but why hasn’t EA done this already?” I think this is pretty common and it’s reasonably well-adapted to the larger world, but not very well-adapted to EA.
“but why hasn’t EA done this already?”

still seems like a fair question. I think the underlying problem you’re pointing to might be that people will then give up on their projects or ideas without having come up with a good answer. An “EMH-style” mindset seems to point to an analytical shortcut: if it hasn’t already been done, it probably isn’t worth doing. Which, I agree, is wrong.
I still think EMH has no relevance in this context, and that irrelevance should be the main argument against applying it to EA projects.
If you’re an EA who’s just about to graduate, you’re very involved in the community, and most of the people you think are really cool are EAs, I think there’s a decent chance you’re overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the “career capital” their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.
At first blush it seems like this recommends you should almost never take an EA job early in your career — since jobs at EA orgs are such a small proportion of all jobs, what are the odds that such a job would be optimal from a career capital perspective? I think this is wrong for a number of reasons, but it’s instructive to actually run through the list. One is that a job being at an EA org is correlated with it being good in other ways — e.g. with it having smart, driven colleagues that you get on well with, or with it being in a field connected to one of the world’s biggest problems. Another is that some types of career capital are best gotten at EA orgs or in doing EA projects — e.g. if you want to upskill for community-building work, there’s plausibly no Google/McKinsey of community-building where you can go to get useful career capital. (Though I do think some types of experience, like startup experience, are often transferable to community-building.)
I think a good orientation to have towards this is to try your hardest, when looking at jobs as a new grad, to “wipe the slate clean” of tribal-affiliation-related considerations, and (to a large extent) of impact-related considerations, and assess mostly based on career-capital considerations.
(Context: I worked at an early-stage non-EA startup for 3 years before getting my current job at Open Phil. This was an environment where I was pushed to work really hard, take on a lot of responsibility, and produce high-quality work. I think I’d be way worse at my current job [and less likely to have gotten it] without this experience. My co-workers cared about lots of instrumental stuff EA cares about, like efficiency, good management, feedback culture, etc. I liked them a lot and was really motivated. However, this doesn’t happen to everyone at every startup, and I was plausibly unusually well-suited to it or unusually lucky.)
I agree with this take (and also happen to be sitting next to Eli right now talking to him about it :). I think working at a fast-growing startup in an emerging technology is one of the best opportunities for career capital: https://forum.effectivealtruism.org/posts/ejaC35E5qyKEkAWn2/early-career-ea-s-should-consider-joining-fast-growing