Those expected value arguments that low-probability but very high-yield opportunities (moonshots) are more valuable than lower-yield but more certain ones have always rubbed me the wrong way. I suppose that's because, in a very real sense, a 1% chance might as well be a 0% chance, especially for something that will only be attempted once. But I was also thinking about the economy: I suspect the overwhelming majority of economic activity is directed at lower-risk, lower-yield opportunities, and that it has to be this way for the economy to function. There is, of course, some optimal proportion of the economy that should be dedicated to moonshots, but I wonder what it is. Similarly for altruism: there is probably some optimal proportion of altruistic effort that should go to moonshots, relative to effort on lower-risk, lower-yield work.
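To make the one-shot intuition concrete, here is a toy sketch with made-up numbers (the 1%-chance-of-1,000 moonshot and the 90%-chance-of-10 safe bet are purely illustrative, not from anything I've read): the moonshot can have the higher expected value while still paying off essentially never in a single attempt.

```python
import random

# Hypothetical numbers for illustration only
moonshot_p, moonshot_value = 0.01, 1000   # 1% chance of a payoff of 1,000
safe_p, safe_value = 0.90, 10             # 90% chance of a payoff of 10

# Expected values: the moonshot "wins" on paper
print("Moonshot EV:", moonshot_p * moonshot_value)   # 10.0
print("Safe bet EV:", safe_p * safe_value)           # 9.0

# But for something attempted only once, the typical moonshot outcome is zero
random.seed(0)
trials = 100_000
hits = sum(random.random() < moonshot_p for _ in range(trials))
print("Share of one-shot moonshot attempts that pay off:", hits / trials)  # ~0.01
```

The expected value comparison only averages out if you (or the economy as a whole) get to take many such bets, which is part of why the proportion question seems like the right framing to me.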
Has anyone written about this, i.e. what the best proportion of moonshots to non-moonshots would be in EA? Or in the economy? My point is that it's not as simple as saying moonshots are better.
I also recently read someone saying that the worst case with a moonshot is that nothing happens, but that isn't true: a moonshot has an opportunity cost, since all the time, effort, and money spent on it could have been used on something else.
Since AI x-risk is a main cause area for EA, shouldn't significant money be going into mechanistic interpretability? After reading the AI 2027 forecast, I came away thinking that the opacity of AI systems is the main source of the risk they pose, so making significant progress in this field seems very important for alignment.
I took the Giving What We Can Pledge, and I want to say there should be something like it specifically for mechanistic interpretability, but probably only very few people could be convinced to give 10% of their income to that cause.