I do alignment research at the Alignment Research Center. Learn more about me at markxu.com/about
Mark Xu
Hey Charles! Glad to see that you’re still around.
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
I don’t think OpenPhil or the EA Funds are particularly funding constrained, so this seems to suggest that “people who can do useful things with money” is more of a bottleneck than money itself.
It seems easy to construct EA projects that benefit from monies and purchasable talent
I think I disagree about the quality of execution one is likely to get by purchasing talent. I agree that in areas like global health, it’s likely possible to construct scalable projects.
I am pessimistic about applying “standard skills” to projects in the EA space for reasons related to Goodhart’s Law.
It seems implausible that market forces are ineffective
I think my take is “money can coordinate activity around a broad set of things, but EA is bottlenecked by things that are outside this set.”
I also don’t get this section “Talent is very desirable”:
I don’t think this section is very important. It is arguing that paying people less than market rate means they’re effectively “donating their time”. If those people were instead earning money, they would be donating money rather than time (e.g., taking a $60k EA job instead of a $120k industry job is roughly the same donation as earning $120k and giving away $60k). In both cases, the total amount donated is roughly constant, assuming some market efficiency. Note that this argument is probably false because the efficiency assumption doesn’t hold in practice.
What is Mark’s model for talent?
I think your guesses are mostly right. Perhaps one analogy is that I think EA is trying to do something similar to “come up with revolutionary insights into fundamental physics”, although that’s not quite right because in physics money can be used to build large measuring instruments, which has no obvious analogue on the EA side.
However, in either of these cases, it seems that special organizations can find ways to motivate, mentor or cultivate these people, or the environment they grow up in. These organizations can be funded with money.
I agree this is true, but I claim that the current bottleneck is, by far, that such organizations/mentors don’t yet exist. I would much rather someone become a mentor than earn money and try to hire a mentor.
I am confused by EA orgs not meeting basic living thresholds. Could you provide some examples?
The purpose of hiring two people isn’t just to do twice the amount of work. Two people can complement each other, creating a team which is better than the sum of their parts. Even two people with the same job title are never doing exactly the same work, and this matters in determining how much value they’re adding to the firm. I think this works against the point you’re making in this passage. Do you account for this somewhere else in your post, and/or do you think it affects your overall point?
My claim is that having one person with the skill-set of two people is more useful than having both of those people. I have some sense that teams are actually rarely better than the sum of their parts, but I have not thought about this very much. I don’t account for this and don’t think it weakens my overall point very much.
But if we can’t really even measure talent to begin with, what are we even talking about when we talk about talent? What do you mean when you say “talent”?
I mean something vaguely like “has good judgement” and “if I gave this person a million dollars, I would be quite pleased with what they did with it” and “it would be quite useful for this person to spend time thinking about important things”.
It is difficult to measure this property, which is why hiring talented people is difficult.
I agree I use the word talent a lot and this is unfortunate, but I couldn’t think of a better word to use.
Rather than “earn to give” or “do direct work,” I think it might be “try as hard as you can to become a highly talented person” (maybe by acquiring domain expertise in an important cause area).
“Try and become very talented” is good advice to take from this post. I don’t have a particular method in mind, but becoming the Pareto best in the world at some combination of relevant skills might be a good starting point.
The flip side is that if you value money/monetary donations linearly—or more linearly than other talented people—then you’ve got a comparative advantage in earning to give! The fact that “people don’t value money” means that no one’s taking the exhausting/boring/bad-location jobs that pay really well. If you do, you can earn more than you “should” (in an efficient market) and make an outsize impact.
This is a good point. People able to competently perform work they’re unenthusiastic about should, all else being equal, have an outsized impact, because their choice of work can more closely track the true value of the work rather than how enjoyable it is.
Money Can’t (Easily) Buy Talent
Defusing AGI Danger
I’m excited about more efficient matching between people who want career advice and people who are not-maximally-qualified to give it, but can still give aid nonetheless. For example, when planning my career, I often find it helpful to talk to other students making similar decisions, even though they’re no more “qualified” than me. I suspect that other students/people feel similarly and one doesn’t need to be a career coach to be helpful.
I will now consider everything that Carl writes henceforth to be in a parenthetical.
This creates weird incentives, e.g. I could construct a plausible-but-false view, make a post about it, then make a big show of changing my mind. I don’t think the amounts of money involved make it worth it, but I’m wary of incentivizing things that are so easily gamed.
This is an interesting strategic consideration! Thanks for writing it up.
Note that the probability of AsianTAI/AsianAwarenessNeeded depends on whether or not there is an AI risk hub in Asia. In the extreme, if you expect making aligned AI to take much longer than unaligned AI, then making Asia concerned about AI risk might drive the probability of AsianTAI close to 0. Given how rough the model is, I don’t think this matters that much.
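To make the dependence concrete, here is a minimal sketch of one possible decomposition (the “hub” notation is mine, not necessarily how the original model is structured):

$$P(\text{AsianTAI}) = P(\text{AsianTAI} \mid \text{hub})\,P(\text{hub}) + P(\text{AsianTAI} \mid \text{no hub})\,\bigl(1 - P(\text{hub})\bigr)$$

If aligned AI takes much longer to build than unaligned AI, then $P(\text{AsianTAI} \mid \text{hub})$ could be much smaller than $P(\text{AsianTAI} \mid \text{no hub})$, so treating the probability of AsianTAI as a single unconditional number hides most of what the intervention changes.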
How many EA forum posts will there be with greater than or equal to 10 karma submitted in August of 2020?
The Metaculus link is broken.
In what meaningful ways can forecasting questions be categorized?
This is really broad, but one possible categorization might be questions best answered with inside-view predictions versus questions best answered with outside-view predictions.
How optimistic are you about “amplification” forecasting schemes, where forecasters answer questions like “will a panel of experts say <answer> when considering <question> in <n> years?”
When I look at most forecasting questions, they seem Goodhart-y in a very strong sense. For example, the Goodhart tower for COVID might look something like:
1. How hard should I quarantine?
2. How hard I should quarantine is affected by how “bad” COVID will be.
3. How “bad” COVID will be cashes out into something like “how many people”, “when the vaccine is coming”, “what the death rate is”, etc.
By the time something I care about becomes specific enough to be predictable/forecastable, it seems like most of the thing I actually cared about has been lost.
Do you have a sense of how questions can be better constructed to lose less of the thing that might have inspired the question?
Systematic undervaluing of some fields is not something I considered and slightly undermines my argument.
I still think the main problem would be identifying rising-star historians in advance instead of in retrospect.