Hey, yo, Mark, it’s me, Charles.
What’s up?
So I’ve read this post, and you make a lot of important points here.
Focusing on your takeaways and conclusion, you seem to say that earning to give is bad because buying talent is impractical.
The reasoning is plausible, but I don’t see any evidence for your conclusion, and there seem to be direct counterpoints you haven’t addressed.
Here’s what I have to say:
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
There’s a very direct way to get a sense of whether earning to give is effective, and that’s by looking at the projects and funds where earning-to-give money goes, such as the Open Phil and EA Funds grants databases.
Looking at these databases, I think it’s implausible for me, or most other people, to say that a large fraction of the projects or areas are poorly chosen. This, plus the fact that many of these groups can probably accept more money, seems like an immediate response to your argument.
These funds are particularly apt because they are where a lot of earning-to-give money goes.
Your post seems to directly imply that people who earn to give and have donated haven’t been very effective. This seems implausible, as these people are often highly skilled and almost certainly think carefully about their donations. Also, meta-level comment and criticism are common in EA, so if these donations were ineffective, I’d expect someone to have pointed it out by now.
It seems easy to construct EA projects that benefit from monies and purchasable talent
We know with certainty that millions of Africans will die of malnutrition and lack of basic running water. These causes are far larger than, say, COVID deaths. In fact, the secondary effects of COVID are probably more harmful to these people than the virus itself.
The suffering is so stark that even projects as simple as putting up buckets of water for handwashing would probably alleviate it. In addition to saving lives, these projects probably help with demographic transition and other systemic, longer-run effects that EAs should like.
Executing these projects would cost pennies per person.
This doesn’t seem to require unusual skills that are hard to purchase.
Similarly, I think we could construct many other projects in the EA space that require administrative, logistical, standard computer-programming, outreach, and organizational skills. All of these are available for purchase, and most people reading this post probably have them.
It seems implausible that market forces are ineffective
I am not of the “Chicago school of economics”, but this video vividly explains how money coordinates activity.
While blind interpretations of this idea are stupid, it seems plausible that money can cause effective altruistic activity in the same way that it causes a pencil to be made and sold.
Why wouldn’t we say that everyone in an organization, and even in the supply chain, that provides clean water or malaria nets is doing effective altruism?
I also don’t get this section “Talent is very desirable”:
Not quite. Assuming market efficiency, the savings of supplementing an altruistic individual is balanced by lost counterfactual donations. Suppose that Alice currently makes $200,000 a year and is willing to do direct work for $100,000 a year. Hiring Alice seems like a good deal—we get $200,000 of talent for $100,000. However, if Alice would do direct work for half the salary, she should also be willing to donate half her salary. Our savings are balanced by lost donations.
But in my mind, the idea of earning to give is that we have a pool of money and a pool of ex ante valuable EA projects. We take this money and buy labor (EA people or non-EA people) to do these projects.
The fact that this same labor can also earn money in other ways doesn’t create some sort of gridlock or undermine the concept of buying labor.
So, when I read most of your posts, I feel dumb.
I read your post “The Solomonoff Prior is Malign”. I wish I could also write awesome sentences like “use the Solomonoff prior to construct a utility function that will control the entire future”, but instead I spent most of my time trying to find a Wikipedia page simple enough to explain what those words mean.
Am I missing something here?
What is Mark’s model for talent?
I think one thing that would help is a clearly articulated model of how talent is used in a cause area, and why money fails to purchase it.
You’re interested in AI safety, of, like, the 2001 kind. While I’m not right now, and I’m not an expert, I can imagine models of this work where the best contributions supersede slightly worse work, making even skilled people useless.
For these highest-tier contributors, the ones making sure that HAL doesn’t close the pod bay doors, perhaps all of your arguments apply. Their talent might be very expensive, or might require intrinsic motivation that doesn’t respond to money.
Also, maybe what you mean is another class: an exotic “pathfinder” or leader model. These people are like Peter Singer, Martin Luther King, or Stacey Abrams. It’s debatable, but it may be true that these people cannot be predicted and cannot be directly funded.
However, in either of these cases, it seems that special organizations can find ways to motivate, mentor, or cultivate these people, or the environment they grow up in. These organizations can be funded with money.
I don’t consider Open Phil to be an example of Earning to Give. My understanding is that basically all of their funding comes from Dustin Moskovitz’s Facebook stock. He completed his work on Facebook before taking the Giving Pledge, so his primary earning activities were not chosen in the spirit of Earning to Give.
It’s also not clear to me that the EA Funds are examples of EtG. The EA Funds take frequent donations, and my impression is that they have many donors. At least, I don’t see any evidence that the donors are purposefully Earning to Give (i.e. that they chose their jobs as a way to maximize earnings with a plan to donate).
It’s possible that you and I have different definitions of EtG. Mark’s post doesn’t explicitly define it. Wikipedia’s definition does not seem to include “normal” donors who give, say, 10% of their not-super-large income.
These examples might not be critical to your first point, but I think you would need to provide other examples of grantmakers that are more obviously funded by EtG (e.g. by evaluating Matt Wage’s personal grantmaking).
Hey Charles! Glad to see that you’re still around.
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
I don’t think OpenPhil or the EA Funds are particularly funding constrained, so this seems to suggest that “people who can do useful things with money” is more of a bottleneck than money itself.
It seems easy to construct EA projects that benefit from monies and purchasable talent
I think I disagree about the quality of execution one is likely to get by purchasing talent. I agree that in areas like global health, it’s likely possible to construct scalable projects.
I am pessimistic about applying “standard skills” to projects in the EA space for reasons related to Goodhart’s Law.
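To gesture at the failure mode I have in mind, here is a toy simulation of regressional Goodhart (my construction, with made-up Gaussian numbers, not anything from the post): the harder you select on a measurable proxy, the more the selected set’s true quality lags its proxy scores.

```python
import random

random.seed(0)

# Toy regressional-Goodhart sketch (distributions are made up, purely
# illustrative): each candidate has a true quality we care about and a
# measurable proxy that "standard skills" can optimize.
candidates = []
for _ in range(10_000):
    quality = random.gauss(0, 1)           # what we actually care about
    proxy = quality + random.gauss(0, 1)   # quality plus measurement noise
    candidates.append((proxy, quality))

# Selecting the top 10 by proxy picks candidates whose scores are
# inflated by noise; their true quality lags well behind the proxy.
top = sorted(candidates, reverse=True)[:10]
avg_proxy = sum(p for p, _ in top) / len(top)
avg_quality = sum(q for _, q in top) / len(top)
print(f"avg proxy of selected:   {avg_proxy:.2f}")
print(f"avg quality of selected: {avg_quality:.2f}")  # ~half the proxy average
```

Money can buy more optimization pressure on the proxy, but past some point that mostly buys the gap.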
It seems implausible that market forces are ineffective
I think my take is “money can coordinate activity around a broad set of things, but EA is bottlenecked by things that are outside this set.”
I also don’t get this section “Talent is very desirable”:
I don’t think this section is very important. It is arguing that paying people less than market rate means they’re effectively “donating their time”. If those people were earning money, they would be donating money instead. In both cases, the amount of donations is roughly constant, assuming some market efficiency. Note that this argument is probably false because the efficiency assumption doesn’t hold in practice.
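To spell out the accounting, here is a minimal sketch using the numbers from the quoted Alice example (the variable names and framing are illustrative, not from the post):

```python
# Illustrative numbers from the quoted Alice example (hypothetical).
market_salary = 200_000  # what Alice earns doing non-EA work
direct_salary = 100_000  # what Alice would accept for direct EA work

# Option A: Alice earns to give, keeping what she'd have accepted for
# direct work and donating the rest.
net_to_ea_if_etg = market_salary - direct_salary    # $100,000 in donations

# Option B: Alice is hired for direct work. EA gets her market-value
# labor at a discount, but her donations disappear.
net_to_ea_if_hired = market_salary - direct_salary  # $100,000 of discounted talent

# Under the market-efficiency assumption, the two options are a wash.
assert net_to_ea_if_etg == net_to_ea_if_hired
```

Both quantities are the same expression, which is the point: under the efficiency assumption, the salary discount and the lost donations cancel exactly, and the argument fails exactly where that assumption does.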
What is Mark’s model for talent?
I think your guesses are mostly right. Perhaps one analogy is that I think EA is trying to do something similar to “come up with revolutionary insights into fundamental physics”, although that’s not quite right, because money can be used to build large measuring instruments, and that has no obvious analogue on the EA side.
However, in either of these cases, it seems that special organizations can find ways to motivate, mentor, or cultivate these people, or the environment they grow up in. These organizations can be funded with money.
I agree this is true, but I claim that the current bottleneck is, by far, that those organizations/mentors don’t yet exist. I would much rather someone become a mentor than earn money and try to hire one.