This is an intriguing idea, and I’m all for experimentation in nonprofits generally and with compensation specifically. I also find nonprofit performance incentives potentially valuable and interesting.
One problem I see is that many funders would hate this: from their perspective it creates a sort of tax on their donation. Instead of the whole donation going to whatever new thing they’d want to fund, a percentage gets set aside for current employees. I think this is part of the reason (per Jared’s smart reply) that grantwriting commissions are looked down upon in the industry.
Another problem could be that many donors want to feel that nonprofit employees are not motivated by money, and implying otherwise could make the nonprofit unattractive.
I think the broader principal-agent issue is that funding and results are orthogonal to one another (probably the central problem for nonprofits generally), so compensating based on funding raised incentivizes employees to pursue flashy or unfairly charismatic projects, overpromise, and embellish or lie about results. (Though to be clear, nonprofits/nonprofit fundraisers already face these incentives.)
One analogous idea I’ve noodled around with a bit is results-based bonuses: you set a goal, a probability of success, and a dollar figure for achieving the goal, and you set aside a pool of money; if the employee achieves the result, they receive the amount it’s worth divided by the estimated probability of success. Using my job as an example: if 1Day Sooner’s goal is a 50% chance of our work being the but-for cause of saving ~4 million DALYs by 2030, you could set aside $100K of my compensation each year, and if we achieved the goal I’d receive 2x that. One problem is that a lot of results take a long time to materialize (and are hard to prove/calculate reliably), and you might have to pay too many premiums (to adjust for risk + the time value of money) to make it worth it.
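To make the arithmetic concrete, here is a minimal sketch of that payout rule in Python, using only the illustrative figures from the example above (the $100K set-aside, the 50% probability, and the function name are all hypothetical, not an actual policy):

```python
def results_based_bonus(set_aside: float, p_success: float) -> float:
    """Bonus paid only if the pre-agreed goal is achieved.

    Dividing the set-aside by the estimated probability of success keeps the
    expected payout equal to the amount set aside, so the employee is not
    paid less in expectation for bearing the risk.
    """
    return set_aside / p_success


# Hypothetical figures from the example: $100K of compensation set aside per
# year, 50% estimated chance of hitting the ~4 million DALY goal by 2030.
bonus = results_based_bonus(set_aside=100_000, p_success=0.5)
print(f"Bonus if goal met: ${bonus:,.0f}")        # $200,000, i.e. 2x the set-aside
print(f"Expected payout:   ${0.5 * bonus:,.0f}")  # $100,000, equal to the set-aside
```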
I’m much more excited about results-based compensation than funding-based compensation, for nonprofit employees.
I can imagine pretty large numbers here. For example, if we value reducing existential catastrophe by 0.01% at $100M-1B, I think it’s plausible that we should be backpaying people who created projects of that calibre 1%-10% of the value of the x-risk reduced (rough arithmetic sketched after this comment).
(They can then choose to regrant the money, split it among their own staff who contributed to the xrisk reduction, or spend it on fun stuff).
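For a sense of scale, a rough back-of-the-envelope in Python using only the hypothetical ranges from the comment above (none of these are real valuations):

```python
# Hypothetical ranges from the comment above.
value_of_reduction = (100e6, 1e9)  # value of a 0.01% x-risk reduction: $100M - $1B
backpay_share = (0.01, 0.10)       # backpay of 1% - 10% of the value created

low = value_of_reduction[0] * backpay_share[0]    # $1M
high = value_of_reduction[1] * backpay_share[1]   # $100M
print(f"Implied backpay range: ${low:,.0f} to ${high:,.0f}")
```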
This seems like a good example of what I’m concerned about. How could you show that a project reduced x-risk by any specific amount?
Some quick points:
Doing this after the fact seems (almost) strictly easier than figuring out how much x-risk a project will reduce in advance of its creation, and we should really be moving in that direction, at least for x-risk reduction mega-projects.
I’ve been informed, since writing my post on motivated reasoning in EA, that a number of places do explicit cost-effectiveness analyses of these things. I assume those analyses will improve in the future.
We’ll eventually have fairly quantitative models of all x-risk reduction efforts (ideally before we all die). My proposal is more forward-looking than backward-looking.
Tbh I’m not aware of visibly successful xrisk reduction efforts, at least of this magnitude. So this is more of a future problem/incentivization scheme anyway.
I agree with that! I didn’t mean that the latter would be better, but that neither seems feasible.
I agree in theory, but selecting meaningful ‘results’ is extremely difficult in practice—input welcome!
We’re also talking to the staff about separately paying them some kind of results-based compensation, but much (probably most) of what they do can’t meaningfully be quantified, or would be horribly distorted if it were.
Even at the organisational-output level, we can look at things like how many forum posts (with what net karma) came from the hotel, or what the average income of guests N months after their stay would be. These are examples of the sort of things we ultimately care about, but a) it’s hard for any individual to say what numbers would be counterfactually above expectations, and b) the staff have only indirect influence on them, and if the staff met some pre-agreed criteria while these outputs counterfactually decreased, the organisation would clearly have gone wrong.
Also, (I only now realise) the unspoken premise of my question was that the vast majority of funding for CEEALAR and projects like it will come from the EA pool or sources adjacent to it. It’s too weird an initiative to qualify for any more general charitable grants that we’ve found.
On that assumption, plus the assumption that EA donors are discerning and want cost-effectiveness for their dollar, our funding is comparable to customers purchasing a product: noisier than a stock price as a market signal, but closer to what we really care about and want to incentivise staff to enable than any other apparent metric.
I really hope the ‘tax on their donation’ worry isn’t true for EA donors. For most EA organisations, the staff are the supermajority of the cost anyway, so the only questions should be whether this sort of incentive scheme motivates them to be more or less productive per $ than a flat salary and, zooming out, whether such schemes would encourage more total high-value work to get done.
Again, are we talking about EA donors? If so, I’d hope they were neutral on what motivated the staff except inasmuch as it related to how much they could get done.
The long-time-to-materialise problem seems plausibly insurmountable to me. Compare for-profit startups: I would be very surprised if those that offered their employees stock-based comp didn’t outperform those that offered internally judged results-based comp. (Do any of them even do the latter?)
Yeah, I wasn’t really talking about EA donors per se: I think EA nonprofits should try to be funded by non-EA donors (/expand the EA community) to the extent possible, and that we also shouldn’t assume there’s a clear differentiation between EA and non-EA donors.
That said, I do think the tax effect I outlined would reasonably be a concern for EA donors; and insofar as it isn’t a concern because the compensation mechanism will definitely create better results, the argument becomes a bit circular. I also think there’s a principal-agent problem between donors (who want to maximize impact) and non-profit staff (who are motivated, consciously or unconsciously, in part by maximizing compensation/job security), and it would be a mistake to assume that shared EA values fully solve that problem.
‘I think EA nonprofits should try to be funded by non-EA donors (/expand the EA community) to the extent possible’
The extent possible for ‘weird’ EA projects is often ‘no extent’. We have applied to various non-EA grants that sorta kinda cover the areas you could argue we’re in, and to my knowledge have not received any of them. I believe that to date close to (perhaps literally) 100% of our funding has come from EAs or EA-adjacent sources, and I suspect that this will be true of the majority of EA nonprofits.
‘Assuming that shared EA values fully solve the problem’ is exactly what we’re trying to avoid here. Typical nonprofit salaries just work on the assumption that the person doing the job is willing to take a lower salary with no upside, which leads to burnout, lack of motivation, and sometimes lack of competence at EA orgs. We’re trying to think of a way to recreate the incentive-driven structure of successful startups, both to give the staff a stronger self-interest-driven motivation and to make future such roles more appealing to stronger candidates.