We (CEEALAR) have been talking to our paid staff about the idea of doing something like ‘annual compensation of $x plus y% of any funding the organisation receives’ - the idea being to try to emulate the incentives behind the stock-based compensation that startup employees receive.
I’m surprised that on reflection I haven’t heard of other EA orgs trying something like this. Does it seem like a reasonable idea? Are there hidden pitfalls—or upsides? Are there more nuanced ways of doing something like this that might work better?
A couple of concerns with the initial idea:
An employee of a for-profit gets to keep their stock even after they leave, so long-term benefits they bring to the org would still be rewarded in expectation, which wouldn’t apply in this approach. Perhaps we could make the per-instance value of y lower, but commit to ‘paying’ them a gradually decreasing percentage of donations even after they leave? (See the sketch below.)
Unlike a for-profit we’re not trying to maximise income: the ideal would presumably be that we stabilise long-term at maybe a couple of years’ runway, or a single year, with a stream of reliable regular donations coming in. So as long as the organisation keeps operating, y would end up being a constant rather than the variable with unlimited upside that stock would be. Maybe this is just fine, though? The better work the staff do, the more likely the org is to keep operating, and the value of y in that scenario could just be ‘something slightly higher than a simple salary would have been’ - without the huge potential payoff, but still aligning incentives pretty well.
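To make the mechanics concrete, here’s a minimal sketch of how compensation might be computed under this scheme, including one possible decaying post-departure share. All the figures, the 2% rate, and the decay schedule are hypothetical illustrations, not proposals:

```python
def annual_compensation(base_salary, funding_received, y,
                        years_since_departure=0, post_departure_decay=0.5):
    """Hypothetical 'salary plus y% of funding' scheme.

    While employed, staff receive base_salary plus y% of the funding raised
    that year. After leaving, the funding share decays geometrically - one
    possible way to mimic the 'keep your stock after you leave' property
    of startup equity.
    """
    share = y * (post_departure_decay ** years_since_departure)
    funding_bonus = share * funding_received
    if years_since_departure > 0:
        return funding_bonus  # no base salary once they have left
    return base_salary + funding_bonus


# Example: $40k base, 2% of a $300k funding year -> $46k while employed,
# $3k the first year after leaving, $1.5k the year after that, and so on.
print(annual_compensation(40_000, 300_000, 0.02))                      # 46000.0
print(annual_compensation(0, 300_000, 0.02, years_since_departure=1))  # 3000.0
```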
Thoughts?
ETA: To clarify based on comments, the idea is not that staff would necessarily be involved with fundraising, any more than startup employees are expected to promote the stock price. The idea is that ‘nonprofit staff doing a good job overall’ is to EA funding as ‘for-profit staff doing a good job overall’ is to stock price: a noisy signal, but more encompassing than perhaps any other.
This is an intriguing idea, and I’m all for experimentation in nonprofits generally and with compensation specifically. I also find nonprofit performance incentives potentially valuable and interesting.
One problem I see is that lots of funders would hate this: from their perspective it creates a sort of tax on their donation. Instead of the whole donation going to whatever new thing they’d want to fund, a percentage gets set aside for current employees. I think this is part of the reason (per Jared’s smart reply) that grantwriting commissions are looked down upon in the industry.
Another problem could be that lots of donors want to feel like nonprofit employees are not motivated by money, and implying otherwise could make the nonprofit unattractive.
I think the broader principal-agent issue is that funding and results are orthogonal to one another (probably the central problem for nonprofits generally), so compensating based on funding raised incentivizes employees to pursue flashy or unfairly charismatic projects, overpromise, and embellish or lie about results. (Though to be clear, nonprofits/nonprofit fundraisers already face these incentives.)
One analogous idea I’ve noodled around with a bit is results-based bonuses: you set a goal, a probability of success, and a dollar figure for achieving the goal, and you set aside a pool of money; if the employee achieves the result, they receive the amount it’s worth divided by the estimated probability of success. If my job were an example, this would look something like: if 1Day Sooner’s goal is to have a 50% chance of our work being the but-for cause of saving ~4 million DALYs by 2030, you could set aside $100K of my compensation each year, and if we achieved the goal I’d receive 2x that. One problem is that a lot of results take a long time to materialize (and are hard to prove/calculate reliably), and you might have to pay too many premiums (to adjust for risk + time value of money) to make it worth it.
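To make the arithmetic concrete, here’s a minimal sketch of that bonus structure; the $100K figure and 50% probability are just the illustrative numbers from the paragraph above:

```python
def results_bonus(goal_value, p_success, goal_achieved):
    """Results-based bonus sketch: if the goal is achieved, pay out the dollar
    figure assigned to the goal divided by the estimated probability of success;
    otherwise pay nothing.

    Ignoring risk aversion and the time value of money, the expected payout is
    exactly the assigned figure: p_success * (goal_value / p_success) == goal_value.
    """
    if not goal_achieved:
        return 0.0
    return goal_value / p_success


# Example from the comment: $100K assigned to the goal, 50% estimated chance of
# success -> a 2x payout ($200K) if the goal is hit, nothing otherwise.
print(results_bonus(100_000, 0.5, goal_achieved=True))   # 200000.0
print(results_bonus(100_000, 0.5, goal_achieved=False))  # 0.0
```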
I’m much more excited about results-based compensation than funding-based compensation, for nonprofit employees.
I can imagine pretty large numbers here. For example, if we value a 0.01% reduction in existential catastrophe risk at $100M-1B, I think it’s plausible that we should be backpaying people who created projects of that calibre 1%-10% of the value of the xrisk reduced.
(They can then choose to regrant the money, split it among their own staff who contributed to the xrisk reduction, or spend it on fun stuff).
This seems like a good example of what I’m concerned about. How could you show that a project reduced x-risk by any specific amount?
Some quick points:
Seems (almost) strictly easier than figuring out how much xrisk a project reduced in advance of its creation, and we should really be moving in that direction, at least for xrisk reduction mega-projects.
Since writing my post on motivated reasoning in EA, I’ve been informed that a number of places do explicit cost-effectiveness analyses of these things. I assume they’ll be improved in the future.
We’ll eventually have fairly quantitative models of all x-risk reduction efforts (ideally before we all die). My proposal is more forward-looking than backward-looking.
Tbh I’m not aware of visibly successful xrisk reduction efforts, at least of this magnitude. So this is more of a future problem/incentivization scheme anyway.
I agree with that! I didn’t mean that the latter would be better, but that neither seems feasible.
I agree in theory, but selecting meaningful ‘results’ is extremely difficult in practice—input welcome!
We’re also talking to the staff about separately paying them some kind of results-based compensation, but much, probably most, of what they do can’t be meaningfully quantified, or would be horribly distorted if it were.
Even at the organisational-output level, we can look at things like how many forum posts (with what net karma) came from the hotel, or the average income of guests N months after their stay. These are examples of the sort of things we ultimately care about, but a) it’s hard for any individual to say what numbers would be counterfactually above expectations, and b) the staff have only indirect influence on them - and if they meet some pre-agreed criteria but these outputs counterfactually decrease, the organisation has clearly gone wrong.
Also, (I only now realise) the unspoken premise of my question was that the vast majority of funding for CEEALAR and projects like it will come from the EA pool or sources adjacent to it. It’s too weird an initiative to qualify for any more general charitable grants that we’ve found.
On that assumption, plus the assumption that EA donors are discerning and want cost-effectiveness for their dollar, our funding is comparable to customers purchasing a product: noisier than the market signal of a stock price, but closer to what we really care about, and want to incentivise staff to enable, than any other apparent metric.
I really hope this isn’t true for EA donors. For most EA organisations, the staff are the supermajority of the cost anyway, so the only questions should be whether this sort of incentive scheme motivates them to be more or less productive per $ than a flat salary, and zooming out, whether such schemes would encourage more total high value work to get done.
Again, are we talking about EA donors? If so, I’d hope they were neutral on what motivated the staff except inasmuch as it related to how much they could get done.
This seems plausibly insurmountable to me. Compare for-profit startups: I would be very surprised if those that offered their employees stock-based comp didn’t outperform those that offered them internally judged results-based comp. (Do any of them even do the latter?)
Yeah, I wasn’t really talking about EA donors per se: I think EA nonprofits should try to be funded by non-EA donors (/expand the EA community) to the extent possible, and that we also shouldn’t assume there’s a clear differentiation between EA and non-EA donors.
That said, I do think the tax effect I outlined would reasonably be of concern to EA donors; and insofar as it isn’t, because the compensation mechanism will definitely create better results, the argument becomes a bit circular. I also think there’s a principal-agent problem between donors (who want to maximize impact) and nonprofit staff (who are motivated, consciously or unconsciously, in part by maximizing compensation/job security), and it would be a mistake to assume that shared EA values fully solve that problem.
‘I think EA nonprofits should try to be funded by non-EA donors (/expand the EA community) to the extent possible’
The extent possible for ‘weird’ EA projects is often ‘no extent’. We have applied to various non-EA grants that sorta kinda cover the areas you could argue we’re in, and to my knowledge not received any of them. I believe that to date close to (perhaps literally) 100% of our funding has come from EAs or EA-adjacent sources, and I suspect that this will be true of the majority of EA nonprofits.
‘Assuming that shared EA values fully solve the problem’ is exactly what we’re trying to avoid here. Typical nonprofit salaries just work on the assumption that the person doing the job is willing to take a lower salary with no upside, which leads to burnout, lack of motivation, and sometimes lack of competence at EA orgs. We’re trying to think of a way to recreate the incentive-driven structure of successful startups, both to give the staff a stronger self-interest-driven motivation and to make future such roles more appealing to stronger candidates.
It’s a principal-agent problem, and given the goal of having the staff help fundraise, you probably want to think about what their marginal contribution would be, and what aligns goals. I can imagine you might want to have the formula be something like “1% of any funding over 70% of current operating costs, up to 200% of current operating costs” - a rough sketch of which is below.
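A minimal sketch of how that kind of floor-and-cap commission could be computed; the 1% rate, 70% floor, and 200% cap are just the illustrative numbers from the comment above, and the dollar figures in the example are made up:

```python
def funding_commission(funding_raised, operating_costs, rate=0.01,
                       floor_frac=0.7, cap_frac=2.0):
    """Sketch of the '1% of any funding over 70% of current operating costs,
    up to 200% of current operating costs' idea.

    No commission is paid on funding below the floor (70% of operating costs),
    and funding above the cap (200% of operating costs) earns nothing extra.
    """
    floor = floor_frac * operating_costs
    cap = cap_frac * operating_costs
    eligible = max(0.0, min(funding_raised, cap) - floor)
    return rate * eligible


# Example: $150k operating costs and $200k raised -> the commissionable band is
# $105k..$300k, so $95k of the funding is eligible -> $950 at a 1% rate.
print(funding_commission(200_000, 150_000))  # 950.0
```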
The idea would not necessarily be to have the staff help fundraise, any more than a startup that pays equity expects its employees to pump the stock price.
Who’s the principal here? CEEALAR? Or EA overall?
In many ways, this is a multi-level alignment problem, so yes. Narrowly, it’s aligning employees with CEEALAR, but very broadly, it’s aligning employee motivations with maximizing good in the universe; we just have better metrics for the former.
‘we just have better metrics for the former’
Can you clarify this? Which statement are you referring to by ‘the former’? What metrics?
We can build better metrics for aligning principals and agents in the context of a single company with clear goals and measures of success (fundraising, surveys of how well they are doing, funder evaluations, etc.) than we can for aligning them with “humanity and good things generally” (where we know we have an as-yet intractable alignment problem).
To be honest, I don’t think that sounds like a good idea. On the other hand, it might make sense to share with staff a plan for how you’d spend more funding, which would include how you might increase compensation based on funding.
I’ve left some replies in the discussion here—I’d be interested if you read them and still thought it was a bad idea, and if so, why.
Based on the broadly negative responses to date though, this seems like it might be the most sensible option.
I guess when I think about existing charities, a lot of them already have perverse incentives to do things that get funding rather than fix the problem, even without these bonuses.
On the other hand, I’m keen to see staff paid fairly, and I think people are more likely to consider working somewhere long-term if they see that there’s a possibility of this.