I’m much more excited about results-based compensation than funding-based compensation, for nonprofit employees.
I can imagine pretty large numbers here. For example, if we value a 0.01% reduction in the chance of existential catastrophe at $100M-$1B, I think it’s plausible that we should be back-paying the people who created projects of that calibre 1%-10% of the value of the xrisk reduced.
(They can then choose to regrant the money, split it among their own staff who contributed to the xrisk reduction, or spend it on fun stuff).
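To make “pretty large numbers” concrete, here is a minimal sketch of the back-payment range those figures imply, assuming the $100M-$1B valuation of a 0.01% risk reduction and the 1%-10% payout share above; the function name and structure are purely illustrative, not part of any proposal:

```python
# Illustrative only: back-payment = payout share * value assigned to the xrisk reduction.

def backpay(value_of_risk_reduction_usd: float, payout_share: float) -> float:
    """Return the back-payment implied by a valuation and a payout share."""
    return value_of_risk_reduction_usd * payout_share

low = backpay(100e6, 0.01)   # $100M valuation at a 1% share  -> $1M
high = backpay(1e9, 0.10)    # $1B valuation at a 10% share   -> $100M

print(f"Implied back-payment range: ${low:,.0f} to ${high:,.0f}")
# Implied back-payment range: $1,000,000 to $100,000,000
```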
This seems like a good example of what I’m concerned about. How could you show that a project reduced x-risk by any specific amount?
Some quick points:
- Figuring this out after the fact seems (almost) strictly easier than figuring out, in advance of a project’s creation, how much xrisk it will reduce, and we should really be moving in that direction, at least for xrisk reduction mega-projects.
- I’ve been informed, since writing my post on motivated reasoning in EA, that a number of places do explicit cost-effectiveness analyses of these things. I assume they’ll be improved in the future.
- We’ll eventually have fairly quantitative models of all x-risk reduction efforts (ideally before we all die). My proposal is more forwards-looking than backwards-looking.
- Tbh I’m not aware of visibly successful xrisk reduction efforts, at least of this magnitude, so this is more of a future problem/incentivisation scheme anyway.
I agree with that! I didn’t mean that the latter would be better, but that neither seems feasible.
I agree in theory, but selecting meaningful ‘results’ is extremely difficult in practice—input welcome!
We’re also talking to the staff about separately paying them some kind of results-based compensation, but much (probably most) of what they do can’t meaningfully be quantified, or would be horribly distorted if it were.
Even at the organisational-output level, we can look at things like how many forum posts (and with what net karma) came out of the hotel, or what the average income of guests is N months after their stay. These are examples of the sort of things we ultimately care about, but a) it’s hard for any individual to say what numbers would be counterfactually above expectations, and b) the staff have only indirect influence on them; if the staff meet some pre-agreed criteria while these outputs counterfactually decrease, the organisation has clearly gone wrong.
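For concreteness, here is a minimal sketch of what computing those two organisational-output metrics might look like, assuming hypothetical per-guest records (the field names and numbers are purely illustrative). Note that the hard part, establishing a counterfactual baseline, is exactly what code like this cannot settle:

```python
# Illustrative only: toy per-guest records with hypothetical fields.
guests = [
    {"name": "A", "post_karmas": [12, 40, -3], "income_after_n_months": 28_000},
    {"name": "B", "post_karmas": [],           "income_after_n_months": 35_000},
    {"name": "C", "post_karmas": [7],          "income_after_n_months": 31_000},
]

# Metric 1: number of forum posts attributed to the hotel, and their total net karma.
num_posts = sum(len(g["post_karmas"]) for g in guests)
net_karma = sum(sum(g["post_karmas"]) for g in guests)

# Metric 2: average income of guests N months after their stay.
avg_income = sum(g["income_after_n_months"] for g in guests) / len(guests)

print(f"Posts: {num_posts}, net karma: {net_karma}, avg income: ${avg_income:,.0f}")
# Neither metric says anything about the counterfactual (what these numbers
# would have been without the hotel), which is the crux of the difficulty.
```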
Also, I only now realise that the unspoken premise of my question was that the vast majority of funding for CEEALAR and projects like it will come from the EA pool or sources adjacent to it. It’s too weird an initiative to qualify for any of the more general charitable grants we’ve found.
On that assumption, plus the assumption that EA donors are discerning and want cost-effectiveness for their dollar, our funding is comparable to customers purchasing a product: noisier than a market signal like a stock price, but closer to what we really care about, and want to incentivise the staff to enable, than any other apparent metric.