I agree in theory, but selecting meaningful ‘results’ is extremely difficult in practice—input welcome!
We’re also talking to the staff about separately paying them some form of results-based compensation, but much (probably most) of what they do either can’t meaningfully be quantified or would be horribly distorted if it were.
Even at the organisational-output level, we can look at things like how many forum posts (and with what net karma) came from the hotel, or the average income of guests N months after their stay. These are examples of the sort of things we ultimately care about, but a) it’s hard for any individual to say what numbers would be counterfactually above expectations, and b) the staff have only indirect influence on them, so if the staff meet some pre-agreed criteria while these outputs counterfactually decrease, the organisation has clearly gone wrong.
Also (I only now realise), the unspoken premise of my question was that the vast majority of funding for CEEALAR and projects like it will come from the EA pool or sources adjacent to it. It’s too weird an initiative to qualify for any of the more general charitable grants we’ve found.
On that assumption, plus the assumption that EA donors are discerning and want cost-effectiveness for their dollar, our funding is comparable to customers purchasing a product: noisier than the market signal a stock price gives, but closer to what we really care about, and want to incentivise staff to enable, than any other metric we can see.