It’s a principal-agent problem, and given the goal of having staff help fundraise, you probably want to think about what their marginal contribution would be and what aligns incentives. I can imagine you might want the formula to be something like “1% of any funding over 70% of current operating costs, up to 200% of current operating costs.”
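As a minimal sketch, the quoted formula works out like this; the rate, thresholds, and dollar figures below are all illustrative assumptions, not an actual policy:

```python
def fundraising_bonus(funding, operating_costs,
                      rate=0.01, floor=0.70, cap=2.00):
    """1% of funding between 70% and 200% of current operating costs.

    All parameter values here are illustrative, not a real compensation plan.
    """
    lower = floor * operating_costs  # funding only counts above this
    upper = cap * operating_costs    # and stops counting above this
    eligible = max(0.0, min(funding, upper) - lower)
    return round(rate * eligible, 2)  # round to cents

# e.g. with $100k operating costs and $150k raised:
# eligible = min(150k, 200k) - 70k = 80k, so the bonus is $800
print(fundraising_bonus(150_000, 100_000))
```

Note that under this structure the employee earns nothing until funding clears 70% of costs, and has no marginal incentive past 200%, which is one way of tying the reward to the range where their contribution plausibly matters.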
The idea would not necessarily be to have the staff help fundraise, any more than a startup that pays equity expects its employees to pump the stock price.
Who’s the principal here? CEELAR? Or EA overall?
In many ways, this is a multi-level alignment problem, so yes. Narrowly, it’s aligning employees with CEELAR, but very broadly, it’s aligning employee motivations with maximizing good in the universe—we just have better metrics for the former.
‘we just have better metrics for the former’
Can you clarify this? Which statement are you referring to by ‘the former’? What metrics?
We can build better metrics for aligning principals and agents in the context of a single company with clear goals and measures of success (fundraising totals, surveys of how well employees are doing, funder evaluations, etc.) than we can for aligning them with “humanity and good things generally” (where we know we have an as-yet intractable alignment problem).