Who’s the principal here? CEELAR? Or EA overall?
In many ways, this is a multi-level alignment problem, so yes, both. Narrowly, it’s aligning employees with CEELAR, but very broadly, it’s aligning employee motivations with maximizing good in the universe; we just have better metrics for the former.
‘we just have better metrics for the former’
Can you clarify this? Which statement are you referring to by ‘the former’? What metrics?
We can build better metrics for aligning principals and agents in the context of a single company with clear goals and metrics for success (fundraising, surveys of how well they are doing, funder evaluations, etc.) than we can for aligning them with “humanity and good things generally” (where we know we have an as-yet intractable alignment problem).