Reasonable if you don’t want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.
One tension that CEA laudably attempts to navigate is that EA is actually not self-recommending. There are worlds where dwelling on prioritization and personal-morality questions just isn't that impactful. We may live in such a world, given the urgency of addressing transformative AI and other matters.
My read is that CEA feels compelled to take views and allocate resources based on these considerations. In part, it's important to them that users of their programs take jobs or actions from a specific subset in order to count as "successes" by CEA's lights.
My tack is to really tie myself to the mast regarding getting people to engage with EA ideas for their own sake. We’ll pursue this with vigor and be intellectually challenging, but when it comes to what people *do* with these ideas, the chips will fall where they may. I anticipate that I’ll pay my impact bills this way, but I’m not maximizing impact. I’m maximizing EA ideas.
Would you mind saying more? Not a reasoning-transparent justification, more so a sketch of the high-level generators. Wondering if it's along the lines of Richard Ngo's take:
> I think "maximize expected utility while obeying some constraints" looks very different from actually taking non-consequentialist decision procedures seriously.
>
> In principle, the utility-maximizing decision procedure might not involve thinking about "impact" at all.
>
> And this is not even an insane hypothetical; IMO, thinking about impact is pretty corrosive to one's ability to do excellent research, for example.
I think your read is basically right. Thinking explicitly and granularly about the direct chain from your actions to last-mile impact, and being sensitive to perturbations of that measurement, is an area where I think many current orgs over-invest. I believe it is inconsistent with the processes that created those orgs in the first place (which I'm now trying to replicate without much focus on direct, measurable outputs).
The biggest issue I see is people spending up to 20% of their best hours getting bogged down in metrics and explicit planning, when they could spend much of that time doing things they're excited about and have quickly sense-checked.
I think this is one of the great strengths of liberal, big-tent projects: support plausibly great people, all playing to their strengths. Some of them will disappoint and under-perform your hyper-planned model, sure, but the over-performers will more than make up for it. I want to embody this principle in my org and the groups we support.
Is the point here that you are still ultimately interested in outcomes, but you think the current focus on explicit measurement and project planning hurts more than it helps, and that curiosity and a thriving intellectual scene where people are more willing to run experiments will achieve better outcomes than more explicit attempts to do so?
Yes. I am interested in outcomes at least in the sense that I will regard myself as having failed a sanity check if very few people go on to do ambitious, impactful work after engaging with my events/programs/groups. But I am firmly committed to being permissive about what counts here. Thoughtfulness about high impact is the bar, not my EV calculation of impact.
To put it in the form of a critique, I think too many community building programs adopt metrics like “number of participants who go into roles at AIM, MATS, GovAI, etc.” and that this is too prescriptive and discourages people from really forming their own world models in an EA context.
My metric is whether I'm impressed by the pushback I get on my takes when I go into these spaces, or whether I'm learning new and plausibly very important things about big problems.
Makes sense. Also: great post.