I anticipate that I’ll pay my impact bills this way, but I’m not maximizing impact. I’m maximizing EA ideas.
Would you mind saying more? Not a reasoning-transparent justification, more so a sketch of the high-level generators. Wondering if it’s along the lines of Richard Ngo’s
I think “maximize expected utility while obeying some constraints” looks very different from actually taking non-consequentialist decision procedures seriously.
In principle the utility-maximizing decision procedure might not involve thinking about “impact” at all.
And this is not even an insane hypothetical: IMO, thinking about impact is pretty corrosive to one's ability to do excellent research, for example.
I think your read is basically right. Thinking explicitly and granularly about the direct chain from your actions to last-mile impact and being sensitive to perturbations of that measurement is one area I think many current orgs over-invest in. I believe it is inconsistent with the processes that created those orgs in the first place (which I’m now trying to replicate without much focus on the direct, measurable outputs).
The biggest issue I see is people spending up to 20% of their best hours bogged down in metrics and explicit planning, when they could be spending much of that time on things they're excited about and have quickly sense-checked.
I think this is one of the great strengths of liberal, big-tent projects. Support plausibly great people all playing to their strengths. Some of them will disappoint and under-perform your hyper-planned model, sure, but the over-performers will more than make up for it. I want to embody this principle in my org and the groups we support.
Is the point here that you are still ultimately interested in outcomes, but that you think that the current focus on explicitly measuring and project planning hurts more than it helps, and that curiosity and a thriving intellectual scene where people are more willing to run experiments will achieve better outcomes than more explicit attempts to do so?
Yes. I am interested in outcomes at least to the extent that I will regard myself as having failed a sanity check if very few people go on to do ambitious, impactful work after engaging with my events/programs/groups. But I am committed to being very permissive about what counts here. Thoughtfulness about high impact is the bar, not my EV calculation of impact.
To put it in the form of a critique, I think too many community building programs adopt metrics like “number of participants who go into roles at AIM, MATS, GovAI, etc.” and that this is too prescriptive and discourages people from really forming their own world models in an EA context.
My metric is whether I’m impressed with the pushback I get on my takes when I go into these spaces or whether I’m learning new and plausibly very important things about big problems.
Makes sense. Also: great post.