My quick response (because this comment is already awfully long) would be that they seem like useful but limited heuristics (what exactly makes a theory of change in deeply uncertain and empirically-poor domains “plausible”? [. . . .]
I think that’s right. But if I understand correctly, a collective rationality approach would commend thousands of actions to us, more than we can do even if we went 100% with that approach. So there seemingly has to be some way to triage candidate actions.
More broadly, I worry a lot about what might fill the vacuum if we significantly move away from the current guardrails created by cost-effectiveness analysis (at least in neartermism). I think it is awfully easy for factors like strength of emotional attachment to an issue, social prestige, ease of getting funding, and so forth to infect charitable efforts. Ideally, our theories about impact should be testable, such that we can tell when we misjudged an initiative as too promising and redirect our energies elsewhere. I worry that many initiatives suggested by a collective rationality approach are not “falsifiable” in that way; the converse is that it could also be hard to tell if we were underinvesting in them. So, at EA’s current size/influence level, I may be willing to give up on the potential for working toward certain types of impact because I think maintaining the benefits of the guardrails is more important.
Incidentally, one drawback of longtermist cause areas in general for me is the paucity of feedback loops, the often hazy theories of change, and so on. The sought-after ends for longtermism are so important (e.g., the continuation of humanity, avoidance of billions of deaths from nuclear war) that one can reasonably choose to overlook many methodological issues. But—while remaining open to specific proposals—I worry that many collective-rationality-influenced approaches might carry many of the methodological downsides of current longtermist cause areas while often not delivering potential benefits at the same order of magnitude as AI safety or nuclear safety.
To the extent that we’re talking about EAs not doing things that are commonly done (like taking the time to cast an intelligent vote), I am admittedly uneasy about suggesting EAs not “do their part” and free-ride off of everyone else’s community-sustaining efforts. At the same time, I wouldn’t have begrudged Anthony Fauci for not voting during the recent public health emergency!
Most collective-action results do allow for some degree of free-riding; even measles herd immunity only requires roughly 95% vaccination coverage, so we can exempt those with relative medical contraindications and (in some places/cases) sincere religious objections and still get the benefits. Declaring oneself worthy of one of the free-riding slots can be problematic, though! I think I’d need to consider this in more specific contexts, rather than at the 100K-foot view, to refine my approach beyond merely recognizing the tension.
In practice, we might not be that far apart in approach to some things, although we may get there by somewhat different means. I posit that living life in “EA mode” 24/7 is not feasible—at least not for long—and will result in various maladies that are inimical to impact even on the traditional EA model. So for activities like “doing one’s part as a citizen of one’s country,” there may be fewer practical differences and trade-off decisions here than one might think from the 100K-foot view.
I think the main point I was trying to make here is that the empirical question of “how bad internalized racism is,” i.e., how much it decreases development and flourishing, is one that seems hard, if not impossible, to address via quantitative empirical analysis.
I’m not actually troubled by that; this may be because I am less utilitarian than the average EA. Without suggesting that all possible ~penultimate or ultimate goals are equally valid, I think “how desirable would X ~penultimate goal be” is significantly less amenable to quantitative empirical analysis than “can we / how can we effectively reach X goal.” But I could be in the minority on that point.