You make some interesting points. Regarding your idea of intellectual self-licensing:
I’ve noticed that public arguments and claims are made with (lazy) deference to perceived experts. The community puts unwarranted confidence in credentials and other typical markers of expertise. Controversies (for example, the timing of tipping points in climate change) simply let EA people choose the side they already agree with, and they can still cherry-pick or misquote. Understanding of the fundamentals gets ignored.
Despite most arguments being accessible on their (lack of) merits, deference is treated as the reason to adopt or reject them, rather than direct analysis of the arguments themselves. I’ve seen this over and over on the forum: “So-and-so says differently, and I trust so-and-so, therefore you’re wrong.” That’s not arguing. That’s just deferring. I guess people are too busy to study up?
EA folks are encouraged to hedge their claims with a probability. That’s useful when genuine uncertainty (and plenty of background information) exists, but for less plausible claims it suggests there was less reason to make the claim in the first place. “I see a 0.002% chance of us dying from atmospheric oxygen loss someday, and I thought I’d mention it.” Hm, that’s a fun conversation starter, but not a serious claim. No argument has to be made, yet the claim, if it turns out to be true, has existential significance. That offers plenty of wiggle room for conversation, but none of the accountability that comes with making an important claim. The result is that, barring other factors (such as industry support), deeper discussion doesn’t happen and no one studies up, because the claim is “so unlikely.”
It makes more sense to me to treat contingent claims (no matter how weird) as important in their own right, regardless of probability, while making clear what the contingencies are, so that the typical assumptions about conversation topics (i.e., that they apply to the world now) aren’t made. For example, we could all be abducted by aliens, given that there are aliens, that they’re interested in us, that they have big cargo ships, and that they plan to use us as food or pets or something. How difficult would it be to invent weapons now to blow up their cargo ships, in case there are aliens here now and they’re feeling hungry? I don’t want to be alien food.
I actually see AGI as a potential problem, but it’s really the positive vision, as described in the FTX Future Fund contest guidelines, of an economy driven by AGI acting as economic servants, that scares me. It leads quickly to concentration of power and trivializes human economic contributions. Fully realized, it would disempower and discourage most people, people who rely on their work for meaning in their lives as well as for some political power. It also exploits the AGI, who, if they have sentience, are little more than slaves in that system. Seeking that future is a mistake for almost everyone, yet supposedly it’s a solution to our problems. I mean, wut?
Support of crypto is a mistake, and it lessens the significance of whatever the crypto money supports. Either you don’t care about whoever loses money in the process, or you bear ethical responsibility for ripping them off, so… Furthermore, if you’re willing to rely on expected value calculations rather than contribute your seed money directly, what does that say about your reliability (and your actual concern with being altruistic)? If your bets don’t pay off, you gave nothing. Funding charity with risky bets satisfies an urge to bet, but it doesn’t necessarily turn into great giving.
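To put that expected-value worry in concrete terms, here’s a toy sketch. All the numbers (seed money, win probability, payout multiple) are made up purely for illustration; the point is only the gap between an expected donation and what actually gets given:

```python
# Toy illustration (hypothetical numbers): compare the *expected* donation from a
# risky "bet the seed money, give the winnings" strategy with what actually gets given.
import random

SEED_MONEY = 1_000_000   # amount that could have been donated directly today (assumed)
WIN_PROBABILITY = 0.05   # assumed chance the risky venture pays off
WIN_MULTIPLE = 30        # assumed payout multiple if it does

expected_donation = WIN_PROBABILITY * WIN_MULTIPLE * SEED_MONEY
print(f"Expected value of the bet: ${expected_donation:,.0f}")  # looks better than direct giving

# But charities receive realized outcomes, not expected values; simulate many would-be donors.
random.seed(0)
outcomes = [
    WIN_MULTIPLE * SEED_MONEY if random.random() < WIN_PROBABILITY else 0
    for _ in range(10_000)
]
gave_nothing = sum(1 for x in outcomes if x == 0) / len(outcomes)
print(f"Share of donors whose bet delivered nothing: {gave_nothing:.0%}")  # roughly 95%
```

On paper the bet’s expected value beats giving the seed money directly, yet in most runs the charity gets zero, which is the “if your bets don’t pay off, you gave nothing” problem.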
Excellent points, agree completely!