Thank you for writing this. I share many of these views, but I'm very uncertain about them. Some thoughts on individual points:
> Giving a range of probabilities when you should give a probability + giving confidence intervals over probabilities + failing to realize that probabilities of probabilities just reduce to simple probabilities
I think this can be rational: I think of probabilities in terms of bets and order books, and the analogy to financial markets seems relevant. This is close to my view.
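The "probabilities of probabilities reduce to simple probabilities" point can be checked numerically: for a single yes/no event, any distribution over p collapses to its mean. A minimal sketch (illustrative numbers, not from the thread):

```python
import random

random.seed(0)

# A "second-order" belief: uncertainty over the true probability p.
# E.g. 50% chance the coin's bias is 0.2, 50% chance it is 0.8.
mixture = [(0.5, 0.2), (0.5, 0.8)]

# For a single draw, this reduces to one number: the mean of p.
p_simple = sum(w * p for w, p in mixture)  # 0.5*0.2 + 0.5*0.8 = 0.5

# Monte Carlo check: first sample a bias, then sample the event.
trials = 100_000
hits = 0
for _ in range(trials):
    r, acc = random.random(), 0.0
    for w, p in mixture:
        acc += w
        if r < acc:
            bias = p
            break
    hits += random.random() < bias

print(p_simple)       # 0.5
print(hits / trials)  # close to 0.5
```

That said, the distribution over p still matters for *updating* (after seeing one heads, the two hypotheses no longer get equal weight), which is arguably why ranges and order-book-style spreads carry real information even though the single-event probability is just the mean.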
> Unstable beliefs about stuff like AI timelines in the sense of I’d be pretty likely to say something pretty different if you asked tomorrow
Changing literally day-to-day seems extreme, but month-to-month seems very reasonable given the speed of everything that’s happening; it matches, e.g., the volatility of NVIDIA’s stock price.
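One way to formalize when swings in stated probabilities are rational: a Bayesian's forecast is a martingale, so it may move a lot as evidence arrives, but each move must be unpredictable in advance. A toy sketch (assumed setup: a coin of unknown bias, updated one flip at a time):

```python
import random

random.seed(1)

# Two hypotheses about a coin's bias, with equal prior weight.
belief = {0.3: 0.5, 0.7: 0.5}
true_bias = 0.7

def prob_heads(b):
    # Current forecast: probability-weighted average of the hypotheses.
    return sum(bias * w for bias, w in b.items())

history = [prob_heads(belief)]
for _ in range(30):
    heads = random.random() < true_bias
    # Bayes update on the observed flip.
    posterior = {bias: w * (bias if heads else 1 - bias)
                 for bias, w in belief.items()}
    total = sum(posterior.values())
    belief = {bias: w / total for bias, w in posterior.items()}
    history.append(prob_heads(belief))

# The forecast can move substantially flip-to-flip, yet every move was
# unpredictable beforehand -- volatility per se isn't irrationality.
print(history[0])  # 0.5
print(round(history[-1], 3))
```

On this view, the worrying failure mode isn't large day-to-day changes but *predictable* ones (e.g. knowing today that tomorrow's answer will be higher).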
> Axiologies besides ~utilitarianism
To me, “utilitarianism” seems pretty general, as long as you can define utility arbitrarily and choose freely between Negative/Rule/Act/Two-level/Total/Average/Preference/Classical utilitarianism. I really liked this section of a recent talk by Toby Ord (starting from “It starts by observing that the three main traditions in Western philosophy each emphasize a different focal point:”). (I also don’t know whether “axiology” is the right word for what we want to express here; we might be talking past each other.)
> Veg(etari)anism for terminal reasons; veg(etari)anism as ethical rather than as a costly indulgence
I mostly agree with you, but second-order effects seem hard to evaluate, and both costs and benefits are so minuscule (and potentially negative) that I find it hard to do a cost-benefit analysis.
> Thinking personal flourishing (or something else agent-relative) is a terminal goal worth comparable weight to the impartial-optimization project
I agree with you, but for some people it might be an instrumentally useful intentional framing. I think some use phrases like “[Personal flourishing] for its own sake, for the sake of existential risk.” (See also this comment for a fun thought experiment for average utilitarians, though I don’t think many people believe it.)
> Cause prioritization that doesn’t take seriously that the cosmic endowment is astronomical (likely worth >10^60 happy human lives) and that we can nontrivially reduce x-risk
Some think the probability of extinction per century is only going up with humanity’s increasing capabilities, and are not convinced by arguments that we’ll soon reach close-to-speed-of-light travel, which would make extinction risk go down. See also e.g. “Why I am probably not a longtermist” (except point 1). I find this very reasonable.
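The arithmetic behind the quoted claim is worth making explicit: with stakes of ~10^60 lives, even a microscopic reduction in extinction probability dominates in expectation, which is why the disagreement above centres on tractability and on whether the endowment would ever be realized. A back-of-envelope sketch (all numbers illustrative):

```python
# Illustrative expected-value arithmetic for the cosmic endowment claim.
endowment = 10**60   # happy human lives if humanity survives long-term
delta_risk = 1e-10   # hypothetical tiny reduction in extinction probability

expected_lives = delta_risk * endowment
print(f"{expected_lives:.1e}")  # 1.0e+50
```

The skeptical position described above attacks the inputs, not this multiplication: if per-century risk keeps rising, the survival probability that turns the endowment into expected value shrinks toward zero.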
> Deciding in advance to boost a certain set of causes [what determines that set??], or a “portfolio approach” without justifying the portfolio-items
I agree. I think this makes a ton of sense for people in community building who need to work with many cause areas (e.g. CEA staff, Peter Singer), but I fear it makes less sense for private individuals maximizing their impact.
> Not noticing big obvious problems with impact certificates/markets
I think many people do notice big obvious problems with impact certificates/markets, but think that the current system is even worse, or that they are at least worth trying and improving, to see whether at their best they can in some cases beat the alternatives we have. The current funding systems also have big obvious problems. What big obvious problems do you think they are missing?
> Naively using calibration as a proxy for forecasting ability
I agree with this; I just want to mention that it seems better than a common alternative I see: using LessWrong-sounding-ness/reputation as a proxy for forecasting ability.
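The problem with naive calibration can be made concrete: a forecaster who always predicts the base rate is perfectly calibrated yet uninformative, and a proper scoring rule like the Brier score separates the two. A small sketch (made-up outcomes):

```python
# Calibration alone doesn't capture skill: always predicting the base
# rate is perfectly calibrated but has zero resolution.
outcomes = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # 60% base rate
base_rate = sum(outcomes) / len(outcomes)

lazy = [base_rate] * len(outcomes)           # calibrated, uninformative
sharp = [0.9 if o else 0.1 for o in outcomes]  # confident and right

def brier(preds, outs):
    # Mean squared error between forecasts and 0/1 outcomes (lower is better).
    return sum((p - o) ** 2 for p, o in zip(preds, outs)) / len(outs)

print(round(brier(lazy, outcomes), 3))   # 0.24
print(round(brier(sharp, outcomes), 3))  # 0.01
```

Both forecasters would look fine on a calibration plot, but the Brier score (calibration plus resolution) shows the lazy one adds nothing beyond the base rate.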
> Thinking you can (good-faith) bet on the end of the world by borrowing money … I think many people miss that utility is about ∫consumption not ∫bankroll (note the bettor typically isn’t liquidity-constrained)
I somewhat agree with you, but I think many people model it a bit like this: “I normally consume 100k/year; you give me 10k now, so I will consume 110k this year, and if I lose the bet I will consume only 80k/year X years in the future.” But I agree that in practice the amounts are small, and it doesn’t work for many reasons.
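The consumption-smoothing model in this comment can be written out: under log utility over yearly consumption (a standard assumption, not from the thread), compare the expected utility of the consumption stream with and without the bet. The consumption figures are the ones from the comment; the doom probabilities are illustrative:

```python
import math

baseline = 100_000  # normal yearly consumption (figure from the comment)

def u(c):
    return math.log(c)  # log utility, an assumed functional form

def bet_gain(p_doom):
    """Expected-utility gain from the bet at a given doom probability.

    Without the bet: consume 100k now and 100k in year X (if the world survives).
    With the bet (the comment's numbers): consume 110k now; if the world
    survives, repay and consume only 80k in year X.
    """
    no_bet = u(baseline) + (1 - p_doom) * u(baseline)
    with_bet = u(110_000) + (1 - p_doom) * u(80_000)
    return with_bet - no_bet

print(bet_gain(0.3) > 0)  # False: at modest doom odds the bet loses utility
print(bet_gain(0.7) > 0)  # True: at high doom odds the shifted stream wins
```

This matches the ∫consumption point: whether the bet is attractive depends on the repayment odds and the bettor's doom probability, and with amounts this small relative to income, the utility differences are tiny either way.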
Thanks for the engagement. Sorry for not really engaging back. Hopefully someday I’ll elaborate on all this in a top-level post.
Briefly: by axiological utilitarianism, I mean classical (total, act) utilitarianism, as a theory of the good, not as a decision procedure for humans to implement.