> bayesian regret figures by princeton math phd warren smith show that approval voting roughly doubles the human welfare impact of democracy.
Their result is that, in their model, outcomes more closely match voter preferences. But my example is one where voter preferences are opposite to what many EAs think is best for human welfare.
> doing some ballpark math to see how many lives that would save:
> Suppose the USA, by adopting range voting and thus making better decisions, lowers the risk of a 2-billion population crash in 50 years, by 5%. I consider this a conservative estimate.
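Spelling out the arithmetic that ballpark implies (all inputs are the page's assumptions, and I'm reading the 5% as an absolute drop in the probability of the crash):

```python
# spelling out the quoted ballpark (inputs are the page's assumptions;
# the 5% is read as an absolute reduction in the probability of the crash)
crash_deaths = 2_000_000_000   # size of the hypothesized population crash
risk_reduction = 0.05          # assumed drop in probability over the horizon
horizon_years = 50

expected_lives_saved = crash_deaths * risk_reduction
print(f"{expected_lives_saved:,.0f} expected lives saved")      # 100,000,000
print(f"{expected_lives_saved / horizon_years:,.0f} per year")  # 2,000,000
```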
These numbers just seem totally made up. Why should we believe that approval voting has anything like such a large impact?
“best for human welfare” just means the sum of all individual (self-interested) utilities. so voter preferences cannot be opposite of what is best for human welfare, by definition.
caveat: there’s a disparity between intrinsic and instrumental preferences, in other words voters don’t actually know what they want. but to solve that you need an entirely different paradigm, namely election by jury.
better voting methods give you the best you can get from the mediocre human brains you have to work with.
> These numbers just seem totally made up. Why should we believe that approval voting has anything like such a large impact?
the page directly addresses that question quite incisively, citing the bayesian regret figures. the upgrade from plurality voting to score voting is roughly double the effect of having democracy in the first place. and approval voting is just the binary (slightly less optimal but dead simple and politically practical) version of score voting.
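to make the bayesian regret comparison concrete, here is a minimal toy simulation in the spirit of those figures (my own sketch, not warren smith's actual simulator; the utility distribution, the voter-ignorance model, and whether the plurality-to-score gap comes out near the quoted ratio all depend on the assumed parameters):

```python
# minimal bayesian-regret sketch (a toy model, not warren smith's code):
# random true utilities, voters act on a noisy perception of them, and
# regret is measured against the true social optimum.
import numpy as np

rng = np.random.default_rng(0)

def bayesian_regret(n_voters=99, n_candidates=5, noise=0.3, n_elections=20_000):
    totals = {"random winner": 0.0, "plurality": 0.0, "score": 0.0}
    for _ in range(n_elections):
        true_u = rng.random((n_voters, n_candidates))        # true utilities
        perceived = true_u + noise * rng.standard_normal(true_u.shape)
        social = true_u.sum(axis=0)                           # true total utility
        best = social.max()                                    # "magic best" winner

        # no-democracy baseline: pick a winner at random
        totals["random winner"] += best - social[rng.integers(n_candidates)]

        # plurality: each voter names only their (perceived) favorite
        counts = np.bincount(perceived.argmax(axis=1), minlength=n_candidates)
        totals["plurality"] += best - social[counts.argmax()]

        # score voting: honest voters report their (perceived) utilities
        totals["score"] += best - social[perceived.sum(axis=0).argmax()]

    return {k: v / n_elections for k, v in totals.items()}

print(bayesian_regret())   # lower regret = outcomes closer to the true optimum
```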
> so voter preferences cannot be opposite of what is best for human welfare, by definition.
This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare. And we know it doesn’t—hence the polling data.
This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals—neither of which groups can vote.
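A toy illustration of that divergence (the numbers are invented purely for illustration):

```python
# a policy can lower the electorate's summed utility while raising the total
voters     = [-1.0] * 3       # each voter slightly worse off under the policy
non_voters = [+5.0] * 10      # foreigners / future people / animals: can't vote

electorate_sum = sum(voters)                    # what a voting method optimizes
overall_sum    = sum(voters) + sum(non_voters)  # "overall human welfare" reading

print(electorate_sum)  # -3.0  -> the electorate prefers rejecting the policy
print(overall_sum)     # 47.0  -> total welfare favors adopting it
```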
> the page directly addresses that question quite incisively, citing the bayesian regret figures.
No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
> This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare.
that’s incorrect. a rational entity’s goal is to maximize the net utility of the smallest group that includes itself. genes are just trying to maximize their expected number of copies made. the appearance of “altruism” is an illusion caused by:
kin selection.
reciprocal altruism.
it’s logically and empirically proven that you cannot actually aim for maximizing the welfare of the “universe”. if you try to maximize the sum of utility, that would justify trying to make as many new people as possible, so as not to “pre-murder” them, and it would mean people should decrease their personal utility as much as possible, as long as it increases net utility. or kill one person if it helps cause two people to be born. whereas if you try to maximize average utility, then you want to kill people who are less happy than average. both of these are obviously untenable and don’t remotely fit with observed actual human behavior. this is arguably the most elementary fact in the whole of ethical theory.
https://plato.stanford.edu/entries/repugnant-conclusion/
i discuss all of this in my “ethics 101” primer here.
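a tiny numeric illustration of the total-vs-average tension (all populations and utility numbers are invented):

```python
# total-utility maximization favors a huge population of barely-happy people;
# average-utility maximization favors removing the below-average
small_happy = [10, 10, 10]      # 3 people, very well off
huge_meh    = [1] * 100         # 100 people, lives barely worth living

print(sum(small_happy), sum(huge_meh))            # 30 vs 100: total favors the huge population
print(sum(small_happy) / 3, sum(huge_meh) / 100)  # 10.0 vs 1.0: average favors the small one

# average-maximization "improves" the world by deleting the least happy member
before = [2, 6, 10]
after  = [6, 10]
print(sum(before) / len(before), sum(after) / len(after))  # 6.0 -> 8.0
```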
> This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals—neither of which groups can vote.
the point is that if you want to altruistically help future generations, or animals for that matter, it makes sense to do so in the most efficient way possible. but the fundamental desire to be truly altruistic in the first place is irrational. “altruism” as we normally use the term is just the selfish behavior of genes trying to help copies of themselves that happen to be in other bodies. again, this is clearly explained in this veritasium video, and is just trivial biology 101.
> No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
it’s absolutely given, right there in plain english. the BR figures are cited, and there are multiple plausible independent lines of reasoning from which to derive comparable figures. i don’t know why you’re just ignoring that as if it’s not right there written plain as day.
> caveat: there’s a disparity between intrinsic and instrumental preferences, in other words voters don’t actually know what they want.
There’s a disparity between “utility” in the context of a voting system vs “utility” in the context of EA. In other words, what voters want is not necessarily what best improves their actual wellbeing. Is that the same disparity you’re talking about, or something different?