so voter preferences cannot be opposite of what is best for human welfare, by definition.
This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare. And we know they don't; hence the polling data.
This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals, and neither group can vote.
the page directly addresses that question quite incisively, citing the bayesian regret figures.
No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare.
that's incorrect. a rational entity's goal is to maximize the net utility of the smallest group that includes itself. genes are just trying to maximize their expected number of copies made. the appearance of "altruism" is an illusion caused by:
kin selection (sketched just below).
reciprocal altruism.
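to make the kin selection point concrete, the standard condition is hamilton's rule; the sibling numbers below are just an illustrative assumption on my part, not anything from the thread:

```latex
% hamilton's rule: a gene for "altruism" spreads when the benefit to the
% recipient, discounted by genetic relatedness r, exceeds the cost to the actor.
% full siblings share r = 1/2, so sacrificing C units of fitness pays off
% (in expected gene copies) whenever a sibling gains B > 2C units.
\[
  rB > C, \qquad r_{\text{full sibling}} = \tfrac{1}{2}
  \;\Longrightarrow\; \text{helping a sibling pays when } B > 2C.
\]
```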
it's logically and empirically proven that you cannot coherently aim to maximize the welfare of the "universe". if you try to maximize the sum of utility, that justifies creating as many new people as possible, so as not to "pre-murder" them; it means people should decrease their personal utility as much as possible, so long as doing so increases net utility; and it would even justify killing one person if that helps cause two people to be born. whereas if you try to maximize average utility, then you want to kill people who are less happy than average. both of these are obviously untenable and don't remotely fit observed human behavior. this is arguably the most elementary fact in the whole of ethical theory.
https://plato.stanford.edu/entries/repugnant-conclusion/
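to spell out the arithmetic behind both failure modes, here is a minimal formalization; the (10, 10, 4) utilities are an illustrative assumption:

```latex
% total utilitarianism: adding any person with positive utility raises the
% objective, no matter how miserable that life is (the repugnant conclusion).
\[
  U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad
  U_{\text{total}} + u_{n+1} > U_{\text{total}}
  \ \text{whenever}\ u_{n+1} > 0.
\]
% average utilitarianism: removing anyone below the mean raises the objective.
% e.g. utilities (10, 10, 4) have mean 8; drop the 4 and the mean becomes 10.
\[
  \bar{U} = \frac{1}{n}\sum_{i=1}^{n} u_i, \qquad
  u_j < \bar{U} \;\Longrightarrow\; \frac{1}{n-1}\sum_{i \neq j} u_i > \bar{U}.
\]
```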
i discuss all of this in my “ethics 101” primer here.
This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals, and neither group can vote.
the point is that if you want to altruistically help future generations, or animals for that matter, it makes sense to do so in the most efficient way possible. but the fundamental desire to be truly altruistic in the first place is irrational. "altruism" as we normally use the term is just the selfish behavior of genes trying to help copies of themselves that happen to be in other bodies. again, this is clearly explained in this veritasium video and is just trivial biology 101.
No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
it’s absolutely given, right there in plain english. the BR figures are cited, and there are multiple plausible independent lines of reasoning from which to derive comparable figures. i don’t know why you’re just ignoring that as if it’s not right there written plain as day.
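for anyone unfamiliar with how such figures are produced, here is a minimal sketch of a bayesian regret estimate for honest plurality voting, in the general style of warren smith's simulation studies. the uniform utility distribution, the voter and candidate counts, and the honest-voting model are my illustrative assumptions, not the actual methodology behind the cited figures:

```python
import random

def bayesian_regret(n_voters=1000, n_candidates=5, n_trials=2000, seed=0):
    """Estimate Bayesian regret of honest plurality voting: the average
    utility gap between the socially best candidate and the one elected."""
    rng = random.Random(seed)
    total_regret = 0.0
    for _ in range(n_trials):
        # each voter assigns an independent random utility to each candidate
        utils = [[rng.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        # social utility of each candidate = sum of all voters' utilities
        social = [sum(u[c] for u in utils) for c in range(n_candidates)]
        best = max(social)
        # honest plurality: each voter votes for their single favorite
        tallies = [0] * n_candidates
        for u in utils:
            tallies[u.index(max(u))] += 1
        elected = tallies.index(max(tallies))
        total_regret += best - social[elected]
    return total_regret / n_trials

print(bayesian_regret())
```

running the same trials under other methods (e.g. range voting, where voters report their utilities directly) and comparing the resulting regret numbers is how BR comparisons between voting methods are generally made.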