Hi Henry,
A few notes:
Even if hunter-gatherers globally were significantly less happy than current humans, we should expect some hunter-gatherer groups to be happier than the current mean human. The larger the number of hunter-gatherer groups we study, the higher the chance of getting unusually happy ones. I do not know whether the one you mention is an outlier or not.
I think wellbeing surveys which look into affect are more reliable than ones assessing life satisfaction. This is essentially because I am much more confident in humans' ability to assess their present state than their mean state over the past year (which is susceptible to lots of biases, like the peak-end rule).
Even if wellbeing surveys are not great, increasing their adoption could be instrumentally valuable to increase the focus on wellbeing (i.e. conscious states, including non-hedonistic-sounding ones like those related to relationships, freedom, and artistic expression), which is what ultimately matters.
I agree optimising for nearterm wellbeing may hinder longterm wellbeing. However, I believe measures related to existential risk are better proxies for longterm progress than health, wealth, and education. For example, for the risk of:
Nuclear war, number of nuclear warheads.
Engineered pandemics, cost of sequencing a full human genome, and vaccine hesitancy.
Advanced AI, AI timelines.
Assuming constant wellbeing per human per year, technological progress could still be useful to increase the number of humans, and therefore increase the total wellbeing of all humans.