I overlooked your comment thread with Max Daniel in your launch post last year. Have you thought more about this? Is this a fair summary of June last year?
- There seemed to be strategic uncertainty about near- versus long-term work
- There is no consensus on population ethics, but Michael personally leans toward a person-affecting view, which tends to favor near-term work
- For broader appeal outside of EA, nearer-term work might be preferable
Relevant quotes:
I expect HLI’s primary audience to be those who have decided that they want to focus on near-term human happiness maximization. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer term, as well as non-humans in the nearer and longer term.
Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making people happy [I suppose this is supposed to say “making happy people”]. In the end, we decided not to mention this. One reason is that, as noted above, it’s not (yet) totally clear what HLI will focus on, hence we don’t know what our colours are so as to be able to nail them to the mast, so to speak.
Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we were making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project; outside EA we doubt many people will have these alternatives to making people happier in mind.
Thanks for this! Our position hasn’t changed much since the last post. We still plan to focus on mostly near-term (human) welfare maximisation, but we’d like to see if we can, in the next couple of years, do/say something useful about welfare maximisation in other areas (i.e. animals, the long-term). We haven’t thought much about what this would be yet: we want to develop expertise in the area that seems most useful (by our lights) before thinking about expanding our focus.
Speaking personally, I take what is effectively a worldview diversification approach to moral uncertainty (this is a change), although the rationale is different (I plan to write this up at some point). This, combined with my person-affecting sympathies, means I want to put most, but not all, of my efforts into helping humans in the near term.