Dentist doing earning to give. I have pledged to donate a minimum of 50% (aiming for 60%), or $40,000-50,000 annually (at the beginning of my career). While I do expect to mainly do 'giving now', I plan, in periods of limited effective donating opportunities, to do 'investing to give'.
As a longtermist and total utilitarian, finding the cause that increases utility (no matter the time or type of sentient being) the most time/cost-effectively is my goal. In the pursuit of this goal, I so far care mostly about: alignment, s-risk, artificial sentience and WAW (wild animal welfare) (but feel free to change my mind).
I heard about EA for the first time in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have only had minimal interactions with EAs.
Male, 25 years old and diagnosed with Asperger's (autism) and dyslexia.
Really excited to see an org in the WAW space focusing exclusively on policy. Especially since the org sounds pragmatic, very knowledgeable on the topic and strong at communication, and in general seems to have a great team: what more can you ask for! Looking forward to donating.
Thanks a lot for the post! Just wanted to say that your posts (especially the one you made last year) have inspired a large part of my donations.
Of the almost $30k I donated over the last year, 70% went to AI policy orgs (mainly Palisade Research). I'm not sure where I would have donated without your posts, but I can't say with certainty that the money would have ended up in the same place.
First off, I must say: I really like that answer.
I guess I'm concerned about how much value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or about mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree: I shouldn't give it a 90% likelihood.
True. But I think that's more of an argument that the future is uncertain (which of course is a relevant argument). But even with the technology, I don't necessarily think we'll have a majority interested in eliminating all forms of suffering (especially in the wild) or in mass-producing happiness.
I have a question I would like some thoughts on:
As a utilitarian, I personally believe alignment to be the most important cause area. Yet, weirdly enough, even though I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative.
I personally believe, without a high level of certainty, that the current sum of utility on earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:
~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
~90% likelihood: ~-1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
~5% likelihood: ~-10^100 (s-risk-like scenarios)
My reasoning for thinking 'scenario 2' is more likely than 'scenario 1' is based on what seem to be the current values of the general public. Most people seem to care about nature conservation, but almost no one seems interested in mass-producing (artificial) happiness. And while the earth is only expected to remain habitable for about two billion years (and humans, assuming we avoid any x-risks, are likely to remain for much longer), I think, when it comes to it, we'll find a way to keep the earth habitable, and with it wild animal suffering.
Based on these three scenarios, you don't have to be a great mathematician to realize that the future is most likely to be net negative ('net positive/negative' meaning 'all good' minus 'all bad'), yet positive in expectation (i.e. in the probability-weighted average outcome). While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating that I spend so much of my time (donations) on preserving a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share my beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
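To spell out the arithmetic behind that claim (using my rough numbers above): the expected utility is about 0.05 × 10^1000 + 0.90 × (-1) + 0.05 × (-10^100) ≈ 5 × 10^998, which is positive, while the probability of a net negative outcome is about 0.90 + 0.05 = 0.95.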
So now I'm asking: what am I getting wrong? Why is the future likely to be net positive?
Thank you for the post! Can't believe I only saw it now.
I do agree that, for most people, altruism can and should be seen as something that's net positive for one's own happiness. But:
1. My post was mainly intended for people who are already 'hardcore' EAs and are willing to make a significant personal sacrifice for the greater good.
2. You make some interesting comparisons to religion that I somewhat agree with. Though I don't think religion is as time-consuming as EA is for many EAs. I'm also sure EA would seem less like a personal sacrifice if you were surrounded by EAs.
3. Trying to make EA more mainstream is not simple. Many of its ideas seem radical to the average person. You could of course try to make the ideas seem more in line with the average viewpoint. But I don't think that's worth it if it makes us less efficient.
Well, I totally agree with you, and the simple answer is: I don't. All of the graphs/tables I made (except fig. 5, which was a completely made-up example) are based on averages (including made-up numbers that are supposed to represent the average). They don't take into account personal factors, like your income or how expensive rent and utilities are in your area. The models should therefore only be used as a very rough guide that can't work by itself (I guess one could make a more complex model that includes these factors). One should therefore also make a budget, to see how things would look in real life, as I suggested in this section.
X-risk reduction (especially alignment) is highly neglected, and it's less clear how our actions can impact the value of the future. However, I think the impact of both is very uncertain, and I still think working on s-risk reduction and longtermist animal work is of high impact.
I agree. :) Your idea of lobbying and industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce the amount of human-caused suffering of wild animals, but it would likely not have an impact on naturally caused suffering.
Thanks a lot for the post! I'm happy that people are trying to combine the fields of longtermism and animal welfare.
Here are a few initial thoughts from a non-professional (note I didn't read the full post, so I might have missed something):
I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net positive future. I think that invertebrates or artificial sentience will make up the majority of moral patients in the future. I also suspect this to be good in a number of different future scenarios, since it could lower the chance of s-risks and improve the situation for animals (or artificial sentience) whether or not there is a lock-in scenario.
I think progress on short-term direct WAW interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions only have a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad. (Note: I have personally noticed several people who have values similar to mine, in terms of caring greatly about WAW in the far future, caring only a little about short-term interventions.)
It could of course be argued that working directly on reducing the likelihood of certain s-risks and working on AI alignment might be a more efficient way of ensuring a better future for animals. I certainly think this might be true; however, I think these measures are less reliable due to the uncertainty of the future.
I think Brian Tomasik has written great pieces on why an animal-focused hedonistic imperative and gene drives might be less promising and more unlikely than they seem. I personally also believe it's unlikely to ever happen on a large scale for wild animals. However, I think that if it happens and it's done right (without severely disrupting ecosystems), genetic engineering could be the best way of increasing net well-being in the long term. But I haven't thought that much about this.
Anyway, I wouldn't be surprised if you have already considered all of these arguments.
I'm really looking forward to your follow-up post :)
I do agree that t in the formula is quite complicated to understand (and does not mean the same as what's typically meant by tractability). I tried to explain it, but since no one edited my work, I might be overestimating how understandable my formulations are. 't' is something like 'the cost-effectiveness of reducing the likelihood of x-risk by 1 percentage point' divided by 'the cost-effectiveness of increasing net well-being by 1 percent'.
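Written as a rough formula (my own shorthand, not standard notation): t ≈ (cost-effectiveness of reducing x-risk by 1 percentage point) ÷ (cost-effectiveness of increasing net well-being by 1%). So the cheaper it is to raise net well-being relative to reducing x-risk, the lower t gets.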
That being said, I still think the analysis lacks an estimate of how good the future will be, which could make the numbers for 't' and 'net negative future' (or u(negative)) 'more objective'.
I do somewhat agree (my beliefs on this have also somewhat changed after discussing the theory with others). I think 'conventional' WAW work has some direct (advocacy) and indirect (research) influence on people's values, which could help avoid certain lock-in scenarios or make them less severe. However, I think this impact is smaller than I previously thought, and I'm now of the belief that more direct work on how we can mitigate such risks is more impactful.
If I understand you correctly, you believe the formula does not take into account how good the future will be. I do somewhat agree that there is a related problem in my analysis; however, I don't think the problem lies in the formula.
The problem you're talking about is actually taken into account by 't'. Note that the formula is about 'net well-being', i.e. 'all well-being' minus 'all suffering'. So if future 'net well-being' is very low, then the tractability of WAW will be high (i.e. 't' will be low). E.g. let's say 'net well-being' = 1 (made-up unit); then it's going to be a lot easier to increase it by 1% than if 'net well-being' = 1000.
However, I do agree that estimates of how good the future is going to be are technically needed to do this analysis correctly, specifically for estimating 't' and 'net negative future' (or u(negative)) in the 'main formula'. I may fix this in the future.
(I hope it's not confusing that I'm answering both your comments at once.)
While I will have to consider this for longer, my preliminary thought is that I agree with most of what you said, which means I might no longer stand by some of my previous statements.
Thanks for the link to that post. I do agree and I can definitely see how some of these biases have influenced a couple of my thoughts.
--
On your last point, about future-focused WAW interventions, I'm thinking of things that you mention in the tractability section of your post:...
Okay, I see. Well, actually, my initial thought was that all four of those options had a similar impact on the long-term future, which would justify focusing on short-term interventions and advocacy (corresponding to points three and four). However, after further consideration, I think the first two are of higher impact when considering the far future, which means I (at least for right now) agree with your earlier statement:
"So rather than talking about 'wild animal welfare interventions', I'd argue that you're really only talking about 'future-focused wild animal welfare interventions'. And I think making that distinction is important, because I don't think your reasoning supports present-focused WAW work."
While I still think the 'flow-through effect' is very real for WAW, I do think it's probably true that working on s-risks more directly might be of higher impact.
--
I was curious whether you have any thoughts on these conclusions (drawn from a number of things you said plus my personal values):
Since working on s-risk directly is more impactful than working on it indirectly, direct work should be done when possible.
There is currently no organization working purely on animal-related s-risks (as far as I know). So if that's your main concern, your options are starting one up or convincing an 's-risk mitigation organization' that you should work on this area full-time.
Animal Ethics works on advocating moral circle expansion. But since this has less direct impact on the long-term future, it has less of an effect on reducing s-risks than more direct work.
If you're also interested in reducing other s-risks (e.g. to artificial sentience), then working for an organization that directly tries to reduce the probability of a number of s-risks is your best option (e.g. the Center on Long-Term Risk or the Center for Reducing Suffering).
I'd argue there's a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you're not the most informed, most capable person to make that decision.
I do agree that there are people more capable of making that decision than me, and that there will be even more capable people in the future. But I don't believe this is the right way to assess the desirability of option value. I think the more correct question is whether the future person/people in power (which, in the case of a 'singleton democracy', may be the opinion of the average human) would be more capable than me.
I feel unsure whether my morals will be better or worse than those of that future person or people, because of the following:
The vast majority of current moral patients are, to my knowledge, invertebrates (excluding potential/unknown sentient beings like aliens, AI made by aliens, sentient AI that humans have already unknowingly made, microorganisms, etc.). My impression is that the mean moral circle is wider than it was 10 years ago, and that most people's moral circles widen as poverty decreases, personal problems decrease and free time increases. However, whether the majority will ever care about 'ant suffering', and believe that interventions should be done, is unclear to me. (So this argument can go both ways.)
A similar argument can be made for future sentient AIs. My impression is that a lot of humans care somewhat about AI sentience and that this will most likely increase in the future. However, I'm unsure how much people will care if sentient AIs mainly come from non-communicating computers that have next to nothing in common with humans.
To what extent do you think approaches like AI alignment will protect against s-risks? Or, phrased another way, how often will unaligned superintelligence result in an s-risk scenario?
Well, I think working on AI alignment could significantly decrease the likelihood of s-risks where humans are the main ones suffering. So if that's your main concern, then working on AI alignment is the best option (under both your beliefs and mine).
While I don't think the probability of an 'AGI-caused s-risk' is high, I also don't think AGI will protect, or care much about, invertebrates or artificial sentience. E.g. I don't think AGI will stop a person from doing directed panspermia or prevent the development of artificial sentience. I think AGI will most likely have values similar to those of the people who created or control it (which might again be, in part, the whole human adult population).
I'm also worried that if WAW concerns are not spread, nature conservation (or, less likely but even worse, the spread of nature) will be the enforced value, which could prevent our attempts to make nature better and ensure that natural suffering continues.
And since you asked for my beliefs about the likelihood, here you go (partly copied from my explanation in Appendix 4):
I put the 'probability' of an 'AI-misalignment-caused s-risk' pretty low (1%), because most scenarios of AI misalignment will, according to my previous statements, be negligible (talking about s-risk, not x-risk). It would in this case only be relevant if AI keeps us and/or animals alive 'permanently' to live net negative lives (which would most likely require travelling outside the solar system). I also put 'how bad the scenario would be' pretty low (0.5), because I think the impact on animals will most likely (though not guaranteed) be minimal (which technically might mean it would not be considered an s-risk).
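(If you simply multiply the two numbers, this scenario contributes roughly 0.01 × 0.5 = 0.005 in expected badness on that made-up scale; I only mean this as a rough illustration of how small I think the term is.)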
I want to try to explore some of the assumptions building up your world model. Why do you think that the world, in our current moment, contains more suffering than pleasure? What forces do you think resulted in this equilibrium?
I would argue that whether the current world is net positive or net negative depends on the experiences of invertebrates, since they make up the majority of moral patients. Most people who care about WAW believe one of the following:
Invertebrates most likely suffer more than they experience pleasure.
It is unclear whether invertebrates suffer or experience pleasure more.
I'm actually leaning more towards the latter. My guess is there's a 60% probability that they suffer more and a 40% probability that they experience pleasure more.
So the cause of my belief that the current world is slightly more likely to be net negative is simply that evolution did not take ethics into account. (So the current situation is unrelated to my faith in humanity.)
With all that said, I still think the future is more likely to be net positive than net negative.
I think the interventions that decrease the chance of future wild animal suffering are only a subset of all the WAW things you could do, though. For example, figuring out ways to make wild animals suffer less in the present would come under 'WAW', but I wouldn't expect it to make any difference to the more distant future. That's because if we care about wild animals, we'll figure out what to do sooner or later.
I do agree that current WAW interventions have a relatively low expected impact compared with other WAW work (e.g. moral circle expansion) if only direct effects are counted.
Here are some reasons why I think current interventions/research may help the long-term future.
Doing more foundational work now means we can start more important research and interventions earlier, once the technology is available. (Probably a less important factor.)
Current research gives us a better answer to how much pleasure and suffering wild animals experience, which helps inform future decisions on the spread of wildlife. (This may not be that relevant yet.)
Showcasing that interventions can have a positive effect on the welfare of wildlife could help convince more people that helping wildlife is tractable and the morally right thing to do (even if it's unnatural). (I think this is the most important effect.)
So I think current interventions could have a significant impact on moral circle expansion. Especially because I think you need two beliefs to care about WAW work: believing that the welfare of wildlife is important (especially for smaller animals like insects, which likely make up the majority of suffering) and believing that interfering with nature could be positive for welfare. The latter may be difficult to achieve without proven interventions, since few people think we should intervene in nature.
Whether direct moral circle expansion or indirect (via interventions) is more impactful is unclear to me. Animal Ethics mainly works on the former and Wild Animal Initiative mainly on the latter. I'm currently expecting to donate to both.
So rather than talking about 'wild animal welfare interventions', I'd argue that you're really only talking about 'future-focused wild animal welfare interventions'. And I think making that distinction is important, because I don't think your reasoning supports present-focused WAW work.
I think having an organization working directly on this area could be of high importance (as far as I know, only the Center for Reducing Suffering and the Center on Long-Term Risk work partly on this area). But how do you think it's currently possible to work on 'future-focused wild animal welfare interventions'? Other than doing research, I don't see how else you can work specifically on 'WAW future scenarios'. It's likely just my limited imagination or me misunderstanding what you mean, but I don't know how we can work on that now.