Feel free to message me on here.
OK thanks for your perspective, although it doesn’t seem convincing to me. I could be more convinced by an argument that inequality / poverty in rich countries results in poor decision-making in those same rich countries.
If extinction and non-extinction are “attractor states”, from what I gather, a state that is expected to last an extremely long time, what exactly isn’t an attractor state?
Any state that isn’t very persistent. For example, an Israel-Gaza ceasefire. We could achieve it, but from history we know it’s unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.
The extinction vs non-extinction example is the classic attractor state example, but not the only one. Another one people talk about is stable totalitarianism. Imagine either China or the US could win the race to superintelligence. Whichever country wins the race essentially controls the world for a very long time, given how powerful superintelligence would be. So we have two different attractor states: one where China wins and has long-term control and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one, as the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins, we would experience the better state for a very long time, which seems very high value.
There are ways to counter this. You can argue the states aren’t actually that persistent, e.g. you don’t think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn’t clearly better than the other. Or you can argue that there’s not much we can do to achieve one state over the other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it’s important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long term, as opposed to long-term outcomes (I explain this better here).
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state, which makes its impact vulnerable to washing out. Medical research doesn’t either.
I’m skeptical of this link between eradicating poverty and reducing AI risk. Generally richer countries’ governments are not very concerned about extreme poverty. To the extent that they are, it is the remit of certain departments like USAID that have little if any link to AI development. If we have an AI catastrophe it is probably going to be the fault of a leading AI lab like OpenAI and/or the relevant regulators or legislators not doing their job well enough. I just don’t see why these actors would do any better just because there is no extreme poverty halfway across the world—as I say, global poverty is way down their priority list if it is on it at all.
Sorry, a conversation with an LLM isn’t likely to convince me of anything. For starters, the response on hedonism mainly consists of assertions that some philosophers hold views opposed to hedonism. I knew that already...
Is not a life which has a few moments of glory, perhaps leaves some lasting creative achievement, but has a sum of negative hedonistic experiences, a life worth living?
I would personally say no unless the moments of glory help others sufficiently to offset the negative experiences of the life in question.
In other words, I am a hedonist and I suspect a lot of others in this thread are too.
This moral theory just seems too ad hoc and convoluted to me and ultimately leads to conclusions I find abhorrent, e.g. that because animals can’t speak up for themselves in a way that is clearly intelligible to humans, we are at liberty to inflict arbitrary amounts of suffering on them.
I personally find a utilitarian ethic much more intuitive and palatable, but I’m not going to get into the weeds trying to convince you to change your underlying ethic.
Agreed!
What if the mother wasn’t there (say she is no longer alive) and it was just the dying baby? The only thing the baby would say is “wah wah wah”, which is neither an argument for its own welfare nor for the welfare of anyone else.
(I’m trying to demonstrate that the ability to speak up for yourself shouldn’t be a criterion in determining the strength of your moral rights...).
Generally I think that those in richer countries are going to shape the future, not those in poorer countries, so I’m not sure I agree with you about “wise decision processes” rising to the top if we end extreme poverty.
For example, if we create AI that causes an existential catastrophe, that is going to be the fault of people in richer countries.
Another example: I am concerned about risks of lock-in which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don’t really care about non-human sentience and so end up creating vast amounts of digital suffering. I can’t see how ending poverty in lower-income countries is going to reduce these risks which, if they happen, will be the fault of those in richer countries. Furthermore, ending factory farming seems important to widen the moral circle and reduce these risks.
What is the argument for Health and development interventions being best from a long-term perspective?
I think animal welfare work is underrated from a long-term perspective. There is a risk that we lock-in values that don’t give adequate consideration to non-human sentience which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don’t really care about non-human sentience and so end up creating vast amounts of digital suffering. I think working to end factory farming is one way to widen the moral circle and prevent these moral catastrophes from occurring.
It is commonly assumed that a lot of interventions will fall prey to the “washing-out hypothesis”, under which the impact of an intervention becomes less significant as time goes on, meaning that the effects of actions in the near future matter more than their long-term consequences. In other words, over time, the differences between the outcomes of various actions tend to fade or “wash out.” So in practice most people would assume the long-term impact of something like medical research is, in expectation, zero.
Longtermists aim to avoid “washing out”. One way is to find interventions that steer between “attractor states”. For example, extinction is an attractor state in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time.
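To make that structure concrete, here is a toy sketch in Python. Every number in it is invented purely for illustration (none are estimates from the paper): it just shows that an effect which decays each year converges to a fixed total however long the horizon is, whereas even a tiny shift in the probability of reaching a persistent attractor state keeps accruing expected value as the horizon grows.

```python
# Toy sketch only: every number here is made up to illustrate the structure
# of the argument, not to estimate anything real.

def decaying_value(initial_benefit, decay_rate, horizon_years):
    # An effect that "washes out": it shrinks by decay_rate each year,
    # so the total (a geometric sum) converges regardless of the horizon.
    return initial_benefit * (1 - (1 - decay_rate) ** horizon_years) / decay_rate

def attractor_value(prob_shift, value_per_year, horizon_years):
    # Expected value of nudging the probability of ending up in a state
    # that persists at value_per_year for the whole horizon.
    return prob_shift * value_per_year * horizon_years

for horizon in (100, 1_000_000):
    print(horizon,
          round(decaying_value(100, 0.05, horizon)),    # plateaus near 2,000
          round(attractor_value(1e-4, 100, horizon)))   # keeps growing with the horizon
```

The point is purely structural: persistence is what lets the expected value keep growing with the length of the future considered, rather than washing out.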
This is all explained better in the paper The Case for Strong Longtermism, which I would recommend you read.
At the risk of getting off topic from the core question, which interventions do you think are most effective in ensuring we thrive in the future with better cooperative norms? I don’t think it’s clear that this would be EA global health interventions. I would think boosting innovation and improving institutions are more effective.
Also, boosting economic growth would probably be better than so-called randomista interventions from a long-term perspective.
What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.
I’m not sure how I have stigmatised any particular response.
Thank you for justifying your vote for global health!
One counterargument to your position is that, with the same amount of money, one can help significantly more non-human animals than humans. Check out this post. An estimated 1.1 billion chickens are helped by broiler and cage-free campaigns in a given year. Each dollar can help an estimated 64 chickens, for a total of 41 chicken-years of life.
This contrasts with needing $5,000 to save a human life through top-ranked GiveWell charities.
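To put those figures side by side, here is a quick back-of-the-envelope calculation using the hypothetical $100m from the debate question and only the numbers quoted above. It deliberately compares raw scale only and leaves out moral weights, which is where most of the actual disagreement lies.

```python
# Back-of-the-envelope comparison using the figures quoted above.
# Moral weights are deliberately omitted; this only compares raw scale.
budget = 100_000_000             # the hypothetical $100m from the debate question

chicken_years_per_dollar = 41    # from the cited post
cost_per_life_saved = 5_000      # rough GiveWell top-charity figure

print(f"{budget * chicken_years_per_dollar:,.0f} chicken-years improved")  # ~4.1 billion
print(f"{budget / cost_per_life_saved:,.0f} human lives saved")            # ~20,000
```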
Personally I would gain more value from knowing why people would prefer $100m to go to global health over animal welfare (or vice versa) than from knowing whether people would prefer this. This is partly because it already seems clear that the forum (which isn’t even a representative sample of EAs) has a leaning towards animal welfare over global health.
So if my comment incentivises people to comment more but vote less then that is fine by me. Of course my comment may not incentivise people to comment more, in which case I apologise.
Interesting to note that, as it stands, there isn’t a single comment on the debate week banner in favour of Global Health. There are votes for global health (13 in total at the time of writing), but no comments backing up the votes. I’m sure this will change, but I still find it interesting.
One possible reason is that the arguments for global health > animal welfare are often speciesist and people don’t really want to admit that they are speciesist—but I’m admittedly not certain of this.
In a nutshell—there is more suffering to address in non-human animals, and it is a more neglected area.
What we need is an argument for why it would be good in expectation, compared to all these other cause areas.
Yeah, the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future, which allows longtermism to beat other areas. The argument for “normal” longtermism (i.e. not “strong”) has pretty much the same structure.
Future well-being does matter, but focusing on existential risk doesn’t necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then focusing on existential risk could be one of the worst focus areas.
Yes, that’s true. Again, we’re dealing with expectations, and most people expect the future to be good if we manage not to go extinct. But it’s also worth noting that reducing extinction risk is just one class of existential risk reduction. If you think the future will be bad, you can work to improve the future conditional on us being alive or, in theory, you can work to make us go extinct (but this is of course a bit out there). Improving the future conditional on us being alive might involve tackling climate change, improving institutions, or aligning AI.
And, to reiterate, while we focus on these areas to some extent now, I don’t think we focus on them as much as we would in a world where society at large accepts longtermism.
I have downvoted the LLM answers. I don’t like your approach of simply posting long conversations with LLMs on a forum for various reasons. Firstly, your prompts are such that the LLM provides very broad answers that don’t go very deep into specific points and often don’t engage with the specific arguments people have put forward. Secondly, your prompts are worded in a leading, biased way.
Here is an LLM opining on this very question (I know this is hypocritical but I thought it would be an amusing and potentially effective way to illustrate the point). Note the conclusion saying “leverage the LLM as a tool, not as a crutch”.