mlsbt
The problem of artificial suffering
If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It’d be chill.
...
If that’s right, “constant replacement” could join a number of other ideas that feel so radically alien (for many) that they must be “impossible to live with,” but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)
These contradict each other. Let’s say, like you imagined in an earlier post, that one day I’ll be able to become a digital person by destroying my physical body in a futuristic brain-scanning process. It’s pretty obvious that the connected conscious experience I’ve (I hope!) experienced my whole life, would, at that transition, come to an end. Whether or not it counts as me dying, and whether this new person ‘is’ me, are to some extent just semantics. But your and Parfit’s position seems to define away the basic idea of personal identity just to solve its problems. My lifelong connected conscious awareness would undeniably cease to exist; the awareness that was me will enter the inky nothingness. The fact that my clone is walking and talking is completely orthogonal to this basic reality.
So if I tried to live with this idea “for a full week”, except at the end of the week I know I’d be shot and replaced, I’d be freaking out, and I think you would be too. Any satisfactory theory of personal identity has to avoid equating death with age-related change. I should read Reasons and Persons, but none of the paradoxes you link to undermine this ‘connected consciousness’ idea of personal identity (which differs from what Bernard Williams—and maybe Parfit?—would call psychological continuity). As I understand it, psychological continuity allows for any given awareness to end permanently as long as it’s somewhere replaced, but what I’m naively calling ‘connected consciousness’ doesn’t allow this.
Another way of putting it: in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think for most non-religious people, death is primarily undesirable because I don’t want to sleep forever!
I think this is pretty strong evidence that Holden and Parfit are p-zombies :)
Great post. #9 is interesting because the inverse might also be true, making your idea even stronger: maybe a great thing you can do for the short term is to make the long term go well. X-risk interventions naturally overlap with maintaining societal stability, because 1) a rational global order founded in peace and mutual understanding, which relatively speaking we have today more than ever before, reduces the probability of global catastrophes; and less convincingly 2) a catastrophe that nevertheless doesn’t kill everyone would indefinitely set the remaining population back at square one for all neartermist cause areas. Maintaining the global stability we’ve enjoyed since the World Wars is a necessary precondition for the coterminous vast improvements in global health and poverty, and it seems like a great bulk of X-risk work boils down to that. Your #2 is also relevant.
They contradict each other in the sense that your full theory, since it includes the particular consequence that vaporization is chill, is I think not something anyone but a small minority would be fine to live with. Quantum mechanics and atheism impose no such demands. It’s not too strong a claim to call this idea fine to live with when you’re just going about your daily life, ignoring the vaporization part. “Fine to live with” has to include every consequence, not just the ones that are indeed fine to live with. I interpreted the second quote as arguing that not just you but the general public could get used to this theory, in the same way they got used to quantum mechanics, because it doesn’t really affect their day-to-day. This is why I brought up your brain-scan hypothetical; here, the vaporization-is-chill consequence clearly affects their daily lives by offering a potentially life-or-death scenario.
I don’t think death is like sleeping forever, I think it’s like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.
Let’s say I die. A week later, a new medical procedure is able to revive me. What is the subjective conscious experience of the physical brain during this week? There is none—exactly like during a dreamless sleep. Of course death isn’t actually like sleeping forever; what’s relevant is that the conscious experience associated with the dead brain atom-pile matches that of the alive, sleeping brain, and also that of a rock.
What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here—I don’t think anyone wants someone they care about to experience getting shot.)
It’s not the gunshot that matters here. If at the end of this week I knew I’d painlessly, peacefully pass away, only to be reassembled immediately nearby with my family none the wiser, I would be freaking out just as much as in the gunshot scenario. The shorter replacement timescale (a second instead of a week) is the real distraction; it brings in some weird and mostly irrelevant intuitions, even though the two scenarios are functionally equivalent. Here’s what I think would happen in the every-second scenario, assuming that I knew your theory was correct: I would quickly realize (albeit over the course of many separate lives and with the thoughts of fundamentally different people) that each successive Martin dies immediately, and that in my one-second wake are thousands of former Martins sleeping dreamlessly. This may eventually become fine to live with only to the extent that the person living it doesn’t actually believe it—even if they believe they believe it. If I stayed true to my convictions and remained mentally alright, I’d probably spend most of my time staring at a picture of my family or something. This is why your call to try living with this idea for a week rings hollow to me. It’s like a deep-down atheist trying to believe in God for a week; the emotional reaction can’t be faked, even if you genuinely believe you believe in God.
I don’t find it obvious that there’s something meaningful or important about the “connected conscious experience.” If I imagine a future person with my personality and memories, it’s not clear to me that this person lacks anything that “Holden a moment from now” has.
I agree, this future person lacks nothing—from the future person’s perspective. From the perspective of the about-to-be-vaporized present person, who has the strongest claim to their own identity, the future person lacks any meaningful connection to the present person beyond the superficial, as the present person’s brain’s conscious experience will soon be permanently nothing, a state that the future person’s brain doesn’t share. Through my normal life, even if all my brain’s atoms eventually get replaced, it seems there is this ‘connected consciousness’ preserving one particular personal identity, rather than a new but otherwise identical one replacing it wholesale, as in the teleporter hypothetical.
If I died, was medically revived a week later, and found a newly constructed Martin doing his thing, I would be pretty annoyed, and I think we’d both realize, given full mutual knowledge of our respective origins, that Martin’s personal identity belongs to me and not him.
I don’t intend these vague outlines to be an actual competing conception of personal identity, I have no idea what the real answer is. My core argument is that any theory that renders death-and-replacement functionally equivalent to normal life is unsatisfactory. You did inspire me to check out Reasons and Persons from the library; I hope I’m proven wrong by some thought experiment, and also that I’m not about to die.
That Wired article is fantastic. I see this threshold of 5 microns all over the place and it turns out to be completely false and based on a historical accident. It’s crazy how once a couple authorities define the official knowledge (in this case, the first few scientists and public health bodies to look at Ward’s paper), it can last for generations with zero critical engagement and cause maybe thousands of deaths.
I’m confused about the distinction between fomite and droplet transmission. Is droplet transmission a term reserved for all non-inhalation respiratory pathogen transmission (like touching a droplet on a surface and then touching your face, or the droplet landing on your mouth), so it includes some forms of fomite transmission? I’m seeing conflicting sources and a lot that mention the >5 μm rule so don’t seem too trustworthy.
This is a great post and I think this type of thinking is useful for someone who’s specifically debating between working at / founding a small EA organization (that doesn’t have high status outside EA) vs a non-EA organization (or like, Open Phil) early in their career. Ultimately I don’t think it’s that relevant (though still valuable for other reasons) when making career decisions outside this scope, because I don’t think that conflating the EA mission and community is valid. The EA mission is just to do the most good possible; whether or not the community that has sprung up around this mission is a useful vehicle for you as an individual to do the most good you can is a different and difficult question. If you believe that EA as a movement will grow significantly in wealth and ability to affect the world, you could rationally choose to align yourself with EA groups and organizations for career capital / status reasons (not considering first-order impact). However, it seems like the EA community greatly values externally successful people, for instance when hiring; there’s very little insider bias, or at least it’s easy to overcome. When considering next steps I think the mindset of “which option maximizes my lifetime impact” is more correct and useful, though harder to answer individually, than an indirect question like “which option is more aligned with the current EA community” or “which option is ranked higher by 80000 Hours” in almost all cases. I’m sorry if I misunderstood your post, I’m trying to sort out my own thoughts as well. Again, conflating the community and mission is still a useful approximation if you’re considering working for one of the smaller EA organizations, or in a ‘smaller’ role.
That’s a good point, at my level thinking about the details of lifetime impact between two good paths might be almost completely intractable. I don’t remember where I first saw that specific idea, it seems like a pretty natural endpoint to the whole EA mindset. And I’ll check out that book, it’s been recommended to me before.
I think it’s usually okay for an issue-based analysis of the medium-term future to disregard relatively unlikely (though still relevant!) AI / x-risk scenarios. By relatively unlikely, I just mean significantly less likely than business-as-usual, within the particular time frame we’re thinking about. As you said, if the world becomes unrecognizably different in this time frame, factory farming probably stops being a major issue and this analysis is less important. But if it doesn’t, or in the potentially very long time before it does, we won’t gain very much strategic clarity about decreasing farmed animal suffering by approaching it with a longtermist lens. There’s a lot of suffering that probably won’t affect the long-run future but is still worth thinking about effectively. In other words, I don’t think longtermism helps us think about how to be animal advocates today.
This is a great post and the most passionate defense I’ve seen of something like ‘improving institutional decision-making’, but broader, being an underrated cause area. I’m sympathetic to your ideas on the importance of good leadership, and the lack of it (and of low-trust, low-coordination environments more generally) as a plausible root cause behind many of the problems EAs care about most. However, I don’t think this post has the evidence to support your key conclusions, beyond the general intuition that leadership is important.
Some of your thoughts:
If you want to have maximum impact you typically want to focus on leadership and governance. Most solvable problems in the world are really leadership and governance problems at their core.
If you want that impact to be lasting, you should focus on building organizations, institutions, or ecosystems that endure over time.
If you are trying to positively impact any group or initiative, leadership is most often your point of maximum leverage.
Corruption is Mexico’s one fundamental problem.
Note the last point isn’t a key conclusion, but it is illustrative of the lack of evidence in this post. Is corruption Mexico’s fundamental problem? The IADB report pretty convincingly argues that societal trust is vital to economic development, and is your best piece of evidence. But it doesn’t argue that trust is the most important (or most fundamental) factor, especially outside of Latin America, as opposed to things like effective institutions or more basic economic factors. And note that it indicates that Mexico has the second-highest level of trust in Latin America. Trust isn’t lack of corruption isn’t leadership/governance; they’re all related, but the conflation leaves me confused as to what specifically you’re arguing for.
The rest of your points are huge claims, but other than the IADB report your evidence seems to be the blog post about Haiti and DR’s divergence, and your list of real-world examples. The post about Haiti is suggestive, but is a fundamentally limited example as the history of one small, idiosyncratic country. It discusses the corruption of the Duvaliers, but also a host of other factors, and furthermore argues that the divergence began decades before François came to power. So corruption vs trust isn’t the slam-dunk takeaway that it would need to be to even start thinking about generalizing from Haiti to the world.
Your list of places where ecosystem-building “actually [is] already working” is DARPA, a building at MIT, a math team, and a bunch of clubs. Regarding evidence of their cost-effective impact relative to the current EA paradigm, I’ll give you the first three, which are your “building ecosystems on a limited budget” category. But again, this doesn’t get us far beyond the general intuition that everyone already agrees with, that good leadership is good.
It’s true that the best interventions can often only be identified with hindsight, but that’s less applicable to meta-level criticisms of EA like yours. There are a lot of wonderful-sounding ideas like ecosystem-building out there, that hit all the right intuitions and are hard to explicitly argue against. But should EA make this pivot? That question needs more evidence than what’s in this post.
I don’t think asymmetric burden of proof applies when one side is making a positive claim against the current weight of evidence. But I fully agree that more research would be worthwhile.
I didn’t call for a ton more analysis, I pointed out that the post largely relies on vibes. There’s a difference.
All ethical arguments are based on intuition, and here this one is doing a lot of work: “we tend to underestimate the quality of lives barely worth living”. To me this is the important crux because the rest of the argument is well-trodden. Yes, moral philosophy is hard and there are no obvious unproblematic answers, and yes, small numbers add up. Tännsjö, Zapffe, Metzinger, and Benatar play this weird trick where they introspectively set an arbitrary line that separates net-negative and net-positive experience, extrapolate it to the rest of humanity, and based on that argue that most people spend most of their time on the wrong side of it. More standard intuitions point in the opposite direction; for not-super-depressed people, things can and do get really bad before not-existing starts to outshine existing! Admittedly “not-super-depressed people” is a huge qualifier, but on Earth the number of people who have, from our affluent Western country perspective, terrible lives, yet still want to exist, swamps the number of the (even idly) suicidally depressed. It’s very implausible to me that I exist right above this line of neutrality when 1) most people have much worse lives than me and 2) they generally like living.
And whenever I see this argument that liking life is just a cognitive bias I imagine this conversation:
A: How are you?
B: Fine, how are–
A: Actually your life sucks.
I’m confused how this squares with Lant Pritchett’s observation that variation in headcount poverty rates across nations, regardless of where you set the poverty line, is completely accounted for by variation in the median of the distribution of consumption expenditures.
I agree that your (excellent) analysis shows that the welfare increase is dominated by lifting the bottom half of the income distribution. I agree that this welfare effect is what we want. Pritchett’s argument is linked to yours because he claims the only (and therefore best) way to cause this effect is national development. He writes: “all plausible, general, measures of the basics of human material wellbeing [including headcount poverty] will have a strong, non-linear, empirically sufficient and empirically necessary relationship to GDPPC.” (Here non-linear refers to a stronger elasticity of these wellbeing metrics at lower than at higher levels of GDPPC.)
Of course, as you point out, national development can’t really be the only thing that decreases poverty—redistribution would too. But every single data point we have of countries shows that the rich got rich through development, not redistribution. And every single data point we have of rich countries shows that the bottom half of their income distributions is doing very well, relative to LMICs. So yes, redistribution would cause great welfare gains for a bit, but it’s not going to turn a $5000 GDPPC nation into a $50000 one. And the welfare gains from that nation’s decreased poverty headcount are going to dwarf the redistribution-caused welfare gains, even given your adjustments. (This isn’t an argument against redistribution as an EA cause area, which could still be great; it’s an argument that redistribution’s efficacy isn’t really a point against the greater importance of the search for growth.)
Regarding the correlation/causation, I’d be more sympathetic to your point if it was a nice and average correlation. Pritchett: “The simple correlation between the actual $3.20/day or $5.50/day headcount poverty rate and headcount poverty as predicted using only the median of the country distribution is .994 and for $1.90 it is .991. These are about as high a correlation as real world data can produce.” It’s very implausible that this incredibly strong relationship would break with some new intervention that increases median consumption. Not a single policy in the history of the world that changed a country’s median consumption has broken it.
To your final point that the cost of increasing median consumption might be way too high (relative to redistribution) - first of all, as Hillebrandt/Halstead pointed out, evaluating that claim should be a much larger priority in EA than it is right now. But development economics seems to have worked in the past, with just the expenses associated with a normal academic field! I’m sorry but I’m going to quote Pritchett again:
There are a number of countries (e.g. China, India, Vietnam, Indonesia) that said (1) “Based on our reading of the existing evidence (including from economists) we are going to shift from policy stance X to policy stance Y in order to accelerate growth”, (2) these countries did in fact shift from policy stance X to Y and (3) the countries did in fact have a large (to massive) accelerations of growth relative to [business as usual] as measured by standard methods (Pritchett et al 2016).
One had to be particularly stubborn and clever to make the argument: “Politicians changed policies to promote growth based on evidence and then there was growth but (a) this was just dumb luck, the policy shift did not actually cause the shift in growth something else did or (b) (more subtly) the adopted policies did work but that was just dumb luck as there was not enough evidence the policies would work for this to count as a win for ‘evidence’ changing policy.”
TL;DR: Increasing productivity still beats redistribution in the long-term given reasonable assumptions about costs.
That makes sense! I was interpreting your post and comment as a bit more categorical than was probably intended. Looking forward to your post.
I think the wording of your options is a bit misleading. It’s valuable to publish your criticism of any topic that’s taking up non-trivial EA resources, regardless of its true worth as a topic—otherwise we might be wasting bednets money. The important question is whether or not infinite ethics fits this category (I’m unsure, but my best guess is no right now and maybe yes in a few years). Whether or not something is a “serious problem” or “deserves criticism”, at least for me, seems to point to a substantively different claim. More like, “I agree/disagree with the people who think infinite ethics is a valuable research field”. That’s not the relevant question.
This type of piece is what the Criticism contest was designed for, and I hope it gets a lot of attention and discussion. EA should have the courage of its convictions; global poverty and AI alignment aren’t going to be solved by a friend group, let alone the same friend group.
I’m using ‘friend group’ as something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers.
EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be directly solved by this relatively tiny subset of technically-inclined do-gooders, nice people who like meet-ups and have suspiciously convergent interests outside of AI stuff.
EA is a friend group, algebraic geometers are not. Importantly, even if you don’t believe alignment is that difficult, we’d still solve it more quickly without tacking on this whole social framework. It worries me that alignment research isn’t catching on in mainstream academia (like climate change did); this seems to indicate that some factor in the post above (like groupthink) is preventing EAs from either constructing a widely compelling argument for AI safety, or making it compelling for outsiders who aren’t into the whole EA thing.
Basically we shouldn’t tie causes unnecessarily to the EA community—which is a great community—unless we have a really good reason.
Yeah, WBE risk seems relatively neglected, maybe because of the really high expectations for AI research in this community. The only article I know of that talks about it is this paper by Anders Sandberg from FHI. He makes the interesting point that the same incentives that allow animal testing in today’s world could easily lead to WBE suffering. In terms of preventing suffering, his main takeaway is:
The other best practices he mentions, like perfectly blocking pain receptors, would be helpful but only become a real solution with a better theory of suffering.