mlsbt
I think it’s usually okay for an issue-based analysis of the medium-term future to disregard relatively unlikely (though still relevant!) AI / x-risk scenarios. By relatively unlikely, I just mean significantly less likely than business-as-usual, within the particular time frame we’re thinking about. As you said, if the world becomes unrecognizably different in this time frame, factory farming probably stops being a major issue and this analysis becomes less important. But if it doesn’t, or in the potentially very long time before it does, we won’t gain much strategic clarity about decreasing farmed animal suffering by approaching it with a longtermist lens. There’s a lot of suffering that probably won’t affect the long-run future but is still worth thinking about effectively. In other words, I don’t think longtermism helps us think about how to be animal advocates today.
That’s a good point; at my level, thinking about the details of lifetime impact between two good paths might be almost completely intractable. I don’t remember where I first saw that specific idea; it seems like a pretty natural endpoint of the whole EA mindset. And I’ll check out that book; it’s been recommended to me before.
This is a great post and I think this type of thinking is useful for someone who’s specifically debating between working at / founding a small EA organization (that doesn’t have high status outside EA) vs a non-EA organization (or like, Open Phil) early in their career. Ultimately I don’t think it’s that relevant (though still valuable for other reasons) when making career decisions outside this scope, because I don’t think that conflating the EA mission and community is valid.

The EA mission is just to do the most good possible; whether or not the community that has sprung up around this mission is a useful vehicle for you as an individual to do the most good you can is a different and difficult question. If you believe that EA as a movement will grow significantly in wealth and ability to affect the world, you could rationally choose to align yourself with EA groups and organizations for career capital / status reasons (not considering first-order impact). However, it seems like the EA community greatly values externally successful people, for instance when hiring; there’s very little insider bias, or at least it’s easy to overcome.

When considering next steps I think the mindset of “which option maximizes my lifetime impact” is more correct and useful, though harder to answer individually, than an indirect question like “which option is more aligned with the current EA community” or “which option is ranked higher by 80000 Hours” in almost all cases. I’m sorry if I misunderstood your post; I’m trying to sort out my own thoughts as well. Again, conflating the community and mission is still a useful approximation if you’re considering working for one of the smaller EA organizations, or in a ‘smaller’ role.
That Wired article is fantastic. I see this 5-micron threshold all over the place, and it turns out to be completely false and based on a historical accident. It’s crazy how once a couple of authorities define the official knowledge (in this case, the first few scientists and public health bodies to look at Ward’s paper), it can last for generations with zero critical engagement and cause maybe thousands of deaths.
I’m confused about the distinction between fomite and droplet transmission. Is droplet transmission a term reserved for all non-inhalation respiratory pathogen transmission (like touching a droplet on a surface and then touching your face, or a droplet landing on your mouth), so that it includes some forms of fomite transmission? I’m seeing conflicting sources, and many of them mention the >5 μm rule, so they don’t seem too trustworthy.
They contradict each other in the sense that your full theory, since it includes the particular consequence that vaporization is chill, is something I think only a small minority would be fine living with. Quantum mechanics and atheism impose no such demands. It’s not too strong a claim to call this idea fine to live with when you’re just going about your daily life and ignoring the vaporization part, but “fine to live with” has to include every consequence, not just the ones that are indeed fine. I interpreted the second quote as arguing that not just you but the general public could get used to this theory, in the same way they got used to quantum mechanics, because it doesn’t really affect their day-to-day. This is why I brought up your brain-scan hypothetical: there, the vaporization-is-chill consequence clearly does affect their daily lives by presenting a potentially life-or-death scenario.
I don’t think death is like sleeping forever, I think it’s like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.
Let’s say I die. A week later, a new medical procedure is able to revive me. What is the subjective conscious experience of the physical brain during this week? There is none—exactly like during a dreamless sleep. Of course death isn’t actually like sleeping forever; what’s relevant is that the conscious experience associated with the dead brain atom-pile matches that of the alive, sleeping brain, and also that of a rock.
What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here—I don’t think anyone wants someone they care about to experience getting shot.)
It’s not the gunshot that matters here. If at the end of this week I knew I’d painlessly, peacefully pass away, only to be reassembled immediately nearby with my family none the wiser, I would be freaking out just as much as in the gunshot scenario. The shorter replacement timescale (a second instead of a week) is the real distraction; it brings in some weird and mostly irrelevant intuitions, even though the two scenarios are functionally equivalent. Here’s what I think would happen in the every-second scenario, assuming that I knew your theory was correct: I would quickly realize (albeit over the course of many separate lives and with the thoughts of fundamentally different people) that each successive Martin dies immediately, and that in my one-second wake are thousands of former Martins sleeping dreamlessly. This may eventually become fine to live with, but only to the extent that the person living it doesn’t actually believe it—even if they believe they believe it. If I stayed true to my convictions and remained mentally alright, I’d probably spend most of my time staring at a picture of my family or something. This is why your call to try living with this idea for a week rings hollow to me. It’s like a deep-down atheist trying to believe in God for a week; the emotional reaction can’t be faked, even if they genuinely believe they believe in God.
I don’t find it obvious that there’s something meaningful or important about the “connected conscious experience.” If I imagine a future person with my personality and memories, it’s not clear to me that this person lacks anything that “Holden a moment from now” has.
I agree that this future person lacks nothing—from the future person’s perspective. From the perspective of the about-to-be-vaporized present person, who has the strongest claim to their own identity, the future person lacks any meaningful connection to the present person beyond the superficial, since the present person’s brain’s conscious experience will soon be permanently nothing, a state the future person’s brain doesn’t share. Throughout my normal life, even if all my brain’s atoms eventually get replaced, it seems there is this ‘connected consciousness’ preserving one particular personal identity, rather than a new but otherwise identical one replacing it wholesale, as in the teleporter hypothetical.
If I died, was medically revived a week later, and found a newly constructed Martin doing his thing, I would be pretty annoyed, and I think we’d both realize, given full mutual knowledge of our respective origins, that Martin’s personal identity belongs to me and not him.
I don’t intend these vague outlines to be an actual competing conception of personal identity; I have no idea what the real answer is. My core argument is that any theory that renders death-and-replacement functionally equivalent to normal life is unsatisfactory. You did inspire me to check out Reasons and Persons from the library; I hope I’m proven wrong by some thought experiment, and also that I’m not about to die.
Great post. #9 is interesting because the inverse might also be true, making your idea even stronger: maybe a great thing you can do for the short term is to make the long term go well. X-risk interventions naturally overlap with maintaining societal stability, because 1) a rational global order founded on peace and mutual understanding, which relatively speaking we have today more than ever before, reduces the probability of global catastrophes; and, less convincingly, 2) a catastrophe that nevertheless doesn’t kill everyone would indefinitely set the remaining population back to square one for all neartermist cause areas. Maintaining the global stability we’ve enjoyed since the World Wars is a necessary precondition for the concurrent vast improvements in global health and poverty, and it seems like a great bulk of X-risk work boils down to that. Your #2 is also relevant.
I think this is pretty strong evidence that Holden and Parfit are p-zombies :)
If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It’d be chill.
...
If that’s right, “constant replacement” could join a number of other ideas that feel so radically alien (for many) that they must be “impossible to live with,” but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)
These contradict each other. Let’s say, like you imagined in an earlier post, that one day I’ll be able to become a digital person by destroying my physical body in a futuristic brain-scanning process. It’s pretty obvious that the connected conscious experience I’ve (I hope!) experienced my whole life would, at that transition, come to an end. Whether or not it counts as me dying, and whether this new person ‘is’ me, are to some extent just semantics. But your and Parfit’s position seems to define away the basic idea of personal identity just to solve its problems. My lifelong connected conscious awareness would undeniably cease to exist; the awareness that was me would enter the inky nothingness. The fact that my clone is walking and talking is completely orthogonal to this basic reality.
So if I tried to live with this idea “for a full week”, except that at the end of the week I knew I’d be shot and replaced, I’d be freaking out, and I think you would be too. Any satisfactory theory of personal identity has to avoid equating death with age-related change. I should read Reasons and Persons, but none of the paradoxes you link to undermine this ‘connected consciousness’ idea of personal identity (which differs from what Bernard Williams—and maybe Parfit?—would call psychological continuity). As I understand it, psychological continuity allows for any given awareness to end permanently as long as it’s replaced somewhere, but what I’m naively calling ‘connected consciousness’ doesn’t allow this.
Another way of putting it: in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think for most non-religious people, death is primarily undesirable because I don’t want to sleep forever!
Yea, WBE risk seems relatively neglected, maybe because of the really high expectations for AI research in this community. The only article I know of that discusses it is this paper by Anders Sandberg from FHI. He makes the interesting point that the same incentives that permit animal testing in today’s world could easily lead to WBE suffering. In terms of preventing suffering, his main takeaway is:
Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.
The other best practices he mentions, like perfectly blocking pain receptors, would be helpful but only become a real solution with a better theory of suffering.
This is a great post and the most passionate defense I’ve seen of something like ‘improving institutional decision-making’, but broader, being an underrated cause area. I’m sympathetic to your ideas on the importance of good leadership, and to the lack of it (along with low-trust, low-coordination environments more generally) as a plausible root cause behind many of the problems EAs care about most. However, I don’t think this post has the evidence to support your key conclusions, beyond the general intuition that leadership is important.
Some of your thoughts:
If you want to have maximum impact you typically want to focus on leadership and governance. Most solvable problems in the world are really leadership and governance problems at their core.
If you want that impact to be lasting, you should focus on building organizations, institutions, or ecosystems that endure over time.
If you are trying to positively impact any group or initiative, leadership is most often your point of maximum leverage.
Corruption is Mexico’s one fundamental problem.
Note the last point isn’t a key conclusion, but it is illustrative of the lack of evidence in this post. Is corruption Mexico’s fundamental problem? The IADB report pretty convincingly argues that societal trust is vital to economic development, and it’s your best piece of evidence. But it doesn’t argue that trust is the most important (or most fundamental) factor, especially outside of Latin America, as opposed to things like effective institutions or more basic economic factors. And note that it indicates that Mexico has the second-highest level of trust in Latin America. Trust isn’t the same as lack of corruption, which isn’t the same as leadership/governance; they’re all related, but the conflation leaves me confused as to what specifically you’re arguing for.
The rest of your points are huge claims, but other than the IADB report your evidence seems to be the blog post about the divergence between Haiti and the DR, plus your list of real-world examples. The post about Haiti is suggestive, but it’s a fundamentally limited example, being the history of one small, idiosyncratic country. It discusses the corruption of the Duvaliers, but also a host of other factors, and it furthermore argues that the divergence began decades before François came to power. So corruption-versus-trust isn’t the slam-dunk takeaway that it would need to be to even start thinking about generalizing from Haiti to the world.
Your list of places where ecosystem-building “actually [is] already working” is DARPA, a building at MIT, a math team, and a bunch of clubs. Regarding evidence of their cost-effective impact relative to the current EA paradigm, I’ll give you the first three, which are your “building ecosystems on a limited budget” category. But again, this doesn’t get us far beyond the general intuition that everyone already agrees with, that good leadership is good.
It’s true that the best interventions can often only be identified with hindsight, but that’s less applicable to meta-level criticisms of EA like yours. There are a lot of wonderful-sounding ideas like ecosystem-building out there that hit all the right intuitions and are hard to explicitly argue against. But should EA make this pivot? That question needs more evidence than what’s in this post.