As you say, whether proton decay will happen seems to be an open question. If you’re feeling highly confident you could knock off another couple of zeroes to represent that credence and still end up with a number that eclipses everything else.
I find this argument unconvincing. The vast majority of ‘simulations’ humans run are very unlike our actual history. The modal simulated entity to date is probably an NPC from World of Warcraft, a zergling from Starcraft or similar. This makes it incredibly speculative to imagine what our supposed simulators might be like, what resources they might have available and what their motivations might be.
Also the vast majority of ‘simulations’ focus on ‘exciting’ moments—pitched Team Fortress battles, epic RPG narratives, or at least active interaction with the simulators. If you and your workmates are just tapping away at your keyboards in your office doing theoretical existential risk research, the probability that someone like us has spent their precious resources to (re)create you seems radically lower than if you’re (say) fighting a pitched battle.
If we’re looking at upper bounds, even the Stelliferous Era is highly conservative. The Black Hole Era could last up to 10<sup>100</sup> years (https://en.wikipedia.org/wiki/The_Five_Ages_of_the_Universe), and it’s at least conceivable under known physics that we could farm black holes’ rotational energy or, still more speculatively, their Hawking radiation.
The usual Pascalian reasoning applies: this would allow such a ridiculously large number of person-years that, even with an implausibly low credence in its possibility, the expectation dwarfs that of the whole Stelliferous Era.
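To illustrate the arithmetic with purely made-up numbers (these are not estimates from the post or from the Wikipedia page above): suppose the Stelliferous Era could support on the order of 10<sup>40</sup> person-years, black hole farming could support on the order of 10<sup>90</sup>, and your credence that the latter is physically achievable is as low as 10<sup>-30</sup>. The expectation still dominates:

$$10^{-30} \times 10^{90} = 10^{60} \gg 10^{40}$$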
[Question] What’s happening with the EVF trustee recruitment?
I agree—but that’s why it stuck in my mind so strongly. I remember thinking how incongruous it was at the time.
I have a distinct memory, albeit one which could plausibly be false, of Eliezer once stating that he was ‘100% sure that nonhuman animals aren’t conscious’ because of his model of consciousness. If he said it, it’s now been taken down from whichever site it appeared on. I’m now genuinely curious whether anyone else remembers this (or some actual exchange on which my psyche might have based it).
I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an “interesting idea generator”, as long as you treat said ideas with a very skeptical eye.
Fwiw I think the ‘rule thinkers in, not out’ philosophy popular in EA and rat circles has itself been quite harmful. Yeah, there’s some variance in how good extremely smart people are at coming up with original takes, but for the demonstrably smart people I think ‘interesting idea generation’ is more a case of ‘we can see them reasoning hypercarefully and scrupulously about their area of competence almost all the time they speak on it; sometimes they also come up with genuinely novel ideas, and when those ideas are outside their realm of expertise maybe they slightly underresearch and overindex on them’. I’m thinking here of uncontroversially great thinkers like Feynman, Einstein, and Newton, as well as more controversially great thinkers like Bryan Caplan and Elon Musk.
There is an opportunity cost to noise, and that cost is higher to a community the louder and more prominently it’s broadcast within that community. You, the OP and many others have gone to substantial lengths to debunk views EY has almost casually thrown out that, as others have said, have made their way into rat circles almost unquestioned. Yet the cycle keeps repeating because ‘interesting idea generation’ is given so much weight.
Meanwhile, there are many more good ideas than there is bandwidth to look into them. In practice, this means for every bad idea a Yudkowsky or Hanson overconfidently throws out, some reasonable idea generated by someone more scrupulous but less good at self-marketing gets lost.
to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.
In the proposals discussed, was the idea that non-AI-related causes would decrease the share of support they received from current levels? Or would, eg, the EAG replacement process be offset by making one of the others non-AI focused (or by increasing the amount of support those causes receive in some other way)?
There was significant disagreement whether 80k (which was chosen as a concrete example to shed light on a more general question that many meta-orgs run into) should be more explicit about its focus on longtermism/existential risk.
I have to say, this really worries me. It seems like it should be self-evident, after FTX and all the subsequent focus on honesty and virtue, that EA organisations should be as transparent as possible about their motivations. Do we know what the rationale of the people who disagreed was?
[Question] Has anyone looked at collaborating with software & data science bootcamps re their final projects?
This seems like a self-fulfilling prophecy. If we never put effort into building a community around ways to reduce global poverty, we’ll never know what value they could have generated.
Also it seems a priori really implausible that longtermists could usefully do more things in their sphere alone than EAs focusing on the whole of the rest of EA-concern-space could.
Community building can be nonspecific, where you try to build a group of people who have some common interest (such as something under big tent EA), or specific, where you try to get people who are working on some specific thing (such as working on AI/longtermist projects, or moving in that direction). My sense is that (per the OP) community builders are being pressured to do the latter.
Sorry—the latter.
I realise I didn’t make this distinction, so I’m shifting the goalposts slightly, but I think it’s worth distinguishing between ‘direct work’ organisations and EA infrastructure. It seems pretty clear from the OP that the latter is being strongly encouraged to primarily support EA/longtermist work.
At this point, I have no idea what to believe. I don’t know if this is the case of the doomiest voices being the loudest, while the world is actually populated with academics, programmers and researchers who form the silent, unconcerned majority—or whether we genuinely are all screwed.
My sense is that this is broadly true, at least in the sense of ‘unconcerned’ meaning ‘have a credence in AI doom of max 10% and often lower’. All the programmers and academics working on AI presumably don’t think they’re accelerating or enabling the apocalypse, otherwise they could stop—these are highly skilled software engineers who would have no trouble finding a new job.
Also, every coder in the world knows that programs often don’t do what you think they’re going to do, and that as much as possible you have to break them into components and check rigorously in lab conditions how they behave before putting them in the wild. Of course there are bugs that get through in every program, and of course there are ways that this whole picture could be wrong. Nonetheless, it gives me a much more optimistic sense of ‘number of people (implicitly) working on AI safety’ than many people in the EA movement seem to have.
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss.
I wonder how this would look different from the current status quo:
Wytham Abbey cost £15m, and its site advertises it as being primarily for AI/x-risk use (as far as I can see it doesn’t advertise what it’s been used for to date)
Projects already seem to be highly preferentially supported based on how longtermist/AI-themed they are. I recently had a conversation with someone at OpenPhil in which, if I understood/remembered correctly, they said the proportion of OP funding going to nonlongtermist stuff was about 10%. [ETA sounds like this is wrong]
The global health and development fund seems to have been discontinued. The infrastructure fund, I’ve heard on the grapevine, strongly prioritises projects with a longtermist/AI focus. The other major source of money in the EA space is the Survival and Flourishing Fund, which lists its goal as ‘to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing’. The Nonlinear Network is also explicitly focused on AI safety, and the metacharity fund is nonspecific. The only EA-facing fund I know of that excludes longtermist concerns is the animal welfare one. Obviously there’s also GiveWell, but they’re not really part of the EA movement, inasmuch as they only support existing and already-well-developed-and-evidenced charities and not EA startups/projects/infrastructure like the other funding groups mentioned do.
These three posts by very prominent EAs all make the claim that we should basically stop talking about EA and/or longtermism and just tell people they’re highly likely to die from AI (thus guiding them to ignore the—to my mind comparable—risks that they might die from supervolcanoes, natural or weakly engineered pandemics, nuclear war, great power war, and all the other stuff that longtermists uniquely would consider to be of much lesser importance because of the lower extinction risk).
And anecdotally, I share the OP’s experience that AI risk dominates EA discussion at EA cocktail parties.
To me this picture makes everything but AI safety already look like an afterthought.
Great to see more independent actors moving into this space! Is there any interaction between MCF and the Nonlinear Network?
Two people downvoted this comment O_o
Could someone say why they downvoted this comment?
One way for us to find that out would be for the person who was sent the memo and thought it was a silly idea to make themselves known, and show the evidence that they shot it down or at least assert publicly that they didn’t encourage it.
Since there seems to be little downside to them doing so if that’s what happened, if no-one makes such a statement we should increase our credence that they were seriously entertaining it.
I mainly had in mind Pablo’s summary. It’s been a long time since I read Brian’s essay, and I don’t have bandwidth to review it now, so if he says something substantially different there, my argument might not apply. But basically every argument I remember hearing about how the simulation argument implies we should modify our behaviour presupposes that we have some level of inferential knowledge of our simulators (this presupposition being hidden in the assumption that simulations would be primarily ancestor simulations). This presupposition seems basically false to me, because, for example:
a. A zergling would struggle to gain much inferential knowledge of its simulators’ motivations.
b. A zergling looking around at the scope and complexity of its universe would typically observe that it itself is 2-dimensional (albeit with some quasi-3D properties), and is made from approx 38x94 ‘atoms’. Perhaps more advanced simulations would both be more numerous (and hence a higher proportion of simulationspace) and more complex, but it still seems hard to imagine they’ll average to anything like the same level of complexity as we see in our universe, or have a consistent difference from it.
c. If the simulation argument is correct for a single layer of reality, it seems (to the degree permitted by a and b) far more likely that it’s correct for multiple, perhaps vast numbers of layers of reality (insert ‘spawn more Overlords’ joke here). Thus the people whose decisions and motivations a zergling is ultimately trying to guess at are not us, but someone whose distance from us is approx n(|human - zergling|), where n is the number of layers. It’s hard to imagine the zergling—or us—could make any intelligible assumptions at all about them at that level of removal.
To show this in Pablo’s argument:
For this to be ‘plausible’ is to assert that we know our simulators’ motivations well enough to know that whatever they hoped to gain by running us will ‘plausibly’ be motivating enough for them to do it a second time in much the same form, and that their simulators will at least permit it, and so on.
Another version of the anti-x-risk argument from simulation I’ve heard (and which I confess, with hindsight, I was conflating Pablo’s with—maybe it’s part of Brian’s argument?) is that the simulators will likely switch off our universe if it expands beyond a certain size due to resource constraints. Again, this argument implies IMO vastly too much confidence in both their motivations and their resource limits.