Currently I’m at METR (formerly ARC Evals).
Previously founded Global Challenges Project and EA Virtual Programs.
In general it doesn’t seem logical to me to bucket cause areas as either “longtermist” or “neartermist”.
I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:
Are you longtermist?
If so, prioritize AI safety, maybe other x-risks, and maybe global catastrophic risks
If not, prioritize global health or factory farming depending on your view on how much non-human animals matter compared to humans
But really the situation is way more complicated than this, and I don’t think the simplification is accurate enough to be worth spreading.
There was a time when I thought ending factory farming was the highest priority, motivated by a longtermist worldview.
There was also a time when I thought bio-risk reduction was the highest priority, motivated by a neartermist worldview.
(Now I think AI-risk reduction is the highest priority regardless of what I think about longtermism.)
When thinking through cause prioritization, I think most EAs (including me) over-emphasize the importance of philosophical considerations like longtermism or speciesism, and under-emphasize the importance of empirical considerations like AI timelines, how much effort it would take to make bio-weapons obsolete, or what diseases cause the most intense suffering.
Yeah, it’s James and me, funded by EAIF.
In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, and this seems to be because of the objections that come up in response to ‘GiveWell-style EA’.
That is very helpful – thank you, EdoArad!
(and I’ll be sure to update you on how our program turns out)
Thank you so much!
I agree and am adding this to our list of types of projects to suggest to students :)
Thank you Brian!
We have considered this, and have it as part of our “funnel”, but still think there is room for a projects program like this in addition.
I also like the idea of EA Uni groups encouraging interested members to start these other (EA-related) student groups you mention (Alt Protein group, OFTW, and GRC). At Brown, we already have OFTW and GRC, and I’m in the process of getting some students from Brown EA to start an Alt Protein group as well :)
This is really cool! Thank you for doing this!
Also, I’m curious: to what extent is AI safety discussed in your group?
I noticed the cover of Superintelligence has a quote of Bill Gates saying “I highly recommend this book” and I’m curious if AI safety is something Microsoft employees discuss often.
I do think there is a good case that interventions aimed at improving the existential risk profile of a post-disaster civilization could be competitive with interventions aimed at improving the existential risk profile of our current civilization.
I’d love to hear more about this and see any other places where this is discussed.
(I’m only addressing a small part of your question, not the main question)
When we are looking at the potential branches in the future, should you make the choice that will lead you to the cluster of outcomes with the highest average utility or to the cluster with the highest possible utility?
I’d say the one with the highest average utility if they are all equally likely. Basically, go with the one with the highest expected value.
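As a toy illustration (the numbers here are made up): suppose cluster A contains two equally likely outcomes each with utility 10, while cluster B contains outcomes with utility 100 and -90, also equally likely. Then

$$\mathbb{E}[U_A] = 0.5 \cdot 10 + 0.5 \cdot 10 = 10, \qquad \mathbb{E}[U_B] = 0.5 \cdot 100 + 0.5 \cdot (-90) = 5,$$

so you’d choose cluster A, even though cluster B contains the single highest-utility outcome.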
What do you think are the most likely ways that plant-based and cell-based products might both fail to significantly replace factory farmed products?
Sounds very exciting!
And it seems like there is some overlap with EA Uni group fellowships, so I would be happy to talk to you about those if you want, although it may be better to talk to the community builders more involved in syllabus writing. (See this Intro Fellowship I’m running at Brown EA.)
Hi Max,
I’m curious how big you are thinking this “EA curriculum” might be. Are you thinking of something similar to an EA Uni group fellowship (usually ~4 hours/week for ~8 weeks) or something much larger?
I agree with Marisa
Rather than a single body of knowledge being a standard education for EAs, I like the fellowship structure that many EA Uni groups use.
For me, one of the main goals in running these fellowships is to expose students to enough EA ideas and discussions to decide for themselves what knowledge and skills they want to build up in order to do good better. For some people, this will involve economics, statistics, and decision analysis; for others, it will look totally different.
(For fellowship syllabus examples you can check out this Intro Fellowship I’m running at Brown EA, and this In-Depth Fellowship run by EA Oxford).
Also the EA Oxford In-Depth Fellowship
Thanks so much for your response Ross!
The values for Table 1 on reduction in far future potential were obtained from a survey of existential risk researchers at EA Global 2018 (see methods).
Yeah, that makes sense – I was just curious whether the reasoning in the introduction came from those who filled out the survey. Thanks for clarifying!
Surviving the new environment might also favour the development of stable yet repressive social structures that would prevent rebuilding of civilization to previous levels. This could be facilitated by dominant groups having technology of the previous civilization.
Very interesting and makes sense—thank you!
I have two questions/clarifications:
(1) Regarding:
Reasons that civilization might not recover include: …
Are the reasons mentioned in this section what leads to the estimated reduction in far future potential in Table 1? Or are there other reasons that play into those estimates as well?
(2) Regarding:
Another way to far future impact is the trauma associated with the catastrophe making future catastrophes more likely, e.g. global totalitarianism (Bostrom & Cirkovic, 2008)
Intuitively I feel that the trauma associated with the catastrophe would make people prioritize GCR mitigation and thereby make future catastrophes less likely. Or is the worry that something like global totalitarianism would happen precisely in the name of GCR mitigation?
I’d be curious to hear more thoughts on this, but also I haven’t read the book cited there, so maybe that would clear up my confusion :)
Thank you for all your great work on this—super exciting!
I am wondering why you say that “Human reconstruction will be beneficial to the next civilization.”
I think it would be great if we could leave messages to a future non-human civilization to help them achieve a grand future and reduce their x-risk (by learning from our mistakes, for example). But I don’t feel that human reconstruction is particularly important.
If anything, I worry that this future advanced civilization might reconstruct humans to enslave us. And if they are not the type to enslave us, then I feel pretty good about them existing and homo sapiens not existing.
I mostly want to +1 Jonas’ comment and share my general sentiment here, which overall is that this whole situation makes me feel very sad. I feel sad for the distress and pain this has caused to everyone involved.
I’d also feel sad if people viewed Owen here as having anything like a stereotypical sexual predator personality.
My sense is that Owen cares extraordinarily about not hurting others.
It seems to me like this problematic behavior came from a very different source – basically poor theory of mind and underestimating power dynamics. Owen can speak for himself on this; I’m just noting, as someone who knows him, that I hope people can read his reflections with an open mind and genuinely try to understand him.
That doesn’t make Owen’s actions ok – they definitely weren’t – but it does make me hopeful and optimistic that Owen has learnt from his mistakes and will be able to tread cautiously and not cause problems of this sort again.
Personally, I hope Owen can be involved in the community again soon.
[Edited to add: I’m not at all confident here and just sharing my perspective based on my (limited) experience. I don’t think people should give my opinion/judgment much weight. I haven’t engaged at all deeply in understanding this, and don’t plan to engage more]