Indestructibly optimistic.
Dem0sthenes
Isn't the greatest refuge what Elon is planning for Mars? That's a separate track, though we should be working to expedite it and ensure that civilizational plan B is operational ASAP.
Where is the evidence for this claim? This all seems like reason and words :P
Yes that makes sense and aligns with my thinking as well. Do you have a sense of how much the EA community gives to AI vs nuclear vs bioweapon existential risks? Or how to go about figuring that out?
Are we already past the precipice? Would we know if that was the case? I really wonder. Things seem to be spiraling. Many of today's conditions mirror the malaise and misery of the 1930s, but the capacity for destruction is orders of magnitude higher. I wonder if we need to act with much greater urgency and boldness to be more proactive about x-risk…
It's perhaps not that surprising that this specific counterpoint, calling for a ton more analysis, comes from a student still in academia / undergrad…
The intra-EA jargon is strong with this one, young padawan(s).
What's the Future Fund?
I would like to address the cyberbullying downvotes on my comment here: https://forum.effectivealtruism.org/posts/jhCGX8Gwq44TmyPJv/ea-s-culture-and-thinking-are-severely-limiting-its-impact?commentId=WNqs7wH8WBEQq6xjb#WNqs7wH8WBEQq6xjb
Saying "it might be worthwhile to consider the source" is very different from an "ad hominem" attack. Honestly, the norm of never considering the source ignores the very practical cui bono school of public discourse and really just speaks again to EA's implicit academic naivete and assumptions.
Respectfully, I must inquire: what is the actual personal attack that I have, most egregiously I might add, been accused of making? There's nothing wrong with being an undergrad; that is a value-neutral statement. Just saying consider the source, my internet fren. No attack intended, and labeling it as such to pile onto a mob of downvotes strikes me as a bit of cyberbullying, perhaps most accurately described as "white knighting," to use the parlance of the Very Online crowd. I must say, though, I'm a little hurt and honestly a bit offended by your behavior. I would urge you to reflect on the moral certainty implicit in this interaction and what it says about the EA community.
Hacker News is more adjacent (digitally native, young, affluent) than you might think. It is a bit different from EA, but not as different as, say, the crowd in a Teamsters union hall or on an oil derrick in the North Sea.
What did you find specifically funny?
Lol, full alignment of the state is not exactly a historically proud tradition… there have been several notable efforts to align all the various sectors of state and society for greater capacity, strengthening all involved, like how many small sticks bound together form a much more robust whole.
What's your evidence for EA being a big tent? Has there been a survey of new EA members and their perceptions? Focus groups? Other qualitative research? Curious about the basis for your claims. Thanks much!
These types of questions deserve more focused dialogue and debate.
Shouldn’t it be possible to do a simple chart of ballpark funding to longtermist versus neartermist causes over time from EA aligned orgs?
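For what it's worth, a chart like that is only a few lines of pandas once the grant data is assembled. A minimal sketch, where every number and category label below is a made-up placeholder (real inputs would come from public grant databases, which the original question leaves unspecified):

```python
import pandas as pd

# Illustrative placeholder data only -- NOT real funding figures.
grants = pd.DataFrame(
    [
        (2019, "longtermist", 10_000_000),
        (2019, "neartermist", 40_000_000),
        (2020, "longtermist", 25_000_000),
        (2020, "neartermist", 45_000_000),
        (2021, "longtermist", 60_000_000),
        (2021, "neartermist", 50_000_000),
    ],
    columns=["year", "cause_area", "amount_usd"],
)

# Sum grants per year per cause area, one column per cause area.
by_year = grants.pivot_table(
    index="year", columns="cause_area", values="amount_usd", aggfunc="sum"
)
print(by_year)
# With matplotlib installed, by_year.plot(kind="bar") renders the chart.
```

The hard part isn't the plotting, it's agreeing on which orgs count as "EA aligned" and which grants count as longtermist versus neartermist.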
This comment from another post seems very apropos for this discussion:
“You can read ungodly reams of essays defining effective altruism—which makes me wonder if the people who wrote them think that they are creating the greatest possible utility by using their time that way ”
Yes, 1000% on the cultural factors that have desensitized us to nuclear risk. Tyler Cowen has a nice series of posts out today on this subject: https://marginalrevolution.com/marginalrevolution/2022/08/which-is-the-hingy-est-century.html
I might find time to go back and reread The Precipice and dig into the probabilities you reference; those seem odd. It's also odd because something that reduces humanity to subsistence levels for a very long time and eliminates ninety-some percent of the population is absolutely catastrophic. I suppose I'm a hyperbolic discounter at heart, and I do think that while we should care about the far future, it's really silly to get into the 1:1 logic that a human a billion years from now should be valued equally, for decision-making purposes, with one today or ten years from now.
Thanks for sharing. I’ll check out your post.
Do you know how one might get another copy of The Precipice? I donated mine to a friend.
Thank you for the very thorough and well researched essay.
Hi Stephen! Thanks for the post. What are the typical frameworks you use to think about existential threats? Sometimes, for instance, we use probabilities to describe the chance of, say, nuclear Armageddon, though that seems a bit off from a frequentist philosophical perspective: that type of event either happens or it doesn't. We can't run 100 high-fidelity Earth simulations, count the various outcomes, and then calculate the probability of various catastrophes. I work with data in my day job, so these types of questions are top of mind.
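To make the contrast in that question concrete: a frequentist probability is a long-run relative frequency, which only exists under repetition, while the usual fallback for one-off events is the Bayesian reading, a degree of belief updated on evidence. A toy sketch of the two readings, where the "true rate" and the evidence counts are purely made-up illustrations:

```python
import random

random.seed(0)

# Frequentist reading: with many repeatable trials, the observed
# frequency converges to the underlying rate (here a made-up 0.3)...
true_rate = 0.3
trials = [random.random() < true_rate for _ in range(100_000)]
freq_estimate = sum(trials) / len(trials)

# ...but a one-off event yields a single 0 or 1, from which no
# frequency can be recovered. The Bayesian reading instead assigns a
# credence: start from a prior and update on whatever evidence exists.
# Here: warning signs observed in k of n loosely analogous historical
# episodes -- both counts are illustrative stand-ins, not real data.
alpha, beta = 1, 1                        # uniform Beta(1, 1) prior
k, n = 2, 40                              # hypothetical evidence
posterior_mean = (alpha + k) / (alpha + beta + n)

print(f"frequentist estimate from repetition: {freq_estimate:.3f}")
print(f"Bayesian posterior credence:          {posterior_mean:.4f}")
```

The point of the sketch is just that the first estimate is meaningless without repeated trials, whereas the second is well-defined even for an event that can only happen once.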