@Toby Tremlett🔹 is there a way to see the final debate week banner? I wanted to include a screenshot in the slides for my local group’s next meetup, but can’t find a way to access the banner now that debate week is over.
joko
South German EA Retreat
I appreciate the context, thank you. However, two points came to mind:
It seems like the purpose is quite different from that of the medium-sized university department you described: running workshops and retreats, rather than what standard academic departments do (offices, lecture halls, labs). So I'm not sure how good the comparison is.
You point out that in the context of university buildings, it's not a lot of money. But in the context of CEA's other spending, it does seem like a lot: CEA received $14 million in funding from FTX[1], which has been discussed extensively. So it's understandable that spending a supposedly similar amount of money on a single venue, without much public explanation, will raise some eyebrows.
Either way, I don't think anyone can really judge whether the investment was a good decision based on the currently available information, which is why I'd appreciate a more detailed explanation from CEA.
- ^
Number taken from the wiki entry on CEA. I chose this comparison because I couldn't immediately find recent figures for CEA's total spending, but I assume that $14 million is a significant portion of it.
EA Pubquiz
Workshop: Doing Good Better with your Career
Charity Donation Giving Game
Doing Good Better: An Introduction to Effective Altruism
Thank you for this post; I'm looking forward to the other parts of the series! I enjoy this format of explaining how EA came to care about a specific cause area and what shaped the current understanding of the topic. I'd be interested in more "history of [cause area / some aspect of EA culture]" posts.
What is the main idea this video is trying to convey? Based on the title and description, I assumed the goal was to introduce key ideas of longtermism and x-risks and to promote WWOTF. It did the latter, but I don't think it presents longtermist ideas very clearly.
Earlier today, I watched the video with a couple of friends who had never heard of longtermism or x-risks before. It did not do a good job of sparking discussion. When we talked about the video afterwards, the main takeaways were something like:
civilizations have collapsed before
if it happens again, we will most likely recover
to make sure that we actually recover, we should stop burning coal (though everyone was already convinced of this by climate change arguments)
my friends were mostly confused about what longtermism is and how it relates to EA
Afterwards, I suggested reading Will's guest essay in the NYT. My impression is that the article got my friends a lot more excited about reading WWOTF and resolved their confusion about longtermism and EA. In the future, I will definitely send people the NYT article as an introduction to longtermism, or this WWOTF book review by Ali Abdaal for those who really prefer watching videos.
What concerns do you think the Mechanize founders haven't considered? I haven't engaged with their work that much, but they seem to have been part of the AI safety debate for years, with plenty of discussion on this Forum and elsewhere (e.g. I can't think of many AIS people who have been as active on this Forum as @Matthew_Barnett over the last few years). I feel like they have already communicated their models and disagreements a (more than) fair amount, so I'm not sure what you would expect further discussion to change.