Find more about me (world views, things I recommend): https://bit.ly/evanderhammer-website
My LinkedIn: https://www.linkedin.com/in/evander-hammer
Evander H. 🔸
AI Safety Collab 2025 Summer—Local Organizer Sign-ups Open
AGI by 2028 is more likely than not
I think we should focus on short timelines, though I don't think they are the most likely scenario. Most likely, imo, is a delay of maybe two years.
Consequentialists should be strong longtermists
It just makes sense theoretically. In practice it doesn't matter much, e.g. RSI and loss of control are near-term risks.
Bioweapons are an existential risk
Mainly thinking about A(G)I engineered bioweapons.
AI Safety Collab 2025 - Local Organizer Sign-ups Open
AI Safety Collab 2025 - Feedback on Plans & Expression of Interest
I assume that the primary goal is to reduce extreme suffering or negative experiences. Based on the evidence I’ve reviewed, efforts to alleviate suffering in factory farming appear to be far more cost-effective in achieving this goal.
I don’t see compelling evidence that improvements in global health significantly enhance worldwide peace and security, which could potentially reduce existential risks from advanced AI. This connection would have been, in my view, the strongest argument for prioritizing global health interventions.
While I believe global health initiatives should never be completely abandoned—as they demonstrate tangible success—I generally consider existential risk mitigation and reducing extreme animal suffering to be significantly higher priorities. In my assessment, these areas are at least 10 times more promising than global health interventions, and potentially far greater.
@peterhartree Feel free to ignore: Are there any updates to your workflow for listening to G Docs or PDFs? The post is now roughly 1 year old.
- I have been using Speechify for a year now. I think it's decent, but the UI and the frequent crashes are a bit annoying. So I just wanted to see if there are better products out there :)
I really liked the discussion week on PauseAI. I'd like to see another one on this topic, incorporating the new developments in reasoning and evidence.
When?
Probably there are other topics that haven't had a week yet, so they should be prioritized. I think PauseAI is one of the most important topics. So, maybe in the next 3–9 months?
Quick note/warning about Speechify:
I’ve been using it for about 6 months.
I've hit several bugs using the app, and customer support couldn't help. I spent a total of about 2 hours trying to fix them, but nothing worked.
The main problem was that I could not use it outside my home. Every 2 minutes or so it would detect “poor internet connection” and shut down. This is extremely annoying when you’re trying to listen to a newspaper while walking.
Hopefully they will fix this. I still use it, but only occasionally.
FYI: The template is in your bin, so it will be permanently deleted soonish. I find it helpful, so maybe remove it from the bin :)
My template I used in the past: https://docs.google.com/document/d/1o2_Jffq_Qo_tkwVGlW43_bFTpMocJqIktsRm5YaYt9c/edit#heading=h.bjl1ckl27qyh
Cool that there are still meetups in Braunschweig. Keep up the good energy :)
Experience regarding discount
I tried the 50% discount, but they increased the original price to $200, so I had to pay $100. I messaged support, and after several weeks they finally refunded $30. So the discount works, but you may have to contact support :)
German EA Intro Program Report—Summer 2023 (and before)
Thanks for doing this Alex :)
Just for your info: This link isn’t working anymore: https://ankiweb.net/shared/info/1742469645
I, personally, would appreciate having the option to download the cards directly into Anki.
ML4G Germany—AI Alignment Camp
Cool that this is happening. I'm excited about EA Braunschweig :)
90% agree
The consciousness argument:
Consciousness and qualia are the only things I’m 100% certain exist. Here’s the key insight: when you fully describe any conscious state, you must include objective properties—its spatial extent, colors, sounds, and crucially, how much suffering or happiness it contains.
This isn’t a matter of opinion. Suffering in a conscious experience is as objectively real as the color red in that same experience. It’s a fundamental feature of reality, not something that becomes bad because someone judges it to be bad.
The personal identity argument:
Derek Parfit’s work on personal identity strengthens this view. If there are no robust, continuous “persons” in the traditional sense—if we’re more like streams of consciousness than persistent selves—then subjective preferences become philosophically problematic. Whose preferences? What grounds them?
But conscious experiences of suffering and joy remain objectively real features of the universe, independent of any “person” having preferences about them.
Why this supports objective morality:
Among contemporary intellectuals, Sam Harris articulates this position best in “The Moral Landscape.” Consciousness creates objective facts about well-being. Some conscious states are objectively better or worse than others, based on the suffering or flourishing they contain.
Philosophical honesty:
I remain somewhat nihilistic because ultimate justification hits bedrock—we can’t justify foundational assumptions infinitely. But for practical purposes, consciousness grounds the only objective moral facts we need.
(My rough credences: 99% nihilism about most moral claims, 0.9% hedonistic realism, 0.09% other anti-realism, 0.01% non-hedonistic realism)