Volunteer organizer @ EA Belgium, YouTuber @ A Happier World.
My name is pronounced roughly as "yeroon" (IPA: /jəˈrun/). You can leave anonymous feedback here: https://admonymous.co/jeroen_w
It's OK to eat honey
I try to avoid it, but it's hard for me to believe it's as bad or worse than most animal products. Especially in the quantities it's usually consumed. Who eats a kg of honey per year? I do think the treatment of bees is very unclear. But I've also heard that some non-animal products involve a lot of insects, like avocados, so I'm curious how it compares.
I checked parts of the study, and the 0.12% figure is for P(AI-caused existential catastrophe by 2100) according to the "AI skeptics". This is what is written about the definition of existential catastrophe just before it:
Participants made an initial forecast on the core question they disagreed about (we'll call this U, for "ultimate question"): by 2100, will AI cause an existential catastrophe? We defined "existential catastrophe" as an event in which at least one of the following occurs:
Humanity goes extinct
Humanity experiences "unrecoverable collapse," which means either:
<$1 trillion global GDP annually [in 2022 dollars] for at least a million years (continuously), beginning before 2100; or
Human population remains below 1 million for at least a million years (continuously), beginning before 2100.
That sounds similar to the classic existential risk definition?
(Another thing that's important to note is that the study specifically sought forecasters skeptical of AI. So it doesn't tell us much, if anything, about what a group of random superforecasters would actually predict!)
I am very, very surprised your "second bucket" contains the possibility of humans potentially having nice lives! I suspect if you had asked me the definition of p(doom) before I read your initial comment, I would actually have mentioned the definition of existential risks that includes the permanent destruction of future potential. But I simply never took that second part seriously? Hence my initial confusion. I just assumed disempowerment or a loss of control would lead to literal extinction anyway, and that most people shared this assumption. In retrospect, that was probably naive of me. Now I'm genuinely curious how much of people's p(doom) estimates comes from literal extinction versus other scenarios...
Interesting, I thought p(doom) was about literal extinction? If it also refers to unrecoverable collapse, then I'm really surprised that takes up 15-30% of your potential scenarios! I always saw that part of the existential risk definition as negligible.
You're right that this is an important distinction to make.
You make a fair point, but what other tool do we have than our voice? I've read Matthew's last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. But what often puzzles me with some AI folks is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?
Perhaps instead of trying to change someone's moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to "common sense morality" because I'm just not certain enough.
I don't have strong feelings on how to best tackle this. I won't have good answers to any questions. I'm just voicing concern and hoping others with more expertise might consider engaging constructively.
Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.
On top of mentioning a specific opportunity, I think this post makes a great case in general for considering work like this (great wage & benefits, little experience necessary, somewhat mundane, shift work). I do feel a bit uncomfortable, though, about the part where you mention using personal sway to influence the hiring process, as this could undermine fair hiring practices. But I could be overreacting.
Thanks for sharing this. While I personally believe the shift in focus to AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with, and agree with, many of the other concerns you shared (especially LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand if, in your situation, you don't want to interact with the community anymore, I just want to share that I believe your voice is really important and I hope you continue to engage with EA! I wouldn't want the movement to discourage anyone who shares its principles (like "let's use our time and resources to help others the most"), but disagrees with how it's being put into practice, from actively participating.
I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.
Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.
I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company that ultimately increases AI capabilities rather than safeguarding humanity's future. But this time, we have a real opportunity for impact before it's too late. I believe this project could potentially accelerate capabilities, increasing the odds of an existential catastrophe.
I've already reached out to the founders on X, but perhaps there are people more qualified than me who could speak with them about these concerns. In my tweets to them, I expressed worry about how this project could speed up AI development timelines, asked for a detailed write-up explaining why they believe this approach is net positive and low risk, and suggested an open debate on the EA Forum. While their vision of abundance sounds appealing, rushing toward it might increase the chance we never reach it due to misaligned systems.
I personally don't have a lot of energy or capacity to work on this right now, nor do I think I have the required expertise, so I hope that others will pick up the slack. It's important we approach this constructively and avoid attacking the three founders personally. The goal should be productive dialogue, not confrontation.
Does anyone have thoughts on how to productively engage with the Mechanize team? Or am I overreacting to what might actually be a beneficial project?
No guest bedrooms. We encouraged tents and sleeping bags. Some people just went home for the night, while others came only for one day. This meant for both editions only 5-8 people ended up staying overnight, with most of them sleeping indoors in the living room.
Got it, thanks for the reply!
What are some reasons to remain optimistic about the world from an EA perspective? Or how can we keep up with the most important news (e.g. USAID / PEPFAR) without drowning in it?
The news is just incredibly depressing. The optimism I once had before the pandemic is just gone. Yeah, global health and development may still continue to improve. And that's not insignificant. But moral circle expansion? Animal welfare? AI risks?
Same, I love it as well. Though my Facebook connection is broken and will likely never be fully repaired. I can remain logged in until I send a picture, then the connection breaks. And I keep forgetting. I've talked with the support team about it and it seems quite hopeless.
Yeah, and even when finding a classic EA "high impact job" doesn't work, finding a good E2G job may not work either. And you may not find the time to volunteer. It sucks, but you just try with what you have and what you can. This will be different for everybody. It may require a lot of self-forgiveness. I sure struggle(d) with it. But this is different from completely giving up on having an impact!
My guess is, but I could be wrong, that EA Forum content is often just difficult to share with a broader audience because that audience usually isn't the one the content is written for. And even when the ideas are worth sharing more broadly, they may still be filled with EA jargon / a way of speaking that's difficult to follow for a lot of people. I am saying this assuming most people's followers aren't EAs but friends, colleagues and family. Even within EA, people are focused on different cause areas and many may not prioritize reading stuff outside their cause area. I am not saying all of this is bad, I haven't thought that through, but it does make sense to me. It's similar to academic papers in a way: you generally wouldn't share those on your social media platforms, but you do send them to people you think could be interested, just like how in EA I feel like posts are shared in messages with each other all the time.
I do think encouraging people to share posts and add a picture can help and is a good idea!
Thanks for the write-up! This is a very useful post.
I have been wondering though, since the shift in strategy, is EA outreach still a priority? Such as YouTube channels, podcasts, other online media… targeting a broader audience. And if not, why?
Even though I have a personal vested interest in this topic (running a YouTube channel previously funded by EAIF), I do believe that projects like these could be highly effective and worth funding, regardless of my own involvement.
My main point of criticism, that I didn't see anyone else mention in the top-level comments, is that the pledge just seems too vague and broad. A 10 percent pledge is very concrete and measurable. Of course there is a difference in opinion in terms of what charities count as impactful, just like with careers. But with careers the difference in opinion is too broad for this pledge to be useful. Some could just interpret this pledge as "I'll become a doctor or work for an NGO" without giving much extra thought. While with the 10% pledge there is a clear significant minimum commitment. I don't think it solves the issue of value drift either. I imagine that in reality, when value drift happens, the individual won't be quick to admit their values have drifted even if they aren't actively looking for the most impactful career opportunities anymore. And there is also a higher chance people will just forget about the pledge and ignore any reminders. You could try and make the pledge more measurable, but then I think you'll quickly run into Goodhart's Law. I'm skeptical there is a way to have a career pledge and avoid these issues. I could be wrong, and I do welcome trying out new things like this!
For me, it doesn't need to be harder-working or smarter people. Anyone you can cowork with who is supportive will do. But my challenge is to actually create such an environment! Online doesn't work that well for me; it needs to be in person. It's so much more impactful than any other productivity hack.