Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).
yanni kyriacos
Yeah this is a good point, which I’ve considered, which is why I basically only do it at home.
This is an extremely “EA” request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they’re about to be thrown out. OpportuVegan? UtilaVegan?
I think it would be good if lots of EAs answered this twitter poll, so we could get a better sense of the community’s views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085
I think Peter might be hoping people read this as “a rich and influential guy might be persuadable!” rather than “let’s discuss the minutiae of what constitutes an EA”. I’ve watched quite a few of Bryan’s videos and I could honestly see this guy swinging either way (could be SBF, could be Dustin; I can’t tell how this shakes out).
Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables and legumes over others based on how much animal suffering they each cause?
I.e. don’t eat X fruit, eat Y one instead, because X fruit is [e.g.] harvested in Z way, which kills more [insert plausibly sentient creature].
The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).
I thought it might be useful to spell that out.
I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.
Gemini did a good job of summarising it:
This quote by Pema Chödrön, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It’s called “taking full responsibility” or “taking self-blame” and can be a bit challenging to understand at first. Here’s a breakdown:
What it Doesn’t Mean:
Self-Flagellation: This practice isn’t about beating yourself up or dwelling on guilt.
Ignoring External Factors: It doesn’t deny the role of external circumstances in a situation.
What it Does Mean:
Owning Your Reaction: It’s about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
Shifting Focus: Instead of blaming others or dwelling on what you can’t control, you direct your attention to your own thoughts and reactions.
Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.
Analogy:
Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can’t control the pebble (the external situation), you can control the ripples (your reaction).
Benefits:
Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.
Here are some additional points to consider:
This practice doesn’t mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
It’s a gradual process. Be patient with yourself as you learn to practice this approach.
Be the meme you want to see in the world (screenshot).
Yeah, case studies as research need to be treated very carefully (i.e. they can still be valuable exercises, but the analyst needs to be aware of their weaknesses).
Thanks!
I hope you’re right. Thanks for the example, it seems like a good one.
What are some historical examples of a group (like AI Safety folk) getting something incredibly wrong about an incoming technology? Bonus question: what led to that group getting it so wrong? Maybe there is something to learn here.
Just another example of “lean into your strength”
Yeah I’m in the top 1% for extraversion, I don’t really feel shame or embarrassment and I have lots of initiative. Makes up for the mediocre IQ ;)
I seem to be having some impact simply emailing politicians and having meetings with them to discuss the potentially catastrophic risks from AI.
I consider myself pretty mediocre (based on school/uni results).
This is something anyone with enough context (i.e. in a particular cause area) could do. It just takes initiative.
This is such a common-sense take that it worries me it needs writing. I assume this is happening over on Twitter (where I don’t have an account)? The average non-EA would consider this take extremely obvious, which is partly why I think we should be concerned about the composition of the movement in general.
So I did a quick check today: I’ve sent 19 emails to politicians about AI Safety / x-risk and received 4 meetings. They’ve all had a really good vibe, and I’ve managed to get each of them to commit to something small (e.g. emailing XYZ person about setting up an AI Safety Institute). I’m pretty happy with the hit rate (4/19). I might do another forum quick take once I’ve sent 50.
Ah, I missed that. Yeah I might do a write up :)
Nah, let’s lean all the way in; for one day a year, it’s the wild west out here.
This seems close enough that I might co-opt it :)
https://en.wikipedia.org/wiki/Freeganism