Another thing you can do is send comments on proposed legislation via regulations.gov. I did so last week about a recent Californian bill on open-sourcing model weights (the comment period is now closed). In the checklist (screenshot below) they say: “the comment process is not a vote – one well supported comment is often more influential than a thousand form letters”. There are people here much more qualified on AI risk than I am, so in case you didn’t know about this, you might want to keep an eye on new regulation as it comes up. It doesn’t take much time and seems to have a fairly big impact.
Neil Warren
There’s an AGI on LessWrong
[Question] How does it feel to switch from earn-to-give?
I wrote a post on moth traps. It makes a rather different point, but I still figure I’d better post it here than not. https://www.lesswrong.com/posts/JteNtoLBFZB9niiiu/the-smallest-possible-button-or-moth-traps
I agree with all this! Thank you for the comment (the John Wesley sermon looks particularly interesting). I plan to make money for the explicit goal of giving it away, and will keep all your caveats in mind.
Ashamed of wealth
I think this is my favorite so far. There’s a certain hope and coziness radiating from it. Great introduction to the hopeful let’s-save-the-world! side of EA that I will send to all my non-EA friends.
Your videos are extremely practical for that purpose. In my experience there’s a certain “legitness” that comes with a nicely animated video on YouTube with more than 100k views, that a blog post doesn’t have. So thanks! :)
Okay, forget what I said; I sure can tie myself up in knots. Here’s another attempt:
If a person is faced with the decision to either save 100 out of 300 people for sure, or take a 60% chance of saving everyone, they are likely (in my experience asking friends) to answer something like “I don’t gamble with human lives” or “I don’t see the point of thought experiments like this”. Eliezer Yudkowsky claims in his “Something to Protect” post that if those same people were faced with this problem and a loved one were among the 300, they would have more incentive to ‘shut up and multiply’. People are more likely to choose what has the higher expected value if they are more entangled with the end result (and less likely to, e.g., signal indignation at having to gamble with lives).
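To make the arithmetic behind ‘shut up and multiply’ explicit (a quick sketch, taking the numbers in the thought experiment above at face value):

$$
E[\text{lives saved} \mid \text{certain option}] = 100, \qquad E[\text{lives saved} \mid \text{gamble}] = 0.6 \times 300 + 0.4 \times 0 = 180
$$

The expected-value answer favors the gamble, which is exactly the conclusion people resist when nothing they care about is on the line.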
I see this in practice, and I’m sure you can relate: I’ve often been told by family members that putting numbers on altruism takes the whole spirit out of it, or that “malaria isn’t the only important thing, coral is important too!”, or that “money is complicated and you can’t equate wasted money with wasted opportunities for altruism”.
These ideas look perfectly reasonable to them but I don’t think they would hold up for a second if their child had cancer: “putting numbers on cancer treatment for your child takes the whole spirit out of saving them (like you could put a number on love)”, or “your child surviving isn’t the only important thing, coral is important too” or “money is complicated, and you can’t equate wasting money with spending less on your child’s treatment”.
Those might be a bit personal. My point is that entangling the outcome with something you care about makes you more likely to try to make the right choice. Perhaps I shouldn’t have used the word “rationality” at all. “Rationality” might be a valuable component in making the right choice, but for my purposes I only care about making the right choice, no matter how you get there.
The practical insight is that you should start by thinking about what you actually care about, and then backchain from there. If I start off deciding that I want to maximize my family’s odds of survival, I think I am more likely to take AI risk seriously (in no small part, I think, because signalling sanity by scoffing at ‘sci-fi scenarios’ is no longer something that matters).
I am designing a survey I will send tonight to some university students to test this claim.
Hello! Thanks for commenting!
How does that work? In your specific case, what are you invested in while also being detached from the outcome? I can imagine enjoying life working like this: e.g., I don’t care what I’m learning about if I’m reading a book for pleasure. Parts of me also enjoy the work I tell myself helps with AI safety. But there are certainly parts of it that I dislike and do anyway, because I attach a lot of importance to the outcome.
Those are interesting points!
1) Mud-dredging makes rationality a necessity. If you’ve taken DMT and had a cosmic revelation in which you discovered that everything is connected and death is an illusion, then you don’t need to actively avoid dying. I know people to whom life and death are all the same: my point is that if you care about the life/death outcome, you must be somewhat on the offensive. If you sit in the same place for long enough, you die. There are posts about “rationality = winning”, and I’m not going to get into semantics, but what I meant here by rationality was “that which gets you what you want”. You can’t afford to, e.g., ignore truth when something you value is at risk. Part of it was referencing this post, which made it clear to me that entangling my rationality with reality more thoroughly would force me to improve it.
2) I’m not sure what you mean. We may be talking about two different things: what I meant by “rationality” was specifically whatever gets you good performance. I didn’t mean some daily applied system that has both pros and cons for mental health or performance. I’m thinking about something broader than that.
As for that last point, I seem to have regrettably framed creativity and rationality as mutually incompatible. I wrote in the drawbacks of mud-dredging that aiming at something can impede creativity, which I think is true. The solution for me is splitting time up into “should”-injunction time and free time for fooling around. Not a novel solution or anything. Again, it’s a spectrum, so I’m not advocating for full-on mud-dredging: that would be bad for performance (and mental health) in the long run. This post is the best I’ve read that explores this failure mode. I certainly don’t want to appear to be disparaging creativity.
(However, I do think that rationality is more important than creativity. I care more about making sure my family members don’t die than about me having fun, and so when I reflect on it all I decide that I’ll be treating creativity as a means, not an end, for the time being. It’s easy to say I’ll be using creativity as a means, but in practice, I love doing creative things and so it becomes an end.)
Detachment vs attachment [AI risk and mental health]
An especially good idea for EA orgs, because doublethedonation seems vaguely untrustworthy (see Jack Lewars’ comment). Thanks for the comment!
I did not know about Benevity. High-value comment overall, thank you for your contribution!
Is there any situation you predict in which Google donation matches would affect METR’s vision? What is the probability of that happening, and what is the value of donations to METR by Google employees?
If you’re asking for advice, it seems to me that refusing donations on principle is not a good idea, and that Google’s matching of employee donations carries no legal bearing (but I have no idea) and is worth the money. Besides, I understand the importance of METR’s independence, but are Google’s and METR’s goals very orthogonal? Your final calculation would need to involve the degree of orthogonality as well. I’m not a very valuable data point for this question, however.
That’s an interesting anecdote! I donated for the first time a few days ago and did not know “Giving Tuesday” existed, so I’m one of today’s lucky 10,000. I really hope organisations like GWWC that help funnel money to the right charities engage in tricks like this: not investing your money immediately, but finding various opportunities to increase the pot. It would probably be worth GWWC’s time and money to centralize individual discoveries like this and have a few people constantly looking out for opportunities. The EA Forum only partially solves this.
This should totally be explicitly mentioned and acknowledged by GWWC and 80,000 Hours.
And good job on coming out of lurkerdom! You’ll lose the little green plant soon as well.
Thanks for checking for Chesterton’s fences.
I agree. Indeed, “this is because the spirit of the pledge is to voluntarily forego a certain portion of your income and use it to improve the lives of others” sounds suspiciously not-cold-hearted-economist-y enough for an EA org.
GWWC is probably valuable as it is precisely because it offers a warmer aspect and a community and all that, but you’re right: donation matching is just one more variable that should be factored into the utility calculation.
Double the donation: EA inadequacy found?
Some of these are low-quality questions. Hopefully they contain some useful data about the kinds of thoughts some people have, though. I left in even the low-quality ones in case they are useful, but don’t feel forced to read beyond the bolded beginning of each; I don’t want to waste your time.
What is 80,000 Hours’ official timeline? What take-off speed scenario do you expect? I ask this to find out how much time you think you’re operating on. This affects some earn-to-give scenarios, like “should I look for a scalable career that might take years, but where I could be reliably making millions by the end of that time?” versus shorter-term scenarios like “become a lawyer next year and donate to alignment think tanks now.”
How worried should I be about local effects of narrow AI? Humanity’s coordination problem, as well as how much attention is given to alignment versus other EA projects like malaria prevention or biosecurity, matters a lot. These could be radically affected by the short-term effects of narrow AI, like, say, propaganda machines built with LLMs or bioweapon factories built with protein folders. Is enough attention allocated to short-term AI effects? Everybody talks about alignment, which is the real problem we need to solve, but all the little obstacles we’ll face on the way will matter a lot as well, because they affect how alignment goes!
Does AI constrict debate a bit? What I mean by this is: most questions here are somewhat related to AI, and so are most EA thinking efforts I know of. It just seems that AI swallows up every other cause. Is this a problem? Because it’s a highly technical subject, are you swamped with people who want to help in the best way, discover that most things other than AGI don’t really matter because of how much the latter shapes literally everything else, but simply wouldn’t be very useful in the field? Nah, never mind, this isn’t a clear question. This might be better: is there such a thing as too-much-AI burnout? Should EAs have a bit of a break available, a cause which is still important, only a little less so, that they could concentrate on, if only because they will go a little insane concentrating on AI alone? Hm.
What is the most scalable form of altruism that you’ve found? Starting a company and hopefully making a lot of money down the line might be pretty scalable: given enough time, your yearly donations could be in the millions, not thousands. Writing a book, writing blog posts, making YouTube videos, or starting a media empire to spread the EA memeplex would also be a scalable form of altruism, benefiting the ideas that save the most lives. AI alignment work is scalable in a way (and technically capabilities work too, though capabilities is worse than useless without alignment), because once friendly AGI is created, pretty much every other problem humanity faces melts away. Given your studies and thinking, which method, out of these or others that you know of, might be the most scalable form of altruism you can imagine?
What book, out of the three free books you offer, should I give to a friend? I have not yet read the 80,000 Hours guide, nor have I read Doing Good Better, but I have read The Precipice. I want to see if I can win a friend over to EA ideas by having them read a book, but I’m not sure which one is best. Do you have any suggestions? Thanks for offering the books for free, by the way! I’m a high-schooler and don’t even have a bank account, so this is very valuable.
How is the Kurzgesagt team? I know that question comes out of nowhere and you probably aren’t responsible for whatever part of 80K Hours takes care of PR, but I noticed that you sponsored the Kurzgesagt video about Boltzmann brains that came out today. I’ve noticed that over time, Kurzgesagt seems to have become more and more aligned with the EA style of thinking. Have you met the team personally? What ambitions do they have? Are they planning on collaborating with EA organizations in the future, or is this just part of one “batch” of videos? Are they planning a specifically-about-altruism video soon? Or, more importantly: Kurzgesagt does not have any videos on AGI, the alignment problem, or existential threats in general (despite flirting with bioweapons, nukes, and climate change). Are they planning one?
How important is PR to you, and do you have plans for scaling it up? As in, do you have a plan for racking up an order of magnitude more readers/followers/newsletter subscribers/whatever, or not? Should you? Have you thought about the question enough to establish that it wouldn’t be worth the effort/time/money? Is there any way people on here could help? I don’t know what you use to measure utilons/QALYs, but how have you tried calculating the dollar-to-good ratio of your PR efforts?
Do you think most people, if things were explained to them well, would agree with EA reasoning? Or is there a more fundamental humans-have-different-enough-values thing going on? People care about things like other humans and animals; only things like scope insensitivity stop them from spending every second of their time trying to do as much altruism as possible. Do you think it’s just that? Do you think that for the average person it might only take a single book seriously read, or a few blog posts/videos, for them to embark on the path that leads toward using their career for good in an effective manner? How much do you guys think about this?
I’ll probably think of more questions if I keep thinking about this, but I’ll stop here. You probably won’t get all the way down to this comment anyway; I posted pretty late and this won’t get upvoted much. But thanks for the post anyway, it had me thinking about this kind of thing! Good day!
The book, in my opinion, is better, and relies so much on vast realizations and plot twists that it’s better to read it blind: before the series, and even before the blurb on the back of the book! So for those who didn’t know it was a book, here it is: https://www.amazon.fr/Three-Body-Problem-Cixin-Liu/dp/0765377063