Some of these are low-quality questions. Hopefully they still contain some useful data about the kinds of thoughts some people have. I left the low-quality ones in case they're useful, but don't feel forced to read beyond the bolded beginning of each; I don't want to waste your time.
What is 80,000 Hours' official timeline? Take-off speed scenario? I'm really asking how much time you think you're operating on. This affects some earn-to-give scenarios like "should I look for a scalable career that might take years, but could have me reliably making millions by the end of that time?" versus shorter-term scenarios like "become a lawyer next year and donate to alignment think tanks now."
How worried should I be about the short-term effects of narrow AI? Humanity's coordination problem, as well as how much attention goes to alignment versus other EA projects like malaria prevention or biosecurity, matters a lot. These things could be radically affected by the short-term effects of narrow AI, like, say, propaganda machines built on LLMs or bioweapon factories built on protein folders. Is enough attention allocated to short-term AI effects? Everybody talks about alignment, which is the real problem we need to solve, but all the little obstacles we'll face along the way will matter a lot too, because they affect how alignment goes!
Does AI constrict debate a bit? What I mean is: most questions here are somewhat related to AI, and so are most EA thinking efforts I know of. AI just seems to swallow every other cause up. Is this a problem? Because it's a highly technical subject, are you swamped with people who want to help in the best way possible, discover that most things other than AGI don't really matter because of how much AGI shapes everything else, but simply wouldn't be very useful in the field? Nah, never mind, that isn't a clear question. This might be better: is there such a thing as too-much-AI burnout? Should EAs have a break cause of sorts, something still important, only a little less so, that they can concentrate on, if only because they will go a little insane concentrating on AI alone? Hm.
What is the most scalable form of altruism that you've found? Starting a company and hopefully making a lot of money down the line might be pretty scalable: given enough time, your yearly donations could be in the millions, not thousands. Writing a book, writing blog posts, making YouTube videos, or starting a media empire to spread the EA memeplex would also be scalable, spreading the ideas that save the most lives. AI alignment work (and technically capabilities too, though capabilities is worse than useless without alignment) is scalable in a way, because once friendly AGI is created, pretty much every other problem humanity faces melts away. Given your studies and thinking, which method, out of these or others you know of, might be the most scalable form of altruism you can imagine?
Which of the three free books you offer should I give to a friend? I have not yet read the 80,000 Hours career guide, nor Doing Good Better, but I have read The Precipice. I want to see if I can convert a friend to EA ideas by having them read a book, but I'm not sure which one is best. Do you have any suggestions? Thanks for offering the books for free, by the way! I'm a high-schooler and don't even have a bank account, so this is very valuable.
How is the Kurzgesagt team? I know that question seems to come out of nowhere, and you probably aren't responsible for whichever part of 80,000 Hours takes care of PR, but I noticed that you sponsored the Kurzgesagt video about Boltzmann brains that came out today. I've noticed that over time, Kurzgesagt seems to have become more and more aligned with the EA style of thinking. Have you met the team personally? What ambitions do they have? Are they planning on collaborating with EA organizations far into the future, or is this just part of one "batch" of videos? Are they planning a specifically-about-altruism video soon? Or, more importantly: Kurzgesagt does not have any videos on AGI, the alignment problem, or existential threats in general (despite flirting with bioweapons, nukes, and climate change). Are they planning one?
How important is PR to you, and do you have plans for PR scalability? That is, do you have a plan for racking up an order of magnitude more readers/followers/newsletter subscribers, or not? Should you? Have you thought about the question enough to establish it wouldn't be worth the effort/time/money? Is there any way people here could help? I don't know what you use to measure utilons/QALYs, but have you tried calculating the dollar-to-good ratio of your PR efforts?
Do you think most people, if things were well explained, would agree with EA reasoning? Or is there a more fundamental humans-have-different-enough-values thing going on? People care about things like other humans and animals; only things like scope insensitivity stop them from spending every second of their time trying to do as much altruism as possible. Do you think it's just that? Do you think for the average person it might only take a single book seriously read, or a few blog posts or videos, for them to embark on the path toward using their career for good in an effective manner? How much do you think about this?
I'll probably think of more questions if I keep at this, but I'll stop here. You probably won't get all the way down to this comment anyway; I posted pretty late and this won't get upvoted much. But thanks for the post anyway, it had me thinking about these kinds of things! Good day!
Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!