One potential route for people in effective altruism who want to reform the world's ideas is publishing papers in academic journals and participating in academic conferences.
I think where academic publishing would be most beneficial for increasing the rigour of EA's thinking would be AGI. That's the area where Tyler Cowen said people should "publish, publish, publish", if I'm correctly remembering whatever interview or podcast he said that on.
I think academic publishing has been great for the quality of EA's thinking about existential risk in general. If I imagine a counterfactual scenario where that scholarship never happened and everything was just published on forums and blogs, it seems like it would be much worse by comparison.
Part of what is important about academic publishing is exposure to diverse viewpoints in a setting where the standards for rigour are high. If some effective altruists started a Journal of Effective Altruism and only accepted papers from people with some prior affiliation with the community, then that would probably just be an echo chamber, which would be kind of pointless.
I liked the Essays on Longtermism anthology because it included critics of longtermism as well as proponents. I think that's an example of academic publishing successfully increasing the quality of discourse on a topic.
When it comes to AGI, I think it would be helpful to see some response to the ideas about AGI you tend to see in EA from AI researchers, cognitive scientists, and philosophers who are not already affiliated with EA or sympathetic to its views on AGI. There is widespread disagreement with EA's views on AGI from AI researchers, for example. It could be useful to read detailed explanations of why they disagree.
Part of why academic publishing could be helpful here is that it's a commitment to serious engagement with experts who disagree, in a long-form format where you're held to a high standard, rather than ignoring these disagreements or dismissing them with a meme, with hand-wavy reasoning, or with an appeal to the EA community's opinion, which is what tends to happen on forums and blogs.
EA really exists in a strange bubble on this topic. Its epistemic practices are unacceptably bad, scandalously bad (if it were a letter grade, it would be an F in bright red ink), and people in EA could really improve their reasoning in this area by engaging with experts who disagree, not with the intent to dismiss or humiliate them, but to actually try to understand why they think what they do and to seriously consider whether they're right. (Examples of scandalously bad epistemic practices include many people in EA apparently never once hearing that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?)
Only weird masochists who dubiously prioritize their time will come onto forums and blogs to argue with people in EA about AGI. The only real place where different ideas clash online, Twitter, is completely useless for serious discourse, and, in fact, much worse than useless, since it always seems to end up causing polarization, people digging in on opinions, crude oversimplification, and in-group/out-group thinking. Humiliation contests and personal insults are the norm on Twitter, which means people are forming their opinions not based on considering the reasons for holding those opinions, but based on needing to "win". Obviously that's not how good thinking gets done.
Academic publishing (or, failing that, something that tries to approximate it in terms of the long-form format, the formality, the high standards for quality and rigour, the qualifications required to participate, and the norms of civility and respect) seems the best path forward to get that F up to a passing grade.
I do think EA is a bit too critical of academia and peer review. But despite this, most of the top 10 most highly published authors in peer-reviewed journals in the global catastrophic risk field have at least some connection with EA.

I think where academic publishing would be most beneficial for increasing the rigour of EA's thinking would be AGI.
AGI is a subset of global catastrophic risks, so EA-associated people have extensively academically published on AGI; I personally have about 10 publications related to AI.
Examples of scandalously bad epistemic practices include many people in EA apparently never once hearing that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?
I agree that those links are examples of not good epistemics. But in the example of not being aware that the current paradigm may not scale to AGI, this is commonly discussed in EA, such as here and by Carl Shulman (I think here or here). I would be interested in your overall letter grades for epistemics. My quick take would be:
Ideal: A+
Less Wrong: A
EA Forum: A- (not rigorously referenced, but overall better calibrated to reality and what is most important than academia, more open to updating)
Academia: A- (rigorously referenced, but a bias towards being precisely wrong rather than approximately correct, which actually is related to the rigorously referenced part. Also a big bias towards conventional topics.)
In-person dialog outside these spaces: C
Online dialog outside these spaces: D
AGI is a subset of global catastrophic risks, so EA-associated people have extensively academically published on AGI; I personally have about 10 publications related to AI.
Somewhat different point being made here. Publications on existential risk from AI generally just make some assumptions about AGI with some probability, maybe deferring to some survey or forecast. What I meant is academic publishing about the object-level, technical questions around AGI. For example, what the potential obstacles are to LLMs scaling to AGI. Things like that.
But in the example of not being aware that the current paradigm may not scale to AGI, this is commonly discussed in EA, such as here and by Carl Shulman (I think here or here).
That's interesting. I really don't get the impression that this concept is commonly discussed in EA or something people are widely aware of, at least not beyond a surface level. I searched for "paradigm" in the Daniel Kokotajlo interview and was able to find it. This is actually one of the only discussions of this question I've seen in EA beyond a surface gloss. So, thank you for that. I do think Daniel Kokotajlo's arguments are incredibly hand-wavy, though. To give my opinionated, biased summary:
AI experts in the past said deep learning couldn't do certain things and now it can do them, so he doesn't trust experts predicting limits to deep learning progress involving things like data efficiency and continual learning
The amount of money being invested in AI will most likely solve all those limits (such as data efficiency and continual learning) within 5-10 years in any case
Continual learning or online learning will probably be solved relatively soon (no further explanation)
Continual learning or online learning probably isn't necessary for an intelligence explosion (no further explanation)
The job of AI researchers at OpenAI, Anthropic, DeepMind, etc. does not require human-level general intelligence but is automatable by relatively narrow and unpowerful AI systems without first solving limitations like data inefficiency and a lack of continual learning (extremely dubious and implausible, I don't buy this for a second)
I'd appreciate a pointer on what to look for in the Carl Shulman interviews, if you can remember a search term that might work. I searched for "paradigm" and "deep learning" and didn't turn up anything.
I would be interested in your overall letter grades for epistemics. My quick take would be:
Ideal: A+
Less Wrong: A
EA Forum: A- (not rigorously referenced, but overall better calibrated to reality and what is most important than academia, more open to updating)
Academia: A- (rigorously referenced, but a bias towards being precisely wrong rather than approximately correct, which actually is related to the rigorously referenced part. Also a big bias towards conventional topics.)
In-person dialog outside these spaces: C
Online dialog outside these spaces: D
This is a fun game!
Ideal: A+
LessWrong: F, expelled from school, hopefully the parents find a good therapist (counselling for the whole family is recommended)[1]
EA Forum: maybe a B- overall, C+ if I'm feeling testy, encompassing a wide spectrum from F to A+; the overall story is quite mixed and hard to average (there are many serious flaws, including quite frequently circling the wagons around bad ideas or shutting down entirely legitimate and correct criticism, disagreement, or the pointing out of factual errors)
Academia: extremely variable from field to field, journal to journal, and institution to institution, so it's hard to give a single letter grade that encompasses the whole diversity and complexity of academia worldwide; but, per titotal's point, given that academia encompasses essentially all human scientific achievement, from the Standard Model of particle physics to the modern synthesis in evolutionary biology to the development of cognitive behavioural therapy in psychology, it's hard to say it could be anything other than an A+
In-person dialogue outside these spaces: extremely variable, depends who you're talking to, so I don't know how to give a letter grade since, in theory, this includes literally everyone in the entire world; I strive to meet and know people who I can have great conversations with, but a random person off the street, who knows (my favourite people I've ever talked to: A+; my least favourite people I've ever talked to: F)
Online dialog outside these spaces: quite terrible in general, if you're thinking of platforms like Twitter, Reddit, Bluesky, Instagram, TikTok, and so on, so, yeah, probably a D for those places. But YouTube stands out as a shining star. Not necessarily on average or in aggregate, since YouTube is so vast that it feels inestimable, but the best of YouTube is incredibly good, including the ex-philosopher ContraPoints, the wonderful science communicator Hank Green, the author and former graduate student in film studies Lindsay Ellis, and at least a few high-quality video podcasts (which often feature academic guests) and academic channels that upload lectures and panels, who I'm proud to give an A+ and a certificate for good attendance
[1] LessWrong is not only an abyss of irrationality and delusion, it's also quite morally evil. The world, and most of all its impressionable victims, would be much better off if it stopped existing and everyone involved found something better to do, like LARPing or writing sci-fi.
Daniel said, "I would say that there's like maybe a 30% or 40% chance that something like this is true, and that the current paradigm basically peters out over the next few years."
It might have been Carl on the Dwarkesh podcast, but I couldn't easily find a transcript. But I've heard from several others (maybe Paul Christiano?) that they have a 10-40% chance that AGI is going to take much longer (or is even impossible), either because the current paradigm doesn't get us there, or because we can't keep scaling compute exponentially as fast as we have in the last decade once it becomes a significant fraction of GDP.
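To make the compute-scaling constraint concrete, here is a minimal back-of-the-envelope sketch in Python. The starting spend, the 3x-per-year growth rate, the world GDP figure, and the 1% threshold are all my own illustrative assumptions, not numbers anyone in this thread gave.

```python
# Minimal sketch of how quickly exponentially growing AI spending would reach a
# "significant fraction of GDP". All numbers are illustrative assumptions.

initial_spend = 1e10        # assumed annual spend on frontier AI training today, in USD
growth_rate = 3.0           # assumed ~3x per year growth in spend
world_gdp = 1.1e14          # assumed world GDP, roughly $110 trillion per year
threshold_fraction = 0.01   # fraction of GDP treated here as "significant" (1%)

spend = initial_spend
years = 0
while spend < threshold_fraction * world_gdp:
    spend *= growth_rate
    years += 1

print(f"Under these assumptions, spending crosses {threshold_fraction:.0%} of world GDP "
      f"after about {years} years (reaching ${spend:.2e}/year).")
```

Under these made-up inputs the threshold is hit within roughly five years, which is the basic reason the recent pace of compute scaling can't continue indefinitely.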
Yes, Daniel Kokotajlo did say that, but then he also said that if that happens, all the problems will be solved fairly quickly anyway (within 5-10 years), so AGI will only be delayed from maybe 2030 to 2035, or something like that.
Overall, I find his approach to this question to be quite dismissive of possibilities or scenarios other than near-term AGI and overzealous in his belief that either scaling or sheer financial investment (or utterly implausible scenarios about AI automating AI research) will assuredly solve all roadblocks on the way to AGI in very short order. This is not really a scientific approach, but just hand-wavy conceptual arguments and overconfident gut intuition.
So, I give Kokotajlo credit for thinking about this idea in the first place (which is like saying I give a proponent of the covid lab leak hypothesis credit for thinking about the idea that the virus could have originated naturally), but because he doesn't really think the consequences of even fundamental problems with the current AI paradigm could end up being particularly significant, I don't give him credit for a particularly good or wise consideration of this issue.
I'd be very interested in seeing the discussions of these topics from Carl Shulman and/or Paul Christiano you are remembering. I am curious to know how deeply they reckon with this uncertainty. Do they mostly dismiss it and hand-wave it away, like Kokotajlo? Or do they take it seriously?
In the latter case, it could be helpful for me because I'd have someone else to cite when I'm making the argument that these fundamental, paradigm-level considerations around AI need to be taken seriously when trying to forecast AGI.
Here are some probability distributions from a couple of them.

Thanks. Do they actually give probability distributions for deep learning being the wrong paradigm for AGI, or anything similar to that?
It looks like Ege Erdil said 50% for that question, or something close to that question.
Ajeya Cotra said much less than 50%, but she didn't say how much less.
I didn't see Daniel Kokotajlo give a number in that post, but then we have the 30-40% number he gave above, on the 80,000 Hours Podcast.
The probability distributions shown in the graphs at the top of the post are only an indirect proxy for that question. For example, despite Kokotajlo's percentage being 30-40%, he still thinks that will most likely only slow down AGI by 5-10 years.
I'm just looking at the post very briefly and not reading the whole thing, so I might have missed the key parts you're referring to.
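To illustrate the "indirect proxy" point, here is a minimal sketch that treats a headline AGI-timeline distribution as a mixture of a "paradigm scales" branch and a "paradigm peters out but the gaps get filled within 5-10 years" branch. The branch dates, spreads, and the 35% weight are made-up assumptions loosely inspired by the numbers quoted above, not anyone's actual forecast.

```python
# Minimal sketch: why a headline timeline distribution reveals little about the
# "will the current paradigm peter out?" question. All parameters are assumptions.

import random
import statistics

random.seed(0)

p_peters_out = 0.35   # assumed ~30-40% chance the current paradigm peters out
n_samples = 100_000

samples = []
for _ in range(n_samples):
    if random.random() < p_peters_out:
        # Branch: paradigm peters out, but the remaining problems are assumed to be
        # solved within roughly 5 extra years (per the view described above).
        samples.append(random.gauss(mu=2035, sigma=3))
    else:
        # Branch: paradigm scales more or less directly to AGI.
        samples.append(random.gauss(mu=2030, sigma=2))

print(f"Median AGI year in this mixture: {statistics.median(samples):.1f}")
# Under these assumptions the overall median lands only about a year past the
# "paradigm scales" branch, even though the "peters out" branch carries ~35% of
# the probability, so the combined distribution says little about the paradigm
# question itself.
```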
Here's another example of someone in the LessWrong community thinking that LLMs won't scale to AGI.

Was there another example before this? Steven Byrnes commented on one of my posts from October and we had an extended back-and-forth, so I'm a little bit familiar with his views.