Wow, you've read a lot! My intro text to effective altruism (sort of) was Peter Singer's The Life You Can Save, published in 2009, but it's probably redundant with a lot of the stuff you've already read and know.
If you're interested in reading more about longtermism, the Oxford University Press anthology Essays on Longtermism: Present Action for the Distant Future, published in August, is free to read online, both on the website and as a PDF. Some of the essays changed my mind, in others I saw major flaws, and overall I now have a harsh view of longtermism, because scholarship like Essays on Longtermism has failed to turn up much that's interesting or important.
An epistemology/philosophy of science book I love that isn't directly about EA at all but somehow seems to keep coming up in discussions in and around EA is The Beginning of Infinity by the physicist David Deutsch. Deutsch's TED Talk is a good, quick introduction to the core idea of the book. Deutsch's hour-long interview on the TED Interview is a good preview of what's in the book and a good introduction to Deutsch's ideas and worldview.
This book is absolutely not part of the EA "canon", nor is it a book that a large percentage of people in EA have read, but I think it's a book that a large percentage of people in EA should read. Deutsch's ideas about inductivism and AGI are the ones that are most clearly, directly relevant to EA.
I won't say that I know Deutsch's ideas are correct (I don't), but I really appreciate his pushback against inductivism and against deep learning as a path to AGI, and I appreciate the creativity and originality of his ideas.
The big asterisk or question mark I would put over Julia Galef's work is that she co-founded the Center for Applied Rationality (CFAR). Galef left CFAR in 2016, so she may not be responsible for the bad stuff that happened at CFAR. The stories about what happened at CFAR, at least around 2017-2019, are really bad. One of the CFAR co-founders described how CFAR employees would deliberately, consciously behave in deceptive, manipulative ways toward their workshop participants in order to advance CFAR's ideas about existential risk from AI. The most stomach-churning thing of all is that CFAR organized a summer camp for kids where, according to one person who was involved, things were even worse than at CFAR itself. I don't know the specifics of what happened at the summer camp, but I hate the idea that kids may have been harmed in some way by CFAR's work.
Galef may not be responsible at all for any of this, but I think it's interesting how much of a failure this whole idea of "rationality training" turned out to be, and how unethically and irrationally the people in key roles in this project behaved.
I think the source you mention is talking about people deceiving themselves
stomach-churning thing of all is that CFAR organized a summer camp for kids where, according to one person who was involved, things were even worse than at CFAR itself
Idk man I think this summary is a few shades more alarming than the post you are taking as evidence.
The Facebook comment by a CFAR co-founder that you quoted in that post says (emphasis added):
The actual drive in the background was a lot more like "Keep running workshops that wow people" with an additional (usually consciously (!) hidden) thread about luring people into being scared about AI risk in a very particular way and possibly recruiting them to MIRI-type projects.
The "usually consciously" is key.
In that post, you said you were "a bit freaked out" about an aspect of the kids' summer camp that was run by CFAR. You also said that the dynamics at the summer camp were "at times a bit more dysfunctional than" CFAR. What's the disconnect between what you wrote and my summary?
"At times a bit more dysfunctional than" CFAR sounds to me like you were saying the camp was worse than CFAR. If that's not what you were trying to say, what were you trying to say?