Hi, my name is James Fodor. I am a longtime student and EA organiser from Melbourne. I love science, history, philosophy, and using these to make a difference in the world.
Fods12
The FTX crisis highlights a deeper cultural problem within EA—we don’t sufficiently value good governance
Effective Altruism is an Ideology, not (just) a Question
Critical Review of ‘The Precipice’: A Reassessment of the Risks of AI and Pandemics
The Fermi Paradox has not been dissolved
A Critique of AI Takeover Scenarios
Running Effective Altruism Groups: A Literature Review
Intrinsic limitations of GPT-4 and other large language models, and why I’m not (very) worried about GPT-n
Report and Data for EAGxAustralia 2023
Critique of Superintelligence Part 1
EAGxAsia-Pacific 2020 Applications Now Open
Critique of Superintelligence Part 5
Concern about the EA London COVID protocol
Critique of Superintelligence Part 2
Hi everyone, thanks for your comments. I’m not much for debating in comments, but if you would like to discuss anything further with me or have any questions, please feel free to send me a message.
I just wanted to make one clarification that I feel didn't come across strongly in the original post. Namely, I don't think it's a bad thing that EA is an ideology. I do personally disagree with some commonly believed assumptions and methodological preferences, but the fact that EA itself is an ideology is, I think, a good thing, because it gives EA substance. If EA were merely a question, I think it would have very little to add to the world.
The point of this post was therefore not to argue that EA should try to avoid being an ideology, but that we should recognise the assumptions and methodological frameworks we typically adopt as an EA community, critically evaluate whether they are all justified, and, to the extent they are, defend them with the best arguments we can muster, while always remaining open-minded to new evidence or arguments that might change our minds.
It doesn’t seem to me this has much relevance to EA.
I think it is appropriate for the movement to reflect at this time on whether there are systematic problems or failings within the community that might have contributed to this problem. I have publicly argued that there are, and though I might be wrong about that, I do think it's entirely reasonable to explore these issues. I don't think it's reasonable to simply keep asserting that it was all down to a handful of bad actors and refuse to discuss the possibility of any deeper or broader problems. I like to think that the EA community can learn and grow from this experience.
I disagree that events can't be evidence for or against philosophical positions. If empirical claims about human behaviour, or about how ethical principles operate in the real world, are relevant to the plausibility of competing ethical theories, then events can provide evidential value for philosophical positions. Of course that raises a much broader set of issues and doesn't really detract from the main point of this post, but I thought I would push back on that specific aspect.
Thanks for your thoughts. Regarding spreading my argument across 5 posts, I did this in part because I thought connected sequences of posts were encouraged?
Regarding the single quantity issue, I don’t think it is a red herring, because if there are multiple distinct quantities then the original argument for self-sustaining rapid growth becomes significantly weaker (see my responses to Flodorner and Lukas for more on this).
You say “Might the same thing be true of AI—that a few factors really do allow for drastic improvements in problem-solving across many domains? It’s not at all clear that it isn’t.” I believe we have good reason to think no such few factors exist: A) this does not seem to be how human intelligence works, and B) it does not seem to be consistent with the history of progress in AI research. Both, I would say, are characterised by many different functionalities and optimisations for particular tasks. That is not to say there are no general principles, but I think these are not as extensive as you seem to believe. Regardless of this point, however, if Bostrom's argument is to succeed, I think he needs to give some persuasive reasons or evidence as to why we should think such factors exist. It's not sufficient just to argue that they might.
“Is it really “grossly immoral” to do the same thing in crypto without telling depositors?”
Yes