This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more.
EA self-doubt has always seemed weirdly compartmentalized to me. Even the humblest of people in the movement is often happy to dismiss considered viewpoints by highly intelligent people on the grounds that they don’t satisfy EA principles. This includes me—I think we are sometimes right to do so, but probably do so far too much nonetheless.
Seems plausible; I think it would be good to have a dedicated “translator” who tries to understand & steelman views that are less mainstream in EA.
Wasn’t sure about the relevance of that link?
(from phone) That was an example of an EA being highly upvoted for dismissing the life’s work of multiple extremely smart and well-meaning people as ‘really flimsy and incredibly speculative’ because he wasn’t satisfied that they could justify their work within a framework that the EA movement had decided is one of the only ones worth contemplating. As if that framework itself isn’t incredibly speculative (and therefore, if you reject any of its many suppositions, really flimsy).
Thanks!
I’m not sure I share your view of that post. Some quotes from it:
...he just believed it was really important for humanity to make space settlements in order for it to survive long-term… From what I could tell, [my professor] probably spend less than 10 hours seriously figuring out if space settlements would actually be more valuable to humanity than other alternatives.
...
Take SpaceX, Blue Origin, Neurolink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Billion, some much more. They all have very large groups of brilliant engineers and scientists. They all don’t seem to have researchers really analyzing the missions to make sure they actually make sense.
...
My impression is that Andrew Carnegie spent very little, if anything, to figure out if libraries were really the best use of his money, before going ahead and funding 3,000 libraries.
...
I rarely see political groups seriously red-teaming their own policies, before they sign them into law, after which the impacts can last for hundreds of years.
I don’t think any of these observations hinge on the EA framework strongly? Like, do we have reason to believe Andrew Carnegie spent a significant amount trying to figure out if libraries were a great donation target by his own lights, as opposed to according to the EA framework?
The thing that annoyed me about that post was that at the time it was written, it seemed to me that the EA movement was also fairly guilty of this! (It was written before the criticism/red teaming contest.)
I’m not familiar enough with the case of Andrew Carnegie to comment, and I agree on the point of political tribalism. The other two are what bother me.
On the professor, the problem is there explicitly: you omitted a key line, ‘I tried asking for his opinion on existential threats’, which is a strongly EA-identifying approach, and one which many people feel is too simplistic. E.g. see Gideon Futurman’s EAGx Rotterdam talk when it’s up: he argues that the way EAs think about x-risk is far too simplified, focusing on single-event narratives and ignoring countless possible trajectories that could end in extinction or something similar, any one of which is vanishingly unlikely, but which collectively we should take much more seriously. Whether or not one agrees with this view, it seems to me to be one a smart person could reasonably hold, and it shows that by asking someone for ‘his opinion on existential threats, and which specific scenarios these space settlements would help with’, you’re pigeonholing them into an EA-aligned, specific-single-event way of thinking.
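(A toy illustration of why the aggregate matters, with made-up numbers and an independence assumption: suppose there were a thousand roughly independent extinction routes, each with probability 0.0001 this century. Any single route is negligible, but the chance that at least one occurs is not:

$$ 1 - (1 - 10^{-4})^{1000} \approx 1 - e^{-0.1} \approx 9.5\% $$

so dismissing each route individually can still mean dismissing a non-trivial total risk.)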
As for Elon Musk, I think the same problem is there implicitly: he’s written a paper called ‘Making Humans a Multi-Planetary Species’, spoken extensively on the subject, and spent his life thinking that it’s important, and while you could reasonably disagree with his arguments, I don’t see any grounds for dismissing them as ‘really flimsy and incredibly speculative’ without engagement, unless your reason for doing so is ‘there exists a pool of important research which contradicts them and which I think is correct’. There are certainly plenty of other smart people who think as he does, some of them EAs (though maybe that doesn’t contribute to my original complaint). Since there’s a very clear mathematical argument that it’s harder to kill all of a more widespread and numerous civilisation, to say that the case is ‘really flimsy’, you basically need to assume the EA-aligned narrative that AI is highly likely to kill us all.
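(The shape of that mathematical argument, stated very crudely and with an independence assumption doing most of the work: if a given catastrophe destroys each settlement independently with probability p, then the chance that all N settlements are destroyed is

$$ P(\text{all destroyed}) = p^{N}, \qquad \text{e.g. } p = 0.9,\ N = 5 \ \Rightarrow\ 0.9^{5} \approx 0.59, $$

which keeps falling as N grows, whatever the per-settlement risk. Correlated catastrophes weaken this, but that is a point to engage with rather than a reason to call the case flimsy.)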
Thanks!