It seems like we have some pretty different intuitions here. Thanks for sharing!
I was thinking of many of my claims as representing low bars. To me, “at least some things to learn from a community” isn’t saying all that much. I’m sure that he, we, and many others have at least some things that would be valuable to learn from many communities.
“Thirdly I think it’s insulting to suppose these guys haven’t thought about their impact a lot simply because they don’t use QALY-adjacent language” → A lot of the people I knew in the field (including the person I mentioned) pretty clearly hadn’t thought about the impact a whole lot. It’s not just that they weren’t using QALYs; it’s that they weren’t really comparing it to similar things. That’s not unusual: most people in most fields don’t seem to be trying hard to optimize their impact globally, in my experience.
I really don’t mean to be insulting to them, I’m just describing my impression. These people have lots of other great qualities.
One thing that would clearly prove me wrong would be some lengthy documents outlining the net long-term benefit compared to things like bunkers. And it would be nice if it were clear that lots of SpaceX people paid attention to those documents.
“A lot of the people I knew in the field (including the person I mentioned) pretty clearly hadn’t thought about the impact a whole lot. It’s not just that they weren’t using QALYs; it’s that they weren’t really comparing it to similar things.”
Re this particular example: after you had the conversation, did the person agree with you that they clearly hadn’t thought about it? If not, can you account for their disagreement other than by claiming that they were basically irrational?
I seem to have quite strongly differing intuitions from most people active in central EA roles, and quite similar ones (at least about the limitations of EA-style research) to many people I’ve spoken to who believe the motte of EA but are sceptical of the bailey (i.e. of actual EA orgs and methodology). I worry that EA has very strong echo chamber effects, reflected in e.g. the OP, in Linch’s comment below, in Hauke’s about Bill Gates, in various other comments in this thread suggesting ‘almost no-one’ thinks about these questions with clarity, and in countless other such casual dismissals I’ve heard from EAs of smart people taking positions not couched in sufficiently EA terms.
FWIW I also don’t think claiming someone has lots of other great qualities is inconsistent with being insulting to them.
I don’t disagree that it’s plausible we can bring something. I just think that assuming we can do so is extremely arrogant (not by you in particular, but as a generalised attitude among EAs). We need to respect the views of intelligent people who think this stuff is important, even if they can’t or don’t explain why in the terms we would typically use. For PR reasons alone, this stuff matters. I can only point to anecdotes, but so many intelligent people I’ve spoken to find EAs collectively insufferable because of this sort of attitude, and so end up not engaging with ideas that might otherwise have appealed to them. Maybe someone could run a Mechanical Turk study on how such messaging affects reception of theoretically unrelated EA ideas.