One thing I’ve noticed is that direct work tends to put you much more in contact with reality (for lack of a better term) than community-building; it’s much easier to see what you’re accomplishing and what is and isn’t working. This can be especially important for people trying to build and/or demonstrate skills.
Davis_Kingsley
Terrorism, Tylenol, and dangerous information
Good points!
One note I’ll add is that similar attacks with vehicles or bladed weapons were used against Israel prior to their adoption by ISIS, though these attacks are not as widely reported by Western media since they don’t happen in Europe or the US; that said, it’s quite possible that ISIS themselves got the idea from Palestinian attackers, especially if the “copycat hypothesis” is true.
I actually quite disagree—I believe history indicates national militaries very frequently miss effective ways to conduct war. There’s a famous phrase, “fighting the last war”, that describes how military planners almost always miss innovations and changes in conditions during peacetime and only adapt when forced to by direct conflict.
For example, between World War One and World War Two, the world’s militaries converged on several dangerously false theories with respect to what the next war would look like, and many weapons and strategies used in the early phases of World War Two were ineffective as a result.
Prior to World War Two it was widely believed that battleships were the decisive naval unit, that strategic bombing with large fleets of conventional bombers would be devastating and unstoppable, and that war would likely consist of battles across trenches and fixed fortifications.
By the end of World War Two, battleships were not only not the decisive naval unit but altogether obsolete in favor of aircraft carriers; strategic bombing wasted the lives of many aircrews, killed civilians indiscriminately, and didn’t even work; fixed fortifications infamously failed and were no longer considered serious defenses.
I am quite confident that similar mistakes are being made now, and could even point you to some likely suspects if you like—and all this despite very substantial effort into arms development!
I like that you’ve put the effort into creating this, but I’m not fond of the background assumptions here—there seem to be some elements that not all EAs might necessarily share. For instance, one section begins “Intrinsic moral rights do not exist”—that’s certainly not what I believe and it seems inconsistent with other sections that talk about the “intrinsic moral weight” of animal populations, etc.
While the fact that you’ve “shown your work” with the Excel spreadsheet helps people evaluate the same issues with different weights, if someone is interested in areas that you’ve chosen to exclude it’s less apparent how to proceed.
I do appreciate the work you’ve put into this, though!
I don’t think there’s much practical difference between “intrinsic moral interests” and “intrinsic moral rights”, but that’s not really the point—it’s more that I think given such differences in perspective between EAs, I’m not sure that documents like this are great for EA as a movement. I would at least prefer to see them presented less… authoritatively?
Like I said, that’s not really the point—it also doesn’t meaningfully resolve that particular issue, because of course the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don’t.
I dunno, maybe I’m overly cautious, but I’m not fond of someone publishing a well-made and official-looking “based on EA principles, here’s who to vote for” document, since “EA principles” vary quite a bit. I think if EA becomes seen as politically aligned (with either major US party), that constitutes a huge constraint on our movement’s potential.
You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you’re saying that the actual conclusions matter.
Sorry, I perhaps wasn’t specific enough in my original reply. The “less authoritative” thing was meant to apply to the entire document, not just this one section—that’s why I also said I wasn’t sure documents like this are good for EA as a movement.
I think there’s something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.
Strong disagree. Political policy in practice isn’t “just another question to answer”—maybe it should be, but that’s not the world we live in—and acting as if it is strikes me as risky.
Neither is poverty alleviation or veganism or anything else in practice.
Again, strong disagree—many things are not politicized and can be answered more directly. One of the main strengths of EA, in my view, is that it isn’t just another culture war position (yet?); consider Robin Hanson’s points on “pulling the rope sideways”.
Just posting to acknowledge that I’ve seen this—my full reply will be long enough that I’m probably going to make it a separate post.
I don’t agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.
Will there be an update to this post with respect to which projects actually get funded following these recommendations? One aspect that I’m not clear on is to what extent CEA will “automatically” follow these recommendations and to what extent there will be significant further review.
I think this comment, while quite rude, does get at something valuable. There’s an argument that goes “hmm, the outside view says this is absurd, we should be really sure of our inside view before proceeding” and I think that’s sometimes a bit of a neglected perspective in rationalist/EA spaces.
I happen to know that the inside view on HPMoR bringing people into the community is very strong, and that the inside view on Eli Tyre doing good and important work is also very strong. I’m less familiar with the details behind the other grants that anoneaagain highlighted, but I do think that being aware and recognizing the… unorthodoxy of these proposals is important, even if the inside view does end up overriding that.
The most important thing in life is to be free to do things. There are only two ways to insure that freedom — you can be rich or you can reduce your needs to zero. I will never be rich, so I have chosen to crank down my desires. The bureaucracy cannot take anything from me, because there is nothing to take.
Colonel John Boyd
[Question] What would EAs most want to see from a “rationality” or similar project in the EA space?
Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:
“The term ‘longtermism’ is misleading because in practice ‘longtermism’ means ‘concern over short AI timelines’, and in fact many ‘longtermists’ are concerned with events on a much shorter time scale than the rest of EA.”
I thought that was a surprising and interesting argument, though I don’t recall who initially made it. Does anyone remember?
[Question] Research into people’s willingness to change cause *areas*?
Thanks! Good to know.
I (very anecdotally) think there are lots of people who are interested in donating to quite specific cause areas, e.g. “my father died of cancer so I donate to cancer charities” or “I want to donate to help homelessness in my area”—haven’t studied that in depth though.
Thanks, I’m impressed by this reply and your willingness to go out there and do a survey. I will have more substantive feedback later as I want to consult with someone else before making a further statement—ping me if I haven’t replied by Friday.
It seems clear to me that in some cases positive systemic change is possible, even with relatively limited teams working on them.
However, systemic changes can also lead to substantial problems. Even some of the examples you gave here are far from objectively good—I note with some worry that historical attempts to place the means of production into the hands of the people have led to some of the greatest disasters of human history.
The “downside risks” of these sorts of approaches seem very high. That isn’t to say that nobody should do them, but I would be quite cautious about supporting unusual new ventures in these areas. To some extent your category 4 (ideologically safe) seems to screen off this objection, but the fact that you put public ownership of the means of production down afterwards makes me worry about how effectively that categorization will be applied.