Taymon
I think it would be good for CEA to provide a clear explanation, that it (not LW) stands behind as an organization, of exactly what real value it views as being on the line here, and why it thinks it was worthwhile to risk that value.
Correction: The annual Petrov Day celebration in Boston has never used the button.
Since you’re (among other things) listing reference classes that people might put claims about transformative AI into, I’ll note that millenarianism is a common one among skeptics. I.e., “lots of [mostly religious] groups throughout history have claimed that society is soon going to be swept away by an as-yet-unseen force and replaced with something new, and they were all deluded, so you are probably deluded too”.
Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).
There are other issues with the current name, like the way it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I’m not sure that that’s really what’s driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don’t think any of the candidates have produced widespread reactions of “oh yeah, I wish we’d thought of that in 2012”, even among people who see problems with the current name. So coming up with a name that’s better than “effective altruism”, by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don’t think that fully explains what’s going on here.) When people do suggest different names, it tends to be because they think some or all of the community is emphasizing the wrong things and want to pivot towards the right ones.
“Global priorities community” definitely sounds incompatible with a grassroots direction; if I said that I was starting a one-person global priorities project in my basement, this would sound ridiculously grandiose and like I’d been severely Dunning-Krugered, whereas with an EA project this is fine.
For what it’s worth, I’d prefer a name that’s clearly compatible with both the institutional and the grassroots side, because it seems clear to me that both of these are in scope for the EA mandate and it’s not acceptable to trade off either of them. The current name sounds a little more grassroots than I’d like, but again, I don’t have any better ideas.
At one point I pitched Impartialist Maximizing Rationalist-Empiricist-Epistemological Welfarist-Axiological Ideology, or IMREEWAI for short, but for some strange reason nobody liked that idea :-P
Do you think the Biden campaign had room for more funding, i.e., that your donation made a Biden victory more likely on the margin (by enough to be worth it)? I am pretty skeptical of this; I suspect they already had more money than they were able to spend effectively. (I don’t have a source for this other than Maciej Cegłowski, who has relevant experience but whom I don’t agree with on everything; on the other hand, I can’t recall ever hearing anyone make the case that U.S. presidential general-election campaigns do have room for more funding, and I’d be pretty surprised if there were such a case and it was strong.)
“Neglectedness” is a good heuristic for cause areas, but I think that when donating to specific orgs it can wind up just confusing things, and room for more funding (RFMF) is the better thing to ask about.
I’m less certain about the Georgia campaign but still skeptical there, partly because it’s a really high-profile race (since it determines control of the Senate and isn’t competing for airtime with any other races) and partly because I think substantive electoral reform is likely to remain intractable even if the Democrats win. But I’d be interested to see a more thorough analysis of this.
Alcor claims on their brochure that membership dues “may be” tax-deductible. It’s not clear to me how they concluded that. Somebody should probably ask them.
The second point there seems like the one that’s actually relevant. It strikes me as unlikely that doing this with blockchain is less work than with conventional payment systems, even if the developers have done blockchain things before, and conventional payment systems are even faster and more fungible with other assets than Ethereum. I’m reading the second point as suggesting that you’re hoping funding for this will come in substantial part from people who are blockchain enthusiasts rather than EAs, and who therefore wouldn’t be interested if it used conventional payment infrastructure?
(I agree that the “relics” idea is, at best, solving a different problem.)
The post seems relatively optimistic. I’m worried that this may be motivated reasoning, and/or political reasoning (e.g., that people won’t listen to anyone who isn’t telling them that we can solve the crisis without doing anything too costly). Mind you, I’m not any kind of expert; I’m just suspicious-by-default, given that most other analysis I’ve seen seems less optimistic (note that there are probably all kinds of horrible selection biases in what I’m reading, and I have no idea what they are). Also, the author isn’t an expert; they seem to have consulted experts for the post, but this still reduces my confidence in its conclusions, because those experts could have been selected for agreeing with a conclusion that the author came up with for non-expert-informed reasons.
I’m more likely to do this if there’s a specific set of data I’m supposed to collect, so that I can write it down before I forget.
Yeah, I should have known I’d get called out for not citing any sources. I’m honestly not sure I’d particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I’ve seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that’s most of why I believe this; generalizability isn’t a concern here since we’re talking about basically the same context. Ironically, though, I can’t remember which posts. I’ll keep looking for examples.
“Breakthroughs” feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren’t “breakthroughs”. They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn’t previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it’s possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it’s the right kind of topic. Though I imagine this wouldn’t be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.
The Huemer post kind of just felt like an argument for radical skepticism outside of one’s own domain of narrow expertise, with everything that implies.
It seems clear to me that epistemic-status disclaimers don’t work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn’t seem to be much correlation between how much, say, Scott Alexander (whom I’m using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.
This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.
I would like to see efforts at calibration training for people running EA projects. This would be useful for helping to push those projects in a more strategic direction, by having people lay out predictions regarding outcomes at the outset, kind of like what Open Phil does with respect to their grants.
Can you give an example of a time when you believe that the EA community got the wrong answer to an important question as a result of not following your advice here, and how we could have gotten the right answer by following it?
Links aren’t working.
Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?
As far as I’m aware, the first person to explicitly address the question “why are literary utopias consistently places you wouldn’t actually want to live?” was George Orwell, in “Why Socialists Don’t Believe in Fun”. I consider this important prior art for anyone looking at this question.
EAsphere readers may also be familiar with the Fun Theory Sequence, on which Orwell’s essay was an important influence.
On a related note, I get the impression that utopianism was not as outright intellectually discredited and unfashionable when Orwell wrote as it is today (e.g., the above essay predates Walden Two), even though most of the problems given in this piece were clearly already present and visible at that time. That seems like it does have something to do with the events of the 20th century, and their effects on the intellectual climate.