This assumes that the only benefit of public perception is that it brings in more people. In many instances, better perceptions could also make various interventions work better (such as when an intervention depends on societal buy-in).
Responding just to the comment about StrongMinds – I think mental health is an incredibly complicated issue, and mental illness is very multi-factored, so even if some people in sub-Saharan Africa are depressed due to bad governance, others may be depressed due to reasons that mental health services would alleviate. In any event, the fact that depression in sub-Saharan Africa is not even remotely close to 100% means the statement “I’d also be quite depressed if my government was as dreadful as most governments are in sub-Saharan Africa” is basically a non sequitur.
Yeah, my point is that it’s (basically) disjunctive.
I notice that some of these forecasts imply different paths to TAI than others (most obviously, WBE assumes a different path than the others). In that case, does taking a linear average make sense? Suppose you think WBE is likely moderately far away, while the other paths are more uncertain and could be very near or very far; in that case, a constant weight on the WBE probability wouldn’t match your actual views.
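Here’s a rough sketch of the kind of fixed-weight linear mixture I have in mind (all distributions, weights, and dates below are made up for illustration, not anyone’s actual forecasts):

```python
# Toy sketch of a fixed-weight linear average of per-path TAI timelines.
# All numbers here are made up for illustration.
import numpy as np
from scipy import stats

years = np.arange(2025, 2101)

# Hypothetical WBE path: moderately far away, fairly concentrated.
wbe_cdf = stats.norm(loc=2070, scale=8).cdf(years)

# Hypothetical "other paths": much more uncertain (could be very near or very far).
other_cdf = stats.norm(loc=2060, scale=30).cdf(years)

# Linear average with a constant weight on WBE at every horizon.
w_wbe = 0.3
mixture_cdf = w_wbe * wbe_cdf + (1 - w_wbe) * other_cdf

# Early probability mass comes almost entirely from the "other paths" component,
# so the effective share of WBE in the mixture varies a lot by horizon even
# though the headline weight is fixed at 30%.
for y in (2035, 2050, 2070, 2090):
    i = int(np.searchsorted(years, y))
    print(y, round(float(wbe_cdf[i]), 3), round(float(other_cdf[i]), 3),
          round(float(mixture_cdf[i]), 3))
```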
Can you describe a little bit how the novel promotes EA? For instance, is it that the morals of the novel are EA morals, and if so, what morals are they?
Should the forum limit the number of strong (up/down) votes per person (say, per week)? Right now, people can use as many strong votes as they want, which somewhat decreases the signal they’re intended to send (and also creates a bias in favor of those who “strategically” choose to overuse strong votes). Not sure if this is influencing the discourse at all but seems plausible.
Agree.
Sometimes it is, but I think more often the road to hell is paved with bad and/or selfish intentions. For instance, the Nazis weren’t sober-minded radical altruists who begrudgingly came to the conclusion that they had to do what they did for the greater good – they were instead following an ideology which was racist and evil, through and through. I’m also pretty sure that most people who participated in the slave trade weren’t doing so with “good intentions”, but instead simply to make a profit.
OTOH, I think the claim does apply, more-or-less, specifically to socialist movements, and I think it’s important that other consequentialist ideologies maintain significantly better epistemic hygiene and course-correction mechanisms than socialist movements typically have. But I’d also point out that global movements for democracy, slavery abolition, and anti-colonialism were likewise driven by good intentions, so “doing things with good intentions” has a track record of producing some really good outcomes as well, and I think on net it’s a lot better than the alternatives.
My intuition is that having more people does mean more potential fires could be started (since each person could start a fire), but it also means each fire is less damaging in expectation as it’s diluted over more people (so to speak). For instance, the environmentalist movement has at times engaged in ecoterrorism, which is (I think pretty clearly) much worse than anything anyone in EA has ever done, but the environmentalist movement as a whole has generally weathered those instances pretty well as most people (reasonably imho) recognize that ecoterrorists are a fringe within environmentalism. I think one major reason for this is that the environmentalist movement is quite large, and this acts as a bulwark against the entire movement being tarred by the actions of a few.
Agree with what you’re saying. This part of the review in particular stood out to me:
“In pure Bayesian reasoning, if one has several uncertain measurements of the same value, each represented by a probability distribution...”
Cotra isn’t presenting the different anchors as all-things-considered estimates, but instead more like different hypotheses. Consider the evolutionary anchor: Cotra could have divided the compute requirements in this anchor by a scaling factor for how much more efficient she believes human-directed SGD (or similar) will be compared to evolution at finding intelligence, yielding an all-things-considered estimate of how much compute will be necessary for TAI. Instead, she leaves the value as is and treats it as a soft upper bound.
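To make that move concrete, here’s a toy version of it (the numbers are placeholders I’m making up, not Cotra’s actual figures):

```python
# Toy illustration with placeholder numbers (not Cotra's actual figures).
# Dividing the evolutionary anchor by an assumed efficiency factor would turn
# the soft upper bound into an all-things-considered estimate.

evolution_anchor_flop = 1e41        # placeholder compute requirement for the evolutionary anchor
sgd_vs_evolution_efficiency = 1e6   # placeholder: assume human-directed SGD is ~1e6x more efficient

adjusted_estimate_flop = evolution_anchor_flop / sgd_vs_evolution_efficiency
print(f"Adjusted estimate: {adjusted_estimate_flop:.1e} FLOP")  # 1.0e+35 under these made-up assumptions

# Cotra instead leaves the anchor undivided and treats the full (placeholder)
# 1e41 FLOP as a soft upper bound rather than an estimate.
```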
If this is worth doing, wouldn’t it be best for a single large donor to buy, say, $1M+ of tickets, instead of lots of random individuals buying small numbers of tickets?
“Pfizer currently intends to sell the vaccine in the US for around $110-130 per dose”
Just to check – this is the sticker price, where the cost is (mostly) covered by insurers (and possibly bargained down from this), right? Not the out-of-pocket cost to most US consumers? That would be another reason to expect the UK’s cost to be lower than this figure.
Noting that I like that the prizes you guys are offering are large enough that they might lead to serious work from those outside the community. My sense is that the potential to convert EA capital into productive labor from non-EAs is one of the main draws of prizes. Previous attempts at testing prizes here have been somewhat ambiguous – they haven’t led to much work from outside the community, but the prize amounts were also generally small enough that they probably wouldn’t be expected to do so anyway.
“it seems to me that all AIs (and other technologies) already don’t give us exactly what we want but we don’t call that outer misaligned because they are not “agentic” (enough?)”
Just responding to this part – my sense is that most of the reason current systems don’t do what we want comes down to capabilities failures, not alignment failures. That is, it’s less that the system has been given the wrong goal or is suffering from goal misgeneralization, etc., and more that it’s simply not competent enough.
I think this would, in general, be a really bad idea. Kowtowing to nuclear threats would create a huge incentive for various countries to acquire nukes (both to make nuclear threats and to defend against them), and thus would increase proliferation (and nuclear risk) considerably. Of course, if you can figure out a way to get Russia to de-escalate, that would be great, though I doubt anyone here has any ability to influence that. Barring that, my sense is the best strategy for the US right now is to continue providing Ukraine with substantial assistance without engaging Russia directly.
It’s also much more pessimistic than prediction markets – for instance, Metaculus puts the odds of a nuclear detonation in Ukraine by 2023 at 7%, and of a Russian nuclear detonation in the US this year at ≤ 1%.
Meta-point – I think it would be better if this was called something other than “baby longtermism”, as I found this confusing. Specifically, I initially thought you were going to be writing a post about a baby (i.e., “dumbed-down”) version of longtermism.
“That said, when I started the 10% thing, I did so under the impression that it was the sacrifice I needed to make to gain acceptance in EA”
If this sentiment is at all widespread among people on the periphery of EA, or among those who might become EAs at some point, then I find that VERY concerning. We’d lose a lot of great people if everyone assumed they couldn’t join without making that kind of sacrifice.
I feel like the power differential between community builders and new members decreases over time as the new member “graduates” from being a new member and becomes a longer-term member, so perhaps the policy could apply for the first few months of a member’s involvement?