Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
Ah. If global IFR is worse than rich-countries’ IFR, that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.
Was the prediction for infection fatality rate (IFR) or case fatality rate (CFR)? And high-income or all countries? Globally, the CFR is ~2% (3.7M/173M), but the IFR is <0.66%, because <1/3 of infections were detected as cases.
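A quick sanity check on those figures (a sketch only; the 1/3 detection rate is the rough bound from the comment, not a measured value):

```python
# Back-of-the-envelope check of the CFR/IFR relationship above.
# CFR = deaths / confirmed cases; IFR = deaths / all infections.
# If fewer than 1/3 of infections are detected, infections > 3x cases,
# so IFR < CFR * (1/3).

deaths = 3.7e6        # global deaths (figure from the comment)
cases = 173e6         # confirmed cases (figure from the comment)
detection_rate = 1/3  # assumed upper bound on share of infections detected

cfr = deaths / cases
ifr_upper_bound = cfr * detection_rate

print(f"CFR ≈ {cfr:.1%}")              # ≈ 2.1%
print(f"IFR < {ifr_upper_bound:.2%}")  # < 0.71%
```

(Starting from the rounded 2% CFR instead of 2.1% gives the quoted bound of roughly 0.66%.)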
I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so. Whereas GPAI is a diplomatic apparatus, for Trudeau and Macron to influence the conversation surrounding AI.
The upshot seems to be that Joe, 80k, the AI researcher survey (2008), and Holden-2016 are all at about a 3% estimate of AI risk, whereas AI safety researchers now are at about 30%. The latter is a bit lower (or at least differently distributed) than Rob expected, and seems higher than among Joe’s advisors.
The divergence is big, but pretty explainable, because it concords with the direction that the apparent biases point in. For the 3% camp, the credibility of one’s name, brand, or field benefits from making a lowball estimate. Whereas the 30% camp is self-selected for severe concern. And risk perception all-round has increased a bit in the last 5–15 years due to Deep Learning.
I think things like “collective rationality”, “collective epistemics”, or “quality of public discourse” would be reasonable though.
A related idea would be to buy copies of e.g. the precipice for university libraries...
Yeah, the ultra-pedantic+playful parenthetical is a very academic thing. “Psychology of effective altruism” seems to cover giving/x-risk/speciesism/career choice—i.e. it covers everything we want.
Nice, so we should buy the rights to all the other EA books...
My personal non-data-driven impression is that things are steady overall: contracting in SF, steady in NYC and Oxford, growing in London and DC, with “longtermism” growing. Looking forward to seeing the data!
Yeah, I’d revise my view to: moderation seems too stringent on the particular axis of politeness/rudeness. I don’t really have any considered view on other axes.
Thanks, this detailed response reassures me that the moderation is not way too interventionist, and it also sounds positive to me that the moderation is becoming a bit more public, and less frequent.
If and when they break, we will replace them with corresponding links to the PDFs.
Ah, sounds great!
“the links could be of the form wiki.effectivealtruism.org/article”
Yeah, this would be more elegant!
I don’t quite see it. For example, where are the pdfs for “Christiano, Paul (2014) Certificates of impact, Rational Altruist, November 15.” here? Ideally, a link to an archived version should be continuously available, or at least appear when the link goes down.
Totally separate issue: I wonder if the wiki homepage should have the address wiki.effectivealtruism.org?
Have you thought of how to address link rot? I could imagine it making sense to automatically store one archived version of each external link using perma.cc or something!
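One minimal way this could work (a sketch, not a proposal for the actual implementation): scan each article for external links and submit them to the Internet Archive’s on-demand “Save Page Now” endpoint (`https://web.archive.org/save/<url>`); perma.cc offers a comparable service. The regex, domain filter, and example URL below are all illustrative assumptions:

```python
# Sketch: collect external links from an article's text and build
# Wayback Machine "Save Page Now" URLs for each, so one archived
# snapshot exists before the original link rots.
import re

LINK_RE = re.compile(r'https?://[^\s)\]>"]+')

def external_links(text: str, own_domain: str = "effectivealtruism.org") -> list[str]:
    """Return links pointing outside the wiki's own domain."""
    return [url for url in LINK_RE.findall(text) if own_domain not in url]

def save_page_now_url(url: str) -> str:
    """URL that triggers an on-demand Wayback Machine snapshot of `url`."""
    return f"https://web.archive.org/save/{url}"

# Hypothetical article text for illustration:
article = "See the original post at https://example.com/certificates-of-impact"
for url in external_links(article):
    print(save_page_now_url(url))
    # In a real pipeline, an HTTP GET to this URL would request the snapshot.
```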
Has anyone else noticed that the EA Forum moderation is quite intense of late?
Back in 2014, I’d proposed quite limited criteria for moderation: “spam, abuse, guilt-trips, socially or ecologically destructive advocacy”. I’d said then: “Largely, I expect to be able to stay out of users’ way!” But my impression is that at some point after 2017 the moderators took to advising and sanctioning users based on their tone: for example, here (Halstead being “warned” for unsubstantiated true comments), and “rudeness” and “Other behavior that interferes with good discourse” being listed as criteria for content deletion. Generally I get the impression that we need more, not fewer, people directly speaking harsh truths, and that it’s rarely useful for a moderator to insert themselves into such conversations, given that we already have other remedies: judging a user’s reputation, counterarguing, or voting up and down. Overall, I’d go as far as to conjecture that if moderators did 50% less (continuing to delete spam, but standing down in the less clear-cut cases) the forum would be better off.
Do we have any statistics on the number of moderator actions per year?
Has anyone had positive or negative experiences with being moderated?
Does anyone else have any thoughts on whether I’m right or wrong about this?
Substantiated true claims are the best, but sometimes merely stating important true facts can also be a public service...
It’s a separate concept!
“Scalably using labour”? Since it’s about getting people to do things, not about recruiting them.
So you’ve shown that Masrani has made a bunch of faulty arguments. But do you think his argument fails overall? I.e., can you refute its central point?