I think there’s something epistemically off about allowing users to filter only bad AI news. The first tag doesn’t have that problem, but I’d still worry about missing important info. I prefer the approach of just requesting users be vigilant against the phenomenon I described.
We Should Be Warier of Overconfidence
A one-sentence formulation of the AI X-Risk argument I try to make
Looking for Canadian summer co-op position in AI Governance
Prior X%—<1%: A quantified ‘epistemic status’ of your prediction.
I don’t object to folks vocalizing their outrage. I’d be skeptical of ‘outrage-only’ posts, but I think people expressing their outrage while describing what they are doing and what they wish the reader to do would be in line with what I’m requesting here.
Your post more than meets my requested criteria, thank you!
I agree with this. Where there is a tradeoff, err on the side of truthfulness.
A request to keep pessimistic AI posts actionable.
This seems aimed at regulators; I’d be more interested in a version for orgs like the CIA or NSA.
Both those orgs seem to have a lot more flexibility than regulators to do more or less what they want when national security is an issue, and AI could plausibly become just that kind of issue.
So ‘policy ideas for the NSA/CIA’ could be both more ambitious and more actionable.
I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, as that’s my personal experience of AI researchers who don’t care about alignment. But if my experiences don’t generalize, I agree that more explanation is necessary.
‘AI Emergency Eject Criteria’ Survey
I definitely think “that’s just one final safety to rely on” applies to this suggestion. I hope we do a lot more than this!
An ‘AGI Emergency Eject Criteria’ consensus could be really useful.
The idea here is to prepare for an emergency stop if we are lucky enough to notice things going spectacularly wrong before it’s too late. I don’t think there’s any hamstringing of well-intentioned people implied by that!
We might get lucky with AGI warning shots. Let’s be ready!
I agree that private docs and group chats are totally fine and normal. The bit that concerns me is ‘discuss how to position themselves and how to hide their more controversial views or make them seem palatable’, which seems a problematic thing for leaders to be doing in private. (Just to reiterate, I have zero evidence for or against this happening, though.)
Thanks Arden! I should probably have said it explicitly in the post, but I have benefited a huge amount from the work you folks do, and although I obviously have criticisms, I think 80K’s impact is highly net-positive.
I think you’re correct that they aren’t being dishonest, but I disagree that the discrepancy is because ‘they’re answering two different questions’.
If 80K’s opinion is that a Philosophy PhD is probably a bad idea for most people, I would still expect that to show up in the Global Priorities information. For example, I don’t see any reason they couldn’t write something like this:
In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy … but the academic job market for philosophy is extremely challenging, and the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons, we strongly recommend approaching GPR via economics instead of philosophy unless you are a particularly gifted philosopher and comfortable with a high risk of failure...
Maybe I’m nitpicking, as you say it is mentioned on the ‘philosophy academia’ page. I was trying to draw attention to a general discomfort I have with the site: it seems to underemphasise the risk of failure. But perhaps I need to find a better example!
“EA-aligned” was probably a poor choice of words, maybe “EA-influenced” would be better. I agree that e.g. the EA forum’s attitude to OpenAI is strongly negative.