3. Tarsney suggests one other plausible reason moral uncertainty is relevant: nonunique solutions leaving some choices undetermined. But I’m not clear on this.
Excellent comment, thanks!
Yes, wasn’t trying to endorse all of those (and should have put numbers on their dodginess).
1. Interesting. I disagree for now but would love to see what persuaded you of this. Fully agree that softmax implies long shots (see the sketch after this list).
2. Yes, new causes and also new interventions within causes.
3. Yes, I really should have expanded this, but was lazy / didn’t want to disturb the pleasant brevity. It’s only “moral” uncertainty about how much risk aversion you should have that changes anything. (à la this.)
4. Agree.
5. Agree.
6. I’m using (possibly misusing) WD to mean something more specific, like “given cause A, what is best to do? What about under cause B? What about under discount x?...”
7. Now I’m confused about whether 3=7.
8. Yeah it’s effective in the short run, but I would guess that the loss of integrity hurts us in the long run.
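On (1), a minimal sketch of “softmax implies long shots” — the expected values and temperature below are made up for illustration, not taken from the post:

```python
import numpy as np

# Toy expected values for three hypothetical causes (illustrative numbers only).
expected_value = np.array([10.0, 3.0, 0.5])  # the 0.5 entry is the "long shot"
temperature = 2.0

def softmax(v, temp):
    """Allocate in proportion to exp(value / temperature)."""
    z = np.exp((v - v.max()) / temp)  # subtract max for numerical stability
    return z / z.sum()

def argmax_allocation(v):
    """Put everything on the single best-looking option."""
    out = np.zeros_like(v)
    out[np.argmax(v)] = 1.0
    return out

print(softmax(expected_value, temperature))  # every cause, including the long shot, gets a nonzero share
print(argmax_allocation(expected_value))     # the long shot gets exactly zero
```

Argmax concentrates everything on the best-looking option; softmax keeps a small, temperature-controlled stake in every option, including the apparent long shot.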
Will edit in your suggestions, thanks again.
Not in this post; we just link to this one. By “principled” I just mean “not arbitrary, has a nice short derivation starting with something fundamental (like the entropy)”.
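For concreteness, the standard maximum-entropy derivation I have in mind (a sketch, not quoted from the linked post): choose an allocation $p$ over options with values $u_i$ by maximising value plus an entropy bonus at temperature $\tau$,

$$\max_{p\in\Delta}\;\sum_i p_i u_i + \tau H(p),\qquad H(p)=-\sum_i p_i\log p_i,$$

whose unique solution is the softmax

$$p_i=\frac{e^{u_i/\tau}}{\sum_j e^{u_j/\tau}}.$$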
Yeah, the Gittins stuff would be pitched at a similar level of handwaving.
Looking back two weeks later, this post really needs:
to discuss the cost of prioritisation (we use softmax because we are boundedly rational) and the Price of Anarchy;
to have separate sections for individual prioritisation and collective prioritisation;
to at least mention bandits and the Gittins index, which is optimal where softmax is highly principled suboptimal cope (a toy comparison follows this list).
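A toy comparison of the two policies on a Bernoulli bandit. Exact Gittins indices are expensive to compute, so a UCB1 index rule stands in for the index-policy side here (a different, non-Gittins index policy); the arm means, horizon, and temperature are all made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.65])  # hypothetical Bernoulli arms, illustrative only
horizon = 2000

def softmax_policy(counts, sums, t, temp=0.1):
    """Sample an arm in proportion to exp(empirical mean / temperature)."""
    means = sums / np.maximum(counts, 1)
    z = np.exp((means - means.max()) / temp)
    return int(rng.choice(len(means), p=z / z.sum()))

def index_policy(counts, sums, t):
    """UCB1: pick the arm with the highest optimism-adjusted mean (stand-in for Gittins)."""
    if (counts == 0).any():
        return int(np.argmin(counts))  # try every arm once first
    means = sums / counts
    return int(np.argmax(means + np.sqrt(2 * np.log(t + 1) / counts)))

def run(policy):
    counts = np.zeros(len(true_means))
    sums = np.zeros(len(true_means))
    total = 0.0
    for t in range(horizon):
        arm = policy(counts, sums, t)
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

print("softmax reward:", run(softmax_policy))
print("index-policy reward:", run(index_policy))
```

With a fixed temperature, softmax never stops sampling the worse arms, whereas the index rule’s exploration bonus shrinks as counts grow — hence “suboptimal cope”.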
Yeah, could be terrible. As such risks go, it’s relatively* well-covered by the military-astronomical complex, though events continue to reveal the inadequacy of our monitoring. It’s on our Other list.
* This is not saying much: on the absolute scale of “known about” + “theoretical and technological preparedness” + “predictability” + “degree of financial and political support” it’s still firmly mediocre.
We will activate for things besides x-risks. Besides the direct help we render, this lets us learn about parts of the world that are hard to learn about at any other time.
Yeah, we have a whole top-level stream on things besides AI, bio, nukes. I am a drama queen so I want to call it “Anomalies” but it will end up being called “Other”.
We’re not really adding to the existing group chat / Samotsvety / Swift Centre infra at present, because we’re still spinning up.
My impression is that Great Power stuff is unusually hard to influence from the outside with mere research and data. We could maybe help with individual behaviour recommendations (turning the smooth forecast distributions of others into expected values and go / no-go advice).
Got you! Pardon the delay, am leaving confirmations to the director we eventually hire.
Been trying! The editor doesn’t load for some reason.
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
Yeah, we’re still looking for someone on the geopolitics side. Also, Covid was a biorisk.
Quick update since April:
We got seed funding.
We formed a board, including some really impressive people in bio risk and AI.
We’re pretty far through hiring a director and other key crew, after 30 interviews and trials.
We have 50 candidate reservists, as well as some horizon-scanners with great track records. (If you’re interested in joining in, sign up here.)
Bluedot and ALLFED have kindly offered to share their monitoring infrastructure too.
See the comments in the job thread for more details about our current structure.
Major thanks to Isaak Freeman, whose Future Forum event netted us half of our key introductions and let us reach outside EA.
Oh full disclosure I guess: I am a well-known shill for argmin.
Agree that this could be misused, just as the sensible 80k framework is misused, or as anything can be.
Some skin in the game then: Jan and I both spend most of our time on AI.
Mostly true, but a string of posts about the risks attests to there being some unbounded optimisers. (Or at least that we are at risk of having some.)
The above makes EA’s huge investment in research seem like a better bet: “do more research” is a sort of exploration. Arguably we don’t do enough active exploration (learning by doing), but we don’t want less research.
Your read makes sense! I meant the lumping together of causes, but there was also a good number of related points about EA being too weird and not reading the room.
got none
Lovely satire of international development.
(h/t Eva Vivalt)
Go run it, I’d read it.