Co-founder of Arb, an AI / forecasting / etc consultancy. Doing a technical AI PhD.
Conflicts of interest: ESPR, EPSRC, Emergent Ventures, OpenPhil, Infrastructure Fund, Alvea.
Great work, and I was just about to ask for the code.
I think including personal fit (with, say, a 5 or 6 OOM range) would flip the sign on this, though. It would also be good to show the intervals.
Seems like it would contribute to the profitability and feasibility of factory farming.
Yep ta, even says so on page 1.
Ord’s undergrad thesis is a tight argument in favour of enlightened argmax: search over decision procedures and motivations, and pick the best of those, rather than over acts or rules.
Go run it, I’d read it.
3. Tarsney suggests one other plausible reason moral uncertainty is relevant: nonunique solutions leaving some choices undetermined. But I’m not clear on this.
Excellent comment, thanks!
Yes, wasn’t trying to endorse all of those (and should have put numbers on their dodginess).
1. Interesting. I disagree for now but would love to see what persuaded you of this. Fully agree that softmax implies long shots.
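(For concreteness, here’s a minimal sketch of “softmax implies long shots”, with made-up cause values: unlike argmax, softmax keeps a nonzero share of effort on every option, shrinking as the temperature falls.)

```python
import math

def softmax(values, temperature=1.0):
    """Allocate effort in proportion to exp(value / temperature)."""
    exps = [math.exp(v / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical cause values on some common scale (purely illustrative).
evs = [10.0, 8.0, 2.0]
shares = softmax(evs, temperature=2.0)
# argmax puts 100% on the first cause; softmax leaves ~1% on the
# long shot here, and more at higher temperature.
```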
2. Yes, new causes and also new interventions within causes.
3. Yes, I really should have expanded this, but was lazy / didn’t want to disturb the pleasant brevity. It’s only “moral” uncertainty about how much risk aversion you should have that changes anything. (à la this.)
6. I’m using (possibly misusing) WD to mean something more specific, like “given cause A, what is best to do? What about under cause B? What about under discount x? ...”
7. Now I’m confused about whether 3=7.
8. Yeah it’s effective in the short run, but I would guess that the loss of integrity hurts us in the long run.
Will edit in your suggestions, thanks again.
Not in this post, we just link to this one. By “principled” I just mean “not arbitrary, has a nice short derivation starting with something fundamental (like the entropy)”.
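(For reference, the short derivation I mean — standard entropy-regularised choice, nothing specific to the post: maximise expected value plus a temperature-weighted entropy bonus over the simplex, and softmax falls out.)

```latex
\max_{p \in \Delta} \; \sum_i p_i v_i + T\,H(p),
\qquad H(p) = -\sum_i p_i \log p_i .

% Stationarity of the Lagrangian (multiplier \lambda for \sum_i p_i = 1):
% v_i - T(\log p_i + 1) + \lambda = 0
\Rightarrow \quad p_i = \frac{e^{v_i / T}}{\sum_j e^{v_j / T}} .
```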
Yeah, the Gittins stuff would be pitched at a similar level of handwaving.
Looking back two weeks later, this post really needs
to discuss the cost of prioritisation (we use softmax because we are boundedly rational) and the Price of Anarchy;
to have separate sections for individual prioritisation and collective prioritisation;
to at least mention bandits and the Gittins index, which is optimal, whereas softmax is highly principled suboptimal cope.
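(For readers who haven’t met it: the Gittins index of a bandit arm in state $s$ is the best achievable ratio of discounted reward to discounted time over stopping rules, and always pulling the highest-index arm is optimal for the discounted multi-armed bandit.)

```latex
\nu(s) \;=\; \sup_{\tau > 0}
\frac{\mathbb{E}\!\left[\sum_{t=0}^{\tau - 1} \beta^{t} r_t \,\middle|\, s_0 = s\right]}
     {\mathbb{E}\!\left[\sum_{t=0}^{\tau - 1} \beta^{t} \,\middle|\, s_0 = s\right]}
```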
Yeah, could be terrible. As such risks go it’s relatively* well-covered by the military-astronomical complex, though events continue to reveal the inadequacy of our monitoring. It’s on our Other list.

* This is not saying much: on the absolute scale of “known about” + “theoretical and technological preparedness” + “predictability” + “degree of financial and political support”, it’s still firmly mediocre.
We will activate for things besides x-risks. Beyond the direct help we render, this lets us learn about parts of the world that are difficult to learn about at any other time.
Yeah, we have a whole top-level stream on things besides AI, bio, nukes. I am a drama queen so I want to call it “Anomalies” but it will end up being called “Other”.
We’re not really adding to the existing group chat / Samotsvety / Swift Centre infra at present, because we’re still spinning up.
My impression is that Great Power stuff is unusually hard to influence from the outside with mere research and data. We could maybe help with individual behaviour recommendations (turning the smooth forecast distributions of others into expected values and go / no-go advice).
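A toy sketch of what I mean by that last parenthesis (all names and numbers hypothetical): collapse a forecast distribution into an expected value, then threshold it into a go / no-go call.

```python
import random

def go_no_go(prob_samples, benefit_if_event, cost_of_acting):
    """Turn samples of an event probability into an EV and a go/no-go call.
    Hypothetical decision rule: act iff expected benefit exceeds the cost."""
    mean_p = sum(prob_samples) / len(prob_samples)
    ev = mean_p * benefit_if_event - cost_of_acting
    return ev, ev > 0

random.seed(0)
# Stand-in for aggregated forecaster draws of "event occurs this year".
samples = [random.betavariate(2, 18) for _ in range(1000)]  # mean ~0.10
ev, go = go_no_go(samples, benefit_if_event=100.0, cost_of_acting=5.0)
```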
Got you! Pardon the delay; I’m leaving confirmations to the director we eventually hire.
Been trying! The editor doesn’t load for some reason.
Yeah we’re not planning on doing humanitarian work or moving much physical plant around. Highly recommend ALLFED, SHELTER, and help.ngo for that though.
Yeah, we’re still looking for someone on the geopolitics side. Also, Covid was a biorisk.
Quick update since April:
We got seed funding.
We formed a board, including some really impressive people in bio risk and AI.
We’re pretty far through hiring a director and other key crew, after 30 interviews and trials.
We have 50 candidate reservists, as well as some horizon-scanners with great track records. (If you’re interested in joining in, sign up here.)
Bluedot and ALLFED have kindly offered to share their monitoring infrastructure too.
See the comments in the job thread for more details about our current structure.
Major thanks to Isaak Freeman, whose Future Forum event netted us half of our key introductions and let us reach outside EA.
Oh full disclosure I guess: I am a well-known shill for argmin.