Oh full disclosure I guess: I am a well-known shill for argmin.
technicalities
Agree that this could be misused, just as the sensible 80k framework is misused, or as anything can be.
Some skin in the game, then: Jan and I both spend most of our time on AI.
Mostly true, but a string of posts about the risks attests to there being some unbounded optimisers. (Or at least that we are at risk of having some.)
The above makes EA’s huge investment in research seem like a better bet: “do more research” is a sort of exploration. Arguably we don’t do enough active exploration (learning by doing), but we don’t want less research.
We can do better than argmax
Your read makes sense! I meant the lumping together of causes, but there was also a good amount of related things about EA being too weird and not reading the room.
got none
Lovely satire of international development.
(h/t Eva Vivalt)
Good post!
I doubt I have anything original to say. There is already cause-specific non-EA outreach. (Not least a little thing called LessWrong!) It's great, and there should be more. Xrisk work is at least half altruistic for a lot of people, at least on the conscious level. We have managed the high-pay tension alright so far (not without cost). I don't see an issue with some EA work happening sans the EA name; there are plenty of high-impact roles where it'd be unwise to broadcast any such social-movement allegiance. The name is indeed not ideal, but I've never seen a less bad one, and the switching costs seem way higher than the mild arrogance and very mild philosophical misconnotations of the current one.
Overall I see schism as solving (at really high expected cost) some social problems we can solve with talking and trade.
I struggled a lot with it until I learned how to cook in that particular style (roughly: way more oil, MSG, nutritional yeast, two proteins in every recipe). Good luck!
Bostrom selects his most neglected paper here.
There are two totally valid conclusions to draw from the structure you've laid out: that CS people or EA people are deluded, or that the world at large, including extremely smart people, is extremely bad at handling weird or new things.
It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. We need both conceptual bravery and empirical rigour for both near and far work, and schism would hugely sap the pool of complements. And so on.
Yeah the information cascades and naive optimisation are bad. I have a post coming on a solution (or more properly, some vocabulary to understand how people are already solving it).
DMed examples.
You are totally right, Deutsch’s argument is computability, not complexity. Pardon!
Serves me right for trying to recap 1 of 170 posts from memory.
Yeah maybe they could leave this stuff to their coaching calls
ah, cool
on the 80k site? seems like a moderation headache
I have a few years of data from when I was vegan; any use?
Nice work, glad to see it’s improving things.
I sympathise with them though—as an outreach org you really don’t want to make public judgments like “infiltrate these guys please; they don’t do anything good directly!!”. And I’m hesitant to screw with the job board too much, cos they’re doing something right: the candidates I got through them are a completely different population from Forumites.
Adding top recommendations is a good compromise.
I guess a "report job" [as dodgy] button would work for your remaining pain point, but this still looks pretty bad to outsiders.
Overall: previous state strikes me as a sad compromise rather than culpable deception. But you still made them move to a slightly less sad compromise, so hooray.
Quick update since April:
We got seed funding.
We formed a board, including some really impressive people in bio risk and AI.
We’re pretty far through hiring a director and other key crew, after 30 interviews and trials.
We have 50 candidate reservists, as well as some horizon-scanners with great track records. (If you’re interested in joining in, sign up here.)
BlueDot and ALLFED have kindly offered to share their monitoring infrastructure too.
See the comments in the job thread for more details about our current structure.
Major thanks to Isaak Freeman, whose Future Forum event netted us half of our key introductions and let us reach outside EA.