I’m Buck Shlegeris, I do research and outreach at MIRI, AMA

EDIT: I’m only going to answer a few more questions, due to time constraints. I might eventually come back and answer more. I still appreciate getting replies with people’s thoughts on things I’ve written.

I’m going to do an AMA on Tuesday next week (November 19th). Below I’ve written a brief description of what I’m doing at the moment. Ask any questions you like; I’ll respond to as many as I can on Tuesday.

Although I’m eager to discuss MIRI-related things in this AMA, my replies will represent my own views rather than MIRI’s, and as a rule I won’t be running my answers by anyone else at MIRI. Think of it as a relatively candid and informal Q&A session, rather than anything polished or definitive.

----

I’m a researcher at MIRI, where I divide my time roughly equally between technical work and recruitment/outreach work.

On the recruitment/outreach side, I do things like the following:

- For the AI Risk for Computer Scientists workshops (which are slightly misleadingly named; we accept some technical people who aren’t computer scientists), I handle the intake of participants, and I also teach classes and lead discussions on AI risk at the workshops.
- I do most of the technical interviewing for engineering roles at MIRI.
- I manage the AI Safety Retraining Program, in which MIRI gives grants to people to study ML for three months, with the goal of making it easier for them to transition into working on AI safety.
- I sometimes do weird things like going on a Slate Star Codex roadtrip, where I led a group of EAs for five days as we travelled along the East Coast, attending Slate Star Codex meetups and visiting EA groups.

On the technical side, I mostly work on some of our nondisclosed-by-default technical research; this involves thinking about various kinds of math and implementing things related to the math. Because the work isn’t public, there are many questions about it that I can’t answer. But this is my problem, not yours; feel free to ask whatever questions you like and I’ll take responsibility for choosing to answer or not.

----

Here are some things I’ve been thinking about recently:

- I think that the field of AI safety is growing in an awkward way. Lots of people are trying to work on it, and many of these people have pretty different pictures of what the problem is and how we should try to work on it. How should we handle this? How should you try to work in a field when at least half the “experts” are going to think that your research direction is misguided?
- The AIRCS workshops that I’m involved with contain a variety of material which attempts to help participants think about the world more effectively. I have thoughts about what’s useful and not useful about rationality training.
- I have various crazy ideas about EA outreach. I think the SSC roadtrip was good; I think some EAs who work at EA orgs should consider doing “residencies” in cities without much full-time EA presence, where they mostly do their normal job but also talk to people.