MIRI Summer Fellows Program: Applications open
CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019.
MSFP is an extended retreat for mathematicians, computer scientists, and programmers with a serious interest in making technical progress on the problem of AI alignment. It includes an overview of CFAR’s applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of hands-on research in which participants and MIRI staff attempt to make inroads on open questions.
The intent of the program is to boost participants, as far as possible, in four overlapping areas:
Doing rationality inside a human brain: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blind spots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.
Epistemic rationality, especially the subset of skills around deconfusion: building the skill of noticing where the dots don’t actually connect, and of answering the question “Why do we think we know what we think we know?”, particularly when it comes to predictions and assertions about the future development of artificial intelligence.
Grounding in the current research landscape surrounding AI: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.
Generative research skill: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one’s own metacognition. The parallel processes of using one’s mental tools, critiquing and improving one’s mental tools, and making one’s own progress or deconfusion available to others through talks, papers, and models. Anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.
Food and lodging are provided free of charge at CFAR’s workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted, for the duration of the program (e.g., no major appointments in other cities, and no large looming academic or professional deadlines just after the program).
[5/28/19 Update: Applications closed on March 31, finalists were interviewed between April 1 and April 17, and admissions decisions (yes, no, waitlist) were sent in April.]
If you have any questions or comments, message me here; or, if you suspect others would also benefit from hearing the answer, post them here.