That makes some sense, but leaves me with questions like:
Which projects were home runs, and how did you tell that a) they were successful at achieving their goals and b) that their goals were valuable?
Which projects were failures that you feel were justifiable given your knowledge state at the time?
What do these past projects demonstrate about the team’s competence to work on future projects?
What and how was the budget allocated to these projects, and do you expect future projects to have structurally similar budgets?
Are there any other analogies you could draw between past and possible future projects that would enable us to update on the latter’s probability of success?
MIRI is hardly unique even in the EA/rat space in having special projects—Rethink Priorities, for example, seem to be very fluid in what they work on; Founders Pledge and Longview are necessarily driven to some degree by the interests of their major donors; Clean Air Task Force have run many different political campaigns, each seemingly unlike the previous ones in many ways; ALLFED are almost unique in their space, so have huge variance in the projects they work on; and there are many more with comparable flexibility.
And many EA organisations in the space that don’t explicitly have such a strategy have nonetheless pivoted after learning of a key opportunity in their field, or realising an existing strategy was failing.
In order to receive funds—at least from effectiveness-minded funders—all these orgs have to put a certain amount of effort into answering questions like those above.
And ok, you say you’re not claiming to be entitled to dollars, but it still seems reasonable to ask why a rational funder should donate to MIRI over e.g. any of the above organisations—and to hope that MIRI has some concrete answers.
Noting that this is more “opinion of an employee” than “the position of MIRI overall”—I’ve held a variety of positions within the org and can’t speak for e.g. Nate or Eliezer or Malo:
The Agent Foundations team feels, to me, like it was a slam dunk at the time; the team produced a ton of good research and many of their ideas have become foundational to discussions of agency in the broader AI sphere
The book feels like a slam dunk
The research push of 2020/2021 (that didn’t pan out) feels to me like it was absolutely the right bet, but resulted in (essentially) nothing; it was an ambitious, many-person project for a speculative idea that had a shot at being amazing.
I think it’s hard to generalize lessons, because various projects are championed by various people and groups within the org (“MIRI” is nearly a ship of Theseus). But some very basic lessons include:
Things pretty much only have a shot at all when there are people with a clear and ambitious vision/when there’s an owner
When we say to ourselves “this has an X% chance of working out” we seem to be actually pretty calibrated
As one would expect, smaller projects and clearer projects work out more frequently than larger or vaguer ones
(Sorry, that feels sort of useless, but.)
From my limited perspective/to the best of my ability to see and describe, budget is essentially allocated in a “Is this worth doing? If so, how do we find the resources to make it work?” sense. MIRI’s funding situation has always been pretty odd; we don’t usually have a pie that must be divided up carefully so much as a core administrative apparatus that needs to be continually funded + a preexisting pool of resources that can be more or less freely allocated + a sense that there are allies out there who are willing to fund specific projects if we fall short and want to make a compelling pitch.
Unfortunately, I can’t really draw analogies that help an outsider evaluate future projects. We’re intending to try stuff that’s different from anything we’ve tried before, which means it’s hard to draw on the past (except insofar as the book and surrounding publicity were also something we’d never tried before, so you can at least a little bit assess our ability to pivot and succeed at stuff outside our wheelhouse by looking at the book).