Thanks for this, detailed post-mortems like this are very valuable!
Some thoughts:
I considered getting involved in the project, but was somewhat put off by the messaging. Somehow it came across as a “learning exercise for students” rather than an “attempt to do genuinely new research”. I’m not sure exactly why that was (the grant size may have been part of it, see below), and I now regret not getting more involved.
You describe the grant amount of £10,000 as “substantial”. This is surprising to me, since my reaction to the grant size was that it was too small to bother with. I think this corroborates your thoughts about grant size: any size of grant would have had most of the beneficial effects that you saw, but a much larger grant would have been needed to make it seem really “serious”.
I think that the project goal was too ambitious. Global prioritization is much harder than more restricted prioritization, but also vaguer and more abstract. Usually when we’re learning to deal with vague and abstract problems we start out by becoming very adept with simple, concrete versions to build skills and intuitions before moving up the abstraction hierarchy (easier, better feedback, more motivating, etc.). If I wanted to train up some prioritization researchers I would probably start by getting them to just do lots of small, concrete prioritization tasks.
As Michael Plant says below, I think the project was in a bit of an awkward middle ground. The costs of participation (in terms of work and “top-of-mind” time) were perhaps a bit too high for either students or otherwise-busy community members (like myself), and the perceived benefits (in terms of expected quality of research produced) were perhaps too low for the professionals. (To elaborate on why engaging felt like it would be substantial work for me: in order to provide good commentary on one of your posts, I would have had to: read the post; probably read some prior posts; think hard about it; possibly do some research myself; condense that into a thoughtful reply. That could easily take up an evening of my time, for not a huge perceived reward.) I think your suggestion of running such a project as a week-long retreat is a good idea—it would get a committed block of time from people, and prevents inefficiencies due to repeated time spent “re-loading” the background information.
Agree that quantitative modelling is great and under-utilised. I think a course which was more or less How To Measure Anything applied to EA with modern techniques and technologies would be a fantastic starter for prioritization research, and give people generally useful skills too.
I would have preferred less, higher-quality output from the project. My reaction to the first few blog posts was that they were fine but not terribly interesting, which meant I didn’t read much of the rest of the content until the models started appearing, which I did find interesting.
Even if you think the project was net-negative, I hope this doesn’t put you off starting new things. Exploration is very valuable, even if the median case is a failure.
I think a course which was more or less How To Measure Anything applied to EA with modern techniques and technologies would be a fantastic starter for prioritization research, and give people generally useful skills too.
Just want to strongly agree with this. Those are real figure-out-how-the-world-works skills. If anyone wants an overview, Luke Muehlhauser did an in-depth summary here.
Even if you think the project was net-negative, I hope this doesn’t put you off starting new things. Exploration is very valuable, even if the median case is a failure.
Further agreement. Seeing this failed project report is one of the few signs to me that EA is actually trying. I have a vague recollection of Charity Science doing a failed project report too.