Results of an informal survey on AI grantmaking
I asked readers of my blog with experience in AI alignment (and especially AI grantmaking) to fill out a survey about how they valued different goods. I got 61 responses. I disqualified 11 for various reasons, mostly failing the comprehension check question at the beginning, and kept 50.
Because I didn’t have a good way to represent the value of “a” dollar for people who might have very different amounts of money managed, I instead asked people to value things in terms of a base unit—a program like MATS graduating one extra technical alignment researcher (at the center, not the margin). So for example, someone might say that “creating” a new AI journalist was worth “creating” two new technical alignment researchers, or vice versa.
One of the goods I asked people to value was $1 million going to a smart, value-aligned grantmaker. This provided a rough researcher-money equivalence, which came out to $125,000 per researcher at the median. I rounded to $100,000 and used this in an experimental second set of columns, but the median comes from a wide range of estimates, and there are some reasons not to trust it.
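To make the conversion concrete, here is a minimal sketch of the arithmetic, using made-up numbers rather than the survey's actual responses: each hypothetical respondent reports how many researcher-equivalents they think $1 million to a good grantmaker buys, and the median of the implied dollars-per-researcher figures gives the equivalence.

```python
import statistics

# Hypothetical responses (NOT the survey's real data): how many
# researcher-equivalents each respondent thinks $1M buys.
# A response of 8 implies $1,000,000 / 8 = $125,000 per researcher.
responses_researchers_per_million = [4, 8, 8, 10, 20]

dollars_per_researcher = [1_000_000 / r for r in responses_researchers_per_million]
median_value = statistics.median(dollars_per_researcher)
print(f"median: ${median_value:,.0f} per researcher")  # prints "median: $125,000 per researcher"
```

Taking the median rather than the mean keeps a few extreme estimates from dominating the equivalence, which matters given how wide the range of responses was.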
The results are below. You can see the exact questions and assumptions that respondents were asked to make here. Many people commented that there were ambiguities, additional assumptions needed, or that they were very unsure, so I don’t recommend using this as anything other than a very rough starting point.
I tried separating responses by policy vs. technical experience, and weighting them by each respondent's level of experience, my respect for them, and my personal trust in them, but neither adjustment changed the answers enough to be interesting.
You can find the raw data (minus names and potentially identifying comments) here.