I think the idea of a motivational shadow is a good one, and it can be useful to think about these sorts of filters on what sorts of evidence/argument/research people are willing to share, especially if people are afraid of social sanction.
However, I am less convinced by this concrete application. You present a hierarchy of activities in order of effort required to unlock, and suggest that something like ‘being paid full time to advocate for this’ pushes people up multiple levels:
Offhand comment
Irate tweet
Low-effort blog post
Sensationalised newspaper article
Polite, charitable, good-faith, evidentially rigorous article
I don’t believe that the people who are currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money. My impression is that high-quality work on both sides is done by people with a strong inherent dedication to truth-seeking and intellectual inquiry, and there is no need to first pass through a valley of vitriol before you achieve a motivational level-up to an ascended state of evidence. Indeed, historically a lot of x-risk advocacy work was done by people for whom such an activity had negative financial and social payoff.
I also think you miss a major, often dominant motivation: people love to criticize, especially to criticize things that seem to threaten their moral superiority.
I think that’s a good critique, although it can be mitigated somewhat with a narrower interpretation. In the narrower view, motivation (e.g., “effort required to unlock”) is a necessary but not sufficient precursor to various actions.
Being a jerk on X requires only low motivation, but if I’m not prone to being a jerk in the first place then my response to that level of motivation will be [no action], which will not result in any criticism. Conditional on someone posting criticism at that level of motivation, the criticism will roughly take the form of mean tweets, because the motivation level isn’t high enough to unlock higher forms of criticism.
My impression is that high quality work on both sides is done by people with strong inherent dedication to truth-seeking and intellectual inquiry [...]
. . . as well as sufficient motivation and resources to do so. As with the lower levels, I suggest that high motivation unlocks high-level work in the sense that it is a necessary but not sufficient precondition. This means that people with strong inherent dedication to truth-seeking and intellectual inquiry will still not produce high-quality work unless they are motivated enough to do so.
I don’t believe that the people who are currently doing high-quality x-risk advocacy would counterfactually be writing nasty newspaper hit pieces (these just seem like totally different activities), or that Timnit would write more rigorously if people gave her more money.
I don’t think that’s what the OP argues though.[1] The argument is that the people motivated to seek funding to assess x-risk as a full-time job tend disproportionately to be people who think both x-risk and our ability to mitigate it are significant. So of course advocates produce more serious research, and of course people who don’t think it’s that big a deal don’t tend to choose it as a research topic (and on the rare occasions they do put real effort in, it’s relatively likely to be motivated by animus against x-risk advocates).
If those x-risk advocates had to do something other than x-risk research for their day job, they might not write hit pieces, but there would be blogs instead of a body of high quality research to point to, and some people would still tweet angrily and insubstantially about Sam Altman and FAANG.
Gebru’s an interesting example looked at the other way, because she does write rigorous papers on her actual research interests as well as issuing shallow, hostile dismissals of groups in tech she doesn’t like. But funnily enough, nobody’s producing high-quality rebuttals of those papers;[2] they’re happy to dismiss her entire body of work based on disagreeing with her shallower comments. Less outspoken figures than Gebru write papers along similar lines, but these don’t get the engagement at all.
EAs may not actually disagree with her when she’s writing about implicit biases in LLMs or concentration of ownership in tech rather than tweeting angrily about TESCREALs, but obviously some people and organizations have reason to disagree with her papers as well.
I do agree people love to criticize.
The bar chart for x-risk believers without funding actually stops short of the “hit piece” level, FWIW.