Quick bits of info / thoughts on the questions you raise re CLR
(I spent 3 months there as a Summer Research Fellow, but don’t work there anymore, and am not suffering-focused, so might be well-positioned to share one useful perspective.)
Is most of their research only useful from a suffering-focused ethics (SFE) perspective?
I think all of the research being done while I was there would probably be important from a non-SFE longtermist perspective if it was important from an SFE longtermist perspective.
It might also be important from neither perspective, if some other premise underpinning the work was incorrect or the work was just low-quality. But:
I think it was all at least plausibly important
I think it's unlikely that any individual line of work would turn out to be important from an SFE longtermist perspective but not from a non-SFE longtermist perspective
This is partly because much of it could be useful for non-s-risk scenarios
E.g., much of their AI work may also help reduce extinction risks, even if that isn't the focus of CLR as an organisation (it may be a focus of some individual researchers, e.g. Daniel K; I'm not sure)
This is also partly because s-risks are also really bad from a non-SFE perspective (relative to the same future scenario but minus the suffering)
All that said, work that's motivated by an SFE longtermist perspective should be expected to be higher priority from that perspective than from other perspectives, and I do think that's the case for CLR's work
That said, if CLR had substantial room for more funding and I had a bunch of money to donate, I'd seriously consider them (even if I pretend I give SFE views 0 credence, whereas in reality I give them some smallish credence)
Is there a better option for suffering-focused donors?
I think the key consideration here is actually room for more funding rather than how useful CLR’s work is
I haven’t looked into their room for more funding
Is the probability of astronomical suffering comparable to that of other existential risks?
Personally I’d say so (but “comparable” is vague), and I think it’s plausibly more likely (though “plausibly” is vague and I haven’t tried to put specific numbers on s-risks as a whole)
(Note that astronomical suffering could occur even in a future scenario that’s overall better than extinction from a non-SFE or weakly SFE perspective. So my claim is simultaneously somewhat less surprising and somewhat less important than one might think.)
Is CLR figuring out important aspects of reality?
I think so, but this is a vague question
Is CLR being cost-effective at producing research?
I haven’t really thought about that
Is CLR’s work on their “Cooperation, conflict, and transformative artificial intelligence”/”bargaining in artificial learners” agenda likely to be valuable?
I think so, but I’m not an expert on AI, game theory, etc.
Will CLR’s future research on malevolence be valuable?
I think so, conditional on them doing a notable amount of such work (I don’t know their current plans on that front)
This is something I know more about, since malevolence research was one of my focuses during my fellowship
How effective is CLR at leveling up researchers?
I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same
But I haven't thought about that from the perspective of "OK, but how much, and what was the counterfactual?" in the way I would if I were considering donating a large amount to CLR
(I've noticed that my habits of thinking are different when I'm just having regular conversations or reading things vs. when I'm evaluating a grant)
Two signals of my views on this:
I’ve recommended several people apply to the CLR summer research fellowship (along with other research training programs)
I've drawn on some aspects of, and materials from, CLR's summer research fellowship to inform the design of one or more other research training programs
“I get the impression that they are fairly disconnected from other longtermist groups (though CLR moved to London last year, which might remedy this.)”
I don’t think CLR are fairly disconnected from other longtermist groups
Some data points:
Stefan Torges did a stint at GovAI
Max Daniel used to work there, still interacts with them in some ways, and works at FHI and is involved in a bunch of other longtermist stuff
I worked there and am still in touch with them semi-regularly
Daniel Kokotajlo used to work at AI Impacts
Alfredo Parra, who used to work there, now works at Legal Priorities Project
Jonas Vollmer used to work there and now runs EA Funds
I know of various other people who've interacted in large or small ways with both CLR and other longtermist orgs
I'm not intending here to convince anyone to donate to CLR or work for CLR; I'm not personally donating to them or working there. That said, I do think they'd be a plausibly good donation target if they have room for more funding (I don't know about that), and a good place to work for many longtermists (depending on personal fit, career plans, etc.).
Personal views only, as always.
+1. I’d say that applying for and participating in their fellowship was probably the best career decision I’ve made so far. Maybe 60-70% of this was due to the benefits of entering a network of people whose altruistic efforts I greatly respect, the rest was the direct value of the fellowship itself. (I haven’t thought a lot about this point, but on a gut level it seems like the right breakdown.)
Thanks for both comments here. Personal anecdotes are really valuable, and I assume they'd be useful to people later trying to get some idea of the value CLR provides.
Sadly, I imagine there's a significant bias toward positive comments (I assume people with negative experiences would be wary of offending anyone), but positive comments still carry signal.
Yeah, I think that this is true and that it’s good that you noted it.
Though that brings to mind another data point: several people who did the summer research fellowship at the same time as me are still working at CLR. I also think there might be a bias against commenting among people who still work at an org, since they wouldn't want to look defensive or like they're just saying it to make their employer happy. But overall I do think there's more bias towards positive comments.
(And there are also other people I haven't stayed in touch with and who aren't working there anymore, who for all I know could have had worse experiences.)
Thanks Michael, beautiful comment.