Quick bits of info / thoughts on the questions you raise re CLR
(I spent 3 months there as a Summer Research Fellow, but don't work there anymore, and am not suffering-focused, so might be well-positioned to share one useful perspective.)
Is most of their research only useful from a suffering-focused ethics (SFE) perspective?
I think all of the research that was being done while I was there would probably be important from a non-SFE longtermist perspective if it was important from an SFE longtermist perspective
It might also be important from neither perspective if some other premise underpinning the work was incorrect or the work was just low-quality. But:
I think it was all at least plausibly important
I think each individual line of work would be unlikely to turn out to be important from an SFE longtermist perspective but not from a non-SFE longtermist perspective
This is partly because much of it could be useful for non-s-risk scenarios
E.g., much of their AI work may also help reduce extinction risks, even if that isn't the focus of CLR as an organisation (it may be the focus of some individual researchers, e.g. Daniel K, though I'm not sure)
This is also partly because s-risks are really bad even from a non-SFE perspective (relative to the same future scenario but minus the suffering)
All that said, work that's motivated by an SFE longtermist perspective should be expected to be a higher priority from that perspective than from other perspectives, and I do think that's the case for CLR's work
That said, if CLR had substantial room for more funding and I had a bunch of money to donate, I'd seriously consider them (even if I pretended I gave SFE views 0 credence, whereas in reality I give them some smallish credence)
Is there a better option for suffering-focused donors?
I think the key consideration here is actually room for more funding rather than how useful CLR's work is
I haven't looked into their room for more funding
Is the probability of astronomical suffering comparable to that of other existential risks?
Personally I'd say so (but "comparable" is vague), and I think it's plausibly more likely (though "plausibly" is vague and I haven't tried to put specific numbers on s-risks as a whole)
(Note that astronomical suffering could occur even in a future scenario that's overall better than extinction from a non-SFE or weakly SFE perspective. So my claim is simultaneously somewhat less surprising and somewhat less important than one might think.)
Is CLR figuring out important aspects of reality?
I think so, but this is a vague question
Is CLR being cost-effective at producing research?
I haven't really thought about that
Is CLR's work on their "Cooperation, conflict, and transformative artificial intelligence" / "bargaining in artificial learners" agenda likely to be valuable?
I think so, but I'm not an expert on AI, game theory, etc.
Will CLR's future research on malevolence be valuable?
I think so, conditional on them doing a notable amount of such work (I don't know their current plans on that front)
And this I know more about since it was one of my focuses during my fellowship
How effective is CLR at leveling up researchers?
I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same
But I haven't thought about that from the perspective of "OK, but how much, and what was the counterfactual?" in the way that I would if considering donating a large amount to CLR
(I've noticed that my habits of thinking are different when I'm just having regular conversations or reading stuff or whatever vs evaluating a grant)
Two signals of my views on this:
I've recommended several people apply to the CLR summer research fellowship (along with other research training programs)
I've drawn on some aspects of, or materials from, CLR's summer research fellowship when informing the design of one or more other research training programs
"I get the impression that they are fairly disconnected from other longtermist groups (though CLR moved to London last year, which might remedy this.)"
I don't think CLR are fairly disconnected from other longtermist groups
Some data points:
Stefan Torges did a stint at GovAI
Max Daniel used to work there, still interacts with them in some ways, and works at FHI and is involved in a bunch of other longtermist stuff
I worked there and am still in touch with them semi-regularly
Daniel Kokotajlo used to work at AI Impacts
Alfredo Parra, who used to work there, now works at Legal Priorities Project
Jonas Vollmer used to work there and now runs EA Funds
I know of various other people who've interacted in large or small ways with both CLR and other longtermist orgs
I am not intending here to convince anyone to donate to CLR or work for CLR. I'm not personally donating to them or working there. That said, I do think they'd plausibly be a good donation target if they have room for more funding (I don't know about that) and a good place to work for many longtermists (depending on personal fit, career plans, etc.).
Personal views only, as always.
+1. I'd say that applying for and participating in their fellowship was probably the best career decision I've made so far. Maybe 60-70% of this was due to the benefits of entering a network of people whose altruistic efforts I greatly respect; the rest was the direct value of the fellowship itself. (I haven't thought a lot about this point, but on a gut level it seems like the right breakdown.)
Thanks for both comments here. Personal anecdotes are really valuable, and I assume they'd be useful to people later trying to get some idea of the value of CLR.
Sadly, I imagine there's a significant bias towards positive comments (I assume that people with negative experiences would be cautious about offending anyone), but positive comments still have signal.
Yeah, I think this is true, and it's good that you noted it.
Though that brings to mind another data point, which is that several people who did the summer research fellowship at the same time as me are still working at CLR. I also think there might be a bias against people who still work at an org commenting, since they wouldn't want to look defensive or like they're just saying it to make their employer happy, or something. But overall I do think there's more bias towards positive comments.
(And there are also other people I haven't stayed in touch with and who aren't working there anymore, who for all I know could perhaps have had worse experiences.)
Thanks Michael, beautiful comment.