Thanks for this report! Exciting to hear that the LTFF will likely be able to give substantially more money this year than last. (That is, given that I believe this isn't due to lowering the bar, but rather to getting better applications, more money, and/or more vetting capacity.)
And a lot of these projects do sound very exciting to me; none seemed clearly not worth funding.
Two questions on the Logan Strohl grant:
1. The basic goals at least faintly remind me of Leverage Research (though I say this with low confidence, as I don't know much about Leverage and the description of the Logan grant was fairly brief). I wonder if you agree? And I wonder whether you'd be excited to fund other things like Leverage, or Leverage itself, if you thought there was room for more funding and a competent team?

This question comes from vague but sincere curiosity. I'm aware Leverage is controversial, but I don't mean this as a controversy-inducing, gotcha, or rhetorical question.

(Here "you" could be Habryka specifically or any other LTFF fund manager.)

(I'm just saying the basic goals seem kind of similar; I think the approaches Logan plans to use sound fairly different from my understanding of Leverage's approaches.)

2. What proxies of success would you or Logan hope to see from this grant? How would you or Logan evaluate whether the grant went well? Was this discussed?
Re 1) I have historically not been very excited about Leverage's work, and wouldn't want to support more of it. I think it's mostly about choice of methodology. I really didn't like that Leverage was very inward-focused; I am a lot more excited about people developing techniques like this by trying to provide value to people who aren't working at the same organization as the person developing the technique (Leverage did a bit of that in its last few years, though not much early on, as far as I can tell). Also, Leverage seemed to just make a bunch of assumptions about how the mind works that seemed wrong to me (as part of the whole "Connection Theory" package), and Logan seems to make fewer of those.
Logan also strikes me as a lot less insular. And I have a bunch of specific opinions on Leverage that would take a while to get into, which make me less excited about funding Leverage-style work.
Re 2) I will write something longer about this in a few days; I just had a quick minute in between a bunch of event and travel stuff, in which I had time to write the above.
Now for a more thorough response to 2):
This kind of work strikes me as pretty early-stage, so evaluation is difficult. The things I expect to pay most attention to in evaluating how this grant is going are whether the people Logan is working with seem to benefit from it, and whether they end up citing this program as a major influence on their future research practices and career choices (which seems pretty plausible to me).
In the long run, I would hope to see some set of ideas from Logan make its way "into the groundwater", so to speak. This has happened quite a bit with Gendlin's Focusing: a substantial fraction of the AI alignment researchers I interface with have learned a bunch of Focusing-adjacent techniques. If something similar happens with techniques or ideas originating from Logan, that seems like a good signal that the work was valuable.
I did have some email exchanges with Logan in which I shared some LTFF-internal discussion about what we hope to see out of this grant, and about what would convince others on the fund that it was a good idea; those exchanges captured some of the above.
I also expect I will just watch, read, and engage with any material coming out of Logan's program, try to apply it to my own research problems, and see whether it seems helpful or a waste of time. I might also get some colleagues, or some friends who are active as researchers, to try out some of the material, see whether they find it useful, and debate with them which parts seem to work and which don't.