I’m finding this difficult to interpret. I can’t find a way of phrasing my question without it seeming snarky, but that isn’t my intent.
One reading of this offer looks something like:
if you have an idea which may enable some progress, it’s really important that you be able to try it, and I’ll get you the funding to make sure you do
Another version of this offer looks more like:
I expect basically never to have to pay out because almost all ideas in the space are useless, but if you can convince me yours is the one thing that isn’t useless, I guess I’ll get you the money.
I guess maybe a way of making this concrete would be:
- have you paid out on this so far? If so, can you say what for?
- if not, can you point to any existing work which you would have funded if someone had approached you asking for funding to try it?
This is your regular reminder that, if I believe there is any hope whatsoever in your work for AGI alignment, I think I can make sure you get funded. It’s a high bar relative to the average work that gets proposed and pursued, and an impossible bar relative to proposals from various enthusiasts who haven’t understood technical basics or where I think the difficulty lies. But if I think there’s any shred of hope in your work, I am not okay with money being your blocking point. It’s not as if there are better things to do with money.
I interpreted it as the former fwiw. Skimming his FB timeline, Eliezer has recently spoken positively of Redwood Research, and in the past about Chris Olah’s work on interpretability.
Eliezer gave some more color on this here:
https://www.facebook.com/yudkowsky/posts/10159562959764228
There might be more discussion in the thread.