This is a really interesting idea and I’m glad you are taking this up! Some considerations off the top of my head:
1. This set-up would probably not only ‘take away’ money that would otherwise have been donated directly; there is some percentage of ‘extra’ money this set-up would attract. So the discussion should not be decided solely by ‘would the money be better spent invested or donated now?’
2. There is probably a formal set-up for this (optimization) problem, and I think an economist or computer scientist would find it a worthwhile and publishable research question to work on. There is surely related work somewhere, but I suppose the problem becomes somewhat new under the assumptions of ‘full altruism’ and time-neutrality, and once the fixed-resource assumption is dropped. (A toy numerical sketch of the basic trade-off follows after this list.)
3. There is a difference between investing money for a) later opportunities that seem high-value and can be found by careful evaluation, and b) later opportunities that seem high-value and require a short time frame to respond. I hope this fund would address both, and I think the case for b) might be stronger than for a). One option for a) would be a global catastrophe response fund. As far as I am aware, there is no coordinated protocol for responding to global catastrophes or catastrophic crises, and the speed of funding can play a crucial role; a non-governmental fund would be much faster than trying to coordinate an international response. Furthermore, I think a) and b) play substantially different roles in the optimization problem.
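Purely as an illustration of the trade-off in point 2 (my own sketch, not anything proposed in this thread): under time-neutrality and with the fixed-resource assumption relaxed, the simplest version reduces to comparing the impact of donating the budget now against investing it, letting it grow, capturing whatever ‘extra’ money the fund attracts, and donating later. All function names, parameter values, and the framing below are hypothetical assumptions, not claims from this discussion.

```python
# Hypothetical sketch of the give-now vs. invest-and-give-later trade-off
# from point 2. Every parameter value here is an illustrative assumption.

def impact_donate_now(budget, cost_effectiveness_now):
    """Impact of donating the whole budget immediately."""
    return budget * cost_effectiveness_now


def impact_invest_then_donate(budget, annual_return, years,
                              cost_effectiveness_later, extra_inflow_share):
    """Impact of parking the budget in a fund, letting it grow, and donating
    later. `extra_inflow_share` models the 'extra' money the fund's existence
    attracts (point 1), i.e. the relaxed fixed-resource assumption."""
    grown = budget * (1 + annual_return) ** years
    attracted = budget * extra_inflow_share  # extra donations the fund pulls in
    return (grown + attracted) * cost_effectiveness_later


if __name__ == "__main__":
    budget = 1_000_000  # dollars available today (hypothetical)
    now = impact_donate_now(budget, cost_effectiveness_now=1.0)
    later = impact_invest_then_donate(
        budget,
        annual_return=0.05,            # assumed real investment return
        years=10,
        cost_effectiveness_later=0.8,  # e.g. diminishing marginal value later
        extra_inflow_share=0.3,        # extra money attracted by the fund
    )
    print(f"Donate now:   impact = {now:,.0f}")
    print(f"Invest first: impact = {later:,.0f}")
```

The interesting research question is of course the dynamic, stochastic version: when high-value opportunities arrive (especially the fast-response ones in b)), and how cost-effectiveness changes over time.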
I really like your idea of a GCR response fund; I was thinking about something similar (though did you mean it falls in category b), not a)?). It seems there could be quite a few EAs who think that contributing to AI is the highest priority, but who, if there were a global catastrophe, might recognize that it could jeopardize all the work on AI and that there are things we could do to make it go better.
Thanks Siebe. On (3), the fund as we currently see it would indeed attempt to address both (e.g. via evaluation of both, which FP would also do otherwise), but it’s a useful distinction to make.