Maybe I can help Chris explain his point here, because I came to the comments to say something similar.
The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.
Neartermists are right to worry about spending money on things that don't clearly improve measures of global health, animal welfare, etc., because they could in theory funnel that money directly into work on those causes, even at low marginal returns. They should probably feel bad if they wasted money on a big party, because that party could have saved some kids from dying.
Longtermists are right not to be too worried about spending money. There's an astronomical amount of value at stake, so even millions or billions of dollars wasted doesn't matter if the spending ends up saving humanity from extinction. There may be nearterm reasons for them to care, related to the funding pipeline (i.e., optics), but in the long term it doesn't matter. Thus, longtermists will want to be freer with money in the hope of, for example, hitting on something that solves AI alignment.
That both of these camps try to exist under the EA umbrella causes tension, since the different ways of valuing outcomes lead to different recommended behaviors.
This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.