My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does—it’s not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
My guess would be that the people who want an EA-without-longtermism movement would bite that bullet.
The kind of EA-without-longtermism movement that is being imagined here would probably need fewer of those things? For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to GiveWell's top recommended charity, and more instrumentally useful when you want to figure out which AI safety research agenda to follow.
Like, do you really think this is a characterization of non-longtermist activities that suggests to proponents of the OP that your views are informed?
(In a deeper sense, this reflects the knowledge necessary for basic cause prioritization in the first place.)
Donating 10% of your income to GiveWell's top charity was just an example (such people exist, though, and I think they do good things!), and it was not meant to characterize non-longtermists in general.
To give another example, my guess would be that EAG is instrumentally more useful for non-longtermist proponents of Shrimp Welfare.
I broadly agree.