For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to GiveWell's top recommended charity, and more instrumentally useful when you want to figure out which AI safety research agenda to follow.
Like, do you really think this is a characterization of non-longtermist activities that suggests to proponents of the OP that your views are informed?
(In a deeper sense, this reflects the knowledge necessary for basic cause prioritization in the first place.)
Donating 10% of your income to GiveWell was just an example (those people exist, though, and I think they do good things!), and this example was not meant to characterize non-longtermists.
To give another example, my guess would be that EAG is instrumentally more useful for non-longtermist proponents of Shrimp Welfare.