This is interesting and I’m glad you’re bringing the discussion up. I think your footnote 2 demonstrates a lot of my disagreements with your overall post:
I’m using resources in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given that the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute amount of resources dedicated towards short-termist projects.
Consider this section:
Secondly, many of the short-term projects that EA has pursued have been highly effective and I would see it as a great loss if such projects were to suddenly have the rug pulled out from underneath them. Large and sudden shifts have all kinds of negative consequences, from demoralizing staff, to wasting previous investments in staff and infrastructure, to potentially bankrupting projects that would otherwise have been sustainable.
As a practical matter, Alexander Berger (with a neartermism focus) was promoted to a co-ED position at Open Phil, and my general impression is that Open Phil very likely intends to spend much more $s on neartermism efforts in the foreseeable future. So I think it’s likely that EA efforts with cost-effectiveness comparable or higher than GiveWell top charities will continue to be funded (and likely with larger sums) going forwards, rather than “have the rug pulled out from underneath them.”
Also:
You may be wondering, is such a thing even possible? I think it is, although it would involve shifting some resources dedicated towards short-termism[7] from supporting short-termist projects to directly supporting short-termists[8]. I think that if the amount of resources available is reduced, it is natural to adopt a strategy that could be effective with smaller amounts of money[9].
You mention in footnote 2 that you’re using the phrase “resources” very broadly, but now you’re referring to money as the primary resource. I think this is wrong because (especially in LT and meta) we’re bottlenecked more by human capital and vetting capacity.
This confusion seems importantly wrong to me (and not just nitpicking), as longtermism efforts are relatively more bottlenecked by human capital and vetting capacity, while neartermism efforts are more bottlenecked by money. So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (and relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc. capacity on longtermism projects. Getting more vetting capacity from LT people in return for $s from NT people seems like a bad trade on both fronts.
So I think it’s likely that EA efforts with cost-effectiveness comparable or higher than GiveWell top charities will continue to be funded going forwards, rather than “have the rug pulled out from underneath them.”
Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it’s easy to make the mistake of saying “We’ll never get to point X” and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we’ll end up?
So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (and relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc. capacity on longtermism projects.
This is an excellent point and now that you’ve explained this line of reasoning, I agree.
I guess it’s not immediately clear to me to what extent my proposals would shift limited community building and vetting capability away from long-termist projects. If, for example, Giving What We Can had additional money, it’s not clear to me that they would hire someone who would otherwise go to work at a long-termist organisation, although it’s certainly possible.
I guess it just seems to me that even though there are real human capital and vetting bottlenecks, you can work around them to a certain extent if you’re willing to just throw money at the issue. Like there has to be something that’s the equivalent of GiveDirectly for long-termism.
Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it’s easy to make the mistake of saying “We’ll never get to point X” and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we’ll end up?
Asking that question as a stopping point doesn’t resolve the ambiguity of which parts of this discussion are theoretical vs. practical.
If that kind of increasing prominence of long-termism, in terms of the different kinds of resources it consumes relative to short-termist efforts, is only theoretical, then the issue is one worth keeping in mind for the future. If it’s a practical concern, then, other things being equal, it could be enough of a priority that determining which specific organizations should distinguish themselves as long-termist may need to begin right now.
The decisions different parties in EA make on this subject will be the main factor determining ‘where we end up’ anyway.
I can generate a rough assessment, for resources other than money, of what near-termism vs. long-termism is currently receiving and can anticipate receiving for at least the near future. I can draft an EA Forum post for that by myself, but I could co-author it with you and one or more others if you’d like.
Tbh, I don’t have a huge amount of desire to produce more content on this topic beyond this post.
Strongly upvoted. As I was hitting the upvote button, the existing karma changed from ‘4’ to ‘3’, which meant someone had downvoted it. I don’t know why, and I consider it responsible of downvoters to leave a comment as to why they’re downvoting, but it doesn’t matter because I gave this comment more karma than can be taken away so easily.
I don’t feel strongly about this, but I think there shouldn’t be a responsibility to explain normal downvotes if we don’t expect a responsibility to explain/justify normal upvotes.
I think strong downvotes for seemingly innocuous comments should be explained, and it’s also polite (but not obligatory) for someone to give an explanation for downvoting a comment with net negative karma (especially if it appeared to be in good faith).
Summary: More opaque parts of discourse like up/downvoting are applied with standards so inconsistent and contextual that I consider it warranted for anyone to propose more objective, consistent and clear standards. I thought your comment here was especially undeserving of an unexplained downvote, so I wanted to leave a signal countering any notion that the downvote was worthwhile at all.
I would prefer that both upvotes and downvotes be clarified, explained or justified. I doubt that will become a normal expectation. Yet in my opinion it’s warranted for me or any other individual to advocate for a particular (set of) standard(s), since that is better than the seeming alternative of discourse norms being more subjective and contextual as opposed to objective and consistent.
I don’t have a problem with others not starting a comment reply with ‘upvoted’ or ‘downvoted’ like I sometimes do, as long as the reaction is expressed in other ways. I received a downvote the other day and there was only one commenter. He didn’t tell me he downvoted me, but he criticized the post for not being written clearly enough. That’s okay.
What frustrated me is that your comment is of sufficient quality that I expect the downvote was likely because someone simply did not like what you said on a polarized subject, i.e., they perceived it as too biased in favour of short-termism or long-termism. They may have a disagreement, but if they don’t express it on an important topic, and it’s only an emotive negative reaction when you’re trying to be constructive, their downvote is futile. From conversations off the EA Forum, I’ve seen that to be the most common reason for downvotes on the EA Forum. Given the effort you put into a constructive comment, I wanted to counter this egregious case of a pointless downvote.