Discord lets you separate servers into different channels for people to talk about different things. There is already an EA Discord, and of course new and near-term EAs are welcome there. I think it would be bad if we split things like this, because the more near-term EAs isolate themselves, the more “alienated” people will feel elsewhere, so it will be a destructive feedback loop. You’re creating the problem that you are trying to solve.
Also, it would reinforce the neglect of mid-term causes which have always gotten too little attention in EA.
I ask that far-future effective altruists and people whose priority cause area is AI risk or s-risks do not participate.
Yeah, this isn’t good policy. It should be pretty clear that this is how groupthink happens, and you’re establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view? (perish the thought!) And you want to help with the growth of the movement. But hopefully you can find a better way to do this than creating an actual echo chamber. It’s clearly a poor choice as far as epistemology is concerned.
You’re also creating the problem you’re trying to solve in a different way. Whereas most “near-term EAs” enjoy the broad EA community perfectly well, you’re reinforcing an assumption that they can’t get along, that they should expect EA to “alienate” them, as they hear about your server. As soon as people are pointed towards a designated safe space, they’re going to assume that everything on the outside is unfriendly to them, and that will bias their perceptions going forward.
You are likely to have a lighter version of the problem that Hatreon did with Patreon, Voat with Reddit, etc.—whenever a group of people has a problem with the “mainstream” option and someone tries to create an alternative space, the first people who jump ship to the alternative will be the highly-motivated people on the extreme end of the spectrum, who are the most closed-minded and intolerant of the mainstream, and they are going to set the norms for the community henceforth. Don’t get me wrong: it’s good to expand EA with new community spaces and be more appealing to new people, and it is always nice to see people put effort into new ideas for EA. But this is very flawed, and I strongly recommend that you revise your plans.
Moderator note: I found this harsher than necessary. I think a few tone changes would have made the whole message feel more constructive.
What statements were “harsher than necessary”?
I’ll PM you.
“so it will be a destructive feedback loop” ~ not necessarily
“you’re reinforcing an assumption that they can’t get along” ~ unlikely
“whenever a group of people [...] extreme end of the spectrum, who are the most closed-minded and intolerant” ~ very big presumptions
I personally think this chat is a great idea. Too many times on Facebook groups, I have to see local events that I can’t attend. Too many times I see EA posts that have no relevance to my involvement in EA. That doesn’t mean I’m closed-minded. Most EAs who pick animal suffering or global poverty are, in my opinion, among the most open-minded people.
Perhaps think about it like the difference between the Physics Stack Exchange chat and the Electrical Engineering (EE) Stack Exchange chat. They’re very close to the same. EE is based in physics obviously. But they’re separate.
Anyway, my two cents.
All three of those are merely cases of you disagreeing with my claims or my confidence in them. I thought I was being tone-policed, but you are just saying that I am wrong.
Too many times on Facebook groups, I have to see local events that I can’t attend.
The fact that people are unable to attend something is one of the problems with the server that is being promoted here. I’m not in favor of anything in EA that does this; if someone ever tries to exclude near-term EAs from their event, give me a ping and I will argue with them too!
Too many times I see EA posts that have no relevance to my involvement in EA.
Theoretical physicists are not upset by the presence of discussion on experimental physics, and the ones who disbelieve in dark matter are not upset by the presence of discussion from people who do. If lots of posts aren’t relevant to you, the right answer is presumably to ignore those posts; I and so many other EAs do it all the time, it’s easy.
If you want more content that is relevant to you… that’s perfect! Make it! Request it! Ask questions about it! Be the change that you wish to see in the world.
Perhaps think about it like the difference between the Physics Stack Exchange chat and the Electrical Engineering (EE) Stack Exchange chat. They’re very close to the same. EE is based in physics obviously. But they’re separate.
The physics stack exchange doesn’t try to exclude engineers, and they didn’t make it because they thought that engineers were “alienating”; if they operated on that basis then it would create unnecessary annoyance for everyone. They are separate because they are different topics, with different questions that need to be answered, and the skills and education which are relevant to one are very different from those that matter for the other. But “near-term Effective Altruism”, just like “long-term Effective Altruism”, is a poorly specified bundle of positions with no common methodological thread. The common thread within each bundle is not any legitimate underlying presupposition about values or methodology that may form the foundation for further inquiry; it is an ex post facto conclusion that the right cause is something that happens to be short- or long-term. And while some cause conclusions could form a meaningful basis for significant further inquiry (e.g., you selected poverty as a cause, so now you just want to talk about poverty relief), the mere conclusion that the right cause is something that matters in the near or long term does not form any meaningful basis, because there is little in the way of general ideas, tools, resources, or methodologies which matter greatly for one bundle of causes but not the other.
But not only is the original analogy with physics and engineering relevantly incorrect, it’s specifically pernicious, because many EAs already implicitly have the misconception that supporting near-term or long-term causes is a matter of philosophical presupposition or overarching methodology; in fact it is probably the greatest confusion that EAs have about EA and therefore it wouldn’t be wise to reinforce it.
@kbog: Most of your responses with respect to my reply do not make sense. For example, EA Chicago posts their events on the Facebook page. I don’t live in Chicago...(simple as that)
The physics stack exchange doesn’t try to exclude engineers
~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy. With that, it doesn’t follow that the analogy is somehow wrong in relation to these elusively implicit misconceptions that EAs have about EA.
So to sum up, you’re reading way too far into what I wrote originally. I was answering your question related to why your first reply was “harsher than necessary”.
EA Chicago posts their events on the Facebook page. I don’t live in Chicago...(simple as that)
OK, but that has nothing to do with whether or not we should have this discord server… why bring it up? In the context of your statements, can’t you see how much it looks like someone is complaining that there are too many events that only appeal to EAs who support long-term causes, and too few events for EAs who support near-term causes?
~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy
It’s not that the analogy was not absolute, it’s that it was relevantly wrong for the topic of discussion. But given that your argument doesn’t seem to be what I thought it was, that’s fine, it could very well be relevant for your point.
I was answering your question related to why your first reply was “harsher than necessary”.
I figured that “harsh” refers to tone. If I insult you, or try to make you feel bad, or inject vicious sarcasm, then I’m being harsh. You didn’t talk about anything along those lines, but you did seem to be disputing my claims about the viability of the OP, so I took it to be a defense of having this new discord server. If you’re not talking on either of those issues then I don’t know what your point is.
They were examples of how I saw your post as “harsher than necessary”. You’ve diluted these mere examples into a frivolous debate. If you believe you were not harsh at all, then believe what you want to believe.
As I stated already, “harsh” is a question of tone, and you clearly weren’t talking about my tone. So I have no clue what your position is or what you were trying to accomplish by providing your examples. There’s nothing I can do in the absence of clarification.
Diction and pronouns have tone (e.g., “you’re reinforcing” vs a more modest “that could reinforce”). With that, expressing certainty about predictions (e.g., “whenever a group of people”) is another way I saw the original comment as harsh—unless you’re an expert in the field (and a relevant study would help too). I, for one, am neither an anthropologist nor a sociologist.
I’m not debating if here. You asked how, and I quoted the statements I saw as the most harsh + most questionable.
[I’m trying to say this lightly. Instead I could have made that last bit, “furthest from the truth”. But I didn’t, because I’m trying to demonstrate. (And that’s not what I really mean anyway.)] I never said you are wrong about _ _ _ _ _. I said, it may not be true; it may be true.
You seem to still think the original comment was not harsher than necessary by your own definition of tone. Either way, I’m guessing Mrs. Wise gave you much less confusing pointers with her PM.
Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren’t following your lines of reasoning (unfortunately, of course). For instance, the community practices related to far-future focus (in particular AI-risks) have adopted the assessment of scientific research and the funding thereof, which I find lacking scientific rigor, transparency and overall validity (to the point that it makes no sense to speak of “effective” charity). Moreover, there is a large consensus about such evaluative practices: they are assumed to be valid by OpenPhil and the EAF, and even when I tried to exchange arguments with both of these institutions, nothing has ever changed (I’ve never even managed to push them into a public dialogue on this topic). I see this problem as a potential danger for the EA community as a whole (just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed effective, where it is unclear according to which criteria it would count as such; similarly for newcomers). In view of this, I think dividing these practices would be a great idea. The fact that they are connected to “far-future EA” is secondary to me, and it is unfortunate that far-future ideas have turned into a bubble of their own, closed towards criticism questioning the core of their EA methodology.
That said, I agree with some of your worries (see my other comment here).
Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren’t following your lines of reasoning (unfortunately, of course).
Well the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.
For instance, the community practices related to far-future focus (in particular AI-risks) have been embedded in the assessment of scientific research and the funding thereof,
What is it when money goes to GiveWell or Animal Charity Evaluators? Funding scientific research. Don’t poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong when numerous complaints have been lodged against the research quality of GiveWell and ACE?
which I find lacking scientific rigor, transparency and overall validity
Well I haven’t claimed that the evaluation of futurist scientific research is rigorous, transparent or valid. I think you should make a compelling argument for that in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn’t exactly show us that you are right.
Note: it’s particularly instructive here, as we evaluate the utility of the sort of segregation proposed by the OP, how the idea that EA ought to be split along these lines is bundled with the assertion that the Other Side is doing things “wrong”; we can see that the nominally innocuous proposal for categorization is operationalized to effect the general discrediting of those with an opposing point of view, which is exactly why it is a bad thing.
just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed effective, where it is unclear according to which criteria it would count as such
Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.
closed towards criticism questioning the core of their EA methodology
Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their “criticisms” aren’t being met, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertions in web comments, or quite old arguments that have already been accepted and answered, and in either case opponents are clearly ready and willing to engage with such criticism. The claim that people are “closed towards criticism” invariably turns out to be nothing but the fact that the complainant failed to change anyone’s mind, but seldom does the complainant question whether they are right at all.
Wow, you really seem annoyed… I didn’t expect such a pissed post, but I suppose you got really annoyed by this thread or something. I provided the arguments in detail concerning OpenPhil’s practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.
I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc., I don’t have the time. I plan on writing a post concerning EAF’s funding policy as well, where I’ll sum it up in a similar way as I did for OpenPhil.
That said, I’m not saying we shouldn’t criticize the research done by near-future organizations—to the contrary. And I completely agree: it’d be great to have a forum devoted only to research practices and the funding thereof. But concerning far-future funding, research is the only thing that can be funded, which makes it particularly troublesome.
Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.
Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be discussed on their own, but they aim at efficient and effective research. The funding of AI-risk related projects follows… well, nobody could ever specify to me any criteria to begin with, except “an anonymous reviewer whom we trust likes the project” or “they seem to have many great publications”, which, once looked at, don’t really exist. That’s as far from academic procedures as it gets.
I assumed your post to be more of a nominal attempt to disagree with me than it really was, so the failure of some of its statements to constitute specific rebuttals of my points became irritating. I’ve edited my comment to be cleaner. I apologize for that.
Okay, and if we look at that post, we see some pretty complete and civil responses to your arguments. Seems like things are Working As Intended. I am responding to some of your claims in that thread so that it gets collected in the right place. But going back to the conversation here, you seem to be pretty clear that it is possible to have effective and efficient science funding, even if Open Phil isn’t doing it right. Plus, you’re only referring to Open Phil/EAF, not everyone else who supports long-term causes. So clearly it would be inappropriate for long-term EA causes to be separated.
But concerning far-future funding, research is the only thing that can be funded, which makes it particularly troublesome.
We can push for political change at the national or international level, we can grow the EA movement, or do animal advocacy. Those are known and viable far-future cause areas, even if they don’t get as much attention under that guise.
No worries! Thanks for that, and yes, I agree pretty much with everything you say here.
As for the discussion on far-future funding, it did start in the comments on my post, but it came nowhere near practical changes in terms of the transparency of the criteria proposed for assessing funded projects. I’ll try to write a separate, more general post on that.
My only point was that due to the high presence of “far-future bias” on this forum (I might be wrong, but much of the downvoting-without-commenting seems to reflect at least a tendency towards biased outlooks) it’s nice to have some chats on more near-future related topics and strategies for promoting those goals. I see a chat channel more as a complementary venue to this forum than as an alternative.
It’s extremely hard to identify bias without proper measurement/quantification, because you need to separate it from actual differences in the strength of people’s arguments, as well as legitimate expression of a majority point of view, and your own bias. In any case, you are not going to get downvoted for talking about how to reduce poverty. I’m not sure what you’re really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you’re just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.
First, I disagree with your imperatives concerning what one should do before engaging in criticism. That’s a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues. I am genuinely interested in reading about near-future improvement topics, while being genuinely interested in voicing opinion on all kinds of meta issues, especially those that are closely related to my own research topics.
Second, the fact that measuring bias is difficult doesn’t mean bias doesn’t exist.
Third, to use your phrase, I am not sure what you are really worried about: having different types of venues for discussion doesn’t seem harmful especially if they concern different focus groups.
That’s a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues
Mhm, it’s POSSIBLE to talk about it, bias MAY exist, etc, etc. There’s still a difference between speculation and argument.
having different types of venues for discussion doesn’t seem harmful especially if they concern different focus groups.
different venues are fine, they must simply be split along legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.
Mhm, it’s POSSIBLE to talk about it, bias MAY exist, etc, etc. There’s still a difference between speculation and argument.
Could you please explain what you are talking about here since I don’t see how this is related to what you quote me saying above? Of course, there is a difference between speculation and argument, and arguments may still include a claim that’s expressed in a modal way. So I don’t really understand how this is challenging what I have said :-/
different venues are fine, they must simply be split along legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.
having a discussion focusing on certain projects rather than others (in view of my suggestion directly to the OP) allows for such a legitimate focus, why not?
Could you please explain what you are talking about here since I don’t see how this is related to what you quote me saying above?
The part where I say “it’s POSSIBLE to talk about it” relates to your claim “we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues”, and the part where I say “bias MAY exist” relates to your claim “the fact that measuring bias is difficult doesn’t mean bias doesn’t exist.”
having a discussion focusing on certain projects rather than others (in view of my suggestion directly to the OP) allows for such a legitimate focus, why not?
Your suggestion to the OP to only host conversation about “[projects that] improve the near future” is the same distinction of near-term vs long-term, and therefore is still the wrong way to carve up the issues, for the same reasons I gave earlier.
right, we are able to—doesn’t mean we cannot form arguments. since when did arguments exist only if we can be absolutely certain about something?
As for my suggestion, unfortunately, and as I’ve said above, there is a bubble in the EA community concerning the far-future prioritization, which may be overshadowing and repulsive towards some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking here about a very specific context where a number of biases are already entrenched and people tend to be put off by that. Your approach alone in this discussion with me is super off-putting, and my best guess is that you are behaving like this because you are hiding behind your anonymous identity. I wonder, if we talked in person, whether you’d be so rude (for examples, see my previous replies to you). I doubt it.
since when did arguments exist only if we can be absolutely certain about something?
You don’t have to be certain, just substantiated.
there is a bubble in the EA community concerning the far-future prioritization which may be overshadowing and repulsive towards some who are interested in other topics
It may be, or it may not be. Even if so, it’s not healthy to split groups every time people dislike the majority point of view. “It’s a bubble and people are biased and I find it repulsive” is practically indistinguishable from “I disagree with them and I can’t convince them”.
we are talking here about a very specific context where a number of biases are already entrenched and people tend to be put off by that
Again, this is unsupported. What biases? What’s the evidence? Who is put off? Etc.
my best guess is that you are behaving like this because you are hiding behind your anonymous identity
My IRL identity is linked via the little icon by my username. I don’t know what’s rude here. I’m saying that you need to engage on a topic before commenting on the viability of engaging on it. Yet this basic point is being met with appeals to logical fallacies, blank denial of the validity of my argument, insistence upon the mere possibility and plausible deniability of your position. These tactics are irritating and lead nowhere, so all I can do is restate my points in a slightly different manner and hope that you pick up the general idea. You’re perceiving that as “rude” because it’s terse, but I have no idea what else I can say.
OK, you aren’t anonymous, so that’s even more surprising. I gave you earlier examples of your rude responses, but it doesn’t matter; I’m fine going on.
My impression of bias is based on my experience on this forum and on observations concerning posts critical of far-future causes. I don’t have any systematic study on this topic, so I can’t provide you with evidence. It is just my impression, based on my personal experience. But unfortunately, no empirical study on this topic, concerning this forum, exists, so the best we currently have are personal experiences. My experience is based on observations of the presence of larger-than-average downvoting without commenting when criticism on these issues is voiced. Of course, I may be biased and this may be my blind spot.
You started questioning my comments on this topic by stating that I haven’t engaged in any near-future discussions so far. And I am replying that I don’t need to have done so in order to have an argument concerning the type of venue that would profit from discussions on this topic. I don’t even see how I could change my mind on this topic (the good practice when disagreeing) because I don’t see why one would engage in a discussion in order to have an opinion on the discussion. Hope that’s clear by now :)
My experience is based on observations of the presence of larger-than-average downvoting without commenting when criticism on these issues is voiced.
I’m not referring to that; I’m questioning whether talking about near-term stuff needs to happen anywhere else. This whole thing is not about “where can we argue about cause prioritization and the flaws in Open Phil,” it is about “where can we argue about bed nets vs cash distribution”. Those are two different things, and just because a forum is bad for one doesn’t imply that it’s bad for the other. You have been conflating these things in this entire conversation.
And I am replying that I don’t need to have done so in order to have an argument concerning the type of venue that would profit from discussions on this topic. I don’t even see how I could change my mind on this topic (the good practice when disagreeing) because I don’t see why one would engage in a discussion in order to have an opinion on the discussion
The basic premise here, that you should have experience with conversations before opining about the viability of having such a conversation, is not easy to communicate with someone who defers to pure skepticism about it. I leave that to the reader to see why it’s a problem that you’re inserting yourself as an authority while lacking demonstrable evidence and expertise.
I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:
But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you’re just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.
Can you please explain what you are suggesting here? How is this conflicting with my interest in near-future related topics? I have a hard time understanding why you are so confrontational. Your last sentence:
Try things out before judging.
is the highest peak of unfriendliness. What should I try exactly before judging?!
Civil can still be unfriendly, but hey, if you aren’t getting it, it’s fine.
It should be clear, no? It’s hard to judge the viability of talking about X when you haven’t talked about X.
If it was clear, why would I ask? There’s your lack of friendliness in action.
And I still don’t see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven’t personally engaged in discussing it in that context. The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim. So far, you are just stating something which I currently can’t make any sense of.
“talking about near-future related topics and strategies”. I don’t know how else I can say this.
Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic? As I said above, this is a non-sequitur, or at least you haven’t provided any arguments to support this thesis. I can be in a position to suggest that scientists may profit from exchanging their ideas in a venue A even if I myself haven’t exchanged any ideas in A.
And I still don’t see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven’t personally engaged in discussing it in that context
Yes, you can, technically, in theory. I’m recommending that you personally engage before judging it with confidence.
The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim.
This kind of burden-of-proof-shifting is not a good way to approach conversation. I’ve already made my argument.
So far, you are just stating something which I currently can’t make any sense of.
What part of it doesn’t make sense? I honestly don’t see how it’s not clear, so I don’t know how to make it clearer.
Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic
They can, I’m just saying that it will be pretty unreliable.
I’m recommending that you personally engage before judging it with confidence.
But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions. And yet I may engage where my expertise lies. This is why I really can’t make sense of your recommendation (which was originally an imperative, in fact).
This kind of burden-of-proof-shifting is not a good way to approach conversation. I’ve already made my argument.
I haven’t seen any such argument :-/
What part of it doesn’t make sense? I honestly don’t see how it’s not clear, so I don’t know how to make it clearer.
First, because you seem to be interested in “talking about near-future related topics and strategies”. And second, because it will provide you with firsthand experience of the topic you are arguing about.
I haven’t seen any such argument
In above comments, I write “It’s hard to judge the viability of talking about X when you haven’t talked about X”, and “I’m not sure what you’re really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you’re just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.”
Like I mentioned above, I may be interested in reading focused discussions on this topic and chipping in when I feel I can add something of value. Reading alone brings a lot of value on forums/discussion channels.
Moreover, I may assess how newcomers with a special interest in these topics may contribute to such a venue. Your reduction of a meta-topic to one’s personal experience of it is a non-sequitur.
But in many contexts this may not be the case: as I’ve explained, I may profit from reading some discussions which is a kind of engagement. You’ve omitted that part of my response. Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they’ve never participated). Knowledge-of doesn’t necessarily have to be knowledge obtained by an object-level engagement in the given field.
as I’ve explained, I may profit from reading some discussions which is a kind of engagement.
OK, sure. But when I look at conversations about near-term issues on this forum I see perfectly good discussion (e.g. http://effective-altruism.com/ea/xo/givewells_charity_recommendations_require_taking/), and nothing that looks bad. And the basic idea that a forum can’t talk about a particular cause productively merely because most of its users reject that cause (even if they do so for poor reasons) is simply unsubstantiated and hard to believe in the first place, on conceptual grounds.
Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they’ve never participated).
This kind of talk has a rather mixed track record, actually. (Source: I’ve studied economics and read what philosophers have to say about economic methodology.)
Right, and I agree! But here’s the thing (which I haven’t mentioned so far, so maybe it helps): I think some people just don’t participate in this forum much. For instance, there is a striking gender imbalance (I think more than 70% on here are men) and while I have absolutely no evidence to correlate this with near/far-future issues, I wouldn’t be surprised if it’s somewhat related (e.g. there are not so many tech-interested non-males in EA). Again, this is now just a speculation. And perhaps it’s worth a shot to try an environment that will feel safe for those who are put-off by AI-related topics/interests/angles.
I think some people just don’t participate in this forum much.
Absofuckinglutely, so let’s not make that problem worse by putting them into their own private Discord. As I said at the start, this is creating the problem that it is trying to solve.
And perhaps it’s worth a shot to try an environment that will feel safe for those who are put-off by AI-related topics/interests/angles.
EA needs to adhere to high standards of intellectual rigor, therefore it can’t fracture and make wanton concessions to people who feel emotional aversion to people with a differing point of view. The thesis that our charitable dollars ought to be given to x-risk instead of AMF is so benign and impersonal that it beggars belief that a reasonable person will feel upset or unsafe upon being exposed to widespread opinion in favor of it. Remember that the “near-term EAs” have been pushing a thesis that is equally alienating to people outside EA. For years, EAs of all stripes have been saying to stop giving money to museums and universities and baseball teams, that we must follow rational arguments and donate to faraway bed net charities which are mathematically demonstrated to have the greatest impact, and (rightly) expect outsiders to meet these arguments with rigor and seriousness; for some of these EAs to then turn around and object that they feel “unsafe”, and need a “safe space”, because there is a “bubble” of people who argue from a different point of view on cause prioritization is damningly hypocritical. The whole point of EA is that people are going to tell you that you are wrong about your charitable cause, and you shouldn’t set it in protective concrete like faith or identity.
While I largely agree with your idea, I just don’t understand why you think that a new space would divide people who anyway aren’t on this forum to begin with? Like I said, 70% on here are men. So how are you gonna attract more non-male participants? This topic may be unrelated, but let’s say we find out that the majority of non-males have preferences that would better align with a different type of venue. Isn’t that a good enough reason to initiate it? Why would that be conflicting, rather than complementary, with this forum?
I just don’t understand why you think that a new space would divide people who anyway aren’t on this forum to begin with
I stated the problems in my original comment.
So how are you gonna attract more non-male participants
The same ways that we attract male participants, but perhaps tailored more towards women.
let’s say we find out that the majority of non-males have preferences that would better align with a different type of venue. Isn’t that a good enough reason to initiate it?
It depends on the “different type of venue.”
Why would that be conflicting, rather than complementary, with this forum?
Because it may entail the problems that I gave in my original comment.
Yeah, this isn’t good policy. It should be pretty clear that this is how groupthink happens, and you’re establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view?
If you want to talk about how best to X, but you run into people who aren’t interested in X, it seems fine to talk to other pro-Xers. It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up? Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?
You’re also creating the problem you’re trying to solve in a different way. Whereas most “near-term EAs” enjoy the broad EA community perfectly well, you’re reinforcing an assumption that they can’t get along, that they should expect EA to “alienate” them, as they hear about your server
To be frank, I think this problem already exists. I’ve literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say “oh, you’re the Michael Plant with the weird views” which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.
It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up?
If so, then every academic center would be a filter bubble. But filter bubbles are about communities, not work departments. There are relevant differences between these two concepts that affect how they should work. Researchers have to have their own work departments to be productive. It’s more like having different channels within an EA server. Just making enough space for people to do their thing together.
Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?
These institutions don’t have premises; they have teloses. And if someone would be the best contributor to the telos, then sure, they should be hired, even though it’s very unlikely that you will find a critic who will be willing and able to do that. But Near-Term EA has a premise: that the best cause is something that helps in the near term.
To be frank, I think this problem already exists. I’ve literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say “oh, you’re the Michael Plant with the weird views” which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.
That sounds like stuff that wouldn’t fly under the moderation here or in the Facebook group. The first comment, at least. The second one maybe gets a warning and downvotes.
I don’t often argue the merits of bednets versus cash transfers, which means I get intellectually sloppy knowing I won’t be challenged.
OK, but in that case wouldn’t it be better to stick around people with opposing points of view?
This seems like a pretty severe misreading to me. Ozy is saying that they want to hone their arguments against people with expertise in a particular field rather than a different field, which is perfectly reasonable.
You’re right, I did misread it; I thought the comparison was to something against long-term causes.
In any case you can always start a debate over how to reduce poverty on forums like this. Arguments like this have caught a lot of interest around here. And just because you put all the “near-term EAs” in the same place doesn’t mean they’ll argue with each other.
For what it’s worth, I felt a bit alienated by the other Discord, not because I don’t support far-future causes, or because it was even discussing the far future, but because I didn’t find the conversation interesting. I think this Discord might help me engage more with EAs, because I find the discourse more interesting, and I happen to like the way Thing of Things discusses things. I think it’s good to have a variety of groups with different cultures and conversation styles, to appeal to a broader base of people. That said, I do have some reservations about fragmenting EA along ideological lines.
I do not intend Near-Term EAs to be participants’ only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.
Near-Term EAs has been in open beta for a week or two while I ironed out the kinks. So far, I have not found any issues with people being unusually closed-minded or intolerant of far-future EAs. In fact, we have several participants who identify as cause-agnostic and at least one who works for a far-future organization.
I do not intend Near-Term EAs to be participants’ only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.
There is not any shortage of places where near-term EAs can talk with near-term EAs—it is the same list. (except for maybe LessWrong, which may be bad for the same reasons as this discord server, but at least they are open to everyone’s participation, and don’t make a brand out of their POV.) But if the mere availability of alternative avenues for dissenting opinions were sufficient for avoiding groupthink, then groupthink would not exist. Every messageboard is just a click away from many others. And yet we see people operating in filter bubbles all the same.
Please see my comment reply to adamaero: “near-term EA” is a thesis, not a legitimate way to carve up the movement (the same goes for long-term EA), and it shouldn’t be entrenched as a kind of ideology—certainly not as a kind of identity, which is even worse. You are reinforcing a framing that will continue to cause deep problems that will be extremely difficult to undo. Consider focusing on poverty reduction instead, for instance.
Discord lets you separate servers into different channels for people to talk about different things. There is already an EA Discord, of course new and near term EAs are welcome there. I think it would be bad if we split things like this because the more the near term EAs isolate themselves, the more and more “alienated” people will feel elsewhere, so it will be a destructive feedback loop. You’re creating the problem that you are trying to solve.
Also, it would reinforce the neglect of mid-term causes which have always gotten too little attention in EA.
Yeah, this isn’t good policy. It should be pretty clear that this is how groupthink happens, and you’re establishing it as a principle. I get that you feel alienated because, what, 60% of people have a different point of view? (perish the thought!) And you want to help with the growth of the movement. But hopefully you can find a better way to do this than creating an actual echo chamber. It’s clearly a poor choice as far as epistemology is concerned.
You’re also creating the problem you’re trying to solve in a different way. Whereas most “near-term EAs” enjoy the broad EA community perfectly well, you’re reinforcing an assumption that they can’t get along, that they should expect EA to “alienate” them, as they hear about your server. As soon as people are pointed towards a designated safe space, they’re going to assume that everything on the outside is unfriendly to them, and that will bias their perceptions going forward.
You are likely to have a lighter version of the problem that Hatreon did with Patreon, Voat with Reddit, etc—whenever a group of people has a problem with the “mainstream” option and someone tries to create an alternative space, the first people who jump ship to the alternative will be the highly-motivated people on the extreme end of the spectrum, who are the most closed-minded and intolerant of the mainstream, and they are going to set the norms for the community henceforth. Don’t get me wrong, it’s good to expand EA with new community spaces and be more appealing to new people, it is always nice to see people put effort into new ideas for EA, but this is very flawed, I strongly recommend that you revise your plans.
Moderator note: I found this harsher than necessary. I think a few tone changes would have made the whole message feel more constructive.
What statements were “harsher than necessary”?
I’ll PM you.
“so it will be a destructive feedback loop” ~ not necessarily
“you’re reinforcing an assumption that they can’t get along” ~ unlikely
“whenever a group of people [...] extreme end of the spectrum, who are the most closed-minded and intolerant” ~ very big presumptions
I personally think this chat is a great idea. Too many times on Facebook groups, I have to see local events that I can’t attend. Too many times I see EA posts that have no relevance to my involvement in EA. That doesn’t mean I’m closed-minded. Most EAs, picking animal suffering or global poverty, are the most open-minded people in my opinion.
Perhaps think about it like the difference between the Physics Stack Exchange chat and the Electrical Engineering (EE) Stack Exchange chat. They’re very close to the same. EE is based in physics obviously. But they’re separate.
Anyway, my two cents.
All three of those are merely cases of you disagreeing with my claims or my confidence in them. I thought I was being tone-policed, but you are just saying that I am wrong.
The fact that people are unable to attend something is one of the problems with the server that is being promoted here. I’m not in favor of anything in EA that does this, if someone ever tries to exclude near-term EAs from their event then give me a ping and I will argue with them too!
Theoretical physicists are not upset by the presence of discussion on experimental physics, and the ones who disbelieve in dark matter are not upset by the presence of discussion from people who do. If lots of posts aren’t relevant to you, the right answer is presumably to ignore those posts; I and so many other EAs do it all the time, it’s easy.
If you want more content that is relevant to you… that’s perfect! Make it! Request it! Ask questions about it! Be the change that you wish to see in the world.
The physics stack exchange doesn’t try to exclude engineers, and they didn’t make it because they thought that engineers were “alienating”; if they operated on that basis then it would create unnecessary annoyance for everyone. They separate because they are different topics, with different questions that need to be answered, and the skills and education which are relevant to one are very different from those that matter for another. But “near-term Effective Altruism”, just like “long-term Effective Altruism”, is a poorly specified bundle of positions with no common methodological thread. The common thread within each bundle is not any legitimate underlying presupposition about values or methodology that may form the foundation for further inquiry, it is an ex post facto conclusion that the right cause is something that happens to be short- or long-term. And while some cause conclusions could form a meaningful basis for significant further inquiry (e.g., you selected poverty as a cause, so now you just want to talk about poverty relief), the mere conclusion that the right cause is something that matters in the near or long term does not form any meaningful basis, because there is little in the way of general ideas, tools, resources, or methodologies which matter greatly for one bundle of causes but not the other.
But not only is the original analogy with physics and engineering relevantly incorrect, it’s specifically pernicious, because many EAs already implicitly have the misconception that supporting near-term or long-term causes is a matter of philosophical presupposition or overarching methodology; in fact it is probably the greatest confusion that EAs have about EA and therefore it wouldn’t be wise to reinforce it.
@kbog: Most of your responses with respect to my reply do not make sense. Example, EA Chicago posts their events on the Facebook page. I don’t live in Chicago...(simple as that)
~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy. With that, it doesn’t follow that somehow the analogy is wrong related to these elusively implicit misconceptions by EAs about EAs.
So to sum up, you’re reading in way too far to what I wrote originally. I was answering your question related to why your first reply was “harsher than necessary”.
OK, but has nothing to do with whether or not we should have this discord server… why bring it up? In the context of your statements, can’t you see how much it looks like someone is complaining that there are too many events that only appeal to EAs who support long-term causes, and too few events for EAs who support near-term causes?
It’s not that the analogy was not absolute, it’s that it was relevantly wrong for the topic of discussion. But given that your argument doesn’t seem to be what I thought it was, that’s fine, it could very well be relevant for your point.
I figured that “harsh” refers to tone. If I insult you, or try to make you feel bad, or inject vicious sarcasm, then I’m being harsh. You didn’t talk about anything along those lines, but you did seem to be disputing my claims about the viability of the OP, so I took it to be a defense of having this new discord server. If you’re not talking on either of those issues then I don’t know what your point is.
They were examples to how I saw how your post as “harsher than necessary”. You’ve diluted these mere examples into a frivolous debate. If you believe you were not harsh at all, then believe what you want to believe.
As I stated already, “harsh” is a question of tone, and you clearly weren’t talking about my tone. So I have no clue what your position is or what you were trying to accomplish by providing your examples. There’s nothing I can do in the absence of clarification.
Diction and pronouns have tone (e.g., “you’re reinforcing” vs a more modest “that could reinforce”). With that, expressing certainty, about predictions (e.g., “whenever a group of people”) is another way I saw the original comment as harsh—unless you’re an expert in the field (and a relevant study would help too). I, for one, am no anthropologist nor sociologist.
I’m not debating if here. You asked how, and I quoted the statements I saw as the most harsh + most questionable. [I’m trying to say this lightly. Instead I could have made that last bit, ”
furthest from the truth”. But I didn’t, because I’m trying to demonstrate. (And that’s not what I really mean anyway.)] I never said you are wrong about _ _ _ _ _. I said, it may not be true; it may be true.
You seem to still think the original comment was not harsher than necessary by your own definition of tone. Either way, I’m guessing Mrs. Wise gave you much less confusing pointers with her PM.
Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren’t following your lines of reasoning (unfortunately, of course). For instance, the community practices related to far-future focus (in particular AI-risks) have adopted the assessment of scientific research and the funding thereof, which I find lacking scientific rigor, transparency and overall validity (to the point that it makes no sense to speak of “effective” charity). Moreover, there is a large consensus about such evaluative practices: they are assumed as valid by OpenPhil and the EAF, and even when I tried to exchange arguments with both of these institutions, nothing has ever changed (I’ve never even managed to push them into a public dialogue on this topic). I see this problem as a potential danger for the EA community in whole (just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed effective, where it is unclear according to which criteria it would count as such; similarly for newcomers). In view of this, I think dividing these practices would be a great idea. The fact they are connected to “far-future EA” is secondary to me, and it is unfortunate that far-future ideas turned into a bubble of its own, closed towards criticism questioning the core of their EA methodology.
That said, I agree with some of your worries (see my other comment here).
Well the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.
What is it when money goes to Givewell or Animal Charity Evaluators? Funding scientific research. Don’t poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong when numerous complaints have been lodged at the research quality of Givewell and ACE?
Well I haven’t claimed that the evaluation of futurist scientific research is rigorous, transparent or valid. I think you should make a compelling argument for that in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn’t exactly show us that you are right.
Note: it’s particularly instructive here, as we evaluate the utility of the sort of segregation proposed by the OP, how the idea that EA ought to be split along these lines is bundled with the assertion that the Other Side is doing things “wrong”; we can see that the nominally innocuous proposal for categorization is operationalized to effect the general discrediting of those with an opposing point of view, which is exactly why it is a bad thing.
Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.
Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their “criticisms” aren’t being met, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertions in web comments, or quite old arguments that have already been accepted and answered, and in either case opponents are clearly ready and willing to engage with such criticism. The claim that people are “closed towards criticism” invariably turns out to be nothing but the fact that the complainant failed to change anyone’s mind, but seldom does the complainant question whether they are right at all.
wow, you really seem annoyed… didn’t expect such a pissed post, but i suppose you got really annoyed by this thread or something. I provided the arguments in detail concerning OpenPhil’s practices in a post from few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.
I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc. I don’t have the time. I plan on writing a post concerning EAF’s funding policy as well, where I’ll sum it up in a similar way as I did for OpenPhil.
That said, I don’t think we shouldn’t criticize the research done by near-future organizations, to the contrary. And I completely agree: it’d be great to have a forum devoted only to research practices and funding thereof. But concerning far-future funding, research is the only thing that can be funded, which makes it particularly troublesome.
Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be discussed on their own, but they aim at efficient and effective research. The funding of AI-risk related projects follows… well, nobody could ever specify to me any criteria to begin with, except “an anonymous reviewer whom we trust likes the project” or “they seem to have many great publications”, which once looked at don’t really exist. That’s as far from academic procedures as it gets.
I assumed your post was more of a direct attempt to disagree with me than it really was, so it was irritating when some of its statements failed to constitute specific rebuttals of my points. I’ve edited my comment to be cleaner. I apologize for that.
Okay, and if we look at that post, we see some pretty complete and civil responses to your arguments. Seems like things are Working As Intended. I am responding to some of your claims in that thread so that the discussion gets collected in the right place. But going back to the conversation here, you seem to be pretty clear that it is possible to have effective and efficient science funding, even if Open Phil isn’t doing it right. Plus, you’re only referring to Open Phil/EAF, not everyone else who supports long-term causes. So clearly it would be inappropriate for long-term EA causes to be separated.
We can push for political change at the national or international level, grow the EA movement, or do animal advocacy. Those are known and viable far-future cause areas, even if they don’t get as much attention under that guise.
No worries! Thanks for that, and yes, I agree with pretty much everything you say here. As for the discussion on far-future funding, it did start in the comments on my post, but it led to nothing in the way of practical changes, in terms of transparency about the criteria proposed for assessing funded projects. I’ll try to write a separate, more general post on that.
My only point was that due to the high presence of “far-future bias” on this forum (I might be wrong, but much of the downvoting-without-commenting seems to reflect at least a tendency towards biased outlooks), it’s nice to have some chats on more near-future related topics and strategies for promoting those goals. I see a chat channel more as a complementary venue to this forum than as an alternative.
It’s extremely hard to identify bias without proper measurement/quantification, because you need to separate it from actual differences in the strength of people’s arguments, as well as legitimate expression of a majority point of view, and your own bias. In any case, you are not going to get downvoted for talking about how to reduce poverty. I’m not sure what you’re really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you’re just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.
First, I disagree with your imperatives concerning what one should do before engaging in criticism. That’s a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues. I am genuinely interested in reading about near-future improvement topics, while also being genuinely interested in voicing my opinion on all kinds of meta issues, especially those closely related to my own research topics.
Second, the fact that measuring bias is difficult doesn’t mean bias doesn’t exist.
Third, to use your phrase, I am not sure what you are really worried about: having different types of venues for discussion doesn’t seem harmful, especially if they serve different focus groups.
Mhm, it’s POSSIBLE to talk about it, bias MAY exist, etc, etc. There’s still a difference between speculation and argument.
Different venues are fine, but they must be split along legitimate lines (like light chat vs. serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for the reasons I stated earlier in this thread.
Could you please explain what you are talking about here, since I don’t see how it relates to what you quote me saying above? Of course there is a difference between speculation and argument, and arguments may still include claims expressed in a modal way. So I don’t really understand how this is challenging what I have said :-/
Having a discussion focused on certain projects rather than others (per my suggestion directly to the OP) allows for such a legitimate focus, so why not?
The part where I say “it’s POSSIBLE to talk about it” relates to your claim “we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues”, and the part where I say “bias MAY exist” relates to your claim “the fact that measuring bias is difficult doesn’t mean bias doesn’t exist.”
Your suggestion to the OP to only host conversation about “[projects that] improve the near future” is the same distinction of near-term vs long-term, and therefore is still the wrong way to carve up the issues, for the same reasons I gave earlier.
Right, we are able to; that doesn’t mean we cannot form arguments. Since when do arguments exist only if we can be absolutely certain about something?
As for my suggestion: unfortunately, and as I’ve said above, there is a bubble in the EA community concerning far-future prioritization, which may overshadow and alienate some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking here about a very specific context where a number of biases are already entrenched and people tend to be put off by that. Your approach alone in this discussion with me is super off-putting, and my best guess is that you are behaving like this because you are hiding behind your anonymous identity. I wonder whether you would be so rude if we talked in person (for examples, see my previous replies to you). I doubt it.
But they’ll be unsubstantiated.
You don’t have to be certain, just substantiated.
It may be, or it may not be. Even if so, it’s not healthy to split groups every time people dislike the majority point of view. “It’s a bubble and people are biased and I find it repulsive” is practically indistinguishable from “I disagree with them and I can’t convince them”.
Again, this is unsupported. What biases? What’s the evidence? Who is put off? Etc.
My IRL identity is linked via the little icon by my username. I don’t know what’s rude here. I’m saying that you need to engage with a topic before commenting on the viability of engaging with it. Yet this basic point is being met with appeals to logical fallacies, blank denial of the validity of my argument, and insistence upon the mere possibility and plausible deniability of your position. These tactics are irritating and lead nowhere, so all I can do is restate my points in a slightly different manner and hope that you pick up the general idea. You’re perceiving that as “rude” because it’s terse, but I have no idea what else I can say.
OK, you aren’t anonymous, so that’s even more surprising. I gave you examples of your rude responses earlier, but it doesn’t matter; I’m fine with moving on.
My impression of bias is based on my experience on this forum and my observations of how posts critical of far-future causes are received. I don’t have any systematic study on this topic, so I can’t provide you with evidence; it is just my impression, based on my personal experience. Unfortunately, no empirical study of this forum on this topic exists, so the best we currently have are personal experiences. Mine is based on observing larger-than-average downvoting without commenting when criticism of these issues is voiced. Of course, I may be biased and this may be my blind spot.
You started questioning my comments on this topic by stating that I haven’t engaged in any near-future discussions so far. And I am replying that I don’t need to have done so in order to have an argument about the type of venue that would profit from discussions on this topic. I don’t even see how I could change my mind here (the good practice when disagreeing), because I don’t see why one would need to engage in a discussion in order to have an opinion about the discussion. Hope that’s clear by now :)
I’m not referring to that; I’m questioning whether talking about near-term stuff needs to happen anywhere else. This whole thing is not about “where can we argue about cause prioritization and the flaws in Open Phil,” it is about “where can we argue about bed nets vs cash distribution.” Those are two different things, and just because a forum is bad for one doesn’t imply that it’s bad for the other. You have been conflating them throughout this entire conversation.
The basic premise here, that you should have experience with a conversation before opining about the viability of having such a conversation, is not easy to communicate to someone who defers to pure skepticism about it. I leave it to the reader to see why it’s a problem that you’re positioning yourself as an authority while lacking demonstrable evidence and expertise.
I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:
Can you please explain what you are suggesting here? How is this conflicting with my interest in near-future related topics? I have a hard time understanding why you are so confrontational. Your last sentence:
is the highest peak of unfriendliness. What exactly should I try before judging?!
I don’t know of any less confrontational/unfriendly way of wording those points. That comment is perfectly civil.
It should be clear, no? It’s hard to judge the viability of talking about X when you haven’t talked about X.
Look, it’s right there in the original comment—“talking about near-future related topics and strategies”. I don’t know how else I can say this.
Civil can still be unfriendly, but hey, if you aren’t getting it, it’s fine.
If it was clear, why would I ask? There’s your lack of friendliness in action. And I still don’t see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven’t personally engaged in discussing them in that context. The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim. So far, you are just stating something that I can’t make any sense of.
Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic? As I said above, this is a non-sequitur, or at least you haven’t provided any arguments to support this thesis. I can be in a position to suggest that scientists may profit from exchanging their ideas in a venue A even if I myself haven’t exchanged any ideas in A.
Yes, you can, technically, in theory. I’m recommending that you personally engage before judging it with confidence.
This kind of burden-of-proof-shifting is not a good way to approach conversation. I’ve already made my argument.
What part of it doesn’t make sense? I honestly don’t see how it’s not clear, so I don’t know how to make it clearer.
They can, I’m just saying that it will be pretty unreliable.
But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions, and yet I may only engage where my own expertise lies. This is why I really can’t make sense of your recommendation (which was originally an imperative, in fact).
I haven’t seen any such argument :-/
See above.
First, because you seem to be interested in “talking about near-future related topics and strategies”. And second, because it will provide you with firsthand experience of the topic you are arguing about.
In the comments above, I wrote: “It’s hard to judge the viability of talking about X when you haven’t talked about X”, and “I’m not sure what you’re really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you’re just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.”
Like I mentioned above, I may be interested in reading focused discussions on this topic and chipping in when I feel I can add something of value. Reading alone brings a lot of value on forums and discussion channels.
Moreover, I may assess how newcomers with a special interest in these topics might profit from such a venue. Your reduction of a meta-topic to one’s personal experience of it is a non-sequitur.
I didn’t reduce it. I only claim that it requires personal experience as a significant part of the picture.
But in many contexts this may not be the case: as I’ve explained, I may profit from reading some discussions, which is a kind of engagement. You’ve omitted that part of my response. Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain in which, as philosophers, they’ve never participated. Knowledge of a field doesn’t necessarily have to be obtained through object-level engagement in that field.
OK, sure. But when I look at conversations about near-term issues on this forum I see perfectly good discussion (e.g. http://effective-altruism.com/ea/xo/givewells_charity_recommendations_require_taking/), and nothing that looks bad. And the basic idea that a forum can’t talk about a particular cause productively merely because most of its users reject that cause (even if they do so for poor reasons) is simply unsubstantiated and hard to believe in the first place, on conceptual grounds.
This kind of talk has a rather mixed track record, actually. (source: I’ve studied economics and read the things that philosophers opine about economic methodology)
Right, and I agree! But here’s the thing (which I haven’t mentioned so far, so maybe it helps): I think some people just don’t participate in this forum much. For instance, there is a striking gender imbalance (I think more than 70% of people here are men), and while I have absolutely no evidence to correlate this with near/far-future issues, I wouldn’t be surprised if it’s somewhat related (e.g. there are not so many tech-interested non-males in EA). Again, this is just speculation. And perhaps it’s worth a shot to try an environment that will feel safe for those who are put off by AI-related topics/interests/angles.
Absofuckinglutely, so let’s not make that problem worse by putting them into their own private Discord. As I said at the start, this is creating the problem that it is trying to solve.
EA needs to adhere to high standards of intellectual rigor, therefore it can’t fracture and make wanton concessions to people who feel emotional aversion to people with a differing point of view. The thesis that our charitable dollars ought to be given to x-risk instead of AMF is so benign and impersonal that it beggars belief that a reasonable person will feel upset or unsafe upon being exposed to widespread opinion in favor of it. Remember that the “near-term EAs” have been pushing a thesis that is equally alienating to people outside EA. For years, EAs of all stripes have been saying to stop giving money to museums and universities and baseball teams, that we must follow rational arguments and donate to faraway bed net charities which are mathematically demonstrated to have the greatest impact, and (rightly) expect outsiders to meet these arguments with rigor and seriousness; for some of these EAs to then turn around and object that they feel “unsafe”, and need a “safe space”, because there is a “bubble” of people who argue from a different point of view on cause prioritization is damningly hypocritical. The whole point of EA is that people are going to tell you that you are wrong about your charitable cause, and you shouldn’t set it in protective concrete like faith or identity.
While I largely agree with your idea, I just don’t understand why you think that a new space would divide people who aren’t on this forum to begin with. Like I said, 70% of people here are men. So how are you going to attract more non-male participants? This may be unrelated, but suppose we find out that the majority of non-males have preferences that would align better with a different type of venue. Isn’t that a good enough reason to start one? Why would that be conflicting with, rather than complementary to, this forum?
I stated the problems in my original comment.
The same ways that we attract male participants, but perhaps tailored more towards women.
It depends on the “different type of venue.”
Because it may entail the problems that I gave in my original comment.
I don’t find your objections here persuasive.
If you want to talk about how best to X, but you run into people who aren’t interested in X, it seems fine to talk to other pro-Xers. It seems fine that FHI gathers people who are sincerely interested in the future of humanity. Is that a filter bubble that ought to be broken up? Do you see them hiring people who strongly disagree with the premise of their institution? Should CEA hire people who think effective altruism, broadly construed, is just a terrible idea?
To be frank, I think this problem already exists. I’ve literally had someone laugh in my face because they thought my person-affecting sympathies were just idiotic, and someone else say “oh, you’re the Michael Plant with the weird views” which I thought was, well, myopic coming from an EA. Civil discourse, take a bow.
If so, then every academic center would be a filter bubble. But filter bubbles are about communities, not work departments. There are relevant differences between these two concepts that affect how they should work. Researchers have to have their own work departments to be productive. It’s more like having different channels within an EA server. Just making enough space for people to do their thing together.
These institutions don’t have premises; they have teloses, and if someone will be the best contributor to the telos then sure, they should be hired, even though it’s very unlikely that you will find a critic who is willing and able to do that. But Near-Term EA has a premise: that the best cause is something that helps in the near term.
That sounds like stuff that wouldn’t fly under the moderation here or in the Facebook group. The first comment at least; the second one would maybe get a warning and downvotes.
This seems like a pretty severe misreading to me. Ozy is saying that they want to hone their arguments against people with expertise in a particular field rather than a different field, which is perfectly reasonable.
You’re right, I did misread it, I thought the comparison was something against long term causes.
In any case you can always start a debate over how to reduce poverty on forums like this. Arguments like this have caught a lot of interest around here. And just because you put all the “near-term EAs” in the same place doesn’t mean they’ll argue with each other.
For what it’s worth, I felt a bit alienated by the other Discord, not because I don’t support far-future causes, or even because it was discussing the far future, but because I didn’t find the conversation interesting. I think this Discord might help me engage more with EAs, because I find the discourse more interesting, and I happen to like the way Thing of Things discusses things. I think it’s good to have a variety of groups with different cultures and conversation styles, to appeal to a broader base of people. That said, I do have some reservations about fragmenting EA along ideological lines.
Is the other Discord not publicly viewable? I’ve never heard of it.
https://www.reddit.com/r/EffectiveAltruism/comments/6etmdd/new_effective_altruism_discord_server/
It’s public. I would share a link, but that would give away my Discord identity; hopefully someone else has it.
I do not intend Near-Term EAs to be participants’ only space to talk about effective altruism. People can still participate on the EA forum, the EA Facebook group, local EA groups, Less Wrong, etc. There is not actually any shortage of places where near-term EAs can talk with far-future EAs.
Near-Term EAs has been in open beta for a week or two while I ironed out the kinks. So far, I have not found any issues with people being unusually closed-minded or intolerant of far-future EAs. In fact, we have several participants who identify as cause-agnostic and at least one who works for a far-future organization.
There is not any shortage of places where near-term EAs can talk with near-term EAs; it is the same list (except for maybe LessWrong, which may be bad for the same reasons as this Discord server, but at least they are open to everyone’s participation and don’t make a brand out of their POV). But if the mere availability of alternative avenues for dissenting opinions were sufficient to avoid groupthink, then groupthink would not exist. Every messageboard is just a click away from many others, and yet we see people operating in filter bubbles all the same.
Please see my reply to adamaero: “near-term EA” is a thesis, not a legitimate way to carve up the movement (the same goes for “long-term EA”), and it shouldn’t be entrenched as a kind of ideology, and certainly not as a kind of identity, which is even worse. You are reinforcing a framing that will continue to cause deep problems that will be extremely difficult to undo. Consider a focus on poverty reduction instead, for instance.