Okay, what information do you think they need? You mentioned “directions” and “approaches,” but those are very vague. I need the specific questions you think readers need answered before they will notify me of similar projects or express interest in what I’m doing.
Kathy
I think you’re saying “There isn’t enough information for most readers to decide whether they want to PM you.” Is that right?
I’m open to going in whatever direction gives the EA community the most insight into the truth, with whatever presentation encourages the most constructive use of that information. In case you’re interested in specifics, I am currently working on a planning document about how, specifically, to accomplish all of that. I can give you access if you wish (just send me your Google Docs address via PM).
I’m open to considering directions / direction changes. What are your thoughts so far? :)
I am not sure if you are requesting to see the project, or if you are making a complaint of some sort. It’s easy enough for anyone to PM me and request to see the project. Just in case, I updated my post to explicitly invite people to PM me to see the project.
In case this wasn’t clear, the project isn’t finished yet. Before dumping a lot more hours into it, I want to see whether I’m duplicating anyone’s work.
The fact that it is not yet finished is why I did not publish anything about it so far. It’s not ready to be published.
The main point of this post is simply to find out whether there are others doing a similar project, and to find people who are interested in helping make sure the project gets completed.
Collaborators Wanted: Could war disrupt EA orgs in the US or UK in the next 10 years?
I agree that most people will not understand the strangest ideas until they understand the basic ones. Ensuring they understand the foundation is a good practice.
I definitely agree that the instances of weirdness that are beneficial are only a tiny fraction of the weirdness that is present.
Regarding weirdness:
There are effective and ineffective ways to be weird.
There are several apparently contradictory guidelines in art: “use design principles”, “break the conventions”, and “make sure everything looks intentional”.
The effective ways to be weird manage all three guidelines.
Examples: Picasso, Björk, Lady Gaga
One of the major and most observable differences between these three artists vs. many weird people is that the behavior of the artists can be interpreted as a communication about something specific, meaningful, and valuable. Art is a language. Everything strange we do speaks about us. If you haven’t studied art, it might be rather hard to interpret the above three artists. The language of art is sometimes completely opaque to non-artists, and those who interpret art often find a variety of different meanings rather than a consistent one. (I guess that’s one reason why they don’t call it science.) Quick interpretations: In Picasso, I interpret an exploration of order and chaos. In Björk, I interpret an exploration of the strangeness of nature, the familiarity and necessity of nature, and the contradiction between the two. In Lady Gaga, I interpret an edgy exploration of identity.
These artists have the skill to say something of meaning as they follow principles and break conventions in a way that looks intentional. That is why art is a different experience from, say, looking at an odd-shaped mud splatter on the sidewalk, and why it can be a lot more special.
Ineffective weirdness is too similar to the odd-shaped mud splatter. There need to be signs of intentional communication. To interpret meaning, we need to see that combination of unbroken principles and broken conventions arranged in an intentional-looking pattern.
Edit: I agree that there aren’t a large number of people advocating for dishonesty. My concern is that if even a small number of EAs get enough attention for doing something dishonest, this could cause us all reputation problems. Since we could be “painted with the same brush” due to the common human bias called stereotyping bias, I think it’s worthwhile to make sure it’s easy to find information about how to do honest promotion, and why.
I updated my post to mention some specific examples of the problems I’ve been seeing. Thank you, David.
3 Examples You Can Use To Promote Causes Honestly and Effectively
It would protect the movement to have a norm that organizations must supply good evidence of effectiveness to the group, and may only claim to be an effective altruism organization once the group accepts that evidence.
I think some similar norm should also extend to individual people who want to publish articles about what effective altruism is. Obviously, this cannot be required of critics, but we can easily demand it from our allies. I’m not sure what we should expect individual people to do before they go out and write articles about effective altruism on Huffington Post or whatever, but expecting something seems necessary.
To prevent startups from being utterly ostracized by this before they’ve got enough data / done enough experiments to show effectiveness, maybe they could be encouraged to use a different term that includes EA but modifies it in a clear way like “aspiring effective altruism organization”.
Wow. More excellent arguments. More updates on my side. You’re on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.
I’m not completely sure what’s going on with Gleb, but I feel a great deal of concern for people with Asperger’s, and I think it made me overly sympathetic in this case. Thank you for this.
That was a truly excellent argument. Thank you.
I’m glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of “better wrong than vague” (though I am not sure exactly how you would go about implementing it as part of an assessment beyond “If they’re always vague, be suspicious,” which doesn’t seem actionable).
Considering how people respond to criticism is important for at least two reasons. If you can communicate with the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce that frustration or risk fast enough. People are going to lose their patience, or weigh the costs and benefits and decide that the person is too likely to be a net negative. This is totally understandable and totally reasonable.
I think the reason we don’t seem to have the exact same thoughts on that is my main goal in life: understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and my ability to understand the nature of conflicts between people and other people-related problems.
I think a lot of people reading these comments do feel bad for Gleb or do acknowledge that some potential will be lost if EA rejects InIn despite the high risk that their reputation problems may result in a net negative impact.
Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes.
The way some appear to be seeing this is: “The community is fed up with InIn. Therefore, let’s take the opportunity to oust them.”
The way I appear to be seeing this is: “The community is fed up with InIn. Therefore, let’s take the opportunity to persuade InIn that they need to do enough two-way communication to understand how others think about reputation and promotion.”
Part of this is because I think Gleb’s ignorance about reputation and marketing is so deep that he didn’t see a need to spend a significant amount of time learning about them. Perhaps he is, or was, unaware of how much there is for him to learn. If someone could convince him that there is a lot he needs to learn, he would be likely to make decisions comparable to: taking a break from promotion while he learns, granting someone knowledgeable veto power over any promotion efforts that aren’t good enough, or hiring an expert and following all of their advice.
(You presented a lot more worthwhile thoughts in your comment and I wish I could reply intelligibly to them all, but unfortunately, I don’t have the time to do all of these thoughts justice right now.)
These are some really strong arguments, Elizabeth. They have a good chance of changing my mind. I don’t know whether I agree or disagree with you yet, because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument:
1.) EA needs a set of rules which cannot be gamed by con artists.
2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists.
vs.
3.) Let’s not ban people and organizations who have good intentions.
A possible compromise between people on different sides would be:
Previously, there had been no rule about this. (Correct me if I’m wrong about this!) Therefore, we cannot say InIn had broken any rule. Let’s make a rule to limit dishonesty and misleading mistakes to a certain number in a certain time period / number of promotional pieces / volunteers / whatever. *
If InIn breaks the new rule after it is made, then we’ll both agree they should be banned.
If you think they should be banned right now, whether there was an existing rule or not, please tell me why.
* Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.
The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that’s a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get.
Comparing 1 / 3 vs. 1 / 100 is comparing apples to oranges.
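Just to make the ratio idea concrete, here is a minimal sketch of what a rate-based check (rather than an absolute-count check) might look like. The function name, the 5% threshold, and the example numbers are placeholders of my own, not a limit anyone has actually proposed:

```python
# Minimal illustrative sketch only. "acceptable_error_rate", the 5% threshold,
# and the sample numbers are placeholders, not a proposed standard.
def acceptable_error_rate(mistakes, outputs, max_rate=0.05):
    """Return True if the mistake rate per piece of output is within the limit."""
    if outputs == 0:
        return True  # nothing published yet, so nothing to judge
    return mistakes / outputs <= max_rate

print(acceptable_error_rate(30, 90))   # False: 30/90 is about 33%
print(acceptable_error_rate(3, 300))   # True: 3/300 is 1%
```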
I’m not sure what the best limit is, but I hope you can see why I think this is an important factor. Maybe this was obvious to everyone who may read this comment. If so, I apologize for over-explaining!
My stance is currently that Gleb most likely has a learning disorder (perhaps he is on the spectrum) and is also ignorant about marketing, resulting in a low skill level with promotion. Some people here are claiming things that make it seem like they believe Gleb intends to do something bad, like a con. It’s also possible Gleb was following marketing instructions to the letter which were written by people who are less scrupulous than most EAs (perhaps because he thought it was necessary to follow such instructions to be effective). I wouldn’t be surprised if Gleb perceived what he was doing as “white lies” (thinking that there would be a strong net positive impact). It’s also possible that some of these were ordinary mistakes (though probably not all of them because there are a lot).
I’d like to discover why people believe things like “this is a con” and see whether I change my mind or not. Anyone up for that?
Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a “con”.
I can see the behavior on the outside very clearly. On the outside, Gleb has said a long list of incorrect things.
On the inside, the picture is not so clear. What’s going on inside his head?
If this is a con, what in the world does he want? He can’t seem to make money off of this. Con artists have a tendency to do very, very quick things with a very, very low amount of effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He’s not making a disproportionate amount of money off of this… and spreading rationality doesn’t even look like one of those things a con artist could quickly do for a disproportionate reward… so I am confused.
If I thought Intentional Insights was a con, I’d be right with you trying to make that more obvious to everyone… but I launched my con detector and that test was negative.
Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word “con” either way just as people use the word “troll” either way.
For the same reasons that we should face the fact that there’s a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we’re seeing with Intentional Insights as accurately as possible.
Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia’s page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and major loss. (https://en.wikipedia.org/wiki/Psychological_trauma)
As some kind of weird hybrid between a bleeding heart and a shrewd person, I can’t justify anything but minimizing the brutality of a traumatic event for someone with a learning disorder, no matter how destructive it is. At the same time, I agree that ousting destructive people is a necessity if they won’t or can’t change, but I think in the case of an incompetent person, there are a lot of ways in which the community has been too brutal. In the event of a malicious con, we’ve been too charitable, and I’m guilty of this as well. If Gleb really is a con artist, we should be removing him as fast as possible. I just don’t see strong evidence that the problem he has is intentional, nor does it even seem to be clearly differentiated from terrible social skills and general ignorance about marketing.
Our response is too brutal for someone with a learning disorder or other form of incompetence, and it’s too charitable for a con artist. In order to move forward, I think perhaps we ought to stop and resolve this disagreement.
Here’s what’s at stake: currently, I intend to advocate for an intervention*. If you convince me that he is a con artist, I will abandon this intent and instead do what you are doing. I’ll help people see the con.
* (By intervention, I mean: encouraging everyone to tell Gleb that we require him to shape up or ship out, and negotiating things like what we mean by “shape up” and how we would like him to minimize risk while he is improving. If he has a learning disorder, a bit of extra support could go a long way if the specific problems are identified so the support can target them accurately. I suspect that Gleb needs to see a professional for a learning disorder assessment, especially for Asperger’s.)
I’m open to being convinced that Intentional Insights actually does qualify as some type of con or intends net negative destructive behavior. I don’t see it, but I’d like to synchronize perspectives, whether I “win” or “lose” the disagreement.
I’m half wondering how much of the upset was influenced by a general suspicion of, or aversion to, advertising and persuasion.
From one perspective, it’s almost as if Gleb used to be one of the “advertising/persuasion is icky” people, and decided to bite the bullet and just do this thing, even if it seemed whacked out and icky…
At first I thought maybe part of the problem was Gleb didn’t have any vision of how it could be done better. Now, I think it might actually be part of a systemic problem I keep noticing. Our social network generally does not have a clear vision of how it could be done better.
How many of us can easily think of specific strategies to promote InIn that sit well with all of our ethical standards and effectiveness criteria?
If a lot of people here are beginning with the belief that promotion is either icky or ineffective, we have set ourselves up for failure. This may encourage us to behave as if one must either accept being ineffective or allow oneself to be icky… which may result in choosing whichever options appear to be the icky-but-effective ones.
I think effective altruism can have both ethics and effectiveness at the same time. I do not believe there is actually a trade-off where choosing one necessarily must sacrifice the other. I believe there are probably even ways where one can enhance and build on the other.
I keep thinking that it would really benefit the whole movement if more people became more aware of what sorts of things result in disasters and of how to promote things well. This is another way that such awareness could be beneficial.
Some of the criticisms I’ve read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and side tracking the conversation. I’ll just say this:
MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI / Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that’s more like… egregious blasphemy. It’s a good thing the guy counter-balanced whatever that behavior was with articles like “Screening Off Authority” and “Guardians of the Truth”.
Do some searches for web marketing advice sometime, and you’ll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you… but somebody’s serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science… it’s not even psychology. We’re talking about marketing. For instance, paying Facebook to promote things can result in problems… yet this is recommended by a really big company, Facebook. :/
There are a few complaints against him that stand out as a WTF… (Then again, if you’re really scouring for problems, you’re probably going to find the sorts of super embarrassing mistakes people only make when they’re really exhausted or whatever. I don’t know what to make of every single one of these examples yet.)
Anyway, MIRI / Eliezer can’t claim something like “I was following some marketing instructions I read on the Internet somewhere,” which, IMO, would explain a lot of the stuff that Gleb did—which is not to say I think copying him is an effective or ethical way of promoting things! The Eliezer stuff was, like, self-contradictory enough that it was weird to the point of being original. It took me forever to figure that guy out. There were several years where I simply had no cogent opinion on him.
The stuff Gleb is doing is just so commonly bad. It’s not an excuse. I still want to see InIn shape up or ship out. I think EA can and should have higher standards than this. I have read and experienced a lot in the area of promoting things, and I know there are ways of persuading people by making them think, ways that don’t bias or mislead them but instead get them more in touch with reality. I think it takes a really well-thought-out person to accomplish that, because seeing reality is only the first step: then you need to know how to deal with it, and you need to encourage the person to do something constructive with the knowledge as well. Sometimes bare information can leave people feeling pretty cynical, and it’s not like we were all taught how to be creative and resourceful and lead ourselves in situations that are unexpectedly different from what we believed.
I really believe there are better ways to be memorable other than making claims about how much attention you’re getting. Providing questionable info of this type is certainly bad. The way I’m seeing it, wasting time on such uninspired attempts involves such a large quantity of lost potential that questionable info is almost silly by comparison. I feel like we’re worried about a guy who says he has the best lemonade stand ever, but what we should be worried about is why he hasn’t managed to move up to selling at the grocery store yet.
I can very clearly envision the difference between what Gleb has been doing, and specific awesome ways in which it is possible to promote rationality. I can’t condemn Gleb as some sort of bad guy when what he’s doing wrong betrays such deep ignorance about marketing. I feel like: surely, a true villain would have taken over the beverage aisle at the grocery store by now.
In 5.3. Twitter:
The question asked of Gleb is “How many of those are payed [sic] and how many organic?”
I double checked and some Internet sources define the term “organic” as “unpaid”. Following other accounts that will, in turn, follow your account is not the same thing as giving people money to follow you. I understand that this question was intended to inquire about how many Twitter followers actually genuinely want to follow the Intentional Insights account. This is a perfectly valid question.
What I’m saying is that the 5.3 Twitter section can be misinterpreted. People might think it means “Gleb was asked how many real followers he had and he misled the person,” when what really happened looks to me like Gleb was asked how many of his followers he paid money to in exchange for their follow.
If the 5.3 section used different wording / presentation, I think it would depict the situation more accurately.
I appreciate the huge amount of work it must have taken to put this post together. Nothing is perfect, and it’s hard to edit out every single flaw in something this long.
Ooh. This looks interesting! Accomplishing goals like these would require over ten times as much time, so it would definitely require funding. I’m now envisioning starting up a new EA org which serves the purpose of preventing disruptions to EA productivity by identifying risks and planning in advance!
I would love to do this!
Thanks for the inspiration, Ben! :D
At the current time, I suspect the largest disaster risk is war in the US or UK. That’s why I’m focusing on war. I haven’t seriously looked into the emerging risks related to antibiotic resistance, but it might be a comparable source of concern (with a lower probability of harming EA, of course, but with a much higher level of severity). The most probable risk I currently see is that there are certain cultural elements in EA which appear to have resulted in various problems. For a really brief summary: there is a set of misunderstandings which is having a negative impact on inclusiveness, possibly resulting in a significantly smaller movement than we’d have otherwise and potentially damaging the emotional health and productivity of an unknown number of individual EAs. The severity of that is not as bad as disease or war could get, but the probability of this set of misunderstandings destroying productivity is much higher than the others (that this is happening is basically guaranteed, so it’s just a matter of degree). The reason I chose to work on the risk of war is the combination of probability and severity I currently suspect for war, relative to the severity and probability of the other issues I could have focused on.
I have done a lot of thinking about some of the questions you pose here! I wish I could dedicate my life to doing justice to questions like “What is the worst threat to productivity in the effective altruism movement?” and I have been working on interventions for some of them. I have a pretty good basis for an intervention that would help with the cultural misunderstandings I mentioned, and this would also do the world a lot of good, because the second-biggest problem in the world, as identified by the World Economic Forum for 2017, would be helped by this contribution. Additionally, continuing my work on misunderstandings could reduce the risk of war. I really, really want to continue pursuing that, but I’m taking a few weeks to get on top of this potentially more urgent problem.
I have been stuck with making estimations based on the amount of information I have time to gather, so, sadly, my views aren’t nearly as comprehensive as I really wish they were.
I tend to keep an eye on risks in everything that’s important to me, like the effective altruism movement, because I prefer to prevent problems in my life wherever possible. Advance notice about big problems helps me do that.
As part of this, I have worked hard to compensate for around 5-10 biases that interfere with reasoning about risks, such as optimism bias, normalcy bias, and the affect heuristic. These three can prevent you from realising that bad things will happen, cause you to fail to plan for disasters, and make you disregard information just because it is unpleasant. The one bias I saw on the list that actually supports risk identification, pessimism bias, is badly outnumbered by the 5-10 biases that interfere with reasoning about risks. That is not to say that pessimism bias is actually helpful; given that one can get distracted by the wrong risks, I’m wary of it. I think quality reasoning about risks looks like ordering risks by priority, choosing your battles, and making progress on a manageable number of problems, rather than being paralysed by thinking about every single thing that could go wrong. I think it also looks like problem-solving, because that’s a great way to avoid paralysis. I’ve been thinking about solutions as well.
After compensating for the biases I listed and others which interfere with reasoning about risks, I found my new perspective a bit stressful, so I worked very hard to become stronger. Now, I find it easy to face most risks, and I have a really, really high level of emotional stamina when it comes to spending time thinking about stressful things in general. In 2016, I managed to spend over 500 hours reading studies about sexual violence and doing related work while being randomly attacked by seven sex offenders throughout the year. I’ve never experienced anything that intense before. I can’t claim that I was unaffected, but I can claim that I made really serious progress despite a level of stress the vast majority of people would find too overwhelming. I managed to put together a solid skeleton of a solution which I will continue to build on. In the meantime, the solution can expand as needed.
I have discovered it’s difficult to share thoughts about risks and upsetting problems because other people have these biases, too. I’ve upgraded my communication skills a lot to compensate for that as much as possible. That is very, very hard. To become really excellent at it, I need to do more communication experiments, but I think what I’ve got at this time is sufficient to get through after a few tries with a bit of effort. Considering the level of difficulty, that’s a success!
Now that I think about it, I appear to have a few valuable comparative advantages when it comes to identifying and planning for risks. Perhaps I should seek funding to start a new org. :)