Open thread 5
Welcome to the fifth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
What do people think should be the downvoting culture and criteria around here? I’ve seen some pretty harsh downvoting, and am concerned that this might make the forum seem less welcoming to new people.
I’ll just note that we’re pointing people somewhat in the right direction by labelling the up- and down-vote buttons “I found this useful” and “I didn’t find this useful”, in order to encourage people to appraise whether the posts are valuable and evidence-based, rather than whether they’re to the reader’s personal taste.
I think the big picture here is not whether one agrees with individual upvotes or downvotes but how the system is working overall. Largely, I think it’s identifying real differences in post quality, and the fact that about 95% of votes appear to be upvotes means the system will encourage people to post more. So I’m pretty encouraged by the way things are going so far. Maybe we can tilt people towards considering slightly more comments ‘useful’, though.
I have mixed feelings on this.
From the point of view of highlighting the best comments to allow a good reading order, I think there may not be enough downvoting. Having more downvoting, as well as perhaps more upvoting, would give a richer distinction and help the best stuff rise to the top quickly, even if it’s new content on an old thread.
On the other hand, from the point of view of how the feedback is experienced, downvoting might be a turn-off, and more of it might reduce people’s inclination to post. But this effect might be reduced if downvoting were more normal.
Overall I guess I’d weakly prefer more upvoting and more downvoting—including downvoting things that you don’t disagree with but would have been happy to skip reading.
That’s a good point, though being extra sparing in your upvoting would achieve a decent fraction of the same benefits. On the other hand, that would mean fewer people got the warm fuzzies of upvotes, though it would also mean fewer people got demoralising downvotes.
Being sparing in your upvoting? That seems to be the worst of both worlds!
I’m imagining 2 scenarios:
1) People have a very low threshold for upvoting, so upvote most comments. They only downvote in extreme circumstances.
2) People have a high threshold for upvoting, so only upvote comments they think particularly helpful. They only downvote in extreme circumstances.
My thought is that more information about comment quality is conveyed in the second.
I guess that people currently upvote in the vicinity of 20% of comments they read (this is a guess, but based on how many more upvotes the top articles/comments get than the median), and downvote somewhat under 1%.
In theory, the split that conveys the most information might be 1/3 upvoting, 1/3 downvoting, and 1/3 not voting (see the sketch below). But I think higher thresholds for downvoting than that probably make sense. I guess I might like to see upvoting at about 30% and downvoting at about 3%?
The second scenario isn’t how I started upvoting, but what I’m leaning towards now, on this forum.
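To make the “more information” point concrete, here’s a minimal sketch (the percentages are just the rough guesses above, and the function is my own illustration, not anything the forum computes): treating each comment a reader sees as producing one of three outcomes (upvote, downvote, or no vote), the Shannon entropy of that outcome is maximised by the uniform 1/3 split, and the suggested 30%/3% behaviour already carries noticeably more information per comment than the guessed 20%/1% status quo.

```python
from math import log2

def vote_entropy(p_up, p_down):
    """Shannon entropy (in bits) of the three-way outcome per comment read:
    upvote, downvote, or no vote."""
    p_none = 1 - p_up - p_down
    return -sum(p * log2(p) for p in (p_up, p_down, p_none) if p > 0)

print(vote_entropy(0.20, 0.01))  # guessed status quo: ~0.80 bits per comment
print(vote_entropy(0.30, 0.03))  # suggested behaviour: ~1.06 bits per comment
print(vote_entropy(1/3, 1/3))    # uniform split: log2(3) ~ 1.58 bits, the maximum
```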
Downvoted to follow my own suggestion—I’m afraid I found this confusing/confused, as I think just being more sparing with upvoting gets you no benefits at all, and you didn’t explain how it was meant to work.
Upvoted your original comment though. :)
Ha, fair enough! I tried to explain it in my reply to Ryan: http://effective-altruism.com/ea/b2/open_thread_5/1h2
There are different cultures for upvoting and downvoting on different websites:
As a passive user of Reddit, I’m aware of the voting culture there. Depending on the subreddit, it might be as bad as any other forum on the Internet, a wilderness of impoliteness and inconsideration. However, you might end up with one that’s better. Obviously, this varies widely depending on the subreddit(s) one is using.
As an active user of Less Wrong, I tend not to downvote too much. Voting there is on the basis of whether something adds to the level of discourse, in terms of moving it in a direction of greater or lesser quality.
I try to base how I vote on this forum on the sentences that go along with the votes. For example, I upvote a comment on this site if I actively find it useful, i.e., it provides a new framing or new information which clarifies or enriches my understanding of an essay’s subject matter. There are lots of comments that I don’t find ‘useful’, per se, in the sense that I don’t learn anything new from them. However, I don’t (want to) downvote those comments, because I don’t want to imply anything is wrong with them when I don’t really believe that. Such comments are just-so to me. I believe I would only downvote an essay, article, or comment on this forum if I believed it was actively harmful to people’s understanding, because it would decrease clarity or the level of discourse. I would like to think I would do this regardless of whether it was from a position I agreed with or not.
Generally, I tend to be liberal with upvotes, and conservative with downvotes. However, this is a personal preference based on my perception that online communities with voting systems tend to be less friendly than I would like them to be, so I try correcting for this in the opposite direction in what small way I can as a user.
I find receiving downvotes pretty demoralising, in particular when they are given for disagreeing with the conclusion, rather than thinking something is poorly reasoned.
“This person disagrees with me” and “the person thinks my reasoning is bad” are closely related—if your reasoning was good, they’d agree with you. And even when they differ, the original author is hardly an unbiased judge.
I don’t think of it that way, because usually there are multiple important considerations on both sides of a disagreement.
If someone raised a legitimate reason for their point of view, but I disagreed with their conclusion all things considered, I would not down-vote unless I thought the reason they were focussed on in their comment didn’t make sense. That’s rarely the case here; disagreements are most often about different weight given to different considerations.
Something can be well-reasoned but still be disagreeable if it ignores an important consideration.
The difficulty (that no one seems to have figured out how to solve) is designing a system that effectively hides low-quality posts without becoming more of an echo chamber over time. While a community is small it is not too much of a problem, because even mildly downvoted posts get good attention—but as it grows, highly upvoted posts that reflect existing tastes or confirm existing biases increasingly dominate.
I don’t usually use forums, so I don’t know what the norm is. But I have found it somewhat demoralising so far when I’ve taken time to respond carefully and in detail to questions, and then been downvoted with no explanation as to why people didn’t find the comment useful. (I’m very willing to believe this is just because I’m not used to forums though—I’m only used to Facebook, where you can only upvote, hence all negative feedback has to be spelled out.)
Thanks for bringing this up—seems like a useful discussion to have!
Michelle, I looked through all your posts and they’re all really good, not even controversial, so I wouldn’t assume that they were downvoted or had low points for a legit reason. If someone had a legit criticism of something you said, he should write what it is. That’s the whole point of the forum: to exchange ideas. I don’t find the points system affects that in a positive way. I think without points people would have to write what their criticism of a post is and defend it. Button clicking seems more like an act of emotion to me.
I personally think that if someone downvotes something, especially a post where it costs −10 on karma, then they owe that person a brief explanation for the downvote.
I haven’t seen a downvote here that I’ve agreed with, and for the moment I’d prefer an only-upvote system. I don’t know where I’d draw the line on where downvoting is acceptable to me (or what guidelines I’d use); I just know I haven’t drawn that line yet.
Having some downvoting is good, and part of the raison d’être of this forum as opposed to the Facebook group. I agree that people downvote slightly too often, but that’s a matter of changing the norms.
This is because we want to encourage people to contribute, right? One approach is to be the norm you want to promote. If you want to encourage people to post, then upvote more posts. If you’re concerned that material is getting downvoted when it is not spam, then give it an upvote and a substantial reply. :)
I wasn’t personally saying that was a good idea, just that I thought there should be (somewhat) fewer downvotes. Of note, I’m not thinking about myself getting downvotes but occasions where it happened to other people!
I loathe the voting system. Actually, I have never clicked the up or down vote button once and I never will, because it’s juvenile to turn commenting on something as important as how to improve the world into a popularity contest. We are adults and should be treated like adults. It’s not even useful, anyhow – I’ve found no correlation between the quality of a comment and its points. The highest rated comments are usually questions, or short comments like “thanks for this”. Does anyone else see the contradiction in a subculture that purports to be about rationality bringing social approval bias into the mix? I value judging people’s views solely on their merits; I don’t want my judgement to be skewed by the judgement of “the group”, and likewise, I only want people to judge my views by their merits, not by how popular they are.
Besides skewing the logical reasoning of visitors to the forum, the voting system also promotes conservatism – people will naturally be too scared to write something original for fear of it having low points. I think that someone cannot think too broadly about how to help the world – crazy ideas should be welcomed! Perhaps most of them will be duds, but there only needs to be one that turns out to be a winner! Even without the voting system, posters have to deal with the judgement of other posters, but at least written comments can provide helpful feedback whereas simply having low votes will make the poster self-conscious and shy to write something against the grain.
I thought that the voting system is beneficial primarily because it allows others to “upvote” something as important. When I glance at comments, I am unlikely to read dozens of comments (limited time), but the upvotes are a simple way for me to tell which comments are more likely to provide something of value.
Upvotes are not a true demonstration of value, but they help. Consider if a comment gets 100 upvotes—that suggests there is something there that others like and I would do well to at least glance at it.
The points you raise are worth considering, though I think the benefits outweigh the concerns you have. Do you think otherwise?
If someone thinks that the better comments have higher votes, then certainly for him the points system would be helpful, especially for long threads. I don’t find that’s usually the case, which is one reason why I’m not fond of it. I find that people “like” (whether that means clicking a button on your computer, or agreeing to someone in person) things that validate their pre-existing feelings, rather than open them up to new ideas they hadn’t considered before (most respond with fear to the latter). I heard on the radio a few months ago that studies show that problem solving meetings are more productive when the people there have opposing perspectives, come from different fields, etc. IOW, the perspective you don’t want to hear is probably the one you need to.
Having said that, even if the points system doesn’t correlate with the most helpful comments it could still be net positive for other reasons: encouraging more participation than it discourages, providing support/validation for those interested in EA, being normal (since most sites have voting now, people might think it was weird if CEA didn’t).
Another thing, that just occurred to me yesterday, is that the posts on the forum seem mostly geared to people who are already involved in EA, when it could be more productive to write posts that are geared to new people learning about EA (both in terms of content and writing style). TLYCS/GWWC blogs are more like that, although they are only for poverty.
yeah, I agree that we’ve talked about effective altruism using the assumption that people already know roughly what that is and why we would care about it. It’s a good idea to post more material that is of interest to a wider audience. Although having started off with stuff that affirms the purpose of the forum and our shared identity is not a bad thing, it’s just that it’d be good to balance it out now with some materials that a wider range of people can enjoy.
I’ve been thinking of doing a ‘live below the line’ to raise money for MIRI/CFAR, and asking someone at MIRI/CFAR to do the same for CEA in return. The motivation is mostly to have a bit of fun. Does anyone think this is a good or bad idea?
Pbhyq lbh ng yrnfg pnyy vg fbzrguvat bgure guna “yvir orybj gur yvar”. V jbeel gung fbzr crbcyr zvtug svaq vg bssrafvir, nf yvir orybj gur yvar vf gurzrq fb urnivyl nebhaq cbiregl. V qba’g frr jung rngvat gur fnzr purnc sbbq sbe n jrrx unf gb qb jvgu ZVEV/PSNE/PRN.
I’ve hidden my thoughts in rot-13 to avoid biasing others: Vg pbhyq or pbafgehrq nf gevivnyvfvat cbiregl
Would you be raising money mainly from an EA audience? The idea of “living below the line” seems to have no connection at all to MIRI/CFAR, so it kind of feels like a non sequitur to non-EAs. Maybe more thematic would be living without computers (or without rationality!), but that seems not worthwhile.
The fact that it’s a bit of a non-sequitur is why I find it a fun idea.
It sounds like most people see it as outright weird, rather than quirky in an amusing way as I intended, so I won’t do it.
I guess if we get a Hansonian future the line would be very low. It’s unlikely pre-upload Rob could actually survive at such a level though, so probably not the best idea.
It’d be an interesting experiment to see how much this raised.
I made a map with the opinions of many Effective Altruists and how they changed over the years.
My sample was biased by people I live with and read. I tried to account for many different starting points, and of course, I got many people’s opinions wrong, since I was just estimating them.
Nevertheless, there seems to be a bottleneck at accepting Bostrom’s existential risk as The Most Important Task for Humanity. If the trend is correct, and if it continues, it would generate many interesting predictions about where new EAs will come from.
Here, have a look:
http://i.imgur.com/jQhoQOZ.png
For the file itself (open through the program yED by clicking File → Open URL and copying the link below):
https://dl.dropboxusercontent.com/u/72402501/EA%20flowchart%20Web.graphml
I suspect that one could make a chart to show a bottleneck in a lot of different places. From my understanding, GiveWell does not seem to hold the view that the yEd chart would imply.
“I reject the idea that placing high value on the far future – no matter how high the value – makes it clear that one should focus on reducing the risks of catastrophes” http://blog.givewell.org/2014/07/03/the-moral-value-of-the-far-future/
The yEd chart shows GiveWell being of the opinion that poverty alleviation is desirable and quite likely the best allocation of resources in 2013. This does not seem to be a controversial claim. There are no other claims about GiveWell’s opinion in any other year.
Notice also that the arrows in that chart mean only that empirically it has been observed that individuals espousing one yellow opinion frequently change their opinion to one below it. The reverse can also happen, though it is less frequent, and frequently people spend years, if not decades, holding a particular opinion.
Can you give an example of a chart where a bottleneck would occur in a node that is not either the X-risk node or the transition-to-the-far-future node? I would be interested in seeing patterns that escaped my perception, and it is really easy to change the yEd graph if you download it.
The bottom part of your diagram has lots of boxes in it. Further up, “poverty alleviation is most important” is one box. If there was as much detail in the latter as there is in the former, you could draw an arrow from “poverty alleviation” to a lot of other boxes: economic empowerment, reducing mortality rates, reducing morbidity rates, preventing unwanted births, lobbying for lifting of trade restrictions, open borders (which certainly doesn’t exclusively belong below your existential risk bottleneck), education, etc. There could be lots of arrows going every which way in amongst them, and “poverty alleviation is most important” would be a bottleneck.
Similarly (though I am less familiar with it), if you start by weighting animal welfare highly, then there are lots of options for working on that (leafleting, lobbying, protesting, others?).
I agree that there’s some real sense in which existential risk or far future concerns is more of a bottleneck than human poverty alleviation or animal welfare—there’s a bigger “cause-distance” between colonising Mars and working on AI than the “cause-distance” between health system logistics and lobbying to remove trade restrictions. But I think the level of detail in all those boxes about AI and “insight” overstates the difference.
Is it possible to get a picture of the graph, or does that not make sense?
Here you go: image
Thank you Ryan, I tried doing this but failed to be tech savvy enough.
No problem. There’s an Export button in yEd’s File menu. Then you have the image file that you can upload to Imgur.
Thanks!
Wow, this is amazing! It brings to mind the idea of a “what kind of altruist are you?” quiz, with the answer providing a link to the most relevant essay or two which might change your mind about something...
I just read Katja’s post on vegetarianism (recommended). I have also been convinced by arguments (from Beckstead and others) that resources can probably be better spent to influence the long-term future. Have you seen any convincing arguments that vegetarianism or veganism are competitively cost-effective ways of doing good?
Related thought 3: Katja’s points about trading inconveniences and displeasures are interesting. Is it good to have a norm that all goods and “currencies” that take part in one’s altruism budget and spending must be tradeable with one another? Is this psychologically realistic?
One reason for thinking that goods in the altruism budget should be tradeable is that in some sense my Altruism Budget is what I call the part of my life where I take the demandingness of ethics seriously. Is this how anyone else thinks about it?
Yes, I think about it in the same way, and think that demanding or difficult non-monetary decisions like vegetarianism should fall into your altruism budget, where you should consider the trade-off between them and, say, donating money.
The altruism budget idea is plausible. It works well when you’re literally talking about money. For example, it’s really psychologically difficult to face the decision of whether to redirect your funds to charity every time you buy a dinner or go to a movie. It’s much better to take out a fixed fraction of your budget each month and give it away. Then, you can make non-altruistic decisions with your ‘you’ money without feeling selfish. Then, if you want to change the fraction of your budget that you give away, you make that decision at the end of the month or year.
It seems reasonable that something like that should happen with time i.e. that effective altruists should retain a concept of “leisure”!
But maybe it works poorly when things aren’t obviously commodities. Like, I think there’s a place for virtue ethics—just being the kind of person you would want to see in the world. And I think lots of people who take a virtue-based approach could reasonably object that always thinking of good in terms of money could be self-defeating.
Also, some psychological studies apparently show that thinking about money decreases your generosity.
Related thought 2: as someone who’s already vegetarian, I think it would be more costly in terms of effort, bad feels, etc. to switch back than to stay veggie or slowly drift back over time.
Yes, I agree with this. It seems like it’s easier to stay vegetarian. It’s cheap, it feels good. It’s probably not very disadvantageous to health. Long live the status quo—for diet ethics, at least.
Related thought 1: I think some tension can be defused here by avoiding the framing “should EAs be vegetarian?”, since answering “no” makes it feel like “EAs should not be vegetarian”, when really it seems to me that it just implies that I can put any costs incurred in my Altruism Budget, the same as costs I incur by doing other mundane good things.
Yes, ‘are altruistic people obligated to become vegetarian?’ might be better
Yes, that was a good argument that EAs aren’t obligated to be vegetarian, even if reasonable people can disagree about the numbers.
I think people have a tendency, though, to think that vegetarianism is more costly than it actually is. So I’m skeptical unless a person has actually tried to give up meat and faced some sort of problem. For example, I’m not vegan because of social pressure, but I am vegetarian.
At heart, even if you eat meat, there’s no reason I can fathom why you can’t simply try to eat less of it...
You may be right that people overestimate the cost. I’m not sure how to gather data about this.
Re: your second point (“there’s no reason I can fathom...”), how about this lens: view meat as a luxury purchase, like travel, movies, video games, music, etc. Instead of spending on these, you could donate this money, and I can imagine making a similar argument: “there’s no reason I can fathom why you can’t simply try to do less of that...”, but clearly we see foregoing luxuries as a cost of some kind, and don’t think that it’s reasonable to ask EAs to give up all their luxuries. When one does give up luxuries for altruistic reasons, I think it’s fine to try to give up the ones that are subjectively least costly to give up, and that will have the biggest impact.
Other costs: changing your possibly years-long menu for lunch and dinner; feeling hungry for a while if you don’t get it figured out quickly; having red meat cravings (much stronger for some people than others, e.g. not bad for me, but bad for Killian).
I don’t think what I’ve said is a case against vegetarianism; just trying to convey how I think of the costs.
ETA: there are other benefits (and other costs), this is just my subjective slice. An expert review, on which individuals can base their subjective cost breakdowns, would probably be helpful.
I’m thinking of giving “Giving games” for Christmas this year.
Family and friends get an envelope with two cards. A nice Christmas card says they now have x NOK to give to a charity of their choosing, then presents some interesting recommendations and encourages them to look more into them if they want to. When they have decided, they write it down on the accompanying empty (but stamped) card addressed to me, and when I get the card after Christmas I will donate the money.
Has somebody else thought of something similar? Do you have any ideas that could make it more interesting or better in any way?
I would also recommend running a Christmas Fundraiser (Basically asking for donations instead of gifts during Christmas). http://christmas.causevox.com/
I will post a longer description + guide on how to set this up on a main thread early December.
That could be interesting. You could count the numbers with tracking URLs. You could even get a group of effective altruists to run a similar giving game using the same tracking URLs so that everyone can (anonymously) see how many people have voted for the same or different charity from you. This could be a pretty cool project I think.
As a follow-up to this comment: I gave my 10-minute talk on effective altruism at Scribd. The talk went better than I expected: several of my coworkers told me afterwards that it was really good. So I thought I would summarize the contents of the talk so it can be used as a data point for presenting on effective altruism.
You can see the slides for my talk in keynote, pptx, and html. Here are some notes on the slides:
The thought experiment on the second slide was Peter Singer’s drowning child thought experiment. After giving everyone a few seconds to think about the thought experiment, I asked everyone who thought there was 50% probability or higher that they would save the drowning child to raise their hand (inspired by this essay). Almost everyone raised their hands.
I threw in a few ideas that haven’t seen wide discussion in the effective altruist community. For example, in the last chapter of Martin Seligman’s book Learned Optimism, he explains how he thinks that Western culture’s focus on consumerism and our lack of purpose and connection have contributed to our depression epidemic, which I covered on slides 6-7.
On slide 9, I tried to make things concrete and interesting by suggesting that Scribd could save money by giving everyone Chromebooks to work with, but this would probably end up being bad for the company’s bottom line in the long run because we would work less efficiently.
Another thing I haven’t seen widely discussed in the EA community is the analogy between groups like GiveWell and scientists (see this LW comment for more on this idea). I discussed this on slides 11-12.
On slide 14, I discussed in depth the idea that doctors don’t do all that much good due to replaceability effects. (The 4 principles were stolen from Ben Kuhn’s writeup.)
Overall, I found this experience really encouraging. Initially I was afraid that the drowning child thought experiment would make people hostile, but that didn’t seem to happen at all… there wasn’t any criticism of the idea even during the Q&A period at the end. I was also afraid that the talk tried to cram too many ideas in to just 10 minutes, which may have occurred but all the evidence I observed afterwards suggested to me that the concepts I tried to communicate were well-understood. The people at Scribd are pretty smart though: the talk before mine was about the physics of motorcycle riding, and the talk after mine was by a champion Go player. So a different presentation might be optimal for a different crowd.
Although several people told me they thought the talk was good, I didn’t hear much discussion about the concepts I presented afterwards. And of course it’s hard to measure whether people actually became significantly inclined towards effective altruism or not. So in the long run we should still probably do rigorous message A/B testing.
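For what it’s worth, here is a minimal sketch of what analysing such a message test could look like (the two framings, the counts, and the follow-up action are all invented for illustration; only the scipy call itself is a real library function): show each framing to comparable audiences, count who takes some follow-up action, and check whether the gap is larger than chance would plausibly produce.

```python
from scipy.stats import fisher_exact

# Hypothetical results: how many people took a follow-up action after
# hearing each framing (all numbers invented for illustration).
framing_a = {"converted": 18, "not_converted": 182}  # e.g. drowning-child framing
framing_b = {"converted": 31, "not_converted": 169}  # e.g. career-impact framing

table = [
    [framing_a["converted"], framing_a["not_converted"]],
    [framing_b["converted"], framing_b["not_converted"]],
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")
# A small p-value suggests the framings genuinely differ in effect,
# rather than the gap being sampling noise.
```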
Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.
I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point in time, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do something in a STEM field was a big change from my previous plans. I hadn’t known which STEM field I wanted to major in at this point; I had only realized that STEM majors generally had better earning potentials than philosophy majors.
The way that this ties back into 80,000 Hours is that I think that I would have liked someone to help me decide which STEM field to go into. Actually, I can’t find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on this topic posted to LessWrong. I would like to see an in-depth discussion page on major choice as one of the core posts on 80,000 Hours.
Anyhow, I ended up majoring in chemistry because it seemed like one of the toughest things that I could major in—I made this decision under the rule-of-thumb that doing hard things makes you stronger. I probably should have majored in mathematics, because I actually really enjoy math, and have gotten good grades in most of my math classes; neither of those two things are true of the chemistry classes I have taken. I think that my biggest previous misconception about major choice was that all STEM majors were roughly equal in how well they prepared you for the job market—looking back, I feel that CS and Math are two of the best choices for earning to give, followed by engineering and then biology, with chemistry and physics as the two worst options for students interested in earning to give. Of course, YMMV, and people with physics degrees do go into quantitative finance, but I do think that not all STEM majors are equally useful for earning to give.
The second thing that I would like to mention is that, from my point of view, 80,000 Hours seems very elitist. I don’t mean this in a bad way, really, I don’t, but it is hard to be in the top third of mathematics graduates from an ivy league university. The first time that I had a face-to-face conversation with an effective altruist who had been inspired by 80,000 Hours, I told them that I was planning on doing important scientific research, and they just gave me a look and asked me why I wasn’t planning on going into one of the more lucrative earning-to-give type of careers.
I am sure that this person is a good person, but this episode leads me to wonder whether it would be a good idea to add to the top careers page on 80,000 Hours’ site more jobs that very smart people who aren’t quite ready to go into quantitative finance or strategic consulting could do. Specifically, mechanical, chemical, and electrical engineering, as well as the actuarial sciences, could be acceptable fields for one to go into for earning to give.
Hi Fluttershy,
Really appreciate hearing your feedback.
We’ve written about how to choose what subject to study a bunch of times, but I agree it’s hard to find, and it’s not a major focus of what we do. Unfortunately we have very limited research capacity and have decided to focus on choosing jobs rather than subjects because we think we’ll be able to have more impact that way. In the future I’d love to have more content on subject choice though.
I also realise our careers list comes across badly. I’m really keen to expand the range of careers that we consider—we’re trying to hire someone to do more career profiles but haven’t found anyone suitable yet. Being an actuary and engineering are both pretty high on the list.
I also know that a lot of people around 80,000 Hours think most people should do earning to give. That’s not something I agree with. Earning to give is just one of a range of strategies.
Ben
Seems like 80K could probably stand to link to more of Cognito Mentoring’s old stuff in general. No reason to duplicate effort.
Yeah I’ll add a link to Cognito on the best resources page next time I update it.
“Actually, I can’t find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on this topic posted to LessWrong.”
Not a tremendous excuse, but it wouldn’t surprise me if this is basically because 80k is UK-based, where there is no strong analogue to ‘choosing a major’ as practised by US undergraduates; by the time someone is an undergraduate in the UK (actually, probably many months before that, given application deadlines), they’ve already chosen their subject and have no further choices to make on that front except comparatively minor specialisation choices.
Not to take away from the substance of your post, but when you note that impact is power-law distributed, doing important scientific research sounds [much](https://80000hours.org/2012/08/should-you-go-into-research-part-1/) [more skill-dependent](https://80000hours.org/2013/01/should-you-go-into-research-part-2/) than quantitative finance.
Should we try to make a mark on the Vlogbrothers’ “Project 4 Awesome”? It can expose effective altruism to a wide and, on average, young audience.
I would love to help in any way possible, but video editing is not my thing...
https://www.youtube.com/watch?v=kD8l3aI0Srk
Hi UriKatz, there’s a group of us trying to do just that, and we’d love to have your help. Join the EA Nerdfighters Facebook group and I’ll brief you on what we’ve been up to. :)
https://www.facebook.com/groups/254657514743021/
People often criticise GWWC for bad reasons. In particular, people harshly criticise it for not being perfect, despite not doing anything much of value themselves. Perhaps we should somewhat discount such armchair reasoning.
However, if we do so, we should pay extra attention when people who have donated hundreds of millions of dollars, a majority of their net worth, and far more than most of us will, have harsh criticism of giving pledges.
From his email:
“When I talk to young people who seem destined for great success, I tell them to forget about charities and giving. Concentrate on your family and getting rich—which I found very hard work. I personally and the world at large are very glad you were more interested in computer software than the underprivileged when you were young. And don’t forget that those who don’t make money never become philanthropists.”
There is certainly truth in this.
But not all of Wilson’s giving was in areas suitable for effective altruism. In particular, donating to the Catholic Church arguably causes active harm. Preserving monuments and wildlife reserves is at least a good distance away from optimal.
I think the strongest objection to his objection is that becoming rich doesn’t make the world a better place in itself. Even if you make other people richer in the process, it’s not a clear-cut world improvement. Especially if you consider replaceability effects and negative externalities from certain forms of business, making rich people more altruistic, and more effectively altruistic, could be more important than making more rich people.
Arguably donating to global health causes active harm (via the effect on fertility). Arguably veganism causes active harm (via wild animal suffering). Arguably donating to Xrisk causes active harm (ok, not so clear on the mechanism here, but I’m sure people have argued it).
Yet these last three causes are EA causes. So merely ‘arguably’ causing active harm cannot be enough. What matters is how much actual good it does. And I think it is very plausible that the Catholic Church actually does a lot of good.
Yes, perhaps donating to the Church is less effective than donating to SCI. On the other hand, it could be significantly less effective and he could still have done more good with his donations than most EAs. Giving a lot more money slightly less efficiently does more good for others than giving a small amount of money very efficiently.
More importantly, this doesn’t really affect the argument. In general we should pay more attention to criticism when the critic is overcoming social desirability bias. And in this case, even if you disagree with his donation choices, he clearly scores very highly on altruism, which makes his criticism of our attempts to spread it all the more potent.
Actually, my point was that donating to the Catholic Church does more harm than good, not just that it causes harm. Perhaps you should look up how little it spends on things like poverty relief, how much money it absorbs from presenting itself as an official institution of morality while spreading supernatural superstition and promoting socially harmful policies. I would probably pay money to make the Catholic Church poorer, though certainly not at a 1:1 exchange rate.
I think the other EA causes you mention, while mixed blessings, have a much better profile.
I do agree with Wilson’s core argument, but would still point out that his money didn’t come out of thin air, and neither would the money of other rich people. A lot of it comes from competing for profit margins; that is, a successful hedge fund manager replaces other hedge fund managers. It can therefore be more effective to try to make rich people more altruistic rather than to make more people rich.
Animal Charity Evaluators has found that leafleting is a highly effective form of antispeciesist activism. I want to use it generally for effective altruism too. Several times a year I’m at conventions with lots of people who are receptive to the ideas behind EA, and I would like to put some well-designed flyers into their hands.
That’s the problem—the “well-designed” part. My skills kind of end at “tidy”, and I haven’t been able to find anything of the sort online. So it would be great if a gifted EA designer could create some freely licensed flyers as SVGs with the basic tenets of EA and some entry points for further research, all for anyone to print and distribute.
I could then just apply some minor edits for a specific convention theme to build greater rapport with the con goers and have it printed.
Relevant to this, there’s an (inactive) .impact project to make EA infographics, with some discussion of them. There’s also an idea to create an EA design collective, consulting on design for EA orgs and projects.
It’s not quite the same thing, but you might be interested in this infographic about deworming that I made for Charity Science.
Thanks, that’s a great infographic! I’d need something more generic and intervention-agnostic, though, because we’ll be fundraising for LLINs. Maybe something will come of that design collective.
There is a brochure created by Fox Moldrich that I edited (he shared the Adobe Illustrator file with me). Here is a PDF of it: Trifold
Please contact me for the *.AI file and/or directly contact Fox.
[Your recent EA activities]
Tell us about these, as in Kaj’s thread last month. I would love to hear about them—I find it very inspirational to hear what people are doing to make the world a better place!
(Giving this thread another go after it didn’t get any responses last month.)
I’ve volunteered for CSER. Also, I’ve done most of Andrew Ng’s Coursera course on Machine Learning. It seems like a valuable skill to acquire, so I think that belongs on the list.
I’m planning on starting an EA group at the University of Utah once I get back in January, and I need a good first meeting idea that will have broad appeal.
I was thinking that I could get someone who’s known outside of EA to do a short presentation/question and answer session on Skype. Peter Singer is the obvious choice, but I doubt he’d have time (let me know if you think otherwise). Can anyone suggest another EA who might have name recognition among college students who haven’t otherwise heard of EA?
Is there an audio recording of Holden’s “Altruistic Career Choice Conference call”? If so, can someone point me in the right direction? I’m aware of the transcript:
http://files.givewell.org/files/calls/Altruistic%20career%20choice%20conference%20call.pdf
Thanks!
I’ve been growing skeptical that we will make it through AI, due to
1) civilizational competence (that is incompetence) and
2) Apparently all human cognition is based on largely subjective metaphors of radial categories which have arbitrary internal asymmetries that we have no chance of teaching a coded AI in time.
This on top of all the other impossibilities (solving morality, consciousness, the grounding problem, or at least their substitute: value loading).
So it is seeming more and more to me that we have to go with the forms of AI that have some small chance of converging naturally onto human-like cognition, like neuromorphic AI or WBE. Since those are already low-probability routes to begin with (see e.g. Superintelligence), my growing impression is that we are very, very likely doomed.
So far the arena of people doing control-problem-related activities has been dominated by pessimists (say, people who think we have less than an 8% chance of making it through). Over time, as more and more people join, it is likely more optimists will join. How will that affect our outcomes? Are optimists more likely to underestimate important strategic considerations?
A separate question would be: what does EA look like in a doomed world? Suppose we knew for certain that AGI would destroy life on Earth; what are the most altruistic actions we can take between now and then? Is postponing the end by a few days more valuable than donating to sub-Saharan effective charities?
These thoughts are not fully formed, but I wanted people to give their own opinions on these issues.
It makes sense that the earliest adopters of the idea of existential risk are more pessimistic and risk-aware than average. It’s good to attract optimists because it’s good to attract anyone and also because optimistic rhetoric might help to drive political change.
I think it would be pretty hard to know with probability >0.999 that the world was doomed, so I’m not that interested in thinking about it.
The underlying assumption is that for many people, working on probability shifts of between 0 and 1 percent is not desirable. They would be willing to work for the same shift if it were between, say, 20 and 21 percent, but not if it is too low. This is an empirical fact about people; I’m not asserting that it is a relevant moral fact.
Yeah, so if it started to look like the world was doomed, then fewer people would work on x-risk, true.
I posted this late before, and was told to post in a newer Open Thread so here it goes:
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
3) The attention you pay to politics and associated decision cost.
4) The sensation that you made a difference (this cost is conditional on your vote not making a difference).
The benefits associated with voting:
1) If an election is decided by one vote, and you voted for one of the winning candidates, your vote decides who is elected, and is causally responsible for the counterfactual difference between candidates.
2) Depending on your inclinations about how decision theory and anomalous causality actually work in humans, you may think your vote is numerically more valuable because it changes/indicates/represents/maps what your reference class will vote. As if you were automatically famous and influential.
Now I ask you to consider whether benefit (1) would in fact be the case for important elections (say, elections where the winner will govern over 10,000,000 people). If 100 worlds had an election decided by one vote, what percentage of those would be surreptitiously biased by someone who could tamper with the voting? How many would request a recount? How many would ask their citizens to vote again? How many would deem the election illegitimate? Etc. Maybe some of these worlds would indeed accept the original count, or do a fair recount that would reach the exact same number, but I find it unlikely this would be more than 80 of these 100 worlds, and I would not be surprised if it was 30 or fewer.
We don’t know how likely this is to happen: in more than 16,000 elections in the US, only one was decided by a single vote, and it was not an executive office in a highly populated area.
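To make the structure of benefit (1) explicit, here is a back-of-the-envelope sketch. Every number in it is an illustrative assumption (the 1-in-16,000 figure is only loosely borrowed from the count above and is not a per-voter probability estimate): under this framing, the expected counterfactual value of a vote is roughly the chance the election comes down to one vote, times the chance that such a one-vote margin actually stands (the “100 worlds” question), times the value difference between the candidates.

```python
# Back-of-the-envelope sketch; all inputs are illustrative assumptions.
p_decided_by_one_vote = 1 / 16_000         # loosely borrowed from the count above, not a real estimate
p_one_vote_margin_stands = 0.5             # the "how many of 100 worlds" factor, picked arbitrarily
value_gap_between_candidates = 10_000_000  # counterfactual difference, in whatever units you care about

expected_value_of_voting = (
    p_decided_by_one_vote * p_one_vote_margin_stands * value_gap_between_candidates
)
print(expected_value_of_voting)  # 312.5 under these made-up numbers
```

With different assumptions the conclusion flips either way, which is presumably why the discussions mentioned below reach different conclusions.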
This has been somewhat discussed in the rationalist community before, with different people reaching different conclusions.
Here are some suggestions for EAs that are consistent with the point of view that voting is, ceteris paribus, not valuable:
EAs who are not famous and influential should consider never making political choices.
EAs who are uncertain, or who live in countries where voting is compulsory, may want to consider saving time by copying the voting decisions of someone they trust, to avoid time and attention loss.
Suggestions for those who think voting is valuable:
EAs should consider the marginal cost of copying the voting policy of a non-EA friend or influence they trust highly, and weigh it against the time, attention, and decision costs of deciding themselves.
EAs should consider using safe vehicles (all the time and) during elections.
EAs who think voting is valuable because it represents what all agents in their reference class would do in that situation should consider other situations in which to apply such a decision procedure. There may be a lot at stake in many decisions where indication and anomalous causation apply—even in domains where this is not the sole ground of justification.
There are more things to add to the benefits list:
When I talk to friends about how to vote I get to exhibit some of the ways I think about policy which may influence their thinking in the future
Becoming educated about local political issues helps you look educated and gain respect among other local people
Learning about public policy might be enjoyable
Overall, though, none of this seems to justify either not voting if you want to vote, or voting if you don’t want to vote.
Previous thread