Open Thread 4
Welcome to the fourth open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
I’m interested in whether anyone has any data or experience attempting to introduce people to EA through conventional charitable activities like blood drives, volunteering at a food bank, etc. The idea I’ve been kicking around is basically to start or co-opt a blood drive or similar event.
While people are engaged in the activity, or before or after, you introduce them to the idea of EA, possibly even using this conventional charitable event as the prelude to a giving game. On the plus side, the people you are speaking with are self-selected for doing charitable acts, so they might be more receptive to EA than a typical audience. On the downside, this group might be self-selected for people who care a lot about personally getting hands-on with charitable works, which typically aren’t the most effective things you can do.
There is a lot of discussion about what to DO in the context of EA. But for everything I do, there is something else that I don’t.
What have you decided NOT to do, because it has a (somewhat) lower priority than other things?
Things that I downprioritized:
some recreational activities: playing the guitar, cooking, baking cakes, reading novels.
I quit volunteering in an online education project. It was low time cost anyway.
meditating (would that increase productivity more than the time spent on it? I don’t really care about the other benefits.)
keeping an EA blog, because there are already good ones. My comparative advantage would be to write in Dutch for a local audience, but that’s a small group of people who can easily read English anyway.
I can think of many things I no longer do but I’m not sure that’s a direct result of my EA involvement. I’m a busy person, so activities that offered a small benefit naturally gave way either to more productive things, or things that offered a bigger benefit.
I think I probably drink less alcohol (and therefore spend less on it). I only watch a few selected TV shows and don’t re-watch old episodes. I spend less time at the gym but more outdoors cycling with my boyfriend (kills multiple birds with one stone).
I think as we age our priorities naturally shift and our activities naturally change. Nothing in my personal life has changed ONLY as a result of EA ideas.
Since 2000 I’ve abandoned TV, videogames, celebrity gossip, musical ability, knowledge about bands, politics, theater classes, dancing classes, handball, tennis, reading fiction, reading parts of Facebook, maintaining contact with groups X and Y of friends, newspapers, magazines and comics. Those were not easy choices; each comes with a cost, a sadness, and a feeling that something valuable has been lost. The richness of flavors of life got somewhat poorer.
This is a bit tangential but I don’t know if there’s a single EA that smokes cigarettes.
I don’t know if EA demographics fit smoking much—my sense is that we tend to be young and highly educated.
How many EAs do you know definitely do not smoke? I’m often surprised at how long I can know someone in person without realizing they smoke. Even more so when they just occasionally smoke cigars (though this is not exactly what you were discussing).
I know a couple.
In a couple of weeks, I’m going to give a 10-minute talk (with slides) on effective altruism at the software company I work for (Scribd.com). The audience will be ~40 people, many of whom I am friends with & many of whom are well-compensated and intelligent software engineers/designers/etc. (This is part of a thing Scribd does where employees periodically give talks on random topics that interest them.)
I’d love to hear any suggestions for the content of my talk. I’m curious what evidence we have about the most effective ways to convince people of effective altruism. I’m also curious if anyone has any interesting calls to action for the end of my talk aside from “donate money to GiveWell-recommended charities”. It feels like the ideal case would be bringing more people into the EA community, and I’m not sure what the best first step there is.
Regarding content, I’m open to outlandish suggestions—I’m fairly willing to make a fool of myself (I’m about to quit to work on MealSquares full time) and Scribd has a great sense of humor.
(And, unrelated question: if there are any books that are available on Scribd that you think more people should read, let me know and maybe I can get them featured before I leave. I already got Stuart Armstrong’s book on AI risk featured in our Computers & Technology section. I’m thinking I might try to get this book featured as well because I found it pretty enlightening and spreading the ideas in it seems robustly positive.)
ETA: Here is a follow-up comment discussing the contents and reception of my talk.
What about one of Bostrom’s books—either Global Catastrophic Risks or Superintelligence? Both are excellent.
Both not available on Scribd :/
Could you try to get them available before you leave?
Probably not :(
[AMF and its RFMF]
I’m curious as to whether people are giving to AMF, and if so what they think of its room for more funding. I used to favour it but haven’t done so since GiveWell stopped recommending it due to room for more funding concerns. Their financial information suggests that they still have a large cash reserve, but I’d be interested to hear from anyone who’s looked into this.
Here are some reasons to think AMF could still be worth giving to:
AMF net distributions come in large (and varied) sizes, and they generally only arrange distributions after they have the money to cover them. This means that a larger cash reserve may open up extra opportunities for distributions that they wouldn’t have with a smaller cash reserve; it’s therefore not lossless to just wait until they run down the cash reserves before giving. Of course the opportunity value created by a marginal dollar in their cash reserves may well be lower now than when the cash reserves were lower, but it’s not obvious how big this effect is (it depends on the distribution of prices of possible net distribution opportunities).
Also note that incentives as a private donor differ from the incentives GiveWell has in making recommendations (at least as far as risk-aversion goes). GiveWell is in the business not just of making good recommendations, but of building a reputation for making good recommendations. A small chance that money going to AMF now will be wasted should act as a small discounting factor on the expected value for a private donor, but it looms much larger for GiveWell, for whom the bad outcome translates not just to the waste of donations, but also to a reduction in trust from donors who say with hindsight that they should have known better.
Additionally, GiveWell have to consider whether there is enough room for more funding for all GiveWell donors (i.e. $ millions per year), which is a more difficult case to make than simply having room for more funding from a single donor (presumably $ hundreds or $ thousands per year).
Some of it I think depends on the “if all people who thought similarly did this” principle. If enough small individual donors all thought there was enough RFMF for their small, individual donation, they might collectively run out of RFMF.
True, though GiveWell give the impression that AMF has room for more funding even for small individual donors, given its cash reserve. Does anyone know whether that’s not in fact what they think?
My comment on Niel’s post above also should be addressed to you. :)
Thanks Owen, that’s helpful.
I don’t personally donate to AMF (I think top animal charities are more effective), but I’ve heard a few people who still do. They are confident that AMF will successfully mobilize its current funds within the next couple of years or so.
Interesting, do you know why they don’t just wait to see if it manages to do so and GiveWell re-recommends it?
About a quarter of my donations this year will go to AMF. I’d feel a bit weird holding on to the money instead of donating it.
Why AMF and not somewhere else?
It’s some kind of balancing act between supporting GiveWell-recommended charities as a way of supporting GiveWell, and recognising that our best guess is that bednets are substantially more cost-effective than deworming/cash transfers. (Pending the forthcoming update....)
Not to begrudge you too much because I’m delighted that you’re donating, but do you think GiveWell is wrong about AMF? Presumably they’ve already factored in the relative strength of bednets.
I don’t think this is relevant to GiveWell’s decision not to recommend AMF. Immunisations are super-cost-effective, but GiveWell don’t make a recommendation in this area because GAVI or UNICEF or whoever already have committed funding for it.
I’ve got two choices if I want to donate all my donation money this year:
Donate to AMF, which is likely higher impact, but maybe my money won’t be spent for a couple of years.
Donate somewhere else, likely lower impact.
I think an AMF donation looks a pretty decent option here. I would say that the EA-controversial part of my thinking is the insistence on donating all my donation money this year, rather than using a donor-advised fund (to which I say, “Eh, whatevs...”).
So why not donate to immunizations, then?
AMF is far more likely to need the money soon than GAVI.
But SCI is far more likely to need the money soon than AMF.
Probability that they’ll need my money soon:
GAVI: ~0%
AMF: ~50%
SCI: ~100%
You might say “well there’s a 50-percentage-point difference at each of those two steps” and think I’m being inconsistent in donating to AMF and not GAVI. But if I try some expectation-value-type calculation, I’ll be multiplying the impact of AMF’s work by 50% and getting something comparable to SCI, but getting something close to zero for GAVI.
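The expectation-value reasoning above can be sketched in a few lines. The probabilities are the ones stated in this thread; the per-dollar impact figures are made-up illustrative assumptions (bednets taken as roughly twice as cost-effective as deworming, as suggested upthread):

```python
# A minimal sketch of the expectation-value argument above.
# Probabilities are from the comment; impact figures are hypothetical.

def expected_impact(p_needs_money, impact_if_used):
    """Expected impact = P(funds get mobilized soon) * impact if they are."""
    return p_needs_money * impact_if_used

# Hypothetical per-dollar impact: bednets (AMF/GAVI-style interventions)
# assumed ~2x as cost-effective as deworming (SCI).
candidates = {
    "GAVI": expected_impact(0.0, 2.0),   # ~0% chance they need my money soon
    "AMF":  expected_impact(0.5, 2.0),   # ~50% chance
    "SCI":  expected_impact(1.0, 1.0),   # ~100% chance
}

for name, value in candidates.items():
    print(name, value)
# AMF's discounted value (0.5 * 2.0 = 1.0) comes out comparable to SCI,
# while GAVI's is zero -- matching the argument in the comment.
```

The point of the sketch is that the two 50-percentage-point gaps are not symmetric once multiplied through: one halves a large impact, the other zeroes it out.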
A middle ground would be to put the money into a donor-advised fund, and then wait and see if AMF gain room for more funding. That way, you can direct the money elsewhere if they don’t.
As I said to Peter in our long thread, “Eh whatevs”. :P
I don’t think I can make anything more than a very weak defence of avoiding DAFs in this situation (the defence would go: “They seem kinda weird from a signalling perspective”). I’m terrible at finance stuff, and a DAF seems like a finance-y thing, and so I avoid them.
Well I can’t argue with “whatevs” ;)
I hope you don’t feel like Peter and I have been attacking your choice of donation—I see where you’re coming from, and AMF is a great charity, RFMF concerns apart!
That’s OK, even if I had perceived it as an attack, I’ve thought enough about this topic for it not to bother me!
As of December 1, 2014, GiveWell has reinstated AMF as a top charity. GiveWell believes that AMF now has room for more funding. See http://blog.givewell.org/2014/12/01/our-updated-top-charities/
Is there any interest in an EA blogging carnival?
How it works is that each month, a different blogger “hosts” the carnival by selecting a topic. Everyone interested in participating for that month then writes a blog post about that topic. The host then writes up a post linking to all the submissions.
This sounds like a great idea. I’d love to participate. Sounds like it could be a great way of creating content and discussion that would be available to refer to in future.
What topic would you suggest?
Well, the point is that it’s a different person choosing the topic each time, so my personal list won’t be a good representation of what an actual blogging carnival would look like—but here are some possibilities, some of which have already been done to death:
Donating Now vs Donating Later
EA Outreach
The Importance of the Far Future
Should EAs Be Vegan?
The Role of Self-Improvement in EA
Morality and Altruism
EA Ideas in Art and Popular Culture
Unusual Causes
Advanced Finances for EAs
The Moral Relevance of Wild Animal Suffering
Unknown Unknowns
The Epistemology of Cause Prioritization
The topics should be open-ended enough that different bloggers will take the topic in different directions.
We are planning to do a survey of a representative selection of students at NTNU, our university in Trondheim, Norway. There are about 23 000 students across a few campuses. We want to measure the students’:
… basic knowledge of global development, aid and health (like Hans Rosling’s usual questions)
… current willingness and habits of giving (How much? To what? Why?)
… estimates of what they will give in the future, that is after graduating
And of course background information.
We think we may use this survey for multiple ends. Our initial motivation was to find a way to measure our own impact at the university, and we still think, in some sense, we could measure our impact over time. Another use of the results would be the media opportunity when we present the disaggregated results, e.g. how altruistic the engineering students are compared to the medical students. We think the student press would love this kind of result and give us a lot of free media coverage. Lastly, we think these results could be interesting to other institutions in Norway, primarily in the aid sector. Our university is the largest technological university in Norway with many of the most attractive fields of study, and thus many businesses and institutions are interested in the students.
If this is a success we want to expand to the universities in Oslo and Bergen. This will also give us a better control group, more solid results, maybe national media coverage and a better chance to reach out to people.
I would love to get some answers to the following questions: Do you have any experience from similar projects? Are there any specific questions or other topics we should consider including in the survey? Maybe you have other ideas of how we could leverage the results?
Per Bernadette, getting good data from these sorts of projects requires significant expertise (if your university is as bad as mine, you can get student media attention for attention-grabbing but methodologically suspect survey data, but I doubt you would get much more). I’m reluctant to offer advice beyond ‘find an expert’. But I will add a collection of problems that surveys run by amateurs fall into, both as pitfalls to avoid and as further evidence of why expertise is imperative.
1: Plan more, trial less
A lot of emphasis in EA is on trialling things instead of spending a lot of time planning them: lean startups, no plan survives first contact, VoI etc. But lean trial design hasn’t taken off in the way lean start-ups have. Your data can be poisoned to the point of being useless in innumerable ways, and (usually) little can be done about this post-hoc: many problems revealed in analysis could only have been fixed in original design.
1a: Especially plan analysis
Gathering data and then analysing it is always suspect: one can wonder whether the investigators have massaged the analysis to satisfy their own preconceptions or prejudices. The usual means of avoiding this is specifying in advance the analysis you will perform: the analysis might be ill-conceived, but at least it won’t be data-dredging. It is hard to plan in advance what sort of hypotheses the data would inspire you to inspect, so seek expert help.
2: Care about sampling
With ‘true’ random sampling, the errors in your estimates fall as your sample size increases. The problem with bias/directional error is that its magnitude doesn’t change with your sample size.
Perfect probabilistic sampling is probably a platonic ideal—especially with voluntary surveys, the factors that make someone take the survey will probably shift the sample away from the population of interest along axes that aren’t perfectly orthogonal to your responses. It remains an ideal worth striving for: significant sampling bias makes your results all-but-uninterpretable (modulo very advanced ML techniques, and not always even then). It is worth thinking long and hard about the population you are actually interested in, the sampling frame you will use to try and capture them, etc. etc.
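The error-versus-bias point above can be shown with a toy simulation (all numbers here are illustrative assumptions): random noise averages away as the sample grows, but a systematic offset from a non-representative sampling frame does not.

```python
# Toy illustration of sampling error vs sampling bias: noise shrinks
# with sample size; bias from a skewed sampling frame stays put.
# TRUE_MEAN and BIAS are made-up numbers for demonstration only.
import random

random.seed(0)
TRUE_MEAN = 100.0   # the population quantity we want to estimate
BIAS = 5.0          # systematic offset from a non-representative frame

def sample_mean(n, bias):
    # Each respondent's answer: truth + selection bias + random noise.
    return sum(TRUE_MEAN + bias + random.gauss(0, 15) for _ in range(n)) / n

for n in (100, 10_000):
    unbiased = sample_mean(n, bias=0.0)
    biased = sample_mean(n, bias=BIAS)
    print(n, round(unbiased, 1), round(biased, 1))
# The unbiased estimate converges on 100 as n grows; the biased one
# converges on 105 -- gathering more data never fixes it.
```

This is why a careful sampling frame matters more than a big sample.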
3: Questions can be surprisingly hard to ask right
Even with a perfect sample, respondents still might not provide good data, depending on the questions you use. There are a few subtle pitfalls besides the more obvious ones of forgetting to include the questions you wanted to ask or lapses of wording: allowing people to select multiple options for an item and then wondering how to aggregate it, having a ‘choose one’ item with too many options for the average person to read, or subdividing it inappropriately (“Is your favourite food Spaghetti, Tortellini, Tagliatelle, Fusilli, or Pizza?”).
Again, people who make a living designing surveys try to do things to limit these problems—item pools, pilots where they look at different questions and see which yield the most data, etc. etc.
3a. Too many columns in the database
There’s a tendency towards a ‘kitchen sink’ approach of asking questions: if in doubt, add it in, as it can only give more data, right? The problem is that false positives become increasingly likely if you just fish for interesting correlations, as the number of possible comparisons grows combinatorially. There are ways of overcoming this (dimension reduction, family-wise or false-discovery error control), but they aren’t straightforward.
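The arithmetic behind the kitchen-sink problem is one line. Assuming m independent tests each run at the conventional 5% significance level, the chance of at least one false positive somewhere in the survey is:

```python
# Family-wise false-positive rate for m independent tests at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05

for m in (1, 10, 50):
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(m, round(p_any_false_positive, 2))
# With 1 test the rate is 0.05; with 10 tests it is already ~0.4,
# and with 50 tests a spurious "finding" is near-certain (~0.92).
```

Corrections like Bonferroni work by shrinking alpha as m grows, which is exactly the family-wise error control mentioned above.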
There are probably many more I’ve forgotten. But tl;dr: it is tricky to do this right!
I think a key challenge with this is how you intend to select your sample, so as to be truly representative. Getting interested students will select a certain type of participant; so will offering a payment. Could you get the University on board with distributing your survey via email to random student numbers, for example? Your results will only be powerful (and useful) if you can ensure random selection of participants.
Agreed! As well as a careful sampling plan, things to think about in advance include how your questions will be tested (to make sure you are asking about the information you think you are asking about). To be rigorous, you should also have a pre-specified analysis plan, which includes what comparisons you are going to make, what tests are appropriate for the data set, and how big your sample needs to be to detect the difference you are interested in.
The planning and design of surveys is a whole area of study. I would suggest finding someone knowledgeable in it to help. It’s possible somebody studying a relevant subject might be able to get involved and help you as part of their coursework. (At Oxford University these things are taught in Health Sciences, and there are study design modules that have projects doing just that)
Sounds like an interesting project Jorgen! It sounds like you already have a good plan, so my main survey-running tip would be to keep it short, and break it out into multiple pages if it reaches sufficient length so people don’t have to complete the whole thing. We used LimeSurvey for the EA survey, which is a pretty nice piece of software—I’d be happy to answer any questions on that if you want to message me.
[Your recent EA activities]
Tell us about these, as in Kaj’s thread last month. I would love to hear about them—I find it very inspirational to hear what people are doing to make the world a better place!
Can anyone recommend to me some work on existential threats as a whole? I don’t just mean AI or technology related threats but nuclear war, climate change, etc.
Btw Nick Bostrom’s Superintelligence is already at the top of my reading list, and I know Less Wrong is currently engaged in a reading group on that book.
For overviews, I recommend:
Preventing Human Extinction by Beckstead, Wage and Singer
Existential Risk as a Global Priority by Bostrom
Reducing the Risk of Human Extinction by Matheny
GCR Survey by Sandberg and Bostrom
For book-length detail, Global Catastrophic Risks by Bostrom and Cirkovic is good. Others that I haven’t read are Our Final Hour by Martin Rees and Catastrophe: Risk and Response by Richard Posner.
I haven’t read it myself, but I believe that the book ‘Global Catastrophic Risks’ (edited by Nick Bostrom, Milan Cirkovic, and Martin Rees) covers a broad range. Here are links to it (5% of Amazon purchases through them go to SCI): US; UK.
You can also read The Open Philanthropy Project’s (previously GiveWell Labs) notes on the x-risks they’ve investigated.
GiveWell have released a summary of the status of their assessments of risks through the Open Philanthropy Project so far. The top contenders are biosecurity and geoengineering, followed by AI, geomagnetic storms, nuclear and food security, although these assessments are at various stages of completion.
What are easy, low-cost altruistic ‘wins’ that EAs can take advantage of? (An example might be telling friends and family about charity cost-effectiveness.)
Don’t forget to check out “What Small Things Can an EA Do?”
Summary:
Use GiveWell to inform your donation choices.
Join Amazon Smile.
Join some mailing lists.
Set up an altruistic tip jar.
Talk to friends.
Run a Giving Game.
Run a Fundraiser.
Join or Start a Local Meetup Group.
Try Vegetarianism (or Veganism!) for a Week.
Volunteer.
Live more frugally.
Donate more.
Of note, you can direct ten times more to charity by making your Amazon purchases via the links at the Charity Science ‘Shop for Charity’ page.
Added that to the essay.
What’s the biggest mistake you’ve made in trying to help others?
I told someone that Heifer International was an ineffective charity, showing her the GiveWell review on the page. She ended up never donating to anything international again, preferring now to donate to the local opera.
Seeing as Heifer probably beats the local opera, this was a net loss.
Moral: be careful about framing, who you approach, and how.
I’d be interested to learn how a more sales-pitch type framing would have worked… e.g. “what if I told you that for the same amount of money, you could save 2x as many lives in the developing world?”
For people who have started effective altruist meetups, how large was its counterfactual impact? If you start a meetup for outreach purposes, it can be hard to tell whether people’s increased engagement resulted from a new meetup or would’ve happened anyway.
In my opinion, not very large. In my experience, I think telling people about EA has had high counterfactual value, but further work to sustain their commitment hasn’t had much counterfactual value—they either don’t get engaged despite my work, or they stay engaged despite my being hands off.
To tease the results of the 2014 Survey of EAs, local groups were one of the least popular answers to the question “Which factors were important in ‘getting you into’ Effective Altruism, or altering your actions in its direction?”. Does anyone know of other data on this?
Though keep in mind that the reason they’re not very popular could be that EAs from local groups don’t get much into LessWrong, the EA Forum, etc., where they would see links to take the survey. …So maybe local groups create lots of EAs that are less engaged with the online content.
You might want to ask local group organisers to distribute the survey among their members.
I think we did do that to the best extent we could; Tom Ash could confirm. Definitely something we should look into more. The problem is it’s hard to know what meetup groups exist and who to contact.
Confirmed! I think that we reached a pretty good cross-section of the known-about meetup groups.
Yes, that’s quite possible, good point Peter. Specifically, it could be that EAs in local groups are less involved in the online EA community and less easily reachable by a survey.
We sometimes discuss why EA wasn’t invented before. Here’s an example of GWWC being re-invented.
Is voting valuable?
There are four costs associated with voting:
1) The time you spend deciding whom to vote for.
2) The risk you incur in going to the place where you vote (a non-trivial likelihood of dying due to unusual traffic that day).
3) The attention you pay to politics and associated decision cost.
4) The sensation that you made a difference (this cost is conditional on voting not making a difference).
What are the benefits associated with voting:
1) If an election is decided based on one vote, and you voted on one of the winning contestants, your vote decides who is elected, and is causally responsible for the counterfactual difference between candidates.
2) Depending on your inclinations about how decision theory and anomalous causality actually work in humans, you may think your vote is numerically more valuable because it changes/indicates/represents/maps what your reference class will vote. As if you were automatically famous and influential.
Now I ask you to consider whether benefit (1) would in fact hold for important elections (say, elections where the winner will govern over 10 000 000 people). If 100 worlds had an election decided by one vote, what percentage of those would be surreptitiously biased by someone who could tamper with the voting? How many would request a recount? How many would ask their citizens to vote again? How many would deem the election illegitimate? Etc. Maybe some of these worlds would indeed accept the original count, or do a fair recount that reached exactly the same number, but I find it unlikely this would be more than 80 of the 100 worlds, and I would not be surprised if it were 30 or fewer.
We don’t know how likely this is to happen: in more than 16 000 elections in the US, only one was decided by a single vote, and it was not an executive office in a highly populated area.
This has been somewhat discussed in the rationalist community before, with different people reaching different conclusions.
Here are some suggestions for EAs that are consistent with the point of view that voting is, ceteris paribus, not valuable:
EAs who are not famous and influential should consider never making political choices.
EAs who are uncertain, or who live in countries where voting is compulsory, may want to consider saving time by copying the voting decisions of someone they trust, to avoid time and attention loss.
Suggestions for those who think voting is valuable:
EAs should consider the marginal cost of copying the voting policy of a non-EA friend/influence they trust highly, and weigh it against the time, attention and decision cost of deciding themselves.
EAs should consider using safe vehicles (all the time and) during elections.
EAs who think voting is valuable because it represents what all agents in their reference class would do in that situation should consider other situations in which to apply such a decision procedure. There may be a lot at stake in many decisions where reasoning from indication and anomalous causation applies—even in domains where this is not the sole ground of justification.
It does indeed look hard to predict what will happen here exactly. Luckily it pretty much factors out of the analysis.
I’ll demonstrate with a toy example. Say the election goes to whoever gets more votes, EXCEPT if the counts only differ by 1 vote. In that case there’s a 100% chance that the election is deemed illegitimate and the military step in. Say you’re voting for Party B rather than Party A. You know Party A has 1,000 votes, and you think Party B has about 1,000 votes.
If you move Party B from 1,000 to 1,001 votes you make no difference—you get the election declared illegitimate either way. Instead your impact comes from the cases where Party B was getting 998 votes already (you move from Party A win to military takeover) or Party B was getting 1,001 votes already (you move from military takeover to Party B win).
If you’re uncertain enough about the outcome that all of those possibilities for vote numbers look about equally likely to you, then the net expected effect of your vote is a small chance (the chance that the votes stand at any one particular number) to move from a Party A win to a Party B win—just the same as if you disregarded all the possible complications.
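The toy example can be enumerated directly. The vote counts are the ones from the example above; the “illegitimate” rule (a margin of 0 or 1 voids the election, as in the “no difference either way” case) is the toy assumption:

```python
# Enumerate the toy election above: Party A has 1,000 votes, and a
# margin of 0 or 1 voids the election (toy assumption from the example).
VOTES_A = 1000

def outcome(votes_b):
    if abs(votes_b - VOTES_A) <= 1:
        return "illegitimate"
    return "Party B wins" if votes_b > VOTES_A else "Party A wins"

# Which equally likely nearby counts does one extra vote for B change?
changes = {}
for b in (997, 998, 999, 1000, 1001, 1002):
    before, after = outcome(b), outcome(b + 1)
    if before != after:
        changes[b] = (before, after)

print(changes)
# Only two counts matter: at 998 your vote moves "Party A wins" to
# "illegitimate", and at 1001 it moves "illegitimate" to "Party B wins".
# Averaged over equally likely counts, the complications cancel, leaving
# a small chance of moving an A win to a B win -- as argued above.
```

This is the factoring-out: the messy middle outcomes net out, and only the endpoints of the band contribute to the expected effect.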
Now I ask you (and everyone reading) to consider this: http://what-if.xkcd.com/19/
I don’t understand what you’re pointing us to in that link. The main part of the text tells us that ties are usually broken in swing states by drawing lots (so if you did a full accounting of probabilities and expectation values, you’d include some factors of 1⁄2, which I think all wash out anyway), and that the probability of a tie in a swing state is around 1 in 10^5.
The second half of the post is Randall doing his usual entertaining thing of describing a ridiculously extreme event. (No-one who argues that a marginal vote is valuable for expectation-value reasons thinks that most of the benefit comes from the possibility of ties in nine states.)
Perhaps some of those details are interesting, but it doesn’t look to me like it changes anything of what’s been debated in this thread.
My main response is that this is worrying about very little—it doesn’t take much time to choose who to vote for once or twice every few years.
But in particular, the traffic-risk point (2) is an overstated concern, at least for the US (relative risk around 1.2 of dying on the road on election day compared to non-election days) and Australia (relative risk around 1.03, modulo error analysis I haven’t done).
Yes, you’re right that election day doesn’t add much to the danger. But the baseline risk of dying on the road is pretty high relative to other risks you probably face, so if you thought the benefits of voting were negligible this one might be a significant element of your calculus.
Diego, I don’t weight any of the 4 risks you’ve listed very heavily. I also think you’ve underestimated the benefits.
In regards to Benefit #1, a vote’s relevance doesn’t depend on the election being decided by a single vote. If you think probabilistically, then in any given election, your vote has a certain probability of affecting the outcome. You can weigh that against how important you think it is for Party A to win over Party B. I think that given how little it costs to vote, it’s usually clearly worth it to take a small action with a tiny probability of having large-scale consequences.
I think this is somewhat analogous to going vegetarian, in which case you’re contributing to a larger cause even though your individual decision not to buy meat only has a tiny probability of being the non-purchase that causes the grocery store to order one less item next time.
Other benefits:
a) Your vote might cause other people to vote with you. In this case, you are no longer a single vote but a package of votes.
b) There’s also something to be said for signalling an interest in politics and social issues.
c) In some elections, your vote might give the party you voted for more seats, funding, power and/or legitimacy, even if they ultimately lose the election.
d) The attention it takes to learn about politics can also have multiple benefits: being in touch with the people around you, learning about issues in society, learning about solving those issues, etc.
a) Yes, famous people should signal whom they will vote for.
b) Signalling interest in politics seems commendable on occasion and despicable at least as frequently.
c) Which is why I focused on large elections where the counterfactual difference would be larger. Also, definition-wise, a vote that decides on more seats is a vote that breaks a tie, which I had considered.
d) The hypothesis that dedicating attention to politics gets you closer to the people around you strikes me as utopian, whereas politics is frequently used to determine who is left, not who is right, in a social environment.
It seems that you missed a substantial cost: the time taken to physically go to vote (and queue if necessary). I’d expect this to be a bigger expected cost than the extra risk incurred.
I think voting is very valuable from a “moral trade” perspective. I want to convince other people to take my ideas of virtue seriously, but they won’t if they see me doing something that’s commonsense unvirtuous like not voting.
I agree that this might help with persuasion, but I’m not sure this really counts as moral trade. By voting, you’re diluting the effect of everyone else’s votes. So plausibly you are harming everyone else by voting. If this counts as a trade for them, it’s a perverse one, where they would be better off not trading.
Of course, you could make the reasonable counter-argument that you have studied economics, history etc. far more than the average voter, so are actually helping by diluting the impact of stupid voters. But that’s not so much trade as paternalism.
Not if the person values democracy for the sake of democracy, rather than achieving particular legislative aims. I feel like many of my friends are like this.
Recently on the site there have been a number of cross-posts from other websites. I recognise that this is great and can bring a lot of value. But I subscribe to the site in an RSS reader and already have a very good group of feeds, including all of the sites the content has been cross-posted from so far, so the effect for me is to create duplicate posts. My RSS reader has a feature to filter by tags or parts of post titles. Would it be possible to tag or add a reddit-style bracket tag to cross-posts so I can filter them?
I can write [crosspost] for anything that I cross-post.
What is something you believe that nearly no one agrees with you on?
I don’t know if this is an “I believe X and no-one else does.” But I have noticed I have a tendency to poke everything in a counter-intuitive manner. So even with something dumb like Gamergate, my instinct is to ask “well, is there any kernel of truth in these criticisms?” rather than advocating on behalf of a cause. (Annoyed EA types would say the same about my poking at the x-risk cause.)
I worry that this makes me quite “uncooperative” and reduces my effectiveness because this is my instinct even with super clear causes like climate change. But I am not sure how to resolve it.
I just don’t find advocacy as interesting as argument/ debate—even though I can’t say it works with my terminal values.
How can the skills you use in your occupation be applied to effective altruism?
I’m working on getting a more useful skill, but for now, if anyone ever needs some audio editing, perhaps for a potential EA podcast, I can do it.
Also, this job board seems relevant, as skills people have that they might not think would be of use are in demand.
I do data science, which makes me a good fit for working on the forthcoming EA survey!