I describe it as a calling. It’s not so much that I feel a strong emotion as that it feels like the most natural thing in the world that I would want to help people, and to do that in the most effective way possible. Since I focus specifically on x-risk from AI, this manifests as a calling to address AI safety, because it feels like an obvious problem in desperate need of a solution.
For me it’s very similar to the kind of “calling” people talk about in religious contexts, and now that I’m Buddhist I conceptualize what happened when I was 18, when I began to care about and pursue AI safety, as the awakening of bodhicitta: although I already wanted to become enlightened at that time (even though I didn’t really appreciate what that meant), it wasn’t until I cared about saving humanity from AI that I developed the compassion and desire that drove me to bodhicitta. With time that calling has broadened, even though I mainly focus on AI safety.
This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we’ve received. I’d be happy to have people’s feedback on this resource!
This seems to be a private document. When I try to follow that link I get a page asking me to log in to Google Drive with a @centreforeffectivealtruism.org account, which I don’t have (I’m already logged into Google with two other accounts, and those don’t give me permission to access the document).
Maybe this document is intended to be private right now, but if it’s meant to be accessible outside CEA, it currently isn’t.
I can’t speak for any individual, but being careful in how one engages with the media is prudent. Journalists often have a larger story they are trying to tell over the course of multiple articles, and they are actively cognitively biased towards figuring out how what you’re saying confirms and fits in with that story (or goes against it, such that you are now Bad because you’re not with whatever force for Good is motivating their narrative). This isn’t just an idle worry, either: multiple journalists have independently told me as much straight out, e.g. “I’m trying to tell a story, so I’m only interested if you can tell me something that is about that story”.
Keeping quiet is probably a good idea unless you have media training and know how to interact with journalists. Otherwise you function like a random noise generator: you might accidentally generate noise that confirms what the journalist wanted to believe anyway, and if you don’t endorse whatever the journalist believes, you’ve just worked against your own interests, probably without even realizing it!
So assuming the Copenhagen interpretation is wrong and something like MWI or zero-world or something else is right, it’s likely the case that there are multiple, disconnected causal histories. This is true to a lesser extent even in classical physics due to the expansion of the universe and the gradual shrinking of Hubble volumes (cosmological horizons), so even a die-hard Copenhagenist should consider what we might call generally acausal ethics.
My response is generally something like this, keeping in mind my ethical perspective is probably best described as virtue ethics with something like negative preference utilitarianism applied on top:
Causal histories I am not causally linked with still matter for a few reasons:
My compassion can extend beyond causality in the same way it can extend beyond my city, country, ethnicity, species, and planet (moral circle expansion).
I am unsure what I will be causally linked with in the future (veil of ignorance).
Agents in other causal histories can extend compassion to me in kind if I do it for them (acausal trade).
Given that other causal histories matter, I can:
act to make other causal histories better in those cases where I am currently causally connected to them but later won’t be (e.g. MWI worlds that share a common history with the one I will find myself in but will causally split from it later),
engage in acausal trade: create, in the causal history I find myself in, more of what is wanted in other causal histories when the tradeoffs are nil or small, knowing that my causal history will receive the same in exchange,
otherwise generally act to increase the measure (or, if the universe is finite, the count) of causal histories that are “good” (“good” could mean something like “want to live in” or “enjoy” or something else that is a bit beyond the scope of this analysis).
Google Drive has a simple survey function (Google Forms) that lots of people use; it’s pretty convenient and can dump the results into Google Sheets for export. For example, it seems to be good enough for Scott’s monster SSC reader survey.
Sure, this is the ideology part that springs up and that people end up engaging with. Thinking of EA as a question can help us hew to a less political, less assumption-laden approach, but it can’t entirely stop people from forming an ideology anyway and hewing to that instead, producing the kinds of behavior you describe (and that I’m similarly concerned about; I’ve noticed and complained about similar voting patterns as well).
The point of my comment was mostly to preserve the aspiration and motivation behind thinking of EA as a question rather than an ideology: I think that if we stop thinking of it as a question, it will become nothing more than an ideology, and much of what I love about EA today would then be lost.
You are, of course, right: effective altruism is an ideology by most definitions of ideology, and you give a persuasive argument for that.
But I also think this misses the most valuable point of saying that it is not.
I think what Helen wrote resonates with many people because it reflects a sentiment that effective altruism is not about any one thing: not about having the right politics, not about saying the right things, not about adopting groupthink, nor any of the many other things we associate with ideology. Effective altruism stays away from the worst tribalism of other -isms because it can continually refresh itself by asking the simple question, “how can I do the most good?”
When we ask this question, we don’t get so tied up in what others think, what is expected of us, and what the “right” answer is. We can simply ask, right here and right now, given all that I’ve got, what can I do that will do the most good, as I judge it? Simple as that, we create altruism through our honest intention to consider the good, and effectiveness through our willingness to ask “most?”.
Further, thinking of effective altruism as more question than ideology is valuable on multiple fronts. When I talk to people about EA, I could talk about Singer or utilitarianism or metaethics, and sometimes for some people those topics are the way to get them engaged, but I find most people resonate most with the simple question “how can we do the most good?”. It’s tangible, it’s a question they can ask themselves, and it’s a clear practice of compassion that need not come with any overly strong preconceived notions, so everyone feels they can ask themselves the question and find an answer that may help make the world better.
When we approach EA this way, even if it doesn’t connect for someone, or even if they are confused in ways that make it hard for them to be effective, they still have the option to engage with it positively as a practice that can lead them to more effectiveness and more altruism over time. By contrast, if they think of EA as an ideology that is already set, they see themselves outside it with no path in, and so leave it off as another thing they are not part of and that is not a part of them: another identity shard in our atomized world that they won’t make part of their multifaceted lives.
And for those who choose not to consider the most good, seeing that there are those who ask this question may seem silly to them, but hardly threatening. An ideology can mean an opposing tribe you have to fight against so your own ideology has the resources to win. A question is just a question, and if a bunch of folks want to spend their time asking a question you think you already know the answer to, so much the better that you can offer them your answer, and so much the less do they pose a threat, those silly people wasting time asking a question. EA as a question offers flexibility, strength, and pliancy to overcome those who would oppose and detract from our desire to do more good.
And that I think is the real power of thinking of EA as more question than ideology: it’s a source of strength, power, curiosity, freedom, and alacrity to pursue the most good. Yes, it may be that there is an ideology around EA, and yes that ideology may offer valuable insights into how we answer the question, but so long as we keep the question first and the ideology second, we sustain ourselves with the continually renewed forces of inquiry and compassion.
So, yes, EA may be an ideology, but only by dint of the question that lies at its heart.
I think much of the work being done by what you think of as frugality here is actually being done by slack: creating conditions under which you have enough flexibility to take advantage of situations when they arise, and aren’t so attached to things as they are that you miss opportunities you would value having taken. Only in your first case do I think frugality does the heavy lifting; everywhere else it is a way you created slack for yourself, and the same slack could have been created many other ways while living a more materially lavish life.
I’ll go ahead and give an answer to get us started.
The best thing for me was discovering that there is a way I can take an idea I had a while ago and apply it within the framework of iterated amplification, likely making the idea both more relatable and more useful in the nearer term. This discovery came thanks to one of the one-on-one meetings I scheduled via the networking feature of the conference app; that conversation led to a mutual realization that the idea might have new legs via iterated amplification. I think it is unlikely I would have figured that out without the conversation facilitated by the networking features of the app!
I suspect much of the trouble is the same as the trouble investors have trying to take advantage of this strategy: it requires making a better prediction than the prediction the market is implicitly making with its current prices. Although it seems reasonable to predict that a recession will come “soon”, since it’s been unusually long since the last one and they appear cyclically (approximately coordinated with the roughly 5-year business cycle?), making that prediction too soon, and switching to hoarding assets in anticipation of a drop so you can re-buy when prices bottom out and maximize gains on the way back up, means unnecessarily giving up potential gains. You might make a lucky guess once, but in the long run you’d need some reason to believe you can predict recessions, or else you will perform worse than the market, not better.
So this seems relevant only if you are so good at predicting recessions that you can use that skill to make money and then donate it, and it will probably also require keeping quiet about your prediction and your evidence so you can maximize the advantage you take (up to the limit of your funds, including the use of leverage, which might lead you to carefully share your knowledge in an attempt to fill gaps in opportunity you wouldn’t be able to exploit yourself). If you’re a non-profit, regular donor, or anyone else, you’re probably best off not trying to beat the market, and only accounting for this in the normal way of holding funds in reserve so you can weather temporary shocks to the market, i.e. have enough operating capital that you won’t have to draw down your investments before they recover.
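To make the forfeited-gains point concrete, here’s a toy calculation (every number below is a hypothetical assumption chosen for illustration, not market data). With a 7% annual return and a 20% drawdown, an investor who exits five years too early ends up behind a buy-and-hold investor even when both are measured at the bottom of the crash:

```python
# Toy arithmetic (all numbers hypothetical): the cost of calling a
# recession too early versus simply holding through it.

annual_return = 0.07  # assumed average yearly return while the market rises
crash_loss = 0.20     # assumed peak-to-trough drawdown when the recession hits
years_early = 5       # how many years before the crash the cautious investor exits

growth = (1 + annual_return) ** years_early

hold_through = growth * (1 - crash_loss)  # buy-and-hold, valued at the bottom
early_exit = 1.0                          # sat in cash, ready to buy the bottom
perfect_timing = growth                   # exited exactly at the peak (unrealistic)

print(f"buy-and-hold at the bottom: {hold_through:.3f}")     # ~1.122
print(f"exited {years_early} years early: {early_exit:.3f}") # 1.000
print(f"perfect timing: {perfect_timing:.3f}")               # ~1.403
```

The early exiter only wins if the crash comes soon enough and cuts deep enough, which is exactly the prediction the market’s current prices already embody.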
Although related, EA has grown to include many people who don’t share the rationalist/LW background most prevalent among EAs concerned with x-risk, so LessWrong and especially the Sequences are probably worth mentioning.
Taxes seem tricky. I view it as generally good that governments allow offsetting of tax burden via donation, since it allows more flexibility in allocating money to public goods; in this way, taxes being used for purposes you disagree with can actually incentivize spending on the things each of us cares about more. Of course, it would be nice if you could just give more and be taxed less, and eventually donation offsetting runs out, because governments still need some money.
My guess is that tax resistance won’t be an effective cause area unless you especially believe there is large harm caused to people by making them pay taxes (a sort of libertarian suffering consequentialist argument), but for a variety of reasons it is probably worthwhile to minimize the amount you pay in taxes, i.e. don’t give up money to a government that you could have otherwise spent in a way better aligned with your interests.
There is also some impact here based on whom you pay taxes to. A citizen of the USA, like me, does more to fund war than a citizen of Switzerland, so if I reduce the tax I pay to the USA, I reduce war spending more than a Swiss citizen would by paying less tax to Switzerland; the Swiss citizen would instead mostly be reducing funding of other public goods they likely endorse.
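A toy calculation makes this concrete (the budget shares below are made-up placeholders, not actual figures for either country):

```python
# Toy illustration with hypothetical budget shares (not real figures):
# the marginal effect of paying $1,000 less tax in two countries whose
# budgets allocate different fractions to military spending.

tax_reduction = 1_000       # dollars of tax avoided, e.g. via donation offsets
war_share_country_a = 0.15  # hypothetical share of a marginal tax dollar funding war
war_share_country_b = 0.03  # hypothetical share in a country with a small military

print(f"country A war funding reduced: ${tax_reduction * war_share_country_a:.0f}")  # $150
print(f"country B war funding reduced: ${tax_reduction * war_share_country_b:.0f}")  # $30
```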
On the whole I don’t think we can conclude anything especially strong, but it does at least seem like an interesting case to think about to sharpen our skills!
For what it’s worth, the reason I dislike yay/boo voting is that it incentivizes people to post and comment in ways that maximize applause lights, at the expense of saying things that serve other purposes, like becoming less confused and doing more good. I worry that the current voting system suffers too heavily from Goodhart effects and is as a result shaping people’s motivations for posting and commenting in ways that work against what most people would prefer we do on this and its sister forums (though of course maybe many people genuinely want applause lights; the comments on this post seem to suggest otherwise).
What do you mean by “better” here? That there is a discrepancy suggests to me that people are voting for different reasons between the two places, not that the voting is better in some universal way (compare the way “better” in economics could mean redistribution to things you like or more efficiency so everyone gets more of what they want).
Also, just further noting voting patterns, no disrespect intended to you kbog, but your comment contains little content (in a very straightforward sense: it is short) and is purely a statement of opinion with no justification provided (though some is implied), yet at the time of writing it has 6 votes for 14 karma. Relative to the average comment I see on EAF, where more thorough comments often receive less karma and less attention, this suggests to me that you hit an applause light and people are upvoting it for that reason rather than anything else.
None of this is to say people can’t vote the way they like or that you don’t deserve the karma. I merely seek to highlight how people seem to use voting today. The way people use voting is not aligned with how I would like it to be used, which is why I mention these things and am interested in them, but it is also not up to me to shape this particular mechanism.
I think we lack clear evidence to conclude that, though. Given what we’ve seen, I can just as easily believe the story that EAF users are more likely to downvote anything criticizing EA (just as LW users are more likely to downvote anything that goes against the standard interpretation of LW rationality). I’d be very interested to know if there are posts that criticize some aspect of EA as cogently as this post does and don’t receive large numbers of downvotes.
Also, don’t forget that many posts with pro-EA conclusions are about as well reasoned as what we see here, yet receive overwhelmingly positive votes, even if they draw criticism in the comments. So the question remains: why downvote this post when we respond to it, but not downvote other posts when we criticize them?
My general algorithm for voting is to upvote that which I would have liked to have had recommended to me to read, and to downvote that which I would be disappointed to have recommended to me. The criterion for wanting something recommended is whether it thoughtfully engages with a topic in a way that advances my understanding (and when my understanding already includes what is presented, I try to imagine that I didn’t know what I know and vote from that place of counterfactual ignorance). I don’t vote on things that fail to pique my interest or that I feel indifferent about having recommended to me.
Strong votes (up and down) go to things I would, respectively, be visibly happy or sad to have someone recommend to me, i.e. someone sends me an email about it and I light up and smile, or frown and droop, when I read the content.
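If it helps to see the heuristic spelled out, here’s a minimal sketch in code (the function and names are hypothetical, purely to illustrate the decision procedure described above):

```python
from enum import Enum

class Vote(Enum):
    STRONG_UP = 2
    UP = 1
    NONE = 0
    DOWN = -1
    STRONG_DOWN = -2

def my_vote(piques_interest: bool, advances_understanding: bool,
            visibly_happy: bool, visibly_sad: bool) -> Vote:
    """Toy encoding of the voting heuristic above.

    Judgments are made from a place of counterfactual ignorance: would I
    have wanted this recommended to me if I didn't already know it?
    """
    if not piques_interest:
        return Vote.NONE         # indifferent or uninterested: no vote at all
    if visibly_happy:
        return Vote.STRONG_UP    # I'd light up if this landed in my inbox
    if visibly_sad:
        return Vote.STRONG_DOWN  # I'd frown and droop reading it
    if advances_understanding:
        return Vote.UP           # I'd have liked this recommended to me
    return Vote.DOWN             # I'd be disappointed by the recommendation
```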
Since I am both mid-career and EA, maybe I can say a little about this even if I can’t give a full answer.
I was concerned about existential risk due to AI prior to the start of my career (heck, prior to going to college, and this was in 2000), but for a variety of reasons I failed to do much directly about this. I got distracted by life, had to get a job to deal with more pressing needs, and spent several years just trying to get along without putting much effort into AI safety.
Then a couple of years ago my life got better, I had more slack, and I used that slack to start working on AI safety as a “hobby”. So far this has proven pretty successful: I’ve published some things, had many interesting conversations with people who are also doing direct work on AI safety (part or full time), and helped influence research directions and progress.
I don’t know what this will turn into, but the hobby model is worth considering as a way to transition mid-career: get interested in and start working on something you care about, and eventually maybe transition to doing that work full time. Plus you’ll be somewhat unique in that you’ll be carrying forward all your existing career capital that others in your chosen space likely won’t have.
The downside of this approach is that it requires you to have enough time and energy to do it. To make progress here it may be necessary to take a less demanding job to create that time and energy, or to give up other commitments.
Definitely interested to see what others suggest or have tried.
Small formatting tip: it would be nice if you put a very short version of your question in the title and asked the full question in the body. I found it a bit hard to read the question when the whole thing is in title styling.