I’m a software engineer from Brisbane, Australia who’s looking to pivot into AI alignment. I have a grant from the Long-Term Future Fund to upskill in this area full time until early 2023, at which point I’ll be seeking work as a research engineer. I also run AI Safety Brisbane.
Jay Bailey
The barrier to action is definitely a big thing. When I was a student, I avoided donating money. I told myself I’d start donating when I got a job and started making good money. Then, when I did get a job, I procrastinated for another two years.
The thing that convinced me to finally do it was joining a different online group where I tried to do a good deed every day. When I got that down, I got into the habit of doing good, which made me rethink EA. After some thought, I committed to try giving 10% just for a year. A month later, I made the Giving What We Can pledge. After I’d made the commitment I realised it wasn’t that hard, and I felt a lot better about myself afterwards.
If I could go back in time, I think what I’d ask my past self to do is not to commit to donating 10%, but to commit to donating just 1% for a year. 1% is nothing, and anyone can do that—but once you start intuitively understanding that A) You feel better donating this money, and B) You really don’t miss it, it’s a lot easier to scale up. Going from 0 to 1 is a bigger step than from 1 to 10.
I still don’t have a full solution, but I think that might be a place to begin.
I still don’t think you’re wrong. Will is correct when he says that it is more likely that someone with a BMI of 25 or lower is actually overweight than that someone with a BMI of 25 or higher is just well-muscled, but that isn’t the same as estimating by eye.
The point, as I understand it, is that if you live in a country where most people are overweight, your understanding of what “overweight” is will naturally be skewed. If the average person in your home country has a BMI of 25-30, you’ll see that subconsciously as normal, and therefore you could see plenty of mildly overweight people and not think they were overweight at all—only people at even higher BMIs would be identifiable as overweight to you.
Fair enough. I’ve edited it to remove the quotation marks.
How useful is it to help a large number of ineffective charities?
How to even approach this calculation?
====
The way I would probably approach the calculation is this:
- Roughly how effective is the average charity compared to GiveWell’s top charities? 10%? 1%?
- What is the mean annual revenue of these charities? (Mean, not median; remember the power law.)
- How many charities do I expect to be able to help with this?
- How much more effective do I think I can make them?
I’m not sure how to find out the answers, but that would be a way to approach it. Since the questions are difficult, it might be good to have three calculations—optimistic, average, and pessimistic for plugging in the variables.
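To make the structure concrete, here is a minimal sketch of how those four questions could be multiplied together. Every number below is a hypothetical placeholder I made up for illustration, not an estimate from this post:

```python
# Back-of-envelope sketch of the calculation above.
# All inputs are hypothetical placeholders, not real data.

def top_charity_equivalent(effectiveness_ratio, mean_revenue,
                           n_charities, improvement):
    """Annual impact, in 'GiveWell top charity dollar' equivalents.

    effectiveness_ratio: average charity's effectiveness vs a top charity
    mean_revenue: mean annual revenue per charity (mean, not median)
    n_charities: how many charities you expect to help
    improvement: fractional effectiveness gain you expect to cause
    """
    return effectiveness_ratio * mean_revenue * n_charities * improvement

# Three scenarios, as suggested: pessimistic, average, optimistic.
scenarios = {
    "pessimistic": top_charity_equivalent(0.01, 200_000, 3, 0.05),
    "average":     top_charity_equivalent(0.05, 500_000, 10, 0.10),
    "optimistic":  top_charity_equivalent(0.10, 1_000_000, 30, 0.25),
}
for name, value in scenarios.items():
    print(f"{name}: ~${value:,.0f}/year of top-charity-equivalent impact")
```

The point of running all three scenarios is less the exact numbers than seeing how many orders of magnitude separate the pessimistic and optimistic cases.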
That’s definitely a good question to ask. After all, people in the future aren’t here now, and there are a lot of problems we’re facing already. That said, I don’t think we should. I mean—do you or I have any less moral value now than the people who lived a thousand years ago? Regardless of where or when they live, the value of a human life doesn’t change. Basically, I think the default hypothesis should be “A human life is worth the same, no matter what” and we need a compelling reason to think otherwise, and I just don’t see that when it comes to future people.
There are some caveats in the real world, where things are messy. Like, if I said “Why shouldn’t we focus on people in the year 3000?”, your first thought probably wouldn’t be “Because they don’t matter as much”. It’d probably be something like “How do we know we can actually do anything that’ll impact people a thousand years from now?” That’s the hard part, but that’s discounting based on chance of success, not morality. We’re not saying helping people in a thousand years is less valuable, just that it’s a lot harder to do. Still, EA definitely has some ideas. Investing money to give later can have really big compounding effects, big enough that the compounding outweighs our uncertainty. Imagine you could invest a thousand dollars in something that would definitely work, or ten thousand on something just as effective that was only a 50/50 shot. Even with the risk, the gamble is worth five thousand dollars in expectation. There’s a whole mode of thought called “patient philanthropy” that deals with this—I could send you a podcast episode if you’d like?
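The investment comparison in that example comes down to expected value; a minimal sketch using only the numbers from the example:

```python
# Expected value of the two options above: $1,000 invested in a sure
# thing, vs $10,000 in an equally effective 50/50 opportunity.
certain_value = 1_000
gamble_value = 0.5 * 10_000 + 0.5 * 0  # expected value of the 50/50 shot

# Even after discounting for risk, the gamble is worth 5x more in
# expectation, which is the core intuition behind patient philanthropy.
print(certain_value, gamble_value)
```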
(Followup: Send them to this episode of the 80,000 Hours podcast if interested: https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/)
====
I’ve definitely leaned into the “conversational” aspect of this one—the argument is less rigorous and sophisticated than a lot of others in this post, but I’ve tried to optimise it for something I could understand in real time if someone was speaking to me, and wouldn’t have to read it twice.
I am a software engineer who is considering applying for EA-aligned roles as a career move in the not-too-distant future (Still deciding between going for AI safety or just trying to do a similar type of SWE job I already do, but in an effective org) and the thing I found most surprising in this article was:
Is your bottleneck “people don’t apply”? This is the most common problem for EA orgs, as far as I know.
From the developer side, I read articles like this one (https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really) and my conclusion was “Despite being well above average as an engineer according to objective career metrics like titles over time and compensation, as someone not at a FAANG level, I probably won’t meet the bar to get hired at an EA organisation.” This may be one reason why EA orgs say “If in doubt, apply”, but it’s still a bit daunting.
I’d be interested to know if that info (from 2019) still applies, since I also saw this (https://forum.effectivealtruism.org/posts/CdYniXZ53dyPupRiY/is-it-no-longer-hard-to-get-a-direct-work-job), but the comments there muddy the picture a lot and make it hard to tell how accurate it is.
Another thing I try to remind myself of when I start thinking “Ahh, there’s too much to learn!” is that I should be thinking on the timescale of months and years, rather than days and weeks—it’s amazing how much progress one can make by consistently plugging away at something for a few hours a week.
This is more an emotional strategy than a strategy of actually learning more effectively, but I find it helpful.
I think the daunting part is the “being rejected” part, more than any actual difficulty in applications. I don’t think making the process 30 seconds instead of five minutes would have made me any more likely to pull the trigger. I’ve sent in a few applications anyway because I wanted to check my current ability against the needs of the organisations, and the process itself was pretty fast.
This may not be generalisable across other people (and I’m not the kind of person who really needs it, since I did send in the applications anyway), but I see two parts to rejection.
1) The social aspect of “Oh no, rejection by a human being”, which is unreasonably strong for most people. (There’s a reason asking someone out is terrifying for a lot of people.) This can also manifest as “I don’t want to waste someone’s time if I’m way below the standard”.
2) The psychological aspect of failing at something.
Of these, I suspect 1 is stronger than 2 for most individuals. A potential solution to this might be some sort of automated screen as a first round, such that individuals who fail it never actually get rejected by a human, and individuals who succeed now have enough buy-in and signal of their suitability to be more likely to progress to the next step. At the very least, I can imagine some people would say “Well, I’m sure I’m not <org> material, but it would be nice to take the test and see where I stand!” but they wouldn’t want to waste an actual human’s time by sending in an application in similar circumstances. And some of those people might be closer to <org> material than they think.
For this to work, you would need:
* A very clear idea of what the standard is
* Encouragement that if someone meets this standard, you want them to apply
* A way for candidates to disqualify themselves without ever talking to a human.
Anthropic’s call to action had at least two and a half of these. The standard wasn’t 100% objective in the sense that I can unambiguously pass/fail it right now, but it’s pretty damn close.
(I wonder if this could work with grants too, with questions with clear acceptance criteria and encouragement that if someone meets these acceptance criteria, they have met the threshold that they should apply for a grant)
Of course, this comes with its own difficulties—an official public automated test is easier to game, whereas an objective standard like “If you can complete 3 of 4 problems in a LeetCode competition within the time limit, talk to us” is less authoritative and thus less effective. So I’m not sure what the best way to go about doing this is, or if it would be effective across a bunch of not-me people.
This is one of my favourite posts to re-read from time to time. It’s HARD to keep these strategies in mind. I think Eliezer’s right that what we automatically do is “Pick things that feel like they will achieve our goal”. Some people are more calibrated in their feelings than others, and their feelings on what will achieve their goals are more aligned with strategic thinking, but even so strategic thinking is very difficult.
I like these posts which are more practical in nature. As much as the Sequences influenced my thinking when I read them as a young adult, it’s these posts I come back to years later.
I think this post would be better if it went into some more detail on the career transition process, and perhaps mentioned some lessons learned that are applicable to those intentionally aiming for transitions.
Of these two, I find Baxter more interesting and relevant to EA. Baxter:
- Didn’t intend to move into this field at all at first. (This is bad news for EA people trying to manufacture a career change, but still interesting, and may cause people to update in favor of doing relatively casual research into a field, rather than thinking “Oh, a few hours a week will never get me anywhere”.)
- Performed self-study. (I’d love to hear more about this. The article mentions he bought a subscription to a magazine or journal, then something something, then wrote a paper. What was the something something? Was Aviation Weekly really sufficient for this? Did it involve a lot of talks with this neighbour of his?)
- Created a useful deliverable in the field. (Actionable!)
- Got the deliverable in the hands of someone influential in the field. (Also actionable—more so for EAs than most people, since the EA community is small and happy to connect people. If you have a decent AI paper and want to get it in the hands of a particular org, you can probably do that without much trouble.)
Ina Garten, on the other hand, is simultaneously less reproducible (due to the high initial expense and commitment of her transition) and told in a way that skips the most interesting part of the story. You talk about how she bought a grocery store, then ???, then celebrity chef. I think “Nuclear policy advisor to grocery store owner” is actually less of a move than “Grocery store owner to celebrity chef”. Even though the latter two appear intuitively closer (both deal with food), a sufficiently rich person can just buy a grocery store like Ina did, but how do you go from that to a celebrity chef? How did Ina build her brand up? Was this Ina’s goal from the start, or did she approach it incrementally?
There are definitely the seeds of a great article here, but it feels more like an article proposal/draft than a fully-fledged article. It leaves some of the most interesting/applicable parts out of the story. I understand this is meant to be more inspirational than a how-to manual, and that it is a LOT easier to summarise public research than to dive as deeply into the topic as would be needed to answer the questions I had. So, I understand if you’d rather leave it here, but if you wanted to put more time into this idea I think it would bear fruit in the above ways.
I think this is an excellent point. Something I’d like to write into a forum post someday if I get any actual conclusions is that EA seems to have some difficulties that are inherent in the mathematical realities of the movement.
On the one hand, EA wants to grow and advocate more publicly. This makes sense and is a good thing for a movement. While EA definitely focuses on slower, more sustainable growth, my understanding is that when EA wants quality, they mean quality of fit more so than explicitly targeting quality of talent/money/resources. We want people aligned with the movement—it’s okay if they aren’t hugely influential in their fields.
On the other hand...EA is essentially a love letter to the Pareto principle. The guiding principle of EA is that some interventions are much, MUCH more effective than others. The unfortunate truth is that in many fields, the same applies not just to organisations, but to people. One Sam Bankman-Fried has the impact of thousands or tens of thousands of “ordinary” people. And even then, an “ordinary” person is someone who makes a median or above-median income in a wealthy country and donates 10% of it per year—even THIS is not a low bar to cross!
Research productivity isn’t quite as bad as this, but there are absolutely some people who have way more impact than others. EA also badly needs people, but it badly needs people who meet a certain talent bar. Once again, the Pareto principle comes into play. If an EA organisation wants to double its headcount, one might think “Oh, it’ll be easy to get a job there!” And yet, often the objective standard is very high. It might be very easy to find a job if you meet that standard (see https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers for an example) but meeting that standard is HARD.
The final issue is this. Earning-to-give and donating 10% to charity is a totally reasonable path that can save dozens or hundreds of lives. Basically any effective altruist would say this is a totally good thing, you should be proud of it, and it is a totally worthy contribution to the cause. But if you engage with EA materials, you will hear about this a few times, and you will hear far more about other causes. Why is that? Because...there’s just not that much to say about it, really. Once you’ve gotten the infrastructure of research up (which charities are effective, where do I donate), the only updates in this area tend to be “Here’s a way to advocate to get other people to give more”, “Let’s celebrate X Day”, and “Hey, GiveWell found a new effective charity!”. If we wanted to make a “10%-er post” every week, I feel like we’d run out of content pretty quickly.
Therefore, when most people engage with places like the EA forum, they get people talking about things that probably aren’t relevant to them. Most people are talking about things that are less obvious, still being fleshed out, or require specific, often highly niche and difficult-to-obtain talents. This isn’t because EA is elitist and is deliberately shutting out the plebeians who don’t want to devote their whole career to EA, it’s because these are the areas where new content is needed, and where new content won’t repeat itself.
I don’t yet have any suggestions about this.
Are we able to post contest entries elsewhere? I’m still on the fence about trying to go after the blog prize, but I figure a reflection post submitted here would be a good post for a blog as well.
I would be interested in this one.
To provide a relevant anecdote to the Benjamin Todd thread (n = 1, of course): I had known about EA for years, and agreed with the ideas behind it. But the thing that got me to actually take concrete action was that I joined a group that, among other things, asked its members to do a good deed each day. Once I got into the habit of doing good deeds (and, even more importantly, actively looking for opportunities to do good deeds), however small or low-impact, I began thinking about EA more, and finally committed to try giving 10% for a year, then signing the pledge.
Without pursuing classical virtue, I would be unlikely to be involved in EA now. My agreement with EA philosophically remained constant, but my willingness to act on a moral impulse was what changed. I built the habit of going from “Someone should do something” to “I should do something” with small things like stopping to help a stranger with a heavy box, and that transferred to larger things like donating thousands of dollars to charity.
Thus, I am interested in the intersection of EA and virtue and how they can work together. EA requires two things—philosophical agreement, and commitment to action. In my case, virtue helped bridge the gap between the first and the second.
If I had read this seven years ago, there’s a non-zero chance I’d now be a developer with six years of professional experience instead of a developer with a CS degree and three years of professional experience. I’m honestly not sure which one would be better (I suspect the former, but credentialism is a thing) but especially for those who aren’t going to get a Bachelor’s anyway for whatever reason, this sounds like fantastic advice.
In the event that this post becomes wildly successful and you become overwhelmed with people asking for your help with solving programming problems, I am happy to take some overflow.
For those who might be inspired to take action as a result of this post, Charity Entrepreneurship is still accepting applications until April 3! I am not affiliated with them, but my thought process around the CE application consolidated my thoughts on the topic enough that I was ready to write this post.
Obviously this is a fantastic idea with zero flaws in any way, but I’d love to see it fleshed out a bit more. For instance, let’s say I know a bright young undergraduate with high levels of aggressiveness—how would I encourage them to test their personal fit for this cause area?
In order to avoid timezone problems that may fragment the EA community, I propose everyone born on the last or first day of a cause boundary simply swap causes every six months when DST comes around.
Since searching for jobs every six months might be rather inconvenient for you Luke, I would advise using your leadership experience to found a startup that switches priorities biannually instead.
I highly recommend Duck as an advisor. Duck is very empathic, non-judgmental, and a good listener. On top of that, Duck is a master of the ancient art of wu wei. Quite the impressive set of skills!
I was fully expecting this to be an April Fools post based on the title, and became more and more confused as the article progressed, since you were making excellent points throughout!
One thing I’d like to add is that our brains treat scarce time as more valuable and act accordingly. This is my completely unsubstantiated theory for why many people work better under a deadline. Not just “better” as in “more focused”, but “better” as in “I spent 12 hours scrambling to put this together today, and this somehow turned out better than it would have if I’d spent two hours a day for a week.”
A strategy I have occasionally used when I have difficulty motivating myself to work is to actually limit the amount of work I do on that day. “Okay Jay, you’ve gotten no real work done on this tough problem, and it’s 1 pm now. Here’s the deal—you’re only allowed to work on this problem until 4pm. After that, not only can you stop, I’m insisting that you do. If you want to get some progress done, you’d best get started.”
And you know, it often works. So I think that there’s a second dimension to time value not mentioned in the article—one being “opportunity” (Such as being at an EAG conference) and another being scarcity. These two often go hand-in-hand, but not always. EAG time is both high opportunity and high scarcity. Gym time is high opportunity for exercising, but less scarce since you can always just go again tomorrow.
Hi everyone. My name is Jay, and I’m a software engineer from Brisbane, Australia. My current goal is to become financially independent so I can devote the rest of my life to whatever cause I deem most worthy without needing to worry about earning a living. I expect to reach this point around 2025. I’ve joined the EA Forum because I want to spend some time in the next few years examining different causes, learning about the EA movement, and figuring out how I can scale my efforts to make a big impact in retirement.
I prefer to focus on concrete problems that are very scalable, and that one can easily contribute to in a small but meaningful way. Thus, the best option I’ve found so far is focusing on extreme global poverty. I also don’t want to fall into the trap of always telling myself I’ll do good later and never getting around to it, so in February of this year, I finally signed the Giving What We Can pledge.