Could this be an unusually good time to Earn To Give?
I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection.
I base the argument on a few key assumptions, listed below. Each of these could be debated in their own right, but I would prefer to keep any discussion of them outside this post and its comments. This is for brevity and because my reason for making them is largely a deferral to people better informed on the subject than I. The Intelligence Curse by Luke Drago is a good backdrop for this.
Whether or not we see AGI or Superintelligence, AI will have significantly reduced the availability of white-collar jobs by 2030, and will only continue to reduce this availability.
AI will eventually drive an enormous increase in world GDP.
The combination of these will produce a severity of wealth inequality that is both unprecedented and near-totally locked-in.
If AI advances cause white-collar human workers to become redundant by outperforming them at lower cost, we are living in a dwindling window in which one can determine their financial destiny. Government action and philanthropy notwithstanding, one's assets may not grow appreciably again once their labour has become replaceable. An even shorter window may be available for starting new professions, as entry-level jobs are likely the easiest to automate and companies will find it easier to stop hiring people than start firing them.
That this may be the fate of much of humanity in the not-too-distant future seems really bleak. While my ear is not the closest to the ground on all things AI, my intuition is that humanity will not have the collective wisdom to restructure society in time to prevent this leading to a technocratic feudal hierarchy. Frankly, I'm alarmed that having engaged with EA consistently for 7+ years I've only heard discussion of this very recently. Furthermore, the Trump Administration has proven itself willing to use America's economic and military superiority to pressure other states into arguably exploitative deals (tariffs, offering Ukraine security guarantees in exchange for mineral resources) and shed altruistic commitments (foreign aid). My assumption is that if this Administration, or a similar successor, oversaw the unveiling of workplace-changing AI, the furthest it would cast its moral circle would be American citizens. Those in other countries may have very unclear routes to income.
Should this scenario come to pass, altruistic individuals who had bought shares in the companies driving this economic explosion before it happened could do disproportionate good. The number of actors able to steer the course of the future at all will have shrunk by orders of magnitude, and I would predict that most of them will be more consumed by their rivalries than by any desire to help others. Others have pointed out that this was generally the case in medieval feudal systems. Depending on the scale of investment, even a single such person could save dozens, hundreds, or even thousands of other people from destitution. If that person possessed charisma or political aptitude, their influence over other asset owners could improve the lives of a great many. Given that being immensely wealthy leaves many doors open for conventional Earning To Give if this scenario doesn't come to pass (and I would advocate for donating at least 10% of income along the way), it seems sensible to me for an EA to aggressively pursue their own wealth in the short term.
If one has a clear career path for helping solve the alignment problem or achieve the governance policies required to bring transformative AI into the world for the benefit of all, I unequivocally endorse pursuing those careers as a priority. These considerations are for those without such a clear path. I will now use my own circumstances as a vignette, both to provide a concrete example and because I genuinely want advice!
I have spent 4 years serving as a military officer. My friend works at a top financial services firm, which has a demonstrable preference for hiring ex-military personnel. He can think of salient examples of people being hired for jobs that pay £250k/year with CVs very arguably weaker, in both military and academic terms, than mine. With my friend's help, it is plausible that I could secure such a position. I am confident that I would not suffer more than trivial value drift while earning this wage, or on becoming ludicrously wealthy thereafter, based on concrete examples in which I upheld my ethics despite significant temptation not to. I am also confident that I have demonstrated sufficient resilience in my current profession to handle life as a trader, at least for a while. With much less confidence, I feel that I would be at least average in my ability to influence other wealthy people to buy into altruistic ideals.
My main alternative is to seek mid to senior operations management roles at EA and adjacent organisations with a longtermist focus. I won't labour why I think these roles would be valuable, nor do I mean to diminish the contributions that can be made in such roles. This theory of impact does, of course, rely heavily on the org I join delivering impactful results; money can almost certainly buy results, but of a fundamentally more limited nature.
So, should one such as I Earn To Invest And Then Give, or work on pressing problems directly?
I think this effect is completely overshadowed by the fact that, if what you are saying is true, we have 5-10 years on the technical alignment/governance of AI to get things to go well.
Now is the time to donate and work on AI safety stuff. Not to get rich and donate to it later in the hope that things work out.
I'm sympathetic to this point and stress that my argument above only applies if one is relatively optimistic about solving alignment and relatively pessimistic about these governance/policy problems. I don't think I'm informed enough to be optimistic on alignment, but I do feel very pessimistic about preventing immense wealth inequality. The amount of coordination required between so many actors for this not to be the default seems unachievable to me.
This may be available elsewhere and I accept that I might not have looked hard enough, but are there impactful, funding-constrained donation opportunities to solve these problems?
The other two things I want to point out are:
It's very tempting to be biased towards "the thing I should be doing is making money". I've seen a shocking number of E2Gers that don't seem to do much giving, particularly in AI safety. There should be a small corrective bias against concluding that the thing you should be doing is making money and investing it to earn more money. That looks a lot like selfish non-impact.
£250k/year, after taxes and expenses, just isn't that much to donate. I think in the UK (where the £250k/year would be paid) it would incur income tax of ~35-40% depending on deductions; let's call it £95k. After say £45k/year in personal expenses (more if you have a family), we are talking about £110k/year. Invested or not, this just isn't that much money to move the needle on AI safety by enough to write home about. AI governance organizations would very happily spend that on a very good mid to senior operations manager or another role; these orgs spend £110k/year like it's nothing.
Re. 2, that maths is the right ballpark if trying to save, but if donating, I do want to remind people that UK donations are tax-deductible and this deduction is not limited the way I gather it is in some countries like the US.
So you wouldn't be paying £95k in taxes if donating a large fraction of £250k/yr. Doing quick calcs, if living off £45k then the split ends up being something like:
Income: 250k
Donations: 185k
Tax: 20k
Personal: 45k
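For anyone wanting to sanity-check that split, here is a minimal sketch of the calculation. It is my own reconstruction: it assumes donations are fully deductible from taxable income (as with Payroll Giving rather than Gift Aid), uses approximate 2024/25 UK income tax bands, and ignores National Insurance, so treat the output as illustrative only.

```python
# Minimal sketch of the donation/tax split above, under my own simplifying
# assumptions: donations fully deductible from taxable income, approximate
# 2024/25 UK income tax bands, no National Insurance, no Gift Aid gross-up.

def uk_income_tax(taxable: float) -> float:
    """Simplified UK income tax: tapered personal allowance, 20/40/45% bands."""
    allowance = 12_570.0
    if taxable > 100_000:  # personal allowance tapers away above £100k
        allowance = max(0.0, allowance - (taxable - 100_000) / 2)
    t = max(0.0, taxable - allowance)
    tax = 0.20 * min(t, 37_700)
    tax += 0.40 * max(0.0, min(t, 125_140 - allowance) - 37_700)
    tax += 0.45 * max(0.0, t - (125_140 - allowance))
    return tax

income, personal_spend = 250_000, 45_000

# Solve income = donations + tax + personal spend by fixed-point iteration.
donations = 0.0
for _ in range(50):
    tax = uk_income_tax(income - donations)
    donations = income - personal_spend - tax

print(f"Donations ~ £{donations:,.0f}, tax ~ £{tax:,.0f}")
# Prints roughly £196k donated and £9k tax under these simplifications; adding
# National Insurance back in lands near the 185k / 20k split quoted above.
```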
(I agree with the spirit of your points.)
£110k seems like it would probably be impactful, and that's just one person giving, right? That's probably at least one FTE. Also SERI MATS only costs about ~£500k per year, so it could be expanded substantially with that amount.
This is generally less than one FTE for an AI safety organization. Remember, there are other costs than just salary.
MATS is spending far more than £500k/year. I don't know how accurate it is, but it looks like they might have spent ~$4.65MM. I'm happy to be corrected, but I think my figure is more accurate.
Some simplifying assumptions:
£50k starting net worth
Only employed for the next 4 years
£300k salary, £150k after tax, £110k after personal consumption
10% interest on your savings for 4 years
Around £635k at end of 2030
This is only slightly more than the average net worth for UK 55 to 64 year olds.
Overall, if this plan worked out near perfectly, it would place you in around the 92nd percentile of wealth in the UK.
This would put you in a good, but not great, position to invest to give.
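For transparency, here is one way to arrive at that ~£635k figure. This is a reconstruction under the assumptions listed above, with the additional assumption (mine) that the £110k of savings is added at the start of each year before compounding.

```python
# Reproducing the ~£635k projection: £50k starting net worth plus £110k saved
# at the start of each of 4 years, compounding at 10% per year.
net_worth = 50_000        # assumed starting net worth
annual_savings = 110_000  # salary after tax and personal consumption
growth = 1.10             # assumed investment return

for _ in range(4):        # four years of employment, to end of 2030
    net_worth = (net_worth + annual_savings) * growth

print(f"Projected net worth: £{net_worth:,.0f}")  # ~£635k, matching the figure above
```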
Overall it seems to me as if you're trying to speedrun getting incredibly wealthy in 4 years. This is generally not possible with salaried work (the assumptions above put you around the 99-99.5 percentile of salaries), but might be more feasible through entrepreneurship.
Some other considerations:
Working in such a high-paying job, even in financial services, will probably not allow you to study and practice investing. You will not be an expert on AI investing, or investing in general, in 2030, which would be a problem if you believe such expertise is necessary for you to invest to give.
Quite a lot of EAs will be richer than this in 2030. My rough guess is more than 500. Your position might be useful but is likely to be far from unique.
You might want to think through your uncertainties about how useful money will be to achieve your goals in 2030-2040. If there are no more white-collar jobs in 2030, then by 2035 the world might be very weird and confusing.
If there is a massive increase of overall wealth in 2030-2040 due to fast technological progress, a lot of problems you might care about will get solved by non-EAs. Charity is a luxury good for the rich; more people will be rich, and charity on average solves many more problems than it creates.
Technological progress itself will potentially solve a lot of the problems you care about.
(Also agree with Marcus's point.)
The way I understood his post was that even a few hundred thousand or a few million dollars, if invested pre-explosive growth, might become astronomical wealth post-explosive growth. Whereas people without those investments may have nothing due to labor displacement. Which is an interesting theory?
Maybe we need a hedge fund for EAs to invest in AI lol, though that would create hairy conflicts of interest!
That was the point I had meant to convey, Aaron. Thanks for clarifying that.
This seems like an important critique, Tobias, and I thank you for it. It was a useful readjustment to realise I wouldn't be exceptionally wealthy for doing this in either society at large or the EA community. My sense is still that even being in the 92nd percentile of the UK going into this would be really valuable. Not world-changing valuable, but life-changing for many. That everything might get solved by technology and richer people is plausible, given the challenges in predicting how the future will pan out. I see this strategy mainly as a backstop to mitigate the awfulness of the most S-risk-intensive ways this could go.
(Thanks for providing lots of details in the post. Standard disclaimer that you know the most about your strengths/weaknesses, likes/dislikes, core values, etc.)
I recommend going for the job. It sounds like you have a uniquely good chance at getting it, and otherwise I'd assume it'd go to someone who wasn't going to donate a lot of the salary.
After you get the job, I'd recommend thinking/reading/discussing a lot about the best way and time to give.
Regarding:
> This may not be a claim that I would stand by upon reflection.
> my reason for making them is largely a deferral to people better informed on the subject than I
You say you're not currently an expert, but I'd guess it wouldn't take so long (100 hours, so maybe a few months of weekends) for you to become an expert in the specific questions that determine when and how you should donate. Questions like:
- When will we develop superintelligence?
- Given that we do, how likely are humans to stay in control?
- Given that we stay in control, what would the economy look like?
- Given that the future economy looks like [something], whatâs the most impactful time and way to donate?
- Wild guess that I haven't thought about much: even if you'd be much richer in the future because the stock market will go up a lot, maybe it's still better to donate all you can to AMF now. Reasoning: you can't help someone in the future if they died of malaria before the AI makes the perfect malaria vaccine.
Whatever your final beliefs are, having the high-paying job allows you to have a large impact.
It looks like the other path you're considering is "mid to senior operations management roles at EA". I would guess you could give enough money to EA orgs so they could hire enough ops people to do more work than you could have done directly (but maybe there's some kind of EA ops work where you have a special hard-to-buy talent?)
Thanks for the input, Theodore!
I agree that my chances of getting a trader role are higher than average and whoever would get the job instead is almost certainly not going to donate appreciable sums. Naturally, I would devote a very large amount of time and energy to the decision of how to give away this money.
I'm very sceptical about my ability to become an "expert" on these questions surrounding AI. This is largely based on my belief that my most crippling flaw is a lack of curiosity, but I also doubt that anyone could come up with robust predictions on these questions through casual research inside a year.
My intuition is strongly in the other direction regarding donating to AMF now (with the caveat that I have been donating to GiveWell's top charity portfolio for years). I don't have strong credence on how the cost of a DALY will change in the future, but I am confident it won't increase by a greater percentage than well-chosen investments will grow. It is a tragedy that anyone dies before medicine advances to the point of saving them, but we must triage our giving opportunities.
I'd never been convinced that Earning To Give in the conventional sense would be a more impactful career for me than operations management work. My social network (which could be biased) consistently implies the EA community has a shortage of management talent. A large amount of money is already being thrown at solving this problem, particularly in the Bay Area and London.
I don't really follow why one set of entities getting AGI and not sharing it should necessarily lead to widespread destitution.
Suppose A, B and C are currently working and trading between each other. A develops AGI and leaves B and C to themselves. Would B and C now just starve? Why would that necessarily happen? If they are still able to work as before, they can do that and trade with each other. They would become a bit poorer due to needing to replace the goods that A had a comparative advantage in producing I guess.
For B and C to be made destitute directly, it would seem to require that they are prevented from working at anything like their previous productivity, e.g. if A were providing something essential and irreplaceable for B and C (maybe software products if A is techy?) or if A's AGI went and pushed B and C off a large fraction of natural resources. It doesn't seem very likely to me that B and C couldn't mostly replace what A provided (e.g. with current open-source software). For A to push B and C off a large enough amount of resources, when the AGI has presumably already made A very rich, would require A to be more selfish and cruel than I hope is likely, but it's unfortunately not unthinkable.
Of course there would probably still be hugely more inequality, but that doesn't imply B and C are destitute.
I could imagine there being indirect large harms on B and C if their drop in productivity were large enough to create a depression, with financial system feedbacks amplifying the effects.
In any case, the picture you paint seems to require an additional reason that B and C cannot produce the things they need for themselves.
Have you read the Intelligence Curse, linked at the beginning of this post? It explains the case for this better than I would.
I had a look, it seems to presume the AI-owners will control all the resources, but this doesnât seem like a given (though it may pan out that way).
I realise you said you didn't want to debate these assumptions, but just wanted to point out that the picture painted doesn't seem inevitable.
Executive summary: Given the potential for AI-driven economic upheaval and locked-in wealth inequality, now may be an unusually good time to prioritize Earning To Give, especially for those with lucrative career prospects, so they can later redistribute wealth in a way that mitigates future harms.
Key points:
AI is likely to significantly reduce white-collar job availability by 2030 while also driving enormous GDP growth, leading to unprecedented and entrenched wealth inequality.
Those who accumulate wealth before their labor becomes replaceable may have a unique opportunity to do significant good, as future redistribution mechanisms could be limited.
If AI-induced economic concentration leads to a "technocratic feudal hierarchy", wealthy altruists could become rare actors capable of steering resources toward helping the destitute.
The geopolitical implications of AI-driven economic shifts may further restrict wealth distribution, particularly under nationalistic policies that prioritize domestic citizens over global needs.
While directly working on AI alignment or governance remains a higher priority, individuals without a clear path in those areas might do more good by aggressively pursuing wealth now to give later.
The author personally considers shifting from a military career to high-earning finance roles, weighing whether Earning To Give would be more impactful than working in longtermist EA organizations.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.