My biggest mistake was not buying, and holding, crypto early. This was an extremely costly mistake. If I had bought and held, I would have hundreds of millions of dollars that could have been given as grants. I doubt I will ever make such a costly mistake again.
Going to graduate school was a very bad decision too. After 2.5 years I had to take my L and get out. It was very painful to admit I had been wrong but that is life.
The problem is real. Though for ‘normal’ low probabilities I suggest biting the bullet. A practical example is the question of whether to found a company. If you found a startup you will probably fail and make very little or no money. However, right now a majority of effective altruist funding comes from Facebook co-founder Dustin Moskovitz. The tails are very thick.
If you have a high-risk plan with a sufficiently large reward, I suggest going for it even if you are overwhelmingly likely to fail. Taking the risk is the most altruistic thing you can do. Most effective altruists are unwilling to take on the personal risk.
Really cool to learn about Resource Generation. These fellows are hardcore. I promote the following to EA-type people:
-- Donate at least 10% of pre-tax income (I am above this).
-- Be as frugal as you can. Certainly don’t spend more than could be supported by the median income in your city.
-- Once you have at least ~500K net worth, give away all additional income. In my opinion, 500K is enough to fund a lean retirement if you are willing to accept a little risk.
-- If you get a big windfall, I suggest either putting it in a trust or just earmarking it for charity instead of immediately donating the whole thing; your cause prioritization may change. (I regret how I donated a big windfall during the first crypto bull market.)

I don’t think people should have to work if they don’t want to, so I think it’s reasonable to ‘save yourself’. But don’t strive for too much security, and keep your spending lean. I was objectively raised in a far-from-top-10% household and have not received much money from my parents. For example, they contributed zero dollars to my college. But anyone who is able to ‘speedrun to 500K while donating’ (or even seriously consider it) must be very privileged somehow.
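The ‘lean retirement on ~500K’ claim can be sanity-checked with the common 4% safe-withdrawal-rate heuristic. A minimal sketch; the withdrawal rate is an assumption (the ‘4% rule’ is debated and carries some risk, as noted above):

```python
# Rough sanity check of the "lean retirement on ~$500K" claim.
# The 4% safe-withdrawal-rate figure is an assumed heuristic, not a guarantee.

net_worth = 500_000
safe_withdrawal_rate = 0.04  # classic "4% rule"

annual_income = net_worth * safe_withdrawal_rate
print(annual_income)  # 20000.0 per year, i.e. a lean but livable budget
```

At roughly $20K/year of spending, this is lean by design; a more conservative 3% withdrawal rate would imply $15K/year.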
If you actually take my advice seriously it is quite strict. But RG seems a lot more hardcore than that.
From this perspective, a corporate lawyer who went to Harvard is not a class traitor. They are just acting in their own class interests.
I think of the intersectionality/social justice/anti-oppression cluster as being a bit more specific than just ‘progressive’ so I will only discuss the specific cluster. Through activism, I met many people in this cluster. I myself am quite sympathetic to the ideology.
But I have to ask: How do you hold this ideology while attending Harvard Law? From this perspective, Harvard Law is a seat of the existing oppressive power structure and you are choosing to become part of this power structure by attending. The privileges that come from attending Harvard Law are enormous. Harvard Law graduates earn extremely high salaries (even the starting salaries are high) and often end up with very high net worths. Harvard Law is also obviously strongly connected to many parts of the neoliberal capitalist system.
From a certain perspective, being a leftist at Harvard Law can be viewed as trying to become some sort of ‘class traitor’ to the neoliberal elite. This does not seem like the obvious thing to do from a leftist perspective. Much leftist analysis would suggest that it’s much more likely you just end up part of the neoliberal power structure instead of subverting it.
In your experience how do these people resolve the contradiction?
Parasitic wasps may be the most diverse group of animals (beating out beetles). In some environments, a shocking fraction of prey insects are parasitized.
If you value ‘life’ you should probably keep humans around so we can spread life beyond earth. The expected amount of life in the galaxy seems much higher if humans stick around. Imo the other logical position is ‘blow up the sun’. Don’t just take out the humans, take out the wasps too. The earth is full of really horrible suffering and if the humans die out then wasp parasitism will probably go on for hundreds of millions of additional years.
Of course, humans literally spread parasitic wasps as a form of ‘natural’ pest control, so maybe the life spread by humans will be unusually terrible? I suppose one could hold that ‘life on earth is net-good, but life specifically spread by humans will be net-bad’. It is worth noting humans might create huge amounts of digital life. Robin Hanson’s ‘Age of Em’ makes me wonder about their quality of life.
Just killing all humans but leaving the rest of the biosphere intact seems like it’s ‘threading the needle’. Maybe you can clarify more what you are valuing specifically.
Of course, don’t do anything crazy. Give the absurdity heuristic a little respect.
“The 99th percentile probably isn’t good enough either.” If you are more than 99th-percentile talented, maybe you can give yourself a chance to earn a huge amount of money if you are willing to take on risk. Wealth is extremely fat-tailed so this seems potentially worthwhile.
If Dustin had not been a Facebook co-founder, EA would have something like one-third of its current funding. Sam Bankman-Fried strikes me as quite talented. He originally worked at Jane Street and quit to work at a major EA org. Instead, he ended up founding the crypto exchange FTX. FTX is now valued at around a billion dollars. I am quite happy he decided against ‘direct work’.
It seems difficult but not impossible to replace top talent with multiple less talented people at many EA jobs (for example charity evaluation). It seems basically impossible to replace a talented cofounder with a less talented one without decimating your odds of success. However, it is plausible that top talent should directly work on AI issues.
It is also important to note most people are not ‘top talent’ and so they need to follow different advice.
You should take the quant role imo. Optionality is valuable (though not infinitely so). Quant trading gives you vastly more optionality. If trading goes well but you leave the field after five years you will have still gained a large amount of experience and donated/saved a large amount of capital. It’s not unrealistic to try for 500K donated and 500K+ saved in that timeframe, especially since firms think you are unusually talented. If you have five hundred thousand dollars, or more, saved you are no longer very constrained by finances. Five hundred thousand dollars is enough to stochastically save over a hundred lives. There are several high impact EA orgs with a budget of around a million dollars a year (Rethink Priorities comes to mind). If trading goes very well you could personally fund such an org.
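The ‘over a hundred lives’ figure is consistent with GiveWell-style cost-effectiveness estimates in the low thousands of dollars per statistical life saved. A back-of-the-envelope sketch; the exact cost-per-life number is an assumption for illustration:

```python
# Back-of-the-envelope check: how many statistical lives does $500K buy?
# cost_per_life is an assumed GiveWell-style estimate for top charities;
# the true figure varies by charity and year.

donated = 500_000
cost_per_life = 4_500  # assumed dollars per life saved

lives_saved = donated / cost_per_life
print(round(lives_saved))  # roughly 111, i.e. "over a hundred lives"
```

Even doubling the assumed cost per life still leaves the total above fifty lives, so the qualitative point is robust to the exact estimate.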
How are you going to feel if you decide to do the PhD and after five years decide that it was not the best path? You will have left approximately a million dollars and a huge amount of earning potential on the table. You could have been free to work for no compensation if you wanted. You would have been able to bankroll a medium-sized project if you had kept trading.
There are a lot of ways to massively regret turning down the quant job. It is plausible that the situation is so dire that you need to drop other paths and work on AI safety right now. But you need to be confident in a very detailed world model to justify giving up so much optionality. There are a lot of theories on how to do the most good. Stay upstream.
I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure then information has to be made public. Though anyone doing this should be careful.
Several people are discussing allegations of DXE being abusive and/or a cult. I joined in early 2020. I have not personally observed or heard any credible accusations of abusive or abuse enabling behavior by the leadership of DXE during the time I have been a member. It is hard for me to know what happened in 2016 or 2017.
Given my history in the rationality community, you should trust that if I had evidence I could post about systematic abuse within DXE, I would post it. Even if I did not have the consent of victims to share evidence, I would still publicly state that I knew of abuse. I will note it is highly plausible DXE is acting badly behind closed doors. If this becomes clear to me I will certainly let people know.
(This is explicitly not a claim there is no evidence I find concerning. But I think you should be quite critical of most organizations and keep your eyes open for signs of abusive behavior.)
Good point that Open Phil makes all donations public. I found a CSV on their site and added up the donations dated 2018/2019/2020.
2020 so far: $145,405,362
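Anyone wanting to reproduce this can sum the grants CSV by year. A minimal sketch; the column names (`Date`, `Amount`) and dollar formatting are assumptions about the export, and a small in-memory sample stands in for the real file:

```python
# Sketch of summing a grants CSV by year. Column names and formatting
# are assumptions; Open Phil's real export may differ. A small in-memory
# sample stands in for the downloaded file.

import csv
import io
from collections import defaultdict

sample_csv = """Date,Amount
2020-03-01,"$1,000,000"
2020-07-15,"$250,000"
2019-05-10,"$500,000"
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_csv)):
    year = row["Date"][:4]  # assumes ISO-style dates
    amount = float(row["Amount"].replace("$", "").replace(",", ""))
    totals[year] += amount

print(totals["2020"])  # 1250000.0 for this sample
```

For the real file, replace `io.StringIO(sample_csv)` with `open("grants.csv")` (hypothetical filename) and check the actual header row first.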
This is a really useful answer.
https://www.givewell.org/about/impact is something I already found.
I am a member of DXE and have interacted with Wayne. I think if you care about animals, the QALYs gained from his election would be massive. In general Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare, I think Wayne would be very effective at advancing good policies.
Wayne being mayor would result in huge improvements for climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic but that isn’t the most serious issue for a Mayor.
Why do you think orgs labelled ‘effective altruist’ get so much talent applying but those orgs don’t? How big do you think the difference is? I am somewhat informed about the job market in animal advocacy. It does not seem nearly as competitive as the EA job market. But I am not sure of the magnitude of the difference, which matters for the replaceability analysis.