For EA folks in tech, I’m still giving mock interviews. I’m bumping this into quick takes because my post is several years old, and I don’t advertise it well.
There are a lot of ‘lurkers’, but fewer than 30 folks would be involved in the yearly holiday matching thread and sheet. Every self-professed EA I talked to at Google was involved in those campaigns, so I think that covers the most involved US Googlers.
Most people donated closer to 5-10% than to Jeff’s or Oliver’s much higher amounts; that much is certainly true.
So I think both your explanations are true: there are not that many EAs at Google (although I don’t think that’s surprising), and most donate much less than they likely could. I put myself in that bucket, as I donated around 20% but likely could have done close to twice that. It would have been hard for me to do that in recent years, though, as I switched to Waymo, where I can’t sell my stock.
RE: why aren’t there as many EAs giving this much money: I’m (obviously) not Jeff, but I was at Alphabet for many of the years Jeff was. Relevantly, I was also involved in the yearly donation matching campaigns. There were around 2-3 other folks who donated amounts similar to Jeff’s. Those four-ish people accounted for the majority of EA matching funds at Alphabet.
It’s hard to be sure how many people actually donated outside of giving campaigns, so this might undercount things. But to get to 1k EAs donating this much money, you’d need like 300 companies with similarly sized EA contingents. I don’t think there are 300 companies with as large of a (wealthy) EA contingent as Alphabet, so the fact that Jeff was a strong outlier at Google explains most of this to me.

I think that there are only like 5k individuals as committed to EA as Jeff and his wife are. And making as much money as they did is fairly rare, especially when you consider the likelihood of super committed folks going into direct work.
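As a rough sketch of that arithmetic (the per-company donor count and the 1k target are my own illustrative assumptions, not measured figures):

```python
# Back-of-envelope estimate: how many Alphabet-sized EA contingents would it
# take to reach ~1,000 EAs donating at this level? Assumed numbers only.
large_donors_per_company = 3.5   # roughly Jeff plus the 2-3 others mentioned above
target_large_donors = 1000       # hypothetical "1k EAs donating this much"

companies_needed = target_large_donors / large_donors_per_company
print(round(companies_needed))   # ~286, i.e. on the order of 300 companies
```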
Legal or constitutional infeasibility does not always prevent executive orders from being applied (or followed). I feel like the US president declaring a state of emergency related to AI catastrophic risk (and then forcing large AI companies to stop training large models) sounds at least as constitutionally viable as the attempted executive action on student loan forgiveness.
I agree that this seems fairly unlikely to happen in practice though.
At the time I’m posting this, there are three duplicates of this post:
https://forum.effectivealtruism.org/posts/ycCXDofXr89DyxKqm/wealth-redistribution-are-we-on-the-same-page-1
https://forum.effectivealtruism.org/posts/3Gp3mKF4mg3aXqEKC/wealth-redistribution-are-we-on-the-same-page-5
https://forum.effectivealtruism.org/posts/7iAkFkaj7cHJbMZWd/wealth-redistribution-are-we-on-the-same-page-6
While this is in some ways poetic for a post asking whether or not we are all on the same page, I’m guessing you want to delete the duplicates.
I deeply appreciate the degree to which this comment acknowledges issues and provides alternative organizations that may be better in specific respects. It has given me substantial respect for LTFF.
This feels like a “be the change you want to see in the world” moment. If you want such an event, it seems like you could basically just make a forum post (or quick take) offering 1:1s?
I think that basically all of these are being pursued, and many are good ideas. I would be less put off if the post title were ‘More people should work on aligning profit incentives with alignment research’, but suggesting that no one is doing this seems off base.
This is what I got after a few minutes of Googling (I’m not endorsing any of the links beyond noting that they claim to do the thing described).
AI Auditing:
https://www.unite.ai/how-to-perform-an-ai-audit-in-2023/
Model interpretability:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability?view=azureml-api-2
Monitoring and usage:
https://www.walkme.com/lpages/shadow-ai/
Future Endowment Fund sounds a lot like an impact certificate:
https://forum.effectivealtruism.org/posts/4bPjDbxkYMCAdqPCv/manifund-impact-market-mini-grants-round-on-forecasting
I agree that ‘utilitarianism’ often gets elided into meaning a variation of hedonic utilitarianism. I would like to hold philosophical discourse to a higher bar. In particular, once someone mentions hedonic utilitarianism, I’m going to hold them to the standard of separating out hedonic utilitarianism and preference utilitarianism, for example.
I agree hedonic utilitarians exist. I’m just saying the utilitarians I’ve talked to always add more terms than pleasure and suffering to their utility function. Most are preference utilitarians.
I feel like ‘valuism’ is redefining utilitarianism, and the contrasts to utilitarianism don’t seem very convincing. For instance, you define valuism as noticing what you intrinsically value and trying to take effective action to increase that. This seems identical to a utilitarian whose utility function is composed of what they intrinsically value.
I think you might be defining utilitarianism such that utilitarians are only allowed to care about one thing? Which is sort of true, in that utilitarianism generally advocates converting everything into a common scale, but that common scale can measure multiple things. My utility function includes happiness, suffering, beauty, and curiosity as terms. This is totally fine, and a normal part of utilitarian discourse. Most utilitarians I’ve talked to are total preference utilitarians; I’ve never met a pure hedonistic utilitarian.

Likewise, I’m allowed to maintain my happiness and mental health as an instrumental goal for maximizing utility. This doesn’t mean that utilitarianism is wrong, it just means we can’t pretend we can be utility-maximizing soul-less robots. I feel like there is a post on folks realizing this at least every few months. Which makes sense! It’s an important realization!
Also, utilitarianism doesn’t need objective morality any more than any other moral philosophy does, so I didn’t understand your objection there.
This comment came across as unnecessarily aggressive to me.
The original post is a newsletter that seems to be trying to paint everyone in their best light. That’s a nice thing to do! The epistemic status of the post (hype) also feels pretty clear already.
Thank them for the comment, and then link to this thread?
As someone who went through the CEA application process, I wholeheartedly endorse this. I was also really impressed with CEA’s approach to the process, and their surprising willingness to give feedback & advice through it.
[It ended up being a mutually bad fit. I’ve spent my whole career as a C++ backend engineer at a FAANG and I like working in person, and that doesn’t align super well with a small remote-first org that has a lot of frontend needs.]
It feels weird to me to hear that something is terrible to think. It might be terrible that we’re only alive because everyone doesn’t have the option to kill everyone else instantly, but it’s also true. Thinking true thoughts isn’t terrible.
If everyone has a button that could destroy all life on the planet, I feel like it’s unrealistic to expect that button to remain unpressed for more than a few hours. The most misanthropic person on Earth is very, very misanthropic. I’m not confident that many people would press the button, but the whole thing is that it only takes one.
Given that currently people don’t have such a button, it seems easier to think how we can prevent that button from existing, rather than how we could make everyone agree not to press the button. The button is a power no one should have.
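To illustrate why “it only takes one” bites so hard, here is a minimal sketch; the per-person probability is a made-up assumption purely for illustration:

```python
# Even a one-in-a-billion chance per person of pressing the button makes
# universal restraint across ~8 billion people very unlikely.
population = 8_000_000_000
p_press = 1e-9                    # assumed per-person probability (illustrative)

p_nobody_presses = (1 - p_press) ** population
print(f"{p_nobody_presses:.4f}")  # ~0.0003; survival requires everyone to abstain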
If AI + a nontechnical person familiar with business needs can replace me in coding, I expect something resembling a singularity within 5 years.
I think that software engineering is a great career if you have an aptitude for it. It’s also way easier to tell whether you are good at it relative to most other careers (e.g., LeetCode, HackerRank, and other question repositories can help you understand your relative performance).
So my answer is that either AI can’t automate software engineers for a while, or it will automate every career quite soon after software engineering. Maybe 30% of my job is automating other people’s. As a result, software engineering is a pretty good bet as a career.
I’d be curious to hear from folks who can imagine worlds where software engineering is nearly fully automated, and we don’t automate all jobs that decade.
IANAL, but I view ‘effective altruism’ as a term that isn’t owned, and if any organization claims to own it I’m going to ignore them. I expect most folks share my opinion here.
Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.
GiveWell