I had sort of the same reaction. To me, “doing the most good” is something I live by. I don’t identify as EA, but as altruistic.
I find it sort of interesting when people refer to the “EA community” and make efforts to change it. I mean, they’re not wrong, but from another perspective, the “EA community” is almost an overgeneralization. Like for instance there are animal rights activists, longtermists, and climate change activists all getting to know each other through EA. There are going to be toxic people or cliques, and it’s sort of weird to say “EA is ____” when plenty of people “within EA” have never met each other and have nothing to do with each other.
Just some thoughts. I don’t disagree with the original post.
Max Pietsch
[Question] What are people’s thoughts on working for DeepMind as a general software engineer?
Cool, thanks for your thoughts, KevinO. Those are good points.
I totally agree that EA can be too elitist, and yes it can arise from a rationalist mindset. You made some good points about how it’s not really a grassroots movement, either.
I find it can be helpful to just not identify as EA and instead identify as someone who wants to help others. Then it’s just like, who gives a shit about what happens in EA—you can remain true to who you are at your core, which is someone who wants to help others. I’d rather internalize “I want to help others” than “I am someone in EA”. The former is not elitist—anyone can try to help others, whereas the latter might be elitist and is also unstable if someone shitty becomes influential in EA (like SBF).
I do hope you continue to want to help others. Glad to have you in the ‘helping others’ community, still. Thanks for the post—it rang true.
[Question] Help me recommend effective charities to people who want to donate to specific causes and populations
Yeah it’d be cool if @Henrik Karlsson and team could come up with a way to defend against social bubbles while still having a trust mechanic. Is there some way to model social bubbles and show that eigenkarma or some other mechanism could prevent social bubbles but still have the property of trusting people who deserve trust?
For instance, maybe users of the social network are shown content from anyone who has trust, where 'trust' is universal throughout the community rather than something you only have relative to people you're connected to. Would that prevent the social bubble problem while still allowing users to filter out low-quality content from untrusted users?
I personally would want them to factor the problem of social bubbles into their model and figure out some way of preventing that while still building up ‘trust points’.
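As a minimal sketch of what a "universal trust" score could look like, here's an EigenTrust-style power iteration over pairwise trust ratings. This is my assumption about how an eigenkarma-like mechanism might work, not Henrik Karlsson's actual implementation; the trust matrix and damping factor are illustrative.

```python
import numpy as np

# Hypothetical pairwise trust: T[i][j] = how much user i trusts user j.
# Each row is normalized so a user's outgoing trust sums to 1.
T = np.array([
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
])

def global_trust(T, damping=0.85, iters=100):
    """Power-iterate to a single community-wide trust score per user.

    Mixing in a uniform prior (the 1 - damping term) means trust leaks
    across the whole network, so a tight clique that only rates itself
    can't hoard all the trust -- one possible defense against bubbles.
    """
    n = T.shape[0]
    t = np.full(n, 1.0 / n)          # start everyone equal
    for _ in range(iters):
        t = damping * (T.T @ t) + (1 - damping) / n
    return t / t.sum()               # normalize to a distribution

print(global_trust(T))
```

The uniform-prior term is the interesting design lever here: turning damping up makes scores depend more on who trusts whom (more bubble-prone), while turning it down flattens everyone toward equal trust.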
>Have you considered not spending time on those questions if you expect you can’t find any good answers?
I’m not spending much time on them. I have to sort through the less-easy-to-answer questions in order to find the more-easy-to-answer ones. I am spending time on the overall project, but it would be a mistake to extrapolate from these posts to all of the other questions I’m seeing. I’m posting about these specifically because they are less easy to answer; I’m not asking for advice on the easy-to-answer questions because I can already generate good answers to those.

Is the overall project worth the time? That’s hard for any of us to answer about our own work. I am currently trying to collect some data on how much my responses have changed people’s minds, but it takes work to find that out.
>this… comes off… as a little bit coercive.

There’s always a balance between being pushy and not saying enough when giving advice. It feels appropriate that I’m giving people advice on topics they’ve asked for advice on. I wrote “steer… towards”, which is phrasing you might associate with a manager or captain directing people. Perhaps “let them know” would have been more apt. What I’m doing is more giving information than making the decision for people.
Ok, got it. Yeah these particular questions I may have to ignore if I can’t come up with better answers.
I’ve heard from someone that Open Phil-sponsored companies are now doing essentially what you suggest. If you look at, for example, Anthropic’s job board, you can see one of their benefits is, “Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.” By donating equity they avoid income taxes, and perhaps there are other tax implications of donating equity instead of cash (I’m not an expert).
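To see how much that match compounds a donation, here's a rough worked example. Note my reading of the benefit is an assumption: I'm taking "up to 50% of your equity grant" to mean the employer matches only the first 50% of the grant that you donate. The numbers are illustrative, not tax advice.

```python
def matched_donation(equity_grant, donate_frac, match_ratio=3.0, cap_frac=0.5):
    """Total reaching charity when the employer matches donated equity
    at match_ratio:1, matching at most cap_frac of the full grant.
    (Interpretation of the cap is an assumption, not Anthropic's wording.)"""
    donated = equity_grant * donate_frac
    matched = match_ratio * min(donated, equity_grant * cap_frac)
    return donated + matched

# e.g. a $100k grant, donating half of it:
print(matched_donation(100_000, 0.5))  # 50k donated + 150k match -> 200000.0
```

Under this reading, donating 50% of a grant quadruples the amount the charity receives, which is a strong argument for the earning-to-give math at these companies.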
[Question] How effective are the “Best Charities by Cause” organizations recommended by Charity Navigator?
I mean, it’s a better path than copywriting. Is that the other choice? Yeah software engineering is not a bad choice.
Ah, gotcha. That plan makes sense then.
Are there enough EAs that we could form a voting bloc with enough people to sway an election? Would you vote for a politician whom we got together and decided is the best, in order to advance the goals of human welfare, animal welfare, and longtermism?
[Question] Does it make sense for straight men to not move to the Bay Area or Seattle due to the ratio of men and women there?
I remember reading about a charity which is trying to change the way science is done. Something about science not being as focused on publication count, and scientists having more freedom to pursue what matters. I can’t for the life of me remember the name. Do you know any charity like this, that’s trying to change how science is conducted?
We should be donating more frequently so we’re happier and feel more encouraged to donate
We know we get some happiness and fulfillment from donating money to a cause we care about (for instance, see https://www.science.org/content/article/secret-happiness-giving). If we could get even more joy from donating the same amount of money, it would make us happier (benefiting ourselves) and encourage us to keep giving more (benefiting others).
To me, there’s a huge difference between donating $10,000 at once to a single charity and donating $100 one hundred times to different charities. Our brains aren’t great at telling the difference between saving 1,000,000 lives and 10,000,000 lives, even though those outcomes are hugely different. Similarly, is there research showing that the number of times we donate matters more for our brains remembering the feeling of happiness than the amount we donate each time? If there is this kind of research, more people should be talking about it, because it could make a big difference for people who are earning to give.