Why do you need to justify something to yourself? You can do whatever you want.
sapphire
Justify to who? God? Your mom?
I’m quite left-wing by Manifest standards. I’m probably extremely pro-woke even by EA standards. I had a great time at LessOnline/Summer Camp/Manifest. I honestly tried to avoid politics. Unlike many people I don’t actually like arguing; I’d prefer to collaborate and learn from other people. (Though I feel somewhat ‘responsible for’ and ‘invested in’ EA, so I find it hard not to argue about that particular topic.) I mostly tried to talk to people about finance, health, and prediction markets. It was honestly super fun and easy. People didn’t force me to discuss politics.
Though I must say it was probably a mistake to bring my girlfriend to manifest. I think she got freaked out. Probably wasn’t good for our relationship.
Emile seems to donate quite a bit:
“I’m passionate about alleviating global poverty, and have pledged to give away everything I earn over $40,000 a year. In December 2022, I started a fundraiser with Nathan Young, an Effective Altruist, that raised more than $321,000 for the charity Give Directly.”—https://www.xriskology.com/
I’m also quite critical of EA and have donated more than most EAs (both in absolute and percentage terms).
Even annoying critics may be quite sincere.
I donated a lot. Both in absolute and percentage terms. I gave a percentage many times higher than even most well off EAs. I think it would have been selfish to just keep the money. But I don’t have any particularly great feelings about how I donated. ‘Things are complicated’ can be an applause light. Sometimes things aren’t all that complicated. But this topic sure is. Saying ‘those who criticize the movement as a whole are deeply intellectually unserious’ just seems unserious to me. The movement has a lot of structural problems. Both ‘extremely positive’ and ‘extremely negative’ impacts seem very plausible to me. Probability is distributed over a very wide range.
I’m not sure what normal community members can really do. Decision making is incredibly centralized. But surely we have some responsibility to be serious about downsides; the existence of annoying critics does not absolve us. Though we also have a responsibility not to be overly negative. Or maybe the real answer is we don’t have much of either responsibility, since two people have almost all the power. But that suggests maybe we should take our talents elsewhere.
Imo full enlightenment really means, or should mean, no suffering. There is no necessary suffering anyway. The Buddha, or at least the classic teaching, is pretty clear if you ask me. One can debate how to translate the noble truths, but it’s pretty clear to me the fourth one says suffering can be completely overcome.
FWIW you can get much faster progress combining meditation with psychedelics. Though as the Buddha said, you must investigate for yourself; don’t take anyone’s word for spiritual truth. Also, enlightenment absolutely does make you better at most stuff, including partial enlightenment. People just say ‘you can suffer and be enlightened’ and ‘enlightenment doesn’t make you better at things’ because they either want to feel accomplished or be accomplished. The Buddha sought the highest star; he was never satisfied by the teachers of his time. Let us emulate him by seeking only the highest star. In fact, let’s not settle for merely copying his methods. The original Sangha didn’t even have LSD; we can do one better.
There are a lot of possible answers to where thoughts come from and which thoughts are useful. One charitable thought: some elite EAs tried to do things that were all of: hard, extremely costly if you fuck them up, and beyond what they were able to achieve given the difficulty. I have definitely updated a lot toward trying things that are very crazy but at least obviously only hurt me (or people who follow my example, but those people made their own choice). Fail gracefully. If you don’t know how competent you are, make sure not to mess things up for other people. There is a lot of ‘theater’ around this, but most people don’t internalize what it really means.
The people who initially set up GiveWell, did the research, and convinced Dustin to donate his money did a truly amazing job. AFAICT the people who currently run GiveWell are doing a good job. A large fraction of the total good EA has done is due to their work.
But I don’t think it’s a good idea to frame things as though there’s a bunch of elite EAs whose work is of superb quality. The EA leadership has fucked up a bunch of stuff. Many ‘elite EAs’ were not part of the parts of EA that went well. Many were involved in the parts of EA that went quite poorly.
If you are a true altruist, you should really reconsider whether you even want to trust the leadership and work under their direction. Maybe you should work at a different sort of charity, or get funding from ‘someone who doesn’t ultimately get their money from GiveWell’. Unless you really fit in well with the ‘elite EAs’, doing that is likely to be more fun.
‘Think for yourself about how to make the world better and then do it (assuming it’s not insane)’ is probably going to be both better for you and better for the world.
I don’t think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement, but ideology is a different domain from most. I am honestly quite invested in something like ‘moral progress’. It’s a bit of a naive position to have to defend philosophically, but I think most altruists hold it too, at least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincerely changes their ways, I’m happy to call them brother or sister. Have a party. Slaughter the uhhhhh fattest pumpkin and make vegan pumpkin pie.
However, Mr. Hanania is still quite racist. He may or may not be more of a Nazi than he lets on, but even his professed views are quite bad. I’m not sure what the policy should be on cooperating with people with opposing value sets, or on Hanania himself. I just wanted to say something in support of being truly welcoming to anyone who real-deal rejects their past harmful ideology.
Not to state the obvious, but the ‘criticism of EA’ posts didn’t pose a real risk to the power structure. It is uhhhhh quite common for ‘criticism’ to be a lot more encouraged/tolerated when it isn’t threatening.
EA’s meta-strategy is ‘simp for tech billionaires and AI companies’. EA systematically attracts people who enjoy this strategy. So no, it does not attract the best people. Maybe a version of EA with more integrity would.
I’m not trying to get dignity points. I’m just trying to have a positive impact. At this point, if AI is hard to align, we all die (or worse!). I spent years trying to avoid contributing to the problem and helping where I could. But at this point it’s better to just hope alignment isn’t that hard (lost-cause timelines) and try to steer the trajectory positively.
IME you can induce much more torture than a tattoo relatively safely. Though all the best ‘safe’ forms of torture do cause short-term damage to the skin.
Just Pivot to AI: The secret is out
At this point, unless you are very talented and/or working at Anthropic/OpenAI/DeepMind, I don’t see much reason to avoid working in AI. The timeline is already burnt. The people who burnt it, often in the name of altruism, should be ashamed. But at some point the benefits of trying to do good things with a dangerous technology outweigh the downsides of accelerating progress. Prior to ~now it was quite bad to work on AI in more or less any capacity. But the train is leaving the station anyway. Marginal impacts are now smaller than the plausible positive impact of using the tech for good. Accelerating AI was an incredibly dumb strategy, but at this point we might as well play to the out where alignment isn’t that hard.
I mean ‘at what income do GWWC pledgers actually start donating 10%+?’. Or more precisely: ‘consider the set of GWWC pledge takers who make at least X per year; for what value of X is the mean donation at least X/10?’ The value of X you get is around one million per year. Donations are of course even lower for people who didn’t take the pledge! Giving 10% when you make one million PER YEAR is not a very big ask. You will notice EAs making large, but not absurd, salaries, like 100-200K, give around 5%. Some EAs are extremely altruistic, but the average EA isn’t that altruistic imo.
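The threshold criterion above can be sketched in a few lines of code. The dataset below is entirely hypothetical, made up only to illustrate the computation; the real figures come from surveys of GWWC pledgers, not from this sketch.

```python
# Find the smallest income X such that, among pledgers earning at least X,
# the mean donation is at least X / 10.
# NOTE: the (income, donation) pairs below are hypothetical illustration data.

pledgers = (
    [(200_000, 5_000)] * 10      # many moderate earners donating ~2.5%
    + [(500_000, 15_000)] * 3    # a few high earners donating 3%
    + [(1_000_000, 150_000)]     # one very high earner donating 15%
)

def donation_threshold(data, candidates):
    """Return the smallest candidate X where earners >= X donate >= X/10 on average."""
    for x in sorted(candidates):
        cohort = [donation for income, donation in data if income >= x]
        if cohort and sum(cohort) / len(cohort) >= x / 10:
            return x
    return None  # no candidate satisfies the criterion

print(donation_threshold(pledgers, [income for income, _ in pledgers]))
```

With this made-up distribution the threshold lands at the single million-earner: the mean donation among everyone earning 200K+ is only 17.5K (under 20K), and among 500K+ earners it is 48.75K (under 50K), so only the X = 1,000,000 cohort clears X/10.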
I agree with the thrust of the argument, but I think it’s a little too pessimistic. A lot of EAs aren’t especially altruistic people. Tons of EAs got involved because of x-risk, and it requires very little altruism to care about whether you and everyone you know will die. You can look at the data on EA donations and notice they aren’t that high. EAs don’t donate 10% until they have a pre-tax income of around one million dollars per year!
I would encourage anyone reading this to remember [Qhapna] has truly horrible opinions on what is a reasonable way to treat people. He was literally one of the last rationalists defending Brent. Neither his personal behavior nor his habit of defending abusers has improved over time. I’d recommend reading something I wrote some time ago:
Terrible judgment, a habit of feeling oppressed, and extreme arrogance is a very toxic combination.
Criticism of whom? If anything, EAs have been far too trusting of their actual leaders. Conversely, they have been far too critical of people like Holly. It’s not a simple matter of some parameter being too high.
Holden is married to Dario Amodei’s sister. Dario is a founder of Anthropic. Holden was a major driver of EA AI policy.
Dustin is a literal billionaire who, along with his wife, has control over almost all EA institutions. Being critical of Dustin, while at all relying on EA funding or support, is certainly brave. Open Phil is known to be quite capricious. If anything, the EA community was far too trusting of its leaders and funders. Dustin has tons of ties, including financial ones, to the AI industry.
These serious conflicts explain a lot of why EA took such a strange approach to AI policy.
However, criticizing random EAs who are trying to do a good job is completely demotivating. There needs to be some sense of proportionality. I remember being asked about the potential downsides of my project when I applied to the Future Fund. There were concerns about what seemed, to me, extremely unlikely outcomes. It is very funny looking back, given that FTX was at that time running a gigantic fraud. Criticism of the locally powerful is undersupplied. Criticism of random people is very oversupplied.