I'm not trying to get dignity points; I'm just trying to have a positive impact. At this point, if AI is hard to align, we all die (or worse!). I spent years trying to avoid contributing to the problem and helping when I could. But at this point it's better to just hope alignment isn't that hard (lost-cause timelines) and try to steer the trajectory positively.
In my experience you can induce much more torture than a tattoo relatively safely. Though all the best 'safe' forms of torture do cause short-term damage to the skin.
Just Pivot to AI: The secret is out
At this point, unless you are very talented and/or working at Anthropic/OpenAI/DeepMind, I don't see much reason to avoid working in AI. The timeline is already burnt. The people who burnt it, often in the name of altruism, should be ashamed. But at some point the benefits of trying to do good things with a dangerous technology outweigh the downsides of accelerating progress. Prior to ~now it was quite bad to work on AI in more or less any capacity. But the train is leaving the station anyway. Marginal impacts are now smaller than the plausible positive impact of using the tech for good. Accelerating AI was an incredibly dumb strategy, but at this point we might as well play to the out where alignment isn't that hard.
I mean 'at what income do GWWC pledgers actually start donating 10%+?'. Or more precisely: 'consider the set of GWWC pledge takers who make at least X per year; for what value of X is the mean donation at least X/10?'. The value of X you get is around one million per year. Donations are of course even lower for people who didn't take the pledge! Giving 10% when you make one million PER YEAR is not a very big ask. You will notice EAs making large, but not absurd, salaries, like 100-200K, give around 5%. Some EAs are extremely altruistic, but the average EA isn't that altruistic imo.
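To make the threshold definition concrete, here is a minimal sketch of the computation. All the data below is made up purely for illustration (the function name and sample figures are my own, not from any GWWC dataset); the point is only to show what 'smallest X such that pledgers earning at least X donate a mean of at least X/10' means operationally.

```python
# Hypothetical illustration of the threshold definition: find the smallest
# income X such that, among pledge takers earning at least X, the mean
# donation is at least X / 10. All records below are invented.

def donation_threshold(records, candidate_incomes):
    """records: list of (income, donation) pairs.
    Returns the smallest candidate X satisfying the condition, or None."""
    for x in sorted(candidate_incomes):
        cohort = [don for (inc, don) in records if inc >= x]
        if cohort and sum(cohort) / len(cohort) >= x / 10:
            return x
    return None

# Made-up sample: many mid-income pledgers giving ~5%, a few very high
# earners giving 10%+ (the pattern described in the comment above).
sample = ([(120_000, 6_000)] * 90 + [(250_000, 12_000)] * 30 +
          [(600_000, 10_000)] * 4 + [(700_000, 20_000)] +
          [(1_000_000, 110_000), (1_500_000, 160_000)])

print(donation_threshold(sample, [100_000, 250_000, 500_000, 1_000_000]))
# With this invented data, the condition first holds at X = 1,000,000.
```

With data shaped like this, the mean donation among everyone earning 100K+ is dragged below 10K by the many ~5% givers, and the condition only kicks in once the cohort is restricted to millionaires.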
I agree with the thrust of the argument, but I think it's a little too pessimistic. A lot of EAs aren't especially altruistic people. Tons of EAs got involved because of X-risk. And it requires very little altruism to care about whether you and everyone you know will die. You can look at the data on EA donations and notice they aren't that high. EAs don't donate 10% until they have a pre-tax income of around one million dollars per year!
I would encourage anyone reading this to remember [Qhapna] has truly horrible opinions on what is a reasonable way to treat people. He was literally one of the last rationalists defending Brent. Neither his personal behavior nor his habit of defending abusers has improved over time. I’d recommend reading something I wrote some time ago:
Terrible judgment, a habit of feeling oppressed, and extreme arrogance is a very toxic combination.
No one has a right to be a leader. If leaders mismanaged abuse situations they should be removed from positions of leadership. The point of leadership is supposed to be service.
I posted that I do not think the way [Qhapna] treats people, including publicly on Facebook, is remotely ok; it is abusive imo.
I'm in favor of just saying out loud who you think is harmful. The culture of silence is very strong. But you are allowed to say things you believe and share information available to you! Don't share information given to you in confidence. But if you want to speak up, you can.
FWIW I posted on my Facebook and Twitter that I think a relatively major figure behaves abusively. People are free to disagree. But I think it's important to share information and analysis. Silence protects bad behavior.
In my experience anonymous accounts work fine? What's important is having the information in public. Whether the account is anonymous or not isn't very predictive of whether effective change occurs. For example, Brent was defended by CFAR, but got kicked out once anonymous accounts were posted publicly.
This is great to hear! Amazing work!
Why did Google invest three hundred million dollars? Google is a for-profit company.
If you cannot tell Duncan Sabien is an abusive person from reading his Facebook posts, you should probably avoid weighing in on community safety. He makes his toxicity and aggression extremely obvious. Lots of people have gotten hurt.
(Of course there is other evidence, like the fact that he constantly defends bad behavior by others. He was basically the last person publicly defending Brent. But he continues to be considered a community leader with good judgment.)
I think it's a negative update, since lots of the people with bad judgment remained in positions of power. This remains true even if some people were forced out. AFAIK Mike Valentine was forced out of CFAR for his connections to Brent, in particular greenlighting Brent meeting with a very young person alone. Though I don't have proof of this specific incident. Unsurprisingly, the people Anna Salomon defended post-Brent included Mike Vassar.
With the exception of Brent, who is fully ostracized afaik, I think you seriously understate how much support these abusers still have. My model is, sadly, that a decent number of important rationalists and EAs just don't care that much about the sort of behavior in the article. CFAR investigated Brent and stood by him until there was public outcry! I will repost what Anna Salomon wrote a year ago, long after his misdeeds were well known. Lots of people have been updating TOWARD Vassar:
I hereby apologize for the role I played in X’s ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There’s more to say here, and I don’t yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about X were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have X around talking to people in these communities, though this will bring friction.
There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:
Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about X and related matters also helped drive this home for me.)
Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.
It is nevertheless the case that X has had a tendency to e.g. yell rather more than I would like. For an aspiring rationality community’s general “who is worth ever talking to?” list, this ought to matter much less than the above. Insofar as a given person is trying to create contexts where people reliably don’t yell or something, they’ll want to do whatever they want to do; but insofar as we’re creating a community-wide include/exclude list (as in e.g. this comment on whether to let X speak at SSC meetups), it is my opinion that X ought to be on the “include” list.
Thoughts/comments welcome, and probably helpful for getting to shared accurate pictures about any of what’s above.
I think beating the, uhhh, 'market' is a lot easier than the EMH friends think. But it's not exactly easy being a +EV 'gambler'/speculative investor. Your counterparties usually aren't total idiots*. You are better off passing unless you think a bet is both really good and you can get in at least decent money. It's good policy to restrict your attention to only cases which plausibly fulfill both conditions**.
Ad hoc bets also have a very serious adverse selection problem. And in some cases betting people in private when they are being morons makes me feel predatory. I was chatting with someone, who I know is a lot poorer than me, about SBF. He thought there was only a 20% chance of SBF being arrested by the end of 2023! He offered to bet. I felt predatory taking his money, especially since I knew he had less money than me. I just told him he would look bad shortly. SBF was arrested a few weeks later.
*If they think Trump is currently the secret president they are idiots. But I think those guys ran out of money and gave up.
**Exceptions exist, like absurdly +EV intro offers on sports-betting sites, which might be limited to $1500 but pay you like $700 in EV for under an hour of work.
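For readers unfamiliar with how an intro offer can be worth hundreds of dollars in EV: here is a rough sketch of the arithmetic for one common promo shape, a losing stake refunded as a bonus bet. All the parameters (stake cap, odds, the ~70% conversion rate on bonus bets, the function itself) are my own illustrative assumptions, not figures from any particular sportsbook.

```python
# Rough EV sketch for a "bonus bet back if you lose" intro promo.
# All numbers are made-up assumptions: a $1500-capped stake on a
# near-even-odds market, with a losing stake refunded as a bonus bet
# that can typically be converted to ~70% of its face value.

def promo_ev(stake, win_prob, decimal_odds, refund_fraction, conversion_rate):
    win_profit = stake * (decimal_odds - 1)  # profit if the bet wins
    # If the bet loses: lose the stake, but recover a bonus bet worth
    # refund_fraction * stake, convertible at conversion_rate.
    lose_value = -stake + stake * refund_fraction * conversion_rate
    return win_prob * win_profit + (1 - win_prob) * lose_value

ev = promo_ev(stake=1500, win_prob=0.5, decimal_odds=1.95,
              refund_fraction=1.0, conversion_rate=0.7)
print(round(ev, 2))  # several hundred dollars of EV under these assumptions
```

The refund cushions the downside so much that even betting into a slightly worse-than-fair line is strongly +EV; the exact figure depends on the odds you can find and how efficiently you convert the bonus bet.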
The issue with non-metaphorical 'I'd bet' is that a policy I endorse is 'only make bets/trades if you are willing to bet with substantial size'. I honestly have to bet like 20K to follow this rule. I've very rarely seen bets that big in, or adjacent to, the EA community. Though I remember hearing about someone betting 500K on Scott's blog.
I'm happy Oxford might be taking this seriously. The EA community needs to take a way stronger stance against racism.
EA's meta-strategy is 'simp for tech billionaires and AI companies'. EA systematically attracts people who enjoy this strategy. So no, it does not attract the best. Maybe a version of EA with more integrity would attract the best people.