I’m not sure what you mean by “the principles have little room for errors in implementing them”.
That quote seems scarily plausible.
EDIT: Relevant Twitter thread
I think your first paragraph provides a potential answer to your second :-)
There’s an implicit “Sam fell prey to motivated reasoning, but I wouldn’t do that” in your comment, which itself seems like motivated reasoning :-)
(At least, it seems like motivated reasoning in the absence of a strong story for Sam being different from the rest of us. That’s why I’m so interested in what people like nbouscal have to say.)
Well, that’s the thing: it seems likely he didn’t see his actions as contradicting those principles, which suggests they’re actually a dangerous set of principles to endorse, even if they sound reasonable. That’s what’s really got me thinking.
I wonder if part of the problem is a consistent failure of imagination on the part of humans to see how our designs might fail. Kind of like how an amateur chess player devotes a lot more thought to how they could win than how their opponent could win. So if the principles Sam endorsed are at all recoverable, maybe they could be recovered via a process like “before violating common-sense ethics for the sake of utility, go down a massive checklist searching for reasons why this could be a mistake, including external observers in the decision if possible”.
Thanks for the reply!
In terms of public interviews, I think the most interesting/relevant parts are the ones where he expresses a willingness to bite consequentialist/utilitarian bullets in a way that’s a bit on the edge of the mainstream Overton window, but that I believe would’ve been within the EA Overton window prior to recent events (unsure about now). BTW, I got these examples from Marginal Revolution comments/Twitter.
This one seems most relevant—the first question Patrick asks Sam is whether the ends justify the means.
In this interview, search for “So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply?” and “Should a Benthamite be risk-neutral with regard to social welfare?”
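To spell out why that last question is a bullet at all (this is my own toy illustration, not something from the interview, and the numbers are made up): a Benthamite who is risk-neutral in total welfare just maximizes expected welfare, so they should accept any better-than-even double-or-nothing gamble on the world, no matter the stakes, whereas even a mildly risk-averse evaluator (say, log utility in total welfare) refuses it. A minimal sketch in Python:

```python
import math

# Toy illustration (hypothetical numbers): a gamble that doubles total
# welfare with probability 0.51 and reduces it to (nearly) zero otherwise.
current_welfare = 1.0
p_win = 0.51
win_welfare = 2.0 * current_welfare
lose_welfare = 1e-9  # tiny positive value so log() is defined below

# Risk-neutral Benthamite: value = expected total welfare.
ev_gamble = p_win * win_welfare + (1 - p_win) * lose_welfare
print("risk-neutral:", ev_gamble, "vs status quo", current_welfare,
      "-> take gamble?", ev_gamble > current_welfare)   # True: accepts

# Mildly risk-averse evaluator: value = expected log of total welfare.
eu_gamble = p_win * math.log(win_welfare) + (1 - p_win) * math.log(lose_welfare)
eu_status_quo = math.log(current_welfare)
print("log utility:", eu_gamble, "vs status quo", eu_status_quo,
      "-> take gamble?", eu_gamble > eu_status_quo)      # False: rejects
```

Risk-neutrality says to take that bet (and to keep taking it if it’s re-offered), which is the bullet being bitten.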
In any case, given that you think people should put hardly any weight on your assessment, it seems to me that as a community we should be doing a fair amount of introspection. Here are some things I’ve been thinking about:
We should update away from “EA exceptionalism” and towards self-doubt. (EDIT: I like this thread about “EA exceptionalism”, though I don’t agree with all the claims.) It sounds like you think more self-doubt would’ve been really helpful for Sam. IMO, self-doubt should increase in proportion to one’s power. (Trying to “more than cancel out” the normal human tendency towards decreased self-doubt as power increases.) This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more. But it certainly seems good for our average level of self-doubt to increase, even if self-doubt need not increase in every individual EA. Related: Having the self-awareness to know where you are on the self-doubt spectrum seems like an important and unsolved problem.
I’m also wondering if I should think of “morality” as being two different things: A descriptive account of what I value, and (separately) a prescriptive code of behavior. And then, beyond just endorsing the abstract concept of ethical injunctions, maybe it would be good to take a stab at codifying exactly what they should be. The idea seems a bit under-operationalized, although it’s likely there are relevant blog posts that aren’t coming to my mind. Like, I notice that the EA who’s most associated with the phrase “ethical injunctions” is also the biggest advocate of drastic unilateral action, and I’m not sure how to reconcile that (not trying to throw shade—genuinely unsure). EDIT: This is a great tweet; related.
Institutional safeguards are also looking better, but I was already very much in favor of those and puzzled by the lack of EA interest, so I can’t say it was a huge update for me personally.
This comment seems to support the idea that a whistleblowing system would’ve helped: https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=NbevNWixq3bJMEW7b
I’m curious whether you (or any other “SBF skeptic”) have any opinion regarding whether his character flaws should’ve been apparent to more people outside the organizations he worked at, e.g. on the basis of his public interviews. Or alternatively, were there any red flags in retrospect when you first met him?
I’m asking because so far this thread has discussed the problem in terms of private info not propagating. But I want to understand whether the problem could’ve been stopped at the level of public info. If so, that suggests that a solution of just getting better at propagating private info may be unsatisfactory: lots of EAs had public info about SBF, but few made a stink.
I’m also interested to hear “SBF skeptic” takes on the extent to which his character flaws were a result of his involvement in EA. Or maybe something about being raised consequentialist as a kid? Like, if we believe that SBF would’ve been a good person if it weren’t for exposure to consequentialist ideas, that suggests we should do some major introspection.
Trying to brainstorm… I noticed this tweet from CZ, which states:
We gave support before, but we won’t pretend to make love after divorce. We are not against anyone. But we won’t support people who lobby against other industry players behind their backs.
Maybe SBF can hire an apology coach (if that exists? I might know someone kinda like that, actually, though someone SBF knows is probably better) and find it in his heart to apologize to CZ for “lobbying against other industry players behind their backs”, and for anything else he may have done that CZ resents? Is there a way FTX can, legally or otherwise, commit to avoiding the kind of lobbying CZ dislikes in the future?
It’d be a shame if humanity’s future was derailed due to our oversized egos...
Maybe he could get together with a few wealthy friends?
I used to interview software engineers for a living. Something I noticed often was “stress-induced tunnel vision”: the candidate would react to the pressure of the interview by going depth-first in their problem solving when they should’ve gone breadth-first.
On the other hand, John Cleese did this great talk about how a relaxed frame of mind is key for creativity.
The FTX team has more information than we do. (I don’t think I understand what’s going on myself.) But they are probably also very stressed out.
If they’re stressed in a way that impairs lateral thinking, the idea of talking to Dustin Moskovitz might not have occurred to them. Even if it did, this thread could surface considerations that relevant parties hadn’t thought of.
More generally, if EAs outside FTX are less stressed out, they might be better positioned to think of an idea out of left field that saves the company. IMO, for the next 48 hours we should be “all hands on deck”, brainstorming ideas to save FTX (assuming they don’t object). Seems like it’s worth a try at least.
I definitely think it is worth considering. And Dustin merely announcing that he’s considering a purchase of FTX could help consumer confidence and/or give FTX a better bargaining position vs Binance. Such an announcement could conceivably backfire if Binance sees their offer to purchase FTX as a charitable gesture, or merely wants to maintain consumer confidence in crypto? But overall, making such an announcement ASAP seems like a fairly clear win to me?
Given how fast crypto markets can move, I think this is liable to be quite time-sensitive?
Additionally, in terms of consumer confidence in FTX, I imagine the brand impact of “FTX withdrawals didn’t work for 48 hours” is fairly different from that of “FTX withdrawals didn’t work for 3 weeks”.
Can you say more about how you see intuition pumps as a potential way for longtermism to avoid political attacks? Seems to me we use them all the time.
The thought is to tailor the intuition pump for your audience, e.g. if your audience is left-wing, leverage moral intuitions they already have.
I’m truly confused both about how you can watch Hossenfelder’s video and not see it as a politically motivated attack
Supposing it is a politically motivated attack, what do you think her motivation was? Why would she craftily seek to discredit longtermism in the way you describe? I think that’s the biggest missing piece for me.
(I also think it’s dangerous to mistake criticism for deliberate persecution.)
how you imagine, in practical terms, that longtermism could have avoided becoming a target for such attacks.
One of the most common ways to argue in moral philosophy is to make use of intuition pumps. For example: “Do you believe fighting global warming should be a top priority, even if it means less growth in developing countries and therefore more suffering in the near term? If so, how would you justify that?”
To me, this sounds like PR, and I agree with Anna Salamon that PR is corrosive, reputation is not.
I think any effort to popularize longtermism is in some sense a PR effort. If you’re going to deliberately push a meme you should do it strategically. (Edit: To be clear, I’m not advocating for dishonesty.)
I think the “corrosiveness of PR” point applies more strongly to personal and organizational conduct than advocating for a new idea.
My own few experiences of trying to politely respond to public figures making ill-founded criticisms have been that they just ignore me. I expect this would be the result.
Publicly admitting you’re incorrect is disincentivized. Probably if someone finds your counterpoint persuasive, they will not say so, in order to save face. In any case, onlookers seem more important—there are far more of them.
Also, if the counterpoint is published by a professional, they’ll have a bit more of a platform, so the likelihood of them getting ignored will be a bit lower. (Edit: Clarification—I’m advocating that you publish counterpoints specifically in places where people who saw the original are also likely to see the counterpoint. So e.g. if you have more Twitter followers, your reply to their tweet will be more visible.)
Remember that Sabine Hossenfelder is a theoretical physicist. She went through and read papers. She is an extremely intelligent person. I am sure she’s smarter than me. I think it is far more likely that she understood the ideas and deliberately decided to distort them for her own political agenda, or maybe just for clicks, than that she misunderstood them. I really think that longtermism is an easier topic to grasp than Collider signatures in the Planck regime. If she can publish the latter, I think she can grasp the former.
I’m not referring to the difficulty of grasping it so much as the amount of time that was put in. Also, framing effects are important. Maybe Sabine just skimmed the paper to verify that the claims made in the media were correct. Maybe she doesn’t have much experience with moral philosophy discourse norms. (“You would kill baby Hitler? Stop advocating for infanticide!!”)
I’m not sure what you think her agenda is. If she were focused on advancing an agenda, such as attempting a “hit job”, would it make sense to include the bit at the end about how she really appreciates the longtermist focus on the prevention of existential risks so we have a long-term strategy for the next 10 billion years? My guess is she is not deliberately pushing an agenda so much as fitting longtermism into an existing worldview without trying to steelman it (or adopting a frame from someone else who did this).
I’ve watched a few of Sabine Hossenfelder’s videos in the past. She didn’t previously strike me as a “hit-job critic”—for example, I remember thinking this video about nuclear power was reasonable (not an area I have expertise in, though).
Your model here seems to be that Sabine set out to make a hit job on longtermism. I think a more likely sequence of events was something like:
Sabine supplements her academic income by making Youtube videos about popular science.
The more videos she makes, the more money she makes.
Longtermism has been in the news recently; she decides to make a video about it.
She reads some news coverage of longtermism that ends up shaping her thinking about longtermism quite a lot.
The video ends up being essentially a repetition of talking points from the news coverage.
I think it’s incorrect to believe that Sabine knows everything about longtermism that you do, and is seeking to intentionally distort it. It seems more likely to me that she is just repeating what has become the popular narrative about longtermism by this point. (Note: I haven’t been paying much attention to longtermism news coverage. This is just a guess.)
“Never attribute to malice that which can be adequately explained by neglect.” The video did not strike me as especially “cruel”, in the sense of deliberately seeking to cause harm. “Uncharitable” or “dismissive” seems more like it.
Anyway, if the above story is true, my takeaways would be:
Before popularizing a subtle idea like longtermism, there should be a red teaming process: thinking through how critics are likely to respond, and also how the meme might evolve when introduced to a broader audience. (Imagine the person you like least, then imagine them justifying their worst idea using longtermism. How to prevent this?)
If it’s worthwhile to popularize an idea like longtermism, it’s worthwhile to do it right. Responding to critics doesn’t actually take that much time. (80/20 rule: Responding to 20% of critics gets you 80% of the benefit.) A few people can be paid to watch for longtermism discussion using Google Alerts etc. and offer polite corrections if bad arguments are made. Polite corrections probably won’t cause the person who made the bad argument to reverse their position, but they can be persuasive to onlookers. If no counterargument is made, some onlookers will assume that’s because no counterargument can be made, and some of those onlookers could be people who also have a big social media platform. Standard EA advice to ignore most critics makes little sense to me.
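If anyone wanted to prototype the monitoring piece, here’s a rough sketch of what I have in mind (assumptions: a Google Alert for “longtermism” set to deliver to an RSS feed, the feed URL below is a placeholder, and the `feedparser` library is installed):

```python
import time
import feedparser  # pip install feedparser

# Placeholder: create an alert in Google Alerts with "Deliver to: RSS feed",
# then paste the generated feed URL here.
FEED_URL = "https://www.google.com/alerts/feeds/<your-alert-id>"

seen_links = set()

def check_for_new_mentions():
    """Fetch the alert feed and print any entries we haven't seen yet."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            print(f"New mention: {entry.title}\n  {entry.link}")

if __name__ == "__main__":
    while True:
        check_for_new_mentions()
        time.sleep(60 * 60)  # poll hourly; a human still writes the polite reply
```

The point isn’t the tooling (an inbox filter would do the same job); it’s that someone is explicitly on the hook for noticing criticism and responding politely.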
Thanks. This makes me less excited about prediction market advocacy.
I think it could still be a better time than average to advocate though. Announcing prediction markets could be a way for the next government to double down on good forecasting, and convince voters they won’t make the same mistake as the previous government.
Could be your, or their, comparative advantage to start such an org—if you already have retirees in your group, you could make a special effort to help them find ways to contribute, talk to them about outreach, and see if there’s a model which can be scaled up.
Seems plausible. I think it would be good to have a dedicated “translator” who tries to understand & steelman views that are less mainstream in EA.
Wasn’t sure about the relevance of that link?