I am a bit worried people are going to massively overcorrect on the FTX debacle in ways that don’t really matter and that impose needless costs. We should make sure to get a clear picture of what happened first and foremost.
We appreciate you! ❤️
I recommend a mediator be hired to work with Jacy and whichever stakeholders are relevant (speaking broadly). This will be more productive than a he-said she-said forum discussion that is very emotionally toxic for many bystanders.
“Jon Wertheim: He made a mockery of crypto in the eyes of many. He’s sort of taken away the credibility of effective altruism. How do you see him?
Michael Lewis: Everything you say is just true. And it–and it’s more interesting than that. Every cause he sought to serve, he damaged. Every cause he sought to fight, he helped. He was a person who set out in life to maximize the consequences of his actions—never mind the intent. And he had exactly the opposite effects of the ones he set out to have. So it looks to me like his life is a cruel joke.”
😢
Archive: https://archive.ph/uwawF
I like “quality risks” (q-risks?) and think the frame is more broadly appealing to people who, for whatever reason, don’t want to treat suffering-reduction as the dominant guiding frame. Moral trade can be done with people concerned with other qualities: for instance, worries about global totalitarianism for reasons independent of suffering, such as freedom and diversity.
It’s also relatively more neglected than the standard extinction risks, which I am worried we are collectively Goodharting on as our focus (and to a lesser extent, the focus on classical suffering risks may fall into this as well). For instance, nuclear war or climate change are blatant, obviously scary problems that propagate well memetically, whereas there may be many q-risks to future value that are subtler and have yet to be made evident.
Tangentially, this gets into a broader crux I am confused by: should we work on obvious things or nonobvious things? I am disposed towards the latter.
I have strong downvoted as a strong disendorsement of getting involved in hot-button, highly publicized military conflicts such as this, where the sign of the donation is unclear. I think this could be slightly contributing to the risk of escalating to nuclear war, and may actually prolong the war, increasing the amount of deaths. I think it’s terrible that this is so highly upvoted and there’s no debate.
(Parenthetical update: I would definitely be in favor of people here supporting effective altruist-identifying individuals they know in Ukraine, including arming them if they need that.)
The post should be updated to state that he is deceased.
It would be helpful to know what events have been hosted there by now.
Yeah a significant consideration for me in whether to be less professionally involved in EA is exhaustion from centralized funding and the weird power dynamics that ensue. I would rather build products that lots of people can use and lots of investors or donors would find attractive to give money to than be beHolden to a small coterie of grantmakers no matter how well-intentioned.
Discussion about inclusivity is really conspicuous by its absence within EA. It’s honestly really weird we barely talk about it.
Are you sure? Here are some previous discussions (most of which were linked in the article above):
http://effective-altruism.com/ea/1ft/effective_altruism_for_animals_consideration_for/
http://effective-altruism.com/ea/ek/ea_diversity_unpacking_pandoras_box/
http://effective-altruism.com/ea/sm/ea_is_elitist_should_it_stay_that_way/
http://effective-altruism.com/ea/zu/making_ea_groups_more_welcoming/
http://effective-altruism.com/ea/mp/pitfalls_in_diversity_outreach/
http://effective-altruism.com/ea/1e1/ea_survey_2017_series_community_demographics/
https://www.facebook.com/groups/effective.altruists/permalink/1479443418778677/
I recall more discussions elsewhere in comments. Admittedly this is over several years. What would not barely talking about it look like, if not that?
Fiction can be a powerful tool for generating public interest in an issue, as Toby Ord describes in the case of asteroid preparedness as part of his appearance on the 80,000 Hours Podcast:
I think general additional asteroid preparedness awareness is net negative because it increases the amount of dual-use asteroid deflection capabilities more than it increases the amount of non-dual-use asteroid defense capabilities.
The sign of asteroid awareness, though, is probably dominated by the number of people who go on to think about and work on other existential risks, which may either be really good by preventing x-risks, or may itself be dual-use, if general mass awareness of x-risks as a category turns out to be net bad.
2 FTEs doesn’t seem that bad to me for something as important as cause exploration and given how big the movement is? This just seems fine to me?
This seems like progress to me. Something highly upvoted but disagreement-downvoted means to me “we appreciate this comment’s existence and want to incentivize that, but disagree with it factually”. I think voting has degraded due to the sheer influx of people and content, but this feature is swimming upstream against that and has noticeably improved discourse.
As someone who helped raise the alarm about Covid (and is trying to do so for this one as well), I have wondered if my actions were actually harmful. I posted an update on Facebook in, I think, May 2020, saying something to the effect that more harm may come from EAs losing productivity than from the actual disease. I consider myself a pretty good updater for these situations, but a lot of people are subject to information cascades. I do think some people remained, frankly, way too fucking neurotic about this longer than was reasonable. I wish more people grokked the coordination cost of imposing more friction along their collaboration surface area. As an example, there was a post, I think last autumn, that was like “what is EAG doing about Covid?”, and I considered that annoying and felt sorry for the EAG people.
One argument in favor of your viewpoint is that if global nuclear war happens, there’s really not much EA work left to do in the aftermath besides helping a few people around you, if you survived. That might be comparable to global health and development relief? Maybe the global poverty people who live in these cities, and who think they have alpha on lifesaving efforts they could somehow participate in (in a way that more than offsets the expected disvalue of leaving and has more expected value than their current charity work), should leave.
On the other hand, there’s so much social pressure to be sanguine as well. Very few people left the cities when the Cuban Missile Crisis happened, IIRC. Nukes are a different beast than Covid: in some ways it’s easier to prepare and in some ways it’s harder.
I wish people could update and decide fast, in any direction: the ability to really admit WW3 could happen before it hits, rather than burying one’s head in the sand, and also the ability to set-and-forget a policy regarding the threat that maintains productivity. I am inclined towards ‘everyone should have their panic period early and get it out of their system’.
Part of the issue may be that there are status incentives that come into play to talk a whole bunch and cogitate about the current thing while it’s happening. I know that I need to stay away from social media right now.
There is a significant probability of WW3 happening over this century, so I don’t think it’s virtuous to skip over the prepping work that most people have neglected; now is the Schelling time to at least buy some potassium iodide pills in case the U.S. and China go to war over Taiwan in the next decade, though it may be next to impossible to reach adequacy.
FWIW I am pretty confident the Samotsvety forecast or others like that are consistently understating risks due to outside view reasoning biases or what Thiel calls indefinite thinking.
I definitely think now is a good time to stock up on food and water if one hasn’t.
As noted in ‘The Precipice’ though, while potentially reducing the risk from asteroids, such a capability may pose a larger risk itself if used by malicious actors to target asteroids towards Earth.
I am very confident that dual-use risk of improved asteroid deflection technology in general is much more likely than a random asteroid hitting us, and that therefore this experiment has likely made the world worse off (with a bit less confidence, because maybe it’s still easier to deflect asteroids defensively rather than offensively, and this experiment improved that defensive capability?). This is possibly my favorite example of a crucial consideration, and also more speculatively, evidence that the sum of all x-risk reduction efforts taken together could be net-harmful (I’d give that a 5-25% chance?).
Here he is right on the homepage, 2015, the only earn-to-give profile of the three highlighted, and it links to his profile, “last updated October 25th, 2014”.
I strongly disendorse this. Your post and comments about this make me angry. Stop policing my emotions. I know more than enough to make a judgement of approximately what happened, and that it deserves my judgement for fucking me and many others over.
An additional 25 km seems very inconvenient if Oxford proximity is important, depending on public transport. Your financial tradeoff still might make sense, I dunno. At 25 km, though, they might as well optimize along other axes, like different counties or countries. That’s 12 miles… a 10-20 minute drive, depending? They could hire a full-time driver (with some temp drivers for events?) to create a world-class drive? I’m getting a bit more convinced. But if anything I would argue for getting a place that’s even more amenitied but with way cheaper real estate, plus amazing transport. Proximity is just a really important variable for these decisionmakers, though.
I think people are underestimating how much the decision was made out of lazy convenience. Most of the bougie vibes are already there just because they’re at Oxford to begin with vs some other place. With that in mind, one might ask, “why don’t we move the EA hubs from Berkeley and Oxford to a village in India?”, which, while it sounds absurd to some, is a move I would be happy to consider; the question exemplifies a more extreme version of anti-bougieness (anti-aristocracism?) logic. If people aren’t willing to move from first-world countries, that’s also relatively kinda privileged and lazy (in a way that is obviously understandable and doesn’t translate exactly to the venue tradeoff situation, to be clear).
I was really looking forward to maybe implementing impact markets in collaboration with Future Fund plus FTX proper if you and they wanted, and feel numb with regard to this shocking turn. I really believed FTX had some shot at ‘being the best financial hub in the world’, SBF ‘becoming a trillionaire’, and this longshot notion I had of impact certificates being integrated into the exchange, funding billions of dollars of EA causes through it in the best world. This felt so cool and far out to imagine. I woke up two days ago and this dream is now ash. I have spiritually entangled myself with this disaster.
I don’t want to be the first commenter to be that guy, and forgive me if I’m poking a wound, but when you have the time and slack, can you please explain to us to what extent you guys grilled FTX leadership about the integrity of the sources of the money they were giving you? Surely you had an inside-view model of how risky this was if it blew up? If it’s true SBF has had a history of acting unethically before (rumors, I don’t know), isn’t that something to have thoroughly questioned and spoken against? If there was anyone non-FTX who could have pressured them to act ethically, it would have been you. As an outsider it felt like y’all were in a highly trusting, concerted relationship with each other going back a decade.
In any case, thank you for what you’ve done.