As I understand it, posts are frontpage by default unless you or a mod decide otherwise.
Thanks for writing this. I think you do an excellent job on the rhetorical issues like language and framing. These seem like good methods for building coalitions around some specific policy issue, or deflecting criticism.
But I’m not sure they’re good for actually bringing people into the movement, because at times they seem a little disingenuous. EA opposition to factory farming has nothing to do with indigenous values—EAs are opposed to it taking place in any country, regardless of how nicely or otherwise people historically treated animals there. Similarly EA aid to Africa is because we think it is a good way of helping people, not because we think any particular group was a net winner or loser from the slave trade. If we’re going to try to recruit someone, I feel like we should make it clear that EA is not just a flavour of woke, and explicitly contradicts it at times.
As well as seeming a bit dishonest, I think it could have negative consequences to recruit people in this way. We generally don’t just want people who have been led to agree on some specific policy conclusions, but rather those who are on board with the whole way of thinking. There has been a lot of press written about the damage to workplace cohesion, productivity and mission focus from hiring SJWs, and if even the Bernie Sanders campaign is trying to “Stop hiring activists”, it could probably be significantly worse if your employees had been hired expecting a very woke environment and were then disappointed.
Is there any particular discussion you think should be happening? My impression is EAs were concerned about lab leaks before, thought it was plausible but far from clear that this was a lab leak, and continue to want more security for BSL labs in the future.
Thanks for writing this interesting post on a novel cause area I’ve never seen presented in this way.
Another aspect perhaps worth mentioning is that the modern world seems to require an increasingly high minimum IQ/conscientiousness level to navigate successfully. Reducing the burden of paperwork, which can be difficult for some people to fill out, could help with this.
Thanks very much for sharing this, and in particular the fascinating charts. I was pretty surprised at how large a fraction of successful applicants were, on these axes, strictly dominated by other rejected applicants, and how large the overlap was in the box-and-whiskers plots. Sometimes colleges argue they can’t just look at SAT because they have more applicants with perfect SATs than they have spaces, but that doesn’t explain why almost all your successful applicants (from this school) would have sub-perfect SATs.
I do have a couple of thoughts that could perhaps change the conclusion:
It’s well known that colleges care a lot about extracurriculars. If these really are a good sign of flexibility, work ethic, initiative and so on, perhaps we should care about them too. If so, colleges might be correctly adjusting, the low correlations we observe in the charts might just be because we can’t directly observe those factors, and high-quality people might be more concentrated in top schools than this data would suggest.
Additionally, SCOTUS is due to hear Students for Fair Admissions v. Harvard later this year, and Metaculus currently gives them a 75% chance of successfully getting racial discrimination in university admissions found unlawful. If so, the correlation with SAT/GPA might improve a lot after this year, so the phenomenon you’re highlighting might be a relatively short-lived one.
Nice article, thanks for linking (and Will for writing).
Unfortunately some people I know thought this section was a little misleading, as they felt it was insinuating that x-risk from nuclear war was over 20% - a figure I think few EAs would endorse. Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?
We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover.
Are you aware of anyone in EA who has studied the problem of moral regress?
Somewhat related: Gwern on the Narrowing Circle.
The danger of nuclear war is greater than it has ever been.
What is your argument for the risk now being higher than during the Cuban Missile Crisis, or similar incidents during the Cold War, or indeed than earlier this year?
Preserve option value by giving yourself a vague name
Seems quite possible that your donors want you to do the project you said you’d do, and not some other random project. If this is the case, project lock-in through name choice could be a feature rather than a bug.
Sounds like part of the purpose of BERI?
You could just invest in 3-month Treasury bills directly, or invest in a conventional fund that buys bills (or indeed whatever other investments you thought were most appropriate given your circumstances), and then donate the interest to charity.
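As a purely illustrative sketch of the arithmetic (the principal and yield here are assumptions for the example, not forecasts):

```python
# Hypothetical figures for illustration only; actual T-bill yields vary.
principal = 50_000      # dollars invested in 3-month Treasury bills
annual_yield = 0.04     # assumed annualized yield at rollover

interest_to_donate = principal * annual_yield
print(f"~${interest_to_donate:,.0f}/year of interest available to donate")
```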
Thanks for sharing this very original idea! I’m somewhat sceptical of the intervention you mention but it definitely seems like a large and neglected issue.
The only cost of breaking the GWWC commitment is that people who saw you make that commitment might lose a bit of trust in you. I think this is a great balance
This seems like very little cost at all. Charitable donations and income are, by default, private, so no-one need know you stopped, and even when people are public about leaving the community, the main reaction I have seen is one of best wishes and urging self-care. I’m not sure I’ve ever seen any EA leaders write a harsh word about people for leaving.
Thanks for writing this. For anyone with good ideas in the area, it’s worth noting that addressing demographic decline is listed as an area the FTX Foundation is interested in funding.
My gut reaction is this sounds pretty unpleasant. Perhaps I am misunderstanding the sort of feedback you’d expect to share in such a situation; could you perhaps give some examples?
Owen has done some related work here and here on pricing research externalities.
why apply it to abortion and not Ukraine?
I agree it should apply to both; if your question is why I didn’t object to the previous post, I don’t have any specific defense other than having no recollection of seeing the Ukraine post at the time, though maybe I saw it and forgot.
I seem to recall some places, when sorting things by average rating, will use something like the lower bound of a 90% confidence interval on the mean. This doesn’t solve the question of which number to display though, as it is not a very user-intuitive number to read.
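For concreteness, here is a minimal sketch of the sort of rule I have in mind, the lower bound of the Wilson score interval, assuming binary up/down ratings (the function name and numbers are mine, not from any particular site):

```python
import math

def wilson_lower_bound(pos: int, n: int, z: float = 1.28) -> float:
    """Lower bound of the Wilson score interval for a proportion.

    pos: number of positive ratings; n: total ratings.
    z = 1.28 corresponds to roughly a one-sided 90% confidence level.
    """
    if n == 0:
        return 0.0
    phat = pos / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

# Example: an item with 9/10 positive ratings ranks below one with 80/100,
# even though its raw average (0.90) is higher than 0.80.
items = {"A": (80, 100), "B": (9, 10), "C": (1, 1)}
ranked = sorted(items, key=lambda k: wilson_lower_bound(*items[k]), reverse=True)
print(ranked)  # ['A', 'B', 'C']
```

The appeal of this rule is that items with few ratings get heavily discounted, but as you note, the resulting score is a pessimistic estimate rather than anything a user would recognise as an average.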
But I think some parts of the EA mindset would be very useful to tackle some other important issues like reproductive rights, and I think we should encourage playful and scientific exploration of topics.
I think this is a reasonable position, but I don’t think it’s a convincing defense of the OP. “Does it make sense to fly business class, and if so when?” is a plausible ‘playful and scientific exploration’ that could benefit from EA-style analysis. But “how can I get my employer to let me fly business?” is not, because it assumes away the part of the question (whether flying business class at all is good) where EA considerations can bring the most light. Considering a wide range of possible issues can help us find new problems to tackle, and hence is worthwhile as you said, but not if you simply assume they are good things to work on: you have to actually analyze that question, including the possibility that the exact opposite is true.
I’m not here to argue that this is a global priority or a more important cause area than others. I’m instead here to ask, given one considers reproductive rights in America an important cause area …
The purpose of the forum is ‘writing that will help us do the most good’, not how to effectively pursue some political objective that people care about for non-EA reasons. It is fine to advocate that some novel cause area perhaps could be an EA cause, even if that seems unlikely or highly speculative, but I don’t think we should encourage people to post about their personal cause with indifference to whether it could be highly effective. ‘How to support abortion’ is the topic du jour on a huge variety of online venues; this forum should be focused on what is distinctly EA.
In this case, not only does this seem unlikely to be a top priority by cause-neutral EA lights, but EA considerations (e.g. moral circle expansion, caring about future generations, moral uncertainty, total-view population ethics, scope sensitivity, and advocating for those who cannot advocate for themselves) seem like they would actually push in the opposite direction: they make abortion look worse, and trying to reduce it look better, than they would to a non-EA with a similar background. Despite this, I am intuitively somewhat skeptical that trying to reduce abortion would be a top EA cause area either, because it does not seem very neglected.