What was the positive effect supposed to be?
17.4% of the citizen voting age population of OR-6 is Hispanic
https://davesredistricting.org/maps#viewmap::9b2b545f-5cd2-4e0d-a9b9-cc3915a4750f
So now that it’s over, can someone explain what the heck was up with SBF donating $6m to HMP in exchange for a $1m donation to Flynn? From an outside perspective it seems tailor-made to look vaguely suspicious and generate bad press, without seeming to produce any tangible benefits for Flynn or EA.
It seems like these observations could be equally well explained by Paul correctly having high credence in long timelines and giving advice that is appropriate in worlds where long timelines are true, without explicitly trying to persuade people of his views on timelines. Given that, I’m not sure there’s any strong evidence that this is good advice to keep in mind when you actually do have short timelines.
Sent, thank you.
I’d be interested in joining the Slack group
I’d like to take Buck’s side of the bet as well if you’re willing to bet more
What was her rationale for prioritizing hand soap over food?
It’s probably the lizardman constant showing up again—if ~5% of people answer randomly and <5% of the population are actually veg*ns, then many of the self-reported veg*ns will have been people who answered randomly.
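A rough back-of-the-envelope sketch of that arithmetic (the 5% random-answer rate, 2% true veg*n rate, and yes/no question format are illustrative assumptions, not figures from any particular survey):

```python
# Illustrative lizardman-constant arithmetic (all numbers are assumptions).
random_rate = 0.05       # assumed fraction of respondents answering randomly
true_vegn_rate = 0.02    # assumed fraction of genuine veg*ns in the population
options = 2              # a yes/no question, so random answerers say "yes" ~half the time

random_yes = random_rate * (1 / options)          # random answerers who tick "yes"
genuine_yes = (1 - random_rate) * true_vegn_rate  # genuine veg*ns answering honestly

share_noise = random_yes / (random_yes + genuine_yes)
print(f"~{share_noise:.0%} of self-reported veg*ns would be random answerers")
```

Under those assumed numbers, over half of the self-reported veg*ns are random answerers, which is why the follow-up answers look so inconsistent.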
I think it’s misleading to call that evidence that marriage causes shorter lifespans (not sure if that’s your intention)
Do you have a link and/or a brief explanation of how they convincingly established causality for the “married women have shorter lives” claim?
The next logical step is to evaluate the novel ideas, though, where a “cadre of uber-rational people” would be quite useful IMHO. In particular, a small group of very good evaluators seems much better than a large group of less epistemically rational evaluators who could be collectively swayed by bad reasoning.
I think the argument is that we don’t know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.
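A minimal numerical sketch of that argument (the probability, payoff, and improvement numbers below are purely illustrative assumptions, not claims about the actual future):

```python
# Toy expected-impact comparison (all numbers are illustrative assumptions).
p_high = 0.1         # assumed probability the future is high-EV
value_high = 1e15    # assumed value at stake if the future is high-EV
value_low = 1e6      # assumed value at stake if the future is low-EV
improvement = 0.01   # assumed fractional improvement our decision can make

# Expected impact of a decision optimized for each branch:
impact_if_target_high = p_high * improvement * value_high
impact_if_target_low = (1 - p_high) * improvement * value_low

print(f"Targeting the high-EV branch: {impact_if_target_high:.3g}")
print(f"Targeting the low-EV branch:  {impact_if_target_low:.3g}")
# Even at only 10% probability, the high-EV branch dominates the expectation.
```

The point is just that the high-EV branch dominates the expected-value calculation even when it is unlikely, so decisions conditioned on that branch being real carry most of the expected impact.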
Have you read this paper suggesting that there is no good evidence of a connection between climate change and the Syrian war? I found it quite persuasive.
What is a Copernican prior? I can’t find any Google results.
You’re estimating there are ~1000 people doing direct EA work? I would have guessed around an order of magnitude fewer (~100-200 people).
What if rooms at the EA Hotel were cost-price by default, and you allocated “scholarships” based on a combination of need and merit, as many US universities do? This might avoid a negative feedback cycle (because you can retain the most exceptional people) while reducing costs and making the EA Hotel a less attractive target for unaligned people to take resources from.
What does this mean in the context of the EA Hotel? In particular, would your point apply to university scholarships as well, and if not, what breaks the analogy between scholarships and the Hotel?
Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.
If you believe this, doesn’t it flip the sign of the “very best interventions” (i.e. you would believe they are exceptionally bad interventions)?