Even smart people will often intuitively (that is to say, without realizing it, or only dimly realizing it) shy away from the part of a project that would provide the information telling them they’re doing the wrong thing. This is part of the value of things like Gantt charts and other project maps: even though the plans they are typically used to generate fail when colliding with reality, they can alert you to ways you are fooling yourself about the most uncertain parts of a project.
Although I lean in the direction that Hillary would have been a lower war risk than Trump, the fact that it’s at all uncertain is depressing.
This is really interesting. I’m curious about crowding out and marginal dollar effects. I.e. the smart money spends all its resources on this, allowing the dumb money to free ride and keep on with the status quo (or even get worse, with fewer perceived consequences). Meanwhile, there are now far fewer smart dollars available to fund weird moonshots that only the smart money can think about.
One solution: more funding for geoengineering moonshots (and please, with fewer assumptions that geoengineering automatically means that safety and reversibility aren’t major design criteria).
This is a big part of the reason why a split keyboard can be so helpful: it makes maintaining better posture much more comfortable and intuitive.
I also recommend a Roost laptop stand to get the monitor up to eye level.
Most utilitarian gotchas are either circular or talking about leaky abstractions. ‘Assume higher utility from taking option X, but OH NO, you forgot about consideration Y! Science has gone too far!’
See also aether variables.
I think there are two claims. I stand by both, but think arguing them simultaneously causes something like a motte-and-bailey problem to rear its head.
We seem to be having different conversations. I think you’re looking for strong evidence of stronger, more universal claims than I am making. I’m trying to say that this hypothesis (for some children) should be within the window of possibility and worthy of more investigation. There’s a potential motte and bailey problem with that, and the claims about evidence for benefit from schooling broadly should probably be separated from evidence for harms of schooling in specific cases.
>Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. -Meditations on Moloch
Imagine that an altruistic community in such a world is very open minded and willing to consider not shocking yourself all the time, but wants to see lots of evidence for it produced by the taser manufacturers, since after all they know the most about tasers and whether they are harmful...
If you give children the option of being tased or going to school, some of them are going to pick the taser.
It seems like you’re arguing from common sense?
>There is strong evidence that the majority of children will never learn to read unless they are taught.
This is a different claim. I don’t know of strong evidence that children will fail to learn to read if not sent to school.
Although it seems to be fine for the majority, school drives some children to suicide. Given that there is little evidence of benefit from schooling, advocating for letting those most affected have alternative options could be high impact.
Easing legal and logistical obstacles to euthanasia for those with painful terminal illnesses.
The raising money for famous scientists part seems at odds with some of the optimism in the early sections. Any further comment on this?
Still seems worth it; FB might just eventually ban it. (I sort of doubt anything would happen if you link to an informational infographic.)
I think how the ‘middle class’ (a relative measure) of the USA is doing is fairly uninteresting overall. I think most meaningful progress at the grand scale (decades to centuries) is how fast the bottom is getting pulled up and how high the very top end (bleeding-edge researchers) can go. Shuffling in the middle results in much wailing and gnashing of teeth but doesn’t move the needle much. Its main impact is just voting for dumb stuff that harms the top and bottom.
Economic growth likely isn’t stagnating; it just looks that way due to some catch-up growth effects:
Maximizing is usually a bad idea.
Reminds me of how revolutionaries think they’re really sticking it to the elites when they protest against free markets. But elites hate free markets, they try to insulate themselves from them as much as possible. Why would you want competition from up and coming new elites? That’s why elites fund the useful idiots who think they are revolutionaries.
There’s also a weird thing where newly minted elites don’t think of themselves as elites and so don’t engage with the possibility of moving major equilibria even though they are potentially large enough to do so, and doing so can be much more powerful than tuning efficiencies in existing equilibria. Probably fears related to consequentialist cluelessness as well.
First thought is to wonder why prizes aren’t more common. E.g. awards for fostering cross-organizational coordination, either on the object level (direct cross-org efforts that result in research) or the meta level (platforms, conferences, etc.). One guess is that prize grantors don’t gain enough from granting them. Grantors might also have a systematic aversion to paying for things that have already happened, without much guarantee that doing so will incentivize further desired behavior.
Funding more parallel work. If something is worth doing once, and is cheap, it is very likely worth doing two or three times and then having the teams crux on conclusions, data, and methodology.
Yes, that’s the concern. Asking me what projects I consider status quo is the exact same move as before. Being status quo is low status, so the conversation seems unlikely to evolve in a fruitful direction if we take that tack. I think institutions tend to slide towards attractors where the surrounding discourse norms are ‘reasonable and defensible’ from within a certain frame while undermining criticisms of the frame in ways that make people who point it out seem like they are being unreasonable. This is how larger, older foundations calcify and stop getting things done, as the natural tendency of an org is to insulate itself from the sharp changes that being in close feedback with the world necessitates.