I think this is a decent idea given a small reframe. Rather than thinking of it as earmarking the cash for a specific purpose, treating it like an unenforced restriction, instead think of the cash transfers as coming with an opportunity to attach information, and try to attach good information. I.e., instead of “this cash transfer is for X”, say “this cash transfer comes with a small pamphlet with several purchase ideas X, Y, Z”. This framing is more cooperative, and it fails more gracefully if the recommendations are bad.
Conventional wisdom in the business world is that brick-and-mortar retail (and brick-and-mortar books in particular) is a declining business, because it can’t compete effectively with online stores. So I’m really skeptical that this business is financially viable enough to survive without continuous infusions of external cash, let alone with enough slack to do things that aren’t profit-motivated.
What that means in practice is that you haven’t actually pinned the cost down to the right order of magnitude. Neither of the business sales you mentioned is comparable: B&N is an online store and an eReader brand, and Books-A-Million was sold in 2014 and appears to have since diversified into a lot of other businesses. More importantly, the main cost isn’t the sale price; it’s taking responsibility for the operational losses. This doesn’t tell me what order of magnitude that cost will be.
Building a publisher could be a thing, but owning this retail chain is strictly negative for that. You definitely aren’t getting the relevant trademarks out of the deal, and will never be able to publish under the brand unless you separately buy the trademarks from the 2007 buyer; and if you’re going that route, you’re shopping for a publishing house, not a bookstore chain.
(Copy-pasted from pre-publication comments on a Google Doc)
How does XR weigh costs and benefits? Does XR consider tech progress default-good or default-bad?
The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments to be worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.
Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there’s a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).
On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it’s a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you’re supposed to be thinking about both and not trying to oversimplify things.
What would moral/social progress actually look like?
This seems like a good place to mention Dath Ilan, Eliezer’s fictional* universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.
What does XR think about the large numbers of people who don’t appreciate progress, or actively oppose it?
I don’t think I know enough to speak about the XR community broadly here, but as for me personally: mostly frustrated that their thinking isn’t granular enough. There’s a huge gulf between saying “social media is toxic” and saying “it is toxic for the closest thing to a downvote button to be reply/share”, and I try to tune out/unfollow the people whose writings say things closer to the former.
I think the common factor, among forms of advice that people are hesitant to give, is that they involve some risk. So if, for example, I recommend a supplement and it causes a health problem, or I recommend a stock and it crashes, there’s some worry about blame. If the supplement helps, or the stock rises, there’s some possibility of getting credit; but, in typical social relationships, the risk of blame is a larger concern than the possibility of credit, which makes people more than optimally hesitant.
I was somewhat confused by the scale using Categorizing Variants of Goodhart’s Law as an example of a 100mQ paper, given that the LW post version of that paper won the 2018 AI Alignment Prize ($5k), which makes a pretty strong case for it being “a particularly valuable paper” (1Q, the next category up). I also think this scale significantly overvalues research agendas and popular books relative to papers. I don’t think these aspects of the rubric wound up impacting the specific estimates made here, though.
From people I know that have gotten vaccines in the Bay, it sounds like appointments have been booked quickly after being posted / there aren’t a bunch of openings.
This was true in February, but I think it’s no longer true, due to a combination of the Johnson & Johnson vaccine being added and the currently-eligible groups being mostly done. Berkeley Public Health sent me this link, which shows hundreds of available appointment slots over the next few days at a dozen different Bay Area locations.
(EDIT: See below, the map I linked to may be mixing vaccine and PCR-test appointments together in a way that confused me.)
The core thesis here seems to be:
I claim that [cluster of organizations] have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.
There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:
1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn’t grade on a curve.)
2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we’re doing pretty well.)
3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren’t important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.
(4) is the interesting version of this claim, and I think there’s some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.
I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they’re doing the wrong thing, their anticipations put a lot of weight on the possibility that they’ll be shamed and punished, and not much weight on the possibility that they’ll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually positive about the pivot, at least from the people who matter. But getting people who’ve internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.
(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them during the same sitting I read this comment.)
Should competent EAs be pursuing local political offices?
Looking at ads and introducing ads into your environment is not free; it’s mildly harmful. If you offered me 1 cent per ad to display ads in my browser, I would refuse. The money going to charity doesn’t change that.
LessWrong has a sidebar which makes the link to All Posts much more prominent; it looks like EA Forum hasn’t adopted that yet, but it would probably help.
Were you under the impression that I was disagreeing with the sodium-reduction guidelines because I was merely unaware that they existed? This is an area of considerable controversy.
Quitting smoking, alcohol, salt, and sugar is also hard; they are quite addictive.
For most people, cutting salt intake is harmful, not helpful. Salt isn’t new to human diets, and it isn’t a matter of addiction; it’s just a necessary nutrient.
Sugar can be harmful, but only insofar as it crowds out other calorie sources which are better. When people try to cut sugar, they often fail (and mildly harm themselves) because they neglect to replace it.
Post-mortem donation is fine, but being asked to sign up for kidney donation would be severely trust-destroying for me.
This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don’t have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.
That doesn’t seem especially relevant to the question of whether first-world consumers should buy farmed or wild-caught fish; the amount caught from fisheries is set by regulations, not by demand, so consumer demand does not, on the margin, increase or decrease overfishing.
I doubt this makes a difference. Most of the market treats farmed and wild-caught fish as close substitutes, the supply of wild-caught fish is inelastic, and the supply of farmed fish is highly elastic. So if you switch from farmed to wild-caught fish, you are probably affecting market prices in a way which causes one other person to make the opposite change.
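As a toy illustration of that substitution logic (my own sketch with made-up numbers; nothing here comes from actual fish-market data): with a perfectly inelastic wild catch and perfectly elastic farmed supply, one buyer insisting on wild-caught just nudges the wild price up until someone else switches the other way, leaving both quantities unchanged.

```python
import numpy as np

# Toy market: the wild catch is fixed by regulation (perfectly inelastic),
# farms supply any quantity at constant cost (perfectly elastic), and the
# two products are close substitutes, with buyers differing only in a small
# willingness-to-pay premium for wild-caught.
rng = np.random.default_rng(0)
premiums = np.sort(rng.normal(0, 1, 1000))[::-1]  # WTP premium for wild, descending

WILD_SUPPLY = 400    # fish landed regardless of price
FARMED_PRICE = 5.0   # farms expand or shrink at this price

def allocate(committed_wild_buyers=0):
    """Wild price rises until the fixed catch exactly clears the market."""
    # Slots left for price-sensitive buyers after committed buyers take theirs:
    remaining = WILD_SUPPLY - committed_wild_buyers
    p_wild = FARMED_PRICE + premiums[remaining]  # first excluded buyer sets the price
    # Quantities are pinned by the supply assumptions, not by demand:
    return p_wild, WILD_SUPPLY, 1000 - WILD_SUPPLY

print(allocate(0))  # baseline
print(allocate(1))  # you insist on wild-caught: price ticks up, quantities don't move
```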
There are three additional premises required here. The first is that your own use of funds from investments must be significantly better than that of other shareholders of the companies you invest in. The second is that the growth rate of the companies you invest in must exceed the rate at which the marginal cost of doing good increases, due to low-hanging fruit getting picked and due to lost opportunities for compounding. The third is that the growth potential of AI companies isn’t already priced in, in a way that leaves your expected returns no better than index funds.
The first of these premises is probably true. The second is probably false. The third is definitely false.
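To make the second premise concrete, here’s a minimal sketch (my own toy numbers, purely illustrative): if the marginal cost of doing good inflates faster than the portfolio grows, giving later destroys value even though the nominal amount grew.

```python
def future_impact(amount, growth, cost_inflation, years):
    """Impact, in today's units of good, of investing `amount` and giving after `years`."""
    return amount * ((1 + growth) / (1 + cost_inflation)) ** years

print(future_impact(1000, 0.00, 0.00, 10))  # give now: 1000 units
print(future_impact(1000, 0.07, 0.10, 10))  # 7% growth vs 10% cost inflation: ~758 units
```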
I based this mainly on a combination of a model and personal experience/self-experimentation, but hadn’t previously looked for data to quantify it. I’ve significantly downgraded my confidence that the correct quantity of extra food to eat is meal-sized, but remain uncertain, since none of the studies measure quite the thing I care about.
This study measured energy expenditure as a result of an all-nighter, in subjects whose food intake was controlled (i.e., not allowed to eat extra), and found that
Missing one night of sleep had a metabolic cost of ∼562 ± 8.6 kJ (∼134 ± 2.1 kcals) over 24 h, which equates to a ∼7% higher 24 h EE
This (134 kcal) is smaller than I was expecting; on the other hand, not being able to eat extra calories puts a pretty sharp limit on the ability to spend extra calories. From a different angle, this paper measured sleep and wake energy expenditure and found a ratio of 1.67:1 (in nonobese controls), which would imply that converting sleep hours to wake hours would increase TDEE by ~15%. A study which measured next-day intake rather than metabolic expenditure found a 22% increase; but it’s possible subjects overcompensated by eating more extra than they actually expended.
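For anyone who wants to check the ~15% figure, here’s the arithmetic I’m doing (the 8h sleep / 16h wake baseline is my assumption, not something from the paper):

```python
wake_rate, sleep_rate = 1.67, 1.0           # expenditure ratio from the cited paper
baseline = 16 * wake_rate + 8 * sleep_rate  # normal day: 34.72 rate-hours
all_wake = 24 * wake_rate                   # all sleep converted to wake: 40.08
print(all_wake / baseline - 1)              # ~0.154, i.e. ~15% higher TDEE
```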
Nutrition problems tend to disguise themselves as other kinds of stress; being hungry makes people emotionally brittle, which creates a thousand red herrings when you’re trying to figure out what’s wrong.