Perhaps you all have considered this already, but I think there’s a lot to like about sidenotes over footnotes, especially on the web (e.g. on the web, footnotes aren’t always in sight the way they are at the bottom of a physical page).
How would you expect EA WSDNs to differ from current EA orgs concretely?
When it comes to worker cooperatives, I see the differences as all flowing from reducing conflicting interests. That is, in standard firms, owners are ultimately interested in profits and only instrumentally interested in working conditions while workers are ultimately interested in working conditions (broadly construed) and only instrumentally interested in profits. Worker cooperatives resolve this tension by making agents principals and principals agents.
This is an idealization, but it seems like the interests of all relevant actors in EA orgs (and nonprofits more generally?) are more aligned. The board and the workers are (at least in theory) largely (if not solely) motivated by the same do-gooding goal.
Consideration 1: Economists often consider small actors in competitive markets to be price-takers, meaning that they cannot influence prices on their own. This seems like a pretty plausible characterization of any individual food buyer.
Consideration 2: “He reasoned that economics says a drop in demand for some commodity should cause prices to fall for that commodity, and overall consumption remains the same.” This is not correct. An inward shift in the demand curve (“a drop in demand”) causes both equilibrium price and quantity to decrease (assuming an ordinary downward-sloping demand curve and upward-sloping supply curve). I’d guess the thing he’s trying to get at is that for a good with unit elastic demand, a small drop in price is offset by a small increase in quantity, which leaves total revenue unchanged.
So our first option is to regard individual actors as too small to influence the price. If we reject this and think they do have an effect, their effect would be to shift the demand curve in—dropping equilibrium price and quantity.
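To make that concrete, here’s a minimal sketch with linear curves (my own illustration; all coefficients are invented):

```python
# Linear demand Q = a - b*P and supply Q = c + d*P; solve for equilibrium.
# The numbers are made up purely to show the direction of the effect.

def equilibrium(a, b, c, d):
    """Return (price, quantity) where demand a - b*P meets supply c + d*P."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

p0, q0 = equilibrium(a=100, b=2, c=10, d=1)  # baseline: P* = 30, Q* = 40
p1, q1 = equilibrium(a=85, b=2, c=10, d=1)   # inward demand shift: P* = 25, Q* = 35

assert p1 < p0 and q1 < q0  # both equilibrium price and quantity fall
```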
Aside: I’m reasonably well-informed about economics and don’t recall having ever heard the term “cumulative elasticity” before.
I don’t really see ESM as being in opposition to QALYs. It seems like it’s a method that you would use as an input in QALY weight determinations. Wikipedia lists some of the current methods for deriving QALY weights as:
Time-trade-off (TTO): Respondents are asked to choose between remaining in a state of ill health for a period of time, or being restored to perfect health but having a shorter life expectancy.
Standard gamble (SG): Respondents are asked to choose between remaining in a state of ill health for a period of time, or choosing a medical intervention which has a chance of either restoring them to perfect health, or killing them.
Visual analogue scale (VAS): Respondents are asked to rate a state of ill health on a scale from 0 to 100, with 0 representing being dead and 100 representing perfect health. This method has the advantage of being the easiest to ask, but is the most subjective.
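As a concrete illustration of how a TTO response converts into a QALY weight (a quick sketch of my own; the numbers are invented):

```python
def tto_weight(years_ill, equivalent_healthy_years):
    """Time-trade-off: a respondent indifferent between `years_ill` years in
    the health state and `equivalent_healthy_years` years in perfect health
    implicitly values the state at the ratio of the two."""
    return equivalent_healthy_years / years_ill

# e.g. indifferent between 10 years with the condition and 7 healthy years:
tto_weight(10, 7)  # -> 0.7 QALYs per year lived in that state
```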
There’s also the “day reconstruction method” (DRM). The Oxford Handbook of Happiness talks about ESM, DRM and other relevant measurement approaches at various points.
I’d guess the trouble with using ESM, DRM and some other methods like them for QALY weights is that it’s hard to isolate the causal effect of particular conditions using these methods.
Ah, I see that now. Thanks.
FWIW, I was specifically looking for a disclaimer and it didn’t quickly come to my attention. It looks like a few other people in these subthreads may have also missed the disclaimer.
Yeah, I hadn’t realized it was more or less deprecated. (The page itself doesn’t seem to give any indication of that. Edit: Ah, it does. I missed the second paragraph of the sidenote when I quickly scanned for some disclaimer.)
Also, somewhat unfortunately, it’s apparently the first sublink under the 80,000 Hours site on Google if you search for 80,000 Hours.
It seems quite possible to me to have a “parameterized list”. That is, recommendations can take the shape “If X is true of you, Y and Z are good options.” And in fact 80,000 Hours does do this to some degree (via, for example, their career quiz). While this isn’t entirely personalized (it’s based only on certain attributes that 80,000 Hours highlights), it’s also far from a single, definitive list. So it doesn’t seem that there’s any insoluble tension between taking account of individual differences and communicating the same message to a broad audience; you just have to rely on the audience to do some interpreting.
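A toy sketch of what I mean (the attributes and options are invented, not 80,000 Hours’ actual ones):

```python
# Each entry pairs a condition on the reader with the options it suggests.
RECOMMENDATIONS = [
    (lambda p: p.get("quant_background", False), ["research", "data-heavy roles"]),
    (lambda p: p.get("people_oriented", False), ["operations", "community building"]),
]

def recommend(profile):
    """Return every option whose 'If X is true of you' condition holds."""
    return [opt for cond, opts in RECOMMENDATIONS if cond(profile) for opt in opts]

recommend({"quant_background": True})  # -> ['research', 'data-heavy roles']
```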
I don’t particularly want to try to resolve the disagreement here, but I’d think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?
 I’d expect it to vary from person to person depending on their alignment, commitment, competence, etc.
I am not OP but as someone who also has (minor) concerns under this heading:
Some people judge HPMoR to be of little artistic merit/low aesthetic quality
Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)
If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.
Clearly, there are also many people who like HPMoR and don’t have the above concerns. The key question is probably what fraction of recipients will have positive, neutral and negative reactions.
It’s not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total “EA dollars” that the positions cost whereas your model seems to combine “EA dollars” (CEA costs) and “personal dollars” (their total salary).
I think you have some math errors:
$150k * 1.5 + $60k = $285k rather than $295k
Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
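Putting all three calculations in one place (a quick sketch; the 10% donation rate is the assumption from my model above):

```python
counterfactual_salary = 150_000
cea_salary = 60_000
overhead = 1.5        # the cost multiplier used in the post
donation_rate = 0.10  # assumed share of counterfactual salary donated

# "EA dollars" model from my comment above:
ea_dollars = (counterfactual_salary * donation_rate + cea_salary) * overhead  # 112_500.0

# The post's arithmetic as written (note: 285k, not 295k):
as_written = counterfactual_salary * overhead + cea_salary                    # 285_000.0

# Presumably the intended formula:
intended = (counterfactual_salary + cea_salary) * overhead                    # 315_000.0
```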
I have a pretty averse reaction to all the people you named, I expect I would feel similarly about someone in that mold in EA, and I expect many other people in EA would too. I don’t think charismatic leadership fits all that well with the other elements of EA, in ways both important and incidental.
I don’t know how promising others think this is, but I quite liked Concepts for Decision Making under Severe Uncertainty with Partial Ordinal and Partial Cardinal Preferences. It tries to outline possible decision procedures once you relax some of the subjective expected utility theory assumptions you object to. For example, it talks about the possibility of having a credal set of beliefs (if one objects to the idea of assigning a single probability) and then doing maximin on this, i.e. selecting the option whose expected utility under its least favorable credences is best.
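A toy sketch of that maximin rule (my own construction, not code from the paper; the acts, states and credences are invented):

```python
# Utilities of each act in two states, and a credal set of admissible
# probability distributions over those states.
acts = {"A": [10.0, 0.0], "B": [4.0, 5.0]}
credal_set = [[0.5, 0.5], [0.2, 0.8]]

def expected_utility(utilities, credence):
    return sum(u * p for u, p in zip(utilities, credence))

def maximin_choice(acts, credal_set):
    """Pick the act whose worst-case expected utility over the credal set is best."""
    return max(acts, key=lambda a: min(expected_utility(acts[a], c) for c in credal_set))

maximin_choice(acts, credal_set)  # -> "B" (worst case 4.5 beats A's worst case 2.0)
```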
There’s actually a thing called the Satisficer’s Curse (pdf) which is even more general:
The Satisficer’s Curse is a systematic overvaluation that occurs when any uncertain prospect is chosen because its estimate exceeds a positive threshold. It is the most general version of the three curses, all of which can be seen as statistical artefacts.
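A quick Monte Carlo sketch of the effect (my own illustration; the threshold and noise parameters are invented):

```python
import random

THRESHOLD = 1.0  # act only when the estimate clears this bar
NOISE_SD = 1.0   # every prospect is truly worth 0, estimated with noise

selected = [e for e in (random.gauss(0.0, NOISE_SD) for _ in range(100_000))
            if e > THRESHOLD]

# Among prospects chosen because they cleared the threshold, the average
# estimate is ~1.5: a systematic overvaluation of the true value, 0.
print(sum(selected) / len(selected))
```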
IIRC, the mechanism has problems with collusion/dissembling. For example, one backer with $46 and 4 backers with $1 each will get significantly better results by splitting their money into 5 contributions of $10 each. This seems like a problem that’s actually moderately likely to arise in practice.
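To illustrate, assuming the matching rule is quadratic funding (that’s my assumption about the mechanism; the arithmetic would change for other rules, but the splitting incentive is similar):

```python
from math import sqrt

def qf_match(contributions):
    """Quadratic funding: match proportional to (sum of sqrt of contributions)^2."""
    return sum(sqrt(c) for c in contributions) ** 2

qf_match([46, 1, 1, 1, 1])      # ~116: one $46 backer and four $1 backers
qf_match([10, 10, 10, 10, 10])  # 250: the same $50 split evenly
```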
It looks like the case you’re making in the “a prize” section is that prizes are more open to “outsiders” than grants which seems generally plausible to me. On the other hand, grants can actually fund the research itself while contestants for a prize need some source of funding. If it’s capital-intensive to mount a serious attempt at the prize, this creates a funding and vetting problem again (contestants will need money to bankroll their attempt).
I’m not convinced that a prize is particularly helpful in this case. I think of prizes as useful for inducing investment in things like public goods where private returns are limited. That doesn’t seem to be the case here; successfully creating “radically better energy generation” seems like it would be wildly remunerative. The promise of vast wealth seems like it ought to be sufficient incentive regardless of a prize.
OTOH, that’s all very first-principles and the history of innovation prizes doesn’t seem to really pay much attention to this line of criticism. Maybe prizes make particular problems more salient, etc.
This is interesting! I think it would also be useful to talk about the standard terminology in the field. Some of those terms are:
Aleatoric and epistemic uncertainty
Decisions under risk vs decisions under ignorance
Reasons I think it’s useful to talk about standard terminology:
Allows you to converse with others and understand their work more easily
Allows readers to follow up and connect with a larger body of work
Communicates to experts that you’ve seriously engaged with the field and understand it
In this particular case, I’d be interested in hearing how your categories map to the standard ones. Or, if you think they don’t, it would be interesting to hear why that is. What are the inadequacies of the standard terms and categories?
This seems very related to social impact bonds: “Social Impact Bonds are a type of bond, but not the most common type. While they operate over a fixed period of time, they do not offer a fixed rate of return. Repayment to investors is contingent upon specified social outcomes being achieved.”
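A toy payoff sketch of that contingent-repayment structure (my own illustration; the rate and the all-or-nothing payout are simplifying assumptions):

```python
def sib_payout(principal, outcome_achieved, rate_of_return=0.05):
    """Investors fund the program up front; repayment (plus a return)
    happens only if the specified social outcome is achieved."""
    return principal * (1 + rate_of_return) if outcome_achieved else 0.0

sib_payout(1_000_000, outcome_achieved=True)   # -> 1_050_000.0
sib_payout(1_000_000, outcome_achieved=False)  # -> 0.0
```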