$80m “other” per year seems very high to me, fwiw.
See also: What’s the best structure for optimal allocation of EA capital?
So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.
Rough estimate: if ~60% of Open Phil grantmaking decisions are attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.
It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).
It’s not clear that this structure will lead to optimal capital allocation.
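The rough estimate above can be sketched as a back-of-envelope calculation. Note that only the ~60% attribution figure is stated in the comment; the Open Phil and total-EA dollar figures below are assumptions inferred from the quoted results, not sourced numbers.

```python
# Back-of-envelope version of the estimate above.
# ASSUMPTIONS (inferred, not sourced): Open Phil 2017 grantmaking and
# total 2017 EA capital allocation in dollars.
open_phil_2017 = 262.3e6   # assumed Open Phil 2017 grantmaking
total_ea_2017 = 333.5e6    # assumed total 2017 EA capital allocation
holden_attribution = 0.60  # stated rough estimate

holden_dollars = holden_attribution * open_phil_2017
holden_fraction = holden_dollars / total_ea_2017
print(f"${holden_dollars / 1e6:.1f}M, {holden_fraction:.1%} of EA capital")
# → $157.4M, 47.2% of EA capital
```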
A good thread (a) summarizing a paper on our current understanding of coronavirus transmission dynamics.
Though perhaps the effect size they found is implausibly large...
Expressed as relative risk, vitamin D reduced the risk of ICU admission 25-fold. Put another way, it eliminated 96% of the risk of ICU admission. Expressed as an odds ratio, which is a less intuitive concept but is often used in statistics because it gives an estimate of the effect of the treatment that would be constant across scenarios with different levels of risk, vitamin D reduced the odds of ICU admission by 98%. Either way, vitamin D practically abolished the need for ICU admission.
Would be great if this replicates in a bigger study. In the meantime, supplementing Vitamin D is cheap & safe.
More Vitamin D discussion:
LessWrong post on Vitamin D supplementation (pre-covid)
SSC post on the Vitamin D literature (from 2014)
Gwern’s comment on that SSC post
In other covid news, we seem to be learning that Vitamin D supplementation is helpful.
A small RCT was recently published: Castillo et al. 2020
From Masterjohn’s commentary (a):
The trial was conducted at the Reina Sofía University Hospital in Córdoba, Spain. The trial included 76 patients with COVID-19 pneumonia. Although this is no longer the standard of care, all patients were treated with hydroxychloroquine and azithromycin and, when needed, a broad-spectrum antibiotic. Admission to the ICU was determined by a multidisciplinary committee consisting of intensive care specialists, pulmonologists, internal medicine specialists, and members of the ethics committee.
The patients were randomly allocated to receive or not receive vitamin D in a 2:1 ratio. This resulted in 50 patients in the vitamin D group and 26 patients in the control group.
From the abstract:
Of 50 patients treated with calcifediol [a form of Vitamin D], one required admission to the ICU (2%), while of 26 untreated patients, 13 required admission (50%) p-value X^2 Fischer test p < 0.001. Univariate Risk Estimate Odds Ratio for ICU in patients with Calcifediol treatment versus without Calcifediol treatment: 0.02 (95%CI 0.002-0.17).
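The figures in the quoted abstract and in Masterjohn's commentary (25-fold risk reduction, 96% of risk eliminated, OR ≈ 0.02) can be reproduced directly from the trial's 2×2 table:

```python
# 2x2 table from Castillo et al. 2020, as quoted above.
treated_icu, treated_total = 1, 50    # calcifediol group
control_icu, control_total = 13, 26   # control group

# Relative risk: (1/50) / (13/26) = 0.02 / 0.5 = 0.04
rr = (treated_icu / treated_total) / (control_icu / control_total)

# Odds ratio: (1/49) / (13/13)
odds_treated = treated_icu / (treated_total - treated_icu)
odds_control = control_icu / (control_total - control_icu)
odds_ratio = odds_treated / odds_control

print(f"RR = {rr:.2f}: {1/rr:.0f}-fold reduction, {1-rr:.0%} of risk eliminated")
print(f"OR = {odds_ratio:.2f}: {1-odds_ratio:.0%} reduction in odds")
# → RR = 0.04: 25-fold reduction, 96% of risk eliminated
# → OR = 0.02: 98% reduction in odds
```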
Thanks for running the conversion into micromorts – that’s helpful.
fwiw back in 2003 the average US commute was ~30 miles/day (I couldn’t find more recent data). So that’s about one micromort/week from commuting.
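A rough check of the figure above, assuming a commonly cited ballpark of ~1 micromort per 230 miles of US driving (this conversion factor is my assumption, not from the comment):

```python
# Sanity check on the "about one micromort/week from commuting" figure.
# ASSUMPTION (mine): ~1 micromort per 230 miles driven in the US.
miles_per_micromort = 230
commute_miles_per_day = 30   # ~2003 US average, per the comment
workdays_per_week = 5

weekly_miles = commute_miles_per_day * workdays_per_week
weekly_micromorts = weekly_miles / miles_per_micromort
print(f"~{weekly_micromorts:.2f} micromorts/week from commuting")
# ≈ 0.65, i.e. on the order of one micromort/week
```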
Here’s some case fatality rate data by age. 0.5% chance of death seems a bit high, though maybe reasonable depending on how you’re incorporating the externality.
Thank you for creating this!
I was surprised by the riskiness estimates in the table on this page: https://www.microcovid.org/paper/2-riskiness
If an event that accrues 1000 μCoV is considered “borderline reckless”, wouldn’t that imply a very low risk tolerance for everyday activities like driving a car? (Because driving is fairly high risk and some of that risk is externalized.)
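The comparison behind this question can be made explicit. The IFR and driving figures below are my assumptions (the ~0.5% chance of death is the figure discussed upthread), so treat this as an illustration, not a microcovid.org result:

```python
# Illustrative comparison: a 1000-microCOVID event vs. routine driving.
# ASSUMPTIONS (mine): ~0.5% chance of death per infection, and
# ~1 micromort per 230 miles driven (30 mi/day, 5 days/week).
ucov = 1000                    # 1000-in-a-million chance of infection
ifr = 0.005                    # assumed infection fatality rate
event_micromorts = ucov * ifr  # ignores the externalized risk

weekly_commute_micromorts = (30 * 5) / 230
weeks_equivalent = event_micromorts / weekly_commute_micromorts
print(f"A 1000-uCoV event is ~{event_micromorts:.1f} micromorts, "
      f"roughly {weeks_equivalent:.0f} weeks of commuting")
```

So under these assumptions a "borderline reckless" event carries personal risk comparable to a couple of months of ordinary commuting, which is the apparent tension the question points at.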
Some of Hanson’s writing has probably been, on net, detrimental to his own influence...
+1 to Parable of the Talents being excellent, especially given EA’s relationship to scrupulosity.
100% agree that cultural health is very important, and that EA is under-investing in it. (The “we don’t want to just give money to our friends” point resonates, and other scrupulosity-related stuff is probably at play here as well.)
Individuals at social events will be trying to one-up each other with their cleverness. I’m sure I’ve contributed to this. I’ve noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all.
Thank you for talking about this!
I’ve noticed similar patterns in my own mind, especially around how I engage with this Forum. (I’ve been stepping back from it more this year because I’ve noticed that a lot of my engagement wasn’t coming from a loving place.)
These dynamics may not make any sense, but there are deep biological & psychological forces giving rise to them. [insert Robin Hanson’s “everything you do is signaling” rant here]
… I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.
Right. Last year concerns about status generated a lot of heat on the Forum (1, 2, 3), but as far as I know nothing has really changed since then, other than perhaps more folks acknowledging that status is a thing.
(Status seems closely related to scrupulosity & to EA being vetting-constrained; I haven’t unpacked this yet.)
I haven’t seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any estimates at all...
Reminds me of the thing where corporations don’t want to implement internal prediction markets because implementing a market isn’t in the self-interest of any individual decision-maker.
Here’s a list I came up with from thinking about this for ~30 minutes:
Better ways of measuring what matters
Better neuroimaging tech to parse out the neurological basis of desirable & undesirable subjective states
Better measures of subjective well-being
Help EAs see more clearly, unpack + resolve personal traumas, and boost their efficacy + motivation
Emotional healing as a prerequisite to rationality
CFAR, OAK, Leverage, etc.
Plus building methods to audit which projects are working, which are failing, which are stagnating
Perhaps also a data collection project that vacuums up outcomes from the object-level projects?
Strengthen EA community ties / our sense of fellowship
More honesty about how weird effective research methods can be
More acknowledgement of the interdependent causal complex that gives rise to good research (e.g. Alex Flint’s introduction here)
More Ben Franklin-esque Juntos
Import more of Silicon Valley’s “pay it forward” culture
Less reputation management / more psychological safety
OAK, Bay Area group houses, EA Hotel
Again, building out (non-dominating) ways to audit & collect data from the object-level projects
Ties into the above but deserves its own bullet given how our collective psychology skews
Compassionate fighting against the thought-pattern Scott Alexander describes here
Make EA sexier
Market to retail donors / the broader public (e.g. Future Perfect, e.g. 80k, e.g. GiveWell running ads on Vox podcasts)
Market to impact investors (e.g. Lionheart) and big philanthropy
Cultivating more “I want to be like that” energy
Seems easy to walk back if it isn’t working because so many interest groups are competing for mindshare
Support EA physical health
Propagate effective treatments for RSI & back problems, as above
Take the mind-body connection seriously
Propagate best practices for nutrition, sleep, exercise; make the case that attending to these is prerequisite to having impact (rather than trading off against having impact)
Advance our frontier of knowledge
e.g. GPI’s research agenda, e.g. the stuff Michael Dickens laid out in his comment
More work on how to solve coordination problems
More work on governance (e.g. Vitalik’s stuff, e.g. the stuff Palladium is exploring)
Fund many moonshots / speculative projects
Fund projects that can be walked back if they aren’t working out (which is most projects, though some tech projects may be hard to reverse)
Worry less about brand management
Your list reminds me of this thread: What EA Forum posts do you want someone to write?
At a glance, Salesforce’s AI Economist seems like an attempted implementation of an IAM.
Thanks for this!
How do you think the potential consistency over time (a) squares with the inconsistency between scales & sub-scales that Kaj pointed out?
Note they are mostly to do with insurance issues.
fwiw I don’t think most of this problem is due to insurance issues, though I agree that the US healthcare system is very weird and falls short in a lot of ways.
This also isn’t specific to mental health: one might retort to donors to AMF that they should be funding improvements in (say) health treatment in general or malaria treatment in particular.
I don’t think this analogy holds up: we’ve eradicated malaria in many developed countries, but we haven’t figured out mental health to the same degree (e.g. 1 in 5 Americans have a mental illness).
I suspect that if there were a really strong ‘pull’ for goods/services to be provided, then we would already have ‘solved’ world poverty, which makes me think distribution is weakly related to innovation.
World poverty has been decreasing a lot since 1990 – some good charts here & here.
M-Pesa and the broad penetration of smartphones are examples of innovations that were quickly distributed. The path from innovation to distribution is probably harder for services.
I usually do link posts to improve the community’s situational awareness.
This is upstream of advocating for specific actions, though it’s definitely part of that causal chain.
I’m not sure what you mean by going from 0 to 1 vs 1 to n. Can you elaborate?
The link in my top-level comment elaborates on the concept.
how much better MH treatment could be than the current best practice
Quick reply: probably a lot better. See ecstatic meditative states, confirmed by fMRI & EEG.
See also Slate Star Codex on the weirdness of Western mental healthcare: 1, 2, 3, 4, 5
how easy it would be to get it there
Quick reply: not sure about how easy it would be to achieve the platonic ideal of mental healthcare – QRI is probably more opinionated about this.
Given how much of an improvement SSRIs and CBT were over the preexisting standard-of-care, and how much of an improvement psychedelic, ketamine, and somatic therapies seem to be over the current standard-of-care, I’d guess that we’re nowhere close to hitting diminishing marginal returns.
how fast this would spread
Quick reply: if globalization continues, the best practices of the globalized society will propagate “naturally” (i.e. as a result of the incentives stakeholders face). From this perspective, we’re more limited by getting the globalized best practices right than we are by distributing our current best practices.
From the part I excerpted:
“You should read it right now (or at least read this Vox interview), if you want to think through the contours of a civilizational Singularity that seems at least as plausible to me as the AI Singularity, but whose fixed date of November 3, 2020 we’re now hurtling toward.”
The EA implications of the 2020 US presidential election seem obvious?
See also Dustin & Cari’s $20m donation to the 2016 Clinton campaign.