I was inspired by your post, and I wrote a post about one way I think grant-making could be less centralized and draw more on expertise. One commenter told me grant-making already makes use of more expert peer reviewers than I thought, but it sounds like there is much more room to move in that direction if grant-makers decide it is helpful.
At a guess, I would say review might be more worthwhile for topics where the work builds on a well-developed, pre-existing body of research. So, funding a graduate to take time to learn about AI Safety full-time as a bridge to developing a project probably wouldn’t benefit from review, but an application to develop a very specific project based on a specific idea probably would.
I don’t have a sense of how often five-to-low-six-figure grants involve very specific ideas. If you told me they usually don’t, I would definitely update against thinking peer review would be useful in those circumstances.
Thank you, David! From what you’ve said here it seems clear my post was missing critical information.
I’m not sure this post literally could have been much better researched, conditional on me writing it. I don’t feel entitled to contact funders and ask them about their process (perhaps I should feel free to? I’m not sure). The EA Funds website briefly mentions that they “engage expert-led teams of subject matter experts” in their decision-making, and that’s something I should have researched and mentioned first. But it gives away so little information that I learned more from your reply here than I would have from reading it.
Perhaps other funders describe their process in more detail, I don’t know, and if so, I concede that’s something I could have identified before writing the post.
So the only other way I can see this post could have had more information is that I could have asked more widely among people more familiar with the process than I am. But I’m not personally acquainted with anyone who I knew would know more about it.
Or, finally, I could have left it to someone else to write, but then, if they didn’t, I wouldn’t have learned from you that grant-makers already engage in expert consultation.
It probably is obvious to you, considering your experience. But the way grant-making was described in the Doing Good Better post last week—something to the effect of “it helps to move to the Bay and make friends with grant-makers”—suggests to me the process is pretty opaque to a lot of other people, not just to me. So I suppose I’m glad I opened a conversation, even if I don’t have a lot of insight to share on the process.
Edit: There is a point I’m trying to make, other than defending my own process, which is that the process in general is fairly opaque, and if the information you’re talking about is publicly available, I’m not aware of it! And that validates something of the transparency critiques from the DGB post last week.
Agreed, but 5% is much lower than “certain or close to certain”, which is the starting point Nuno Sempere said he was sceptical of.
I don’t know that anyone thinks doom is “certain or close to certain”, though the April 1 post could be read that way. 5% is also much lower than, say, 50%, which seems to be a somewhat more common belief.
I’m chalking this one up in favor of playing the long game—as Holden Karnofsky likes to advise, prioritizing building career capital because it can create greater impact in the future. Clearly, though, Prof. Karlan has been making a substantial impact in the area for at least a couple of decades.
Doing EA Better: grant-makers should consider grant app peer review along the public-sector model
I enjoyed this read, and agree with the vibes of it, though I am not sure what specifically is being recommended. I do think it would be good if EA avoids alienating people unnecessarily. That has two implications:
(a) the movement should avoid identifying with the political left any more than is strictly entailed by the goals of EA, because doing so will alienate people on the political right who could contribute;
(b) the movement should be more conservative, in the non-political sense of holding new ideas lightly and giving a lot of weight to conventional ideas. This could be framed in “moral parliament” terms: you give a lot of weight in decision-making to the question, “what would conventional, common-sense morality say about this decision/cause area?”
I think EA has generally achieved (a), but I probably have blind spots there so could be wrong.
I don’t have a good sense of (b); it’s hard to say. Will MacAskill, for example, has said he waited several years after coming to believe that longtermism should be the overriding concern for EAs before he started publicly advocating for that stance. Plenty of EA funding still goes to global health and development, even though philosophers like MacAskill and Hilary Greaves seem to think longtermism is the overriding concern.
Presumably that comes from a reluctance to go all-in on EA thinking and a desire for some sort of compromise with more conventional norms. I don’t have a good sense of whether we give conventional morality too much weight, or not enough.
My impression is that many people are subconsciously or implicitly aware of this dynamic, and that contributes to the high level of interest in topics or decisions that are likely to set the tone for the future. I think many people are acting in ways that they hope would set the standard precisely because they want the movement to be defined in the ways they act. I don’t mean to single out any particular point of view as being motivated in this way, because in my experience most of the views being expressed here are sincerely held and principled.
Nevertheless I applaud your reminder, because I think becoming consciously aware of what is going on could cause people to be more conscientious about defining those norms. It has shades of Nietzsche’s “Eternal Recurrence”, which I quite enjoy.
Awesome resource, thanks 🙏!
I think everything after the words “Christian Blind Mission” was unnecessary and if he had ended the statement there, it would have been less provocative. I don’t think that would have been the best possible choice but it would have been better than what we got.
That’s something I couldn’t have predicted before he posted the thing, but I’m sure a more skilled communicator could have.
The reason the words afterwards were harmful is that, irrespective of their truth value, they raise an issue that is potentially harmful just to get into. By talking about it in a particular context where it isn’t necessary, you give it airtime you don’t need to. I think that’s the dimension that is missed here.
Words aren’t just truth claims; they are also speech acts that draw readers’ attention to certain topics. The words “smoking is bad for your lungs” are true and even helpful in many contexts, but they may also remind a smoker about smoking, with the unintended consequence of prompting them to smoke. There may likewise be unintended negative consequences from jumping into a discussion about what you do and don’t believe about eugenics or genetics and IQ.
I think starting the post with “Do Better” is a kind of rhetorical flourish that probably erodes goodwill between you and the people you want to convince, while giving no reasons for people to agree with what you are saying. It’s a common turn of phrase, and when I see it I often think it’s an effort to shame people into agreeing, to assert moral superiority without actually providing an argument for it. In your post you do make a number of arguments which I think are pretty good. I don’t think they need to be embellished with some low-key shaming in the title.
Edit: I ought to explain more clearly why I made the claim above. The exhortation “Do Better” carries an unambiguous implication that the recipient isn’t putting in as much effort as they ought to. This is probably true in many cases, but there are probably people with many different starting positions in this discussion who are doing as well as they can to understand how they should respond. So “do better”, as an exhortation to everyone who disagrees with the particular claims one is making, seems quite blunt and carries inaccurate implications. People who are already doing the best they can might conclude the only way they can “do better” is to change their attitude or position without really understanding why they should.
I don’t know—there are probably some issues where that’s fine, given the stakes, but it is poor epistemic practice because it has rhetorical persuasive power that is independent of the truth or clarity of the claims being made, and possibly very dependent on adherence to social norms.
[Question] How can I access information about FTX Fund grants that were made before the fund was wound up?
Thank you! That’s a reasonably significant update for me.
I do take a non-person-affecting view, but with a sort of deontological barrier around doing things that could cause substantial harm to a large number of currently existing people, particularly in areas I regard as “human rights”. This is not the only area where I’d endorse this sort of deontological barrier: I also endorse one against committing dishonesty or fraud in the name of benefiting the long-term future, and recent events have strengthened my view on that.
There are a couple of justifications for this. First, we should have a certain amount of epistemic humility: at some point it’s just really hard to understand what the effects of current acts on the long-term future will be, and we had better be really sure before causing present harm for the sake of far-future good. I have a certain amount of loss aversion when it comes to working out what sorts of acts I should do. Second, we might want certain values, like honesty and respect for human rights, to be sustained into the long-term future, because sustaining those values will be good for future society. That might be a reason that acting in the name of honesty or human rights, even when it seems on balance to have negative direct effects on the long-term future, could actually end up having long-term positive effects.
On this particular issue, I’m at least moderately pro-natalist. I think the vast majority of the good we can do in the present to improve the long-term future is simply to avoid existential risk, so I don’t hugely emphasize pro-natalism. One reason is that it’s not clear whether a larger population now will lead to a larger population in 200 years, since future generations might compensate for higher fertility now with lower fertility in their time, as long as there are resource constraints given current technologies. But on balance I do support pro-natalism, not just because of the long-term future, but also because more present people enjoying the present is good. Having said that, I think the restriction on women’s autonomy is a high cost to pay, and we might be able to get equivalent boosts in fertility through other policies: letting children vote, providing much more financial support for parents at low incomes, and more favorable tax treatment for parents at all income levels, for instance by allowing not just spouses but also parents and their children to file jointly.
I worry I wasn’t actually fair enough to the “12 week fetal pain” theory—the more I think about the paper I read yesterday, the more I actually prefer it, all things considered, to its alternative, and I’ll update my post on that basis.
The cognitive and experiential capacities of an organism are important in determining how we treat them. So any consideration of fetuses as moral patients needs to consider their capacities. 38-week-old fetuses have a very different set of cognitive and experiential capacities compared to 24-week-old fetuses, and even more so compared to 14-week-old fetuses. Because 90% of abortions in the US occur prior to 14 weeks, and 99% before 22 weeks, the relevant questions are probably about the capacity for experience in that window. At least prior to 12 weeks it seems unlikely fetuses can consciously experience pain, and unlikely they experience anything at all (EDIT: I’ve updated against this somewhat—see Callum’s comment below). As a result, and considering the negative consequences for women’s autonomy of cutting abortion funding, I caution against recommendations that involve cutting any funding for abortion prior to that time.
I worry that some folks will get a little queasy about me launching into comparisons with animal suffering, but I think that is unavoidable, and ultimately justifiable. When we try to determine how we should treat pigs, chickens, and shrimp, we look to their capacity to suffer and their overall capacity to experience things. This matters because if we want to know whether farming animals is net positive, we need to know whether the positive experiences they have, and that we have as a result of farming them, outweigh the suffering they experience, compared to the counterfactual of not having existed. If a particular animal doesn’t have any capacity for suffering or any kind of conscious experience, then arguably, as an individual creature, it is no more a moral patient than a rock or stone. That raises the question: do fetuses have the capacity to consciously suffer or experience, and if so, what sort of experience do they have?
Conscious experience is not the only consideration when regarding a fetus as a moral patient. Other reasons, explored in the broader abortion debates of the last few decades, concern various social factors that lead us to assign fetuses more or less personhood, and I acknowledge those. But in this comment I’ll focus on the issue of conscious experience.
The conventional medical advice has been that “the cortex and intact thalamocortical tracts are necessary for pain experience”, and because these don’t develop until after 24 weeks of pregnancy, we can rule out any kind of fetal pain until that point. This is based on a theory of human conscious pain which posits that something about the neocortex gives humans many or even all of our conscious experiences. Whether this is true is not clearly known, but it would seem to follow from leading theories of human consciousness. The evidence that the experience of pain in particular arises from the neocortex (call it the “thalamo-cortical pain theory”) is somewhat stronger. The neocortex contains the somatosensory cortices and the anterior cingulate cortex, which, along with subcortical structures like the amygdala, have at least until recently been considered inseparable from the conscious experience of pain in normal humans. Because those features don’t develop until some time after 24 weeks, it seems plausible fetuses don’t have conscious experience of pain until after that time.
On the other hand, there’s mounting evidence that animals without these advanced cognitive structures also have some experience of pain. I am sceptical this implies humans have experiences of conscious pain before they develop a neocortex, because those animals could have evolved other features giving them the experience of pain that humans do not have.
More important, the thalamo-cortical pain theory I described in the previous paragraph is under question, specifically with respect to fetuses, raising the possibility that fetuses before 24 weeks could feel some sort of conscious pain, though it would almost certainly not reach the full expression and intensity that fully formed humans can experience. This is based on evidence in adult human patients, including patients with disabled cortices and patients who were born insensitive to pain. This theory would place the development of fetal pain closer to 12 weeks, based on the fact that the “first projections from the thalamus into the cortical subplate” occur around that time. [This paragraph edited slightly to update in favor of the 12-week theory]
Overall I am somewhat convinced by the recent work that pain processing doesn’t require the neocortex, but less sure that conscious experience of pain can be had without it. However, at least in the United States, over 90% of abortions occur before 13 weeks, and less than 1% occur after 21 weeks. Prior to 13 weeks, there doesn’t seem to be a viable theory of conscious experience of fetal pain. I should acknowledge that in this context we’re not only concerned about pain—we’re concerned about personhood more broadly. But one can infer from the debate about pain that conscious experiences in general seem unlikely prior to 12 weeks at the very least. Consequently, any concern about the moral patienthood of fetuses at the time when most are aborted (spontaneously or otherwise) should be an order of magnitude or two less than the concern we might have about a fetus at the 38-week point, which Peter Singer points out seems to be minimally distinguishable from an infant child.
Finally, I can’t help but spell out some of my own motivation for this comment. Although it is very clear that the original post advocates only for voluntary abortion reduction, it does recommend defunding abortion services for which funding may previously have been provided. I have less of a unique contribution to make in this area than in the neuroscience, so I won’t say too much about it. But it does seem to me that even defunding services could have tangible negative consequences for pregnant women’s autonomy and control over their pregnancies. That probably motivates my caution against such a recommendation, and it needs to be considered alongside the discussion of fetal personhood and, within that discussion, fetal consciousness, which is the primary topic of this post.
nice work and what a great line-up. tuning in!
Can you elaborate a little on StrongMinds claiming they almost literally solve depression? That would be a pretty strong claim, considering how treatment-resistant depression can be.
I suppose I would be open to the idea that in Western countries we are treating the long tail of very treatment-resistant depression, whereas in developing countries there are many people who get very, very little care of any kind, and just a bit of therapy makes a big difference.
Have a listen to Sam Harris’s most recent podcast, and Matt Yglesias’s Substack post about the FTX collapse. I think both of those takes expressed some level of distance and discomfort with EA as a whole. Sam Harris called it “cultish”, and Matt Yglesias was also uneasy about various aspects of the community.
But both of them said that the appeal of EA’s original motivating ideal remains very clear and strong, no matter how badly the community itself has generated problems. Specifically, trying to figure out how to do good more effectively using reason and evidence is a robustly good goal to aim for. I think the appeal of that goal is very wide, and will remain regardless of the reputation or health of the EA community.
I suspect the health of the community is really up to how we all respond to the present crisis in the coming months. If the community responds well, with humility and a readiness to learn lessons, adjust, and pivot appropriately, that will increase the odds we’re able to overcome the current crisis. If we don’t learn from it, the odds of survival are lower, and that would probably be for the best. I have an impulse to link to one of my favorite takes on what specifically the community needs to do to improve. But I think it’s best I refrain, and focus on the meta-claim that it is critically important we learn and be willing to make very radical pivots if we are to (a) survive and (b) deserve to survive.
To get a bit more concrete, people will be a bit more wary of the community, but I think that’s probably healthy. Frankly, I think outsiders are blaming SBF specifically, and crypto generally, more than they are currently blaming EA. If that continues, I don’t think people will be unduly wary, and thus, if EA can take genuine steps to appropriately course-correct, it will retain its ability to attract new people to its causes.
I am optimistic the community can learn. Several of EA’s most prominent leaders pivoted over the last decade toward longtermist cause areas. We’ve gone from (in the early 2010s) focusing on funding charities that help people living now, to (in the mid 2010s) framing EA as roughly equally divided between x-risk, global poverty, animal welfare, and meta-EA, to (from the late 2010s to now) developing a primary focus on x-risk and longtermism. The evolution of EA over that time involved substantial changes at each step. I am not saying whether I think they should pivot away from longtermism specifically as a reaction to the current crisis. But seeing the community and its leaders pivot over the last decade or so gives me some hope they are able to do it again.
In summary, I think the original ideals EA identified are highly attractive and likely to remain strong. The present cause areas and many of the more established institutions are also likely to remain funded and to make solid progress. I think the community does need to learn from the current crisis; if it does not, it might not recover, and might not deserve to. But the community has changed its focus before, and that gives me hope we can do it again.
I hope I’m not tempting fate here, but I’m quite surprised I haven’t already seen the EA Forum quoted “out there” during the present moment. I can only imagine outsiders have juicier things to focus on than this forum, for now. I suppose once they tire of FTX/Alameda leaders’ blogs and other sources they might wander over here for some dirt.
This is not the first time I’ve heard this sentiment, and I don’t really understand it. If SBF had planned more carefully, or been less risk-neutral, things could have been better. But it sounds like you think other people in EA should somehow have reduced EA’s exposure to FTX. In hindsight that would have been good, for deontological reasons, but I don’t see how it would have preserved the amount of x-risk research EA can do. If EA hadn’t taken FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.