In the past 30 years, HIV has gone from being a lethal disease to an increasingly treatable chronic illness.
Yeah, I think these are great ideas! I’d love to see the Forum prize come back; even if there was only a nominal amount of (or no) money attached, I think it would still be motivating; people like winning stuff.
Thanks for writing this! Re this:
Perhaps the most straightforward way you can help is by being more active on the Forum. I often see posts and comments that don’t receive enough upvotes (IMO), so even voting more is useful.
I’ve noticed that comments with more disagree than agree votes often also have more karma votes than karma points—that is, some people are downvoting them. Whether this is good or bad depends on the quality of the comment, but sometimes the comments are productive and helpful, and so the fact that people are downvoting them seems bad for a few reasons: first, it disincentivizes commenting; second, it incentivizes saying things that you think people will agree with, even at the expense of saying what is true. (Of course, it’s good to try to frame things more persuasively when this doesn’t come at the cost of speaking honestly.) The edit here provides an example of how I think this threatens to undermine epistemic and discursive norms on the Forum.
I’m not sure what the solution is here—I’ve suggested this previously, but am not sure it’d be helpful or effective. And it may turn out that this issue—how does the Forum incentivize making and promoting helpful comments that people disagree with?—is relatively intractable, or hard to solve without making sacrifices in other domains. (Another thought that occurred to me is doing what websites like the NYT do: having “NYT recommended comments” and “reader recommended comments,” but I assume the mods don’t want to be in the business of weighing in on the merits of particular comments.)
In developing countries, infectious diseases like visceral gout (kidney failure leading to poor appetite and uric acid build up on organs), coccidiosis (parasitic disease causing diarrhoea and vomiting), and colibacillosis (E. coli infection) are common.
I don’t think visceral gout is an infectious disease. I also don’t think chickens can vomit. Two inaccuracies in this one sentence just made me wonder if there were other inaccuracies in the article as well (though I appreciate how deeply researched this is and how much work went into writing it).
Thanks for your very thoughtful response. I’ll revise my initial comment to correct the point I made about funding; I apologize for portraying this inaccurately.
Your points about the broadening of the research agenda make sense. I think GPI is, in many ways, the academic cornerstone of EA, and it makes sense for GPI’s efforts to map onto the efforts of researchers working at other institutions and in a broader range of fields.
And thanks also for clarifying the purpose of the agenda; I had read it as a document describing GPI’s priorities for itself, but it makes more sense to read it as a statement of priorities for the field of Global Priorities Research writ large. (I wonder if, in future iterations of the document—or even just on the landing page—it might be helpful to clarify this latter point, because the documents themselves read to me as more internal facing, e.g., “This document outlines some of the core research priorities for the economics team at GPI.” Outside researchers not affiliated with GPI might, perhaps, be more inclined to engage with these documents if they were more explicitly laying out a research agenda for researchers in philosophy, economics, and psychology aiming to do impactful research.)
Thanks for sharing this! I think these kinds of documents are super useful, including for (e.g.) graduate students not affiliated with GPI who are looking for impactful projects to focus their dissertations on.
One thing I am struck by in the new agenda is that the scope seems substantially broader than it did in prior iterations of this document; e.g., the addition of psychology and of projects related to AI/philosophy of mind in the philosophy agenda. (This is perhaps somewhat offset by what seems to be a shift away from general cause prioritization research.)
I am wondering how to reconcile this apparent broadening of mission with what seems to be a decreasing budget (though maybe I am missing something)
—it looks like OP granted ~$3 million to GPI approximately every six months between August 2022 and October 2023, but there are no OP grants documented in the past year; there was also no Global Priorities Fellowship this year, and my impression is that post-doc hiring is on hold. Am I right to view the new research agenda as a broadening of GPI’s scope, and could you shed some light on the feasibility of this in light of what (at least at first glance) looks like a more constrained funding environment?
EDIT: Eva, who currently runs GPI, notes that my comment paints a misleading picture of the funding environment. While she writes that “the funding environment is not as free as it was previously,” the evidence I cite doesn’t really bolster this claim, for reasons she elaborates on. I apologize for this.
No shade to the mods, but I’m just kind of bearish on mods’ ability to fairly determine what issues are “difficult to discuss rationally,” just because I think this is really hard and inevitably going to be subject to bias. (The lack of moderation around the Nonlinear posts, Manifest posts, Time article on sexual harassment, and so on makes me think this standard is hard to enforce consistently.) Accordingly, I would favor relying on community voting to determine what posts/comments are valuable and constructive, except in rare cases. (Obviously, this isn’t a perfect solution either, but it at least moves away from the arbitrariness of the “difficult to discuss rationally” standard.)
Yeah, just to be clear, I am not arguing that the “topics that are difficult to discuss rationally” standard should be applied to posts about community events, but instead that there shouldn’t be a carveout for political issues specifically. I don’t think political issues are harder to discuss rationally or less important.
This is weird to me. There are so many instances of posts on this forum having a “strong polarizing effect… [consuming] a lot of the community’s attention, and [leading] to emotionally charged arguments.” The several posts about Nonlinear last year strike me as a glaring example of this.
US presidential candidates’ positions on EA issues are more important to EA—and our ability to make progress on these issues—than niche interpersonal disputes affecting a handful of people. In short, it seems like posts about politics are ostensibly being held to a higher standard than other posts. I do not think this double standard is conducive to healthy discourse or better positions the EA community to achieve its goals.
Two separate points:
I am one of those people who, having seen the Twitter post with the letter, scanned the Forum home page for the letter and didn’t see it! And regardless of what you think of the letter, I think the discussion in the comments here is useful; I am glad I did not miss it. So I agree with what others have said—there are real downsides to downvoting things just because you disagree with them; I would encourage people not to do this. (And if you downvoted this because you don’t think a Stanford professor making a sincere effort to engage with EA ideas is valuable/warrants engagement then… yeah, I just disagree. But I would be eager to hear downvoters’ best defense of doing this.)
Regarding the letter itself: one thing I am struck by is the number of claims in this letter that go without citations. This is frustrating to me, especially given the letter repeatedly appeals to academic authority. As just one example, claims like “It has lots of premises that GiveWell says depend on guesswork, and it runs against some of the literature in fields like development economics” warrant a citation—what literature in development economics?
I think there’s a lot of truth to this; the part about sanctifying criticism and critical gadflies especially resonated with me. I think it is rational to ~ignore a fair bit of criticism, especially online criticism, though this is easier said than done.
Two pieces of advice I encountered recently that I’m trying to implement more in my life (both a bit trite, but perhaps helpful as heuristics):
don’t take criticism from someone you wouldn’t take advice from
when you write/post/say something, have a panel of people in mind whose opinions you most care about/who you are speaking to; do not try to appease/appeal to/convince everyone
Despite working in global health myself, I tend to moderately favor devoting additional funding to animal welfare vs. global health. There are two main reasons for this:
Neglectedness: global health receives vastly more funding than animal welfare.
Importance: The level of suffering and cruelty that we inflict on non-human animals is simply unfathomable.
I think the countervailing reason to instead fund global health is:
Tractability: my sense is that, due in part to the fact that far fewer resources have gone into investigating animal welfare interventions and policy initiatives, it could be difficult to spend $100m in highly impactful ways. (Whereas in global health, there would be obviously good ways to use this funding.) That said, this perhaps just suggests that a substantial portion of additional funding should go towards research (e.g., creating fellowships to incentivize graduate students to work on animal welfare).
It’s super cool to see USAID and OP partnering very publicly on such an important project. In addition to the obvious good this will do via the project’s direct impact on lead exposure, I’m glad to see such a powerful and reputable government agency implicitly endorsing OP as an organization. I hope this will help legitimize some of OP’s other important work, and pave the way for similar partnerships in other arenas.
Looking forward to this! I hope there will also be some “lessons learned”—it seems like Leverage included many EA-oriented people who prided themselves on their altruistic tendencies, rational thinking, willingness to question/subvert certain social norms, and so on. I’d be curious to hear involved parties’ reflections on how similarly well-motivated people can avoid inadvertently veering off the rails in their pursuit of ambitious/weird projects.
Thanks; this is helpful, and I appreciate your candor. I’m not questioning whether 80k’s advising overall is valuable, and am thus willing to grant stuff like “most of the shifts people make as a result of 80k advising are +EV”. My reservations mainly pertain to the following:
does this grant effectively incentivize referrals?
are those referrals of high quality?
contingent on 80k agreeing to meet with a referred party, is that party liable to make career shifts based on the advising they receive?
(to a lesser extent) will the recipients of the career grants use the money well?
I get that it’s easy to be critical of (1) post-hoc, but I think we should subject the general model of “give EAs a lot of money to do things that are easy and that have very uncertain and difficult to quantify value” to a high degree of scrutiny because (as best I can tell based on a small n) this: (a) hasn’t tended to work that well, (b) is self-serving, and (c) often seems to be held to a lower evidentiary standard than other kinds of interventions EAs fund. (A countervailing piece of evidence is that OP does this for hiring referrals, and they presumably do have good evidence re: efficacy, although the benefits there also seem much clearer for the reasons you mention.)
Regarding (2), my worry is that the people who get referred as a result of this program will be importantly different from the general population of people who receive 80k career advising. This is because I suspect highly engaged EAs will have already applied for or received 80k advising. Conversely, people who are not familiar enough with EA to have previously heard of 80k advising—which I think is a low bar, given many people learn about EA via 80k—probably won’t have successful applications. Thus, my model of the median successful referral is “someone who has heard of 80k but not previously opted to pursue 80k advising.” Which brings me to (3): by virtue of these people having not previously opted into a free service, I suspect that they’re less likely to benefit from it. In other words, I suspect that people referred as a result of this program will be less likely (or less able) to make changes as a result of their advising meetings. (Or at least this was the conclusion I came to in deciding who to send my referral links to.)
Regarding (4), I haven’t seen evidence to support the claim that “very engaged and agentic EAs… will use $5,000 very well to advance their careers and create good down the line,” and while this seems prima facie plausible, I don’t think that is the standard of evidence we should apply to this—or any—intervention. (This is a less important point, because if this program generated tons of great referrals, it wouldn’t really matter how the $50k was spent.)
I am a big fan of 80k, and have found talking to 80k advisors helpful. But this program feels reminiscent of the excesses of pre-FTX-implosion EA, in that this is a lot of money to be giving people to do something that is not very hard and (in my view) of questionable value, though maybe I’m underestimating the efficacy of 80k’s filtering process, how much these conversations will shift the career paths of the referred parties, how well people will use the career grants, or something else. I’m sure a lot of thought went into doing this, so I’d be curious to see the BOTEC that led to these career grants.
Some feedback on this episode: The part of the interview I listened to was really cool and interesting, but this episode is also 3 hours 48 minutes, and it’s pretty hard for me to commit that much attention/time to listening to an episode outside of my area. I know that this is kind of 80k’s thing, but I’m wondering if—for episodes of this length—it might be worth separately releasing a ~60-90 minute version of highlights. (I also felt that even in the portion I listened to, there could’ve been edits—e.g., the question that went unanswered about the number of juvenile insects.) Overall, though, really fantastic episode—thanks for doing this interview!
Yeah, to be clear, I think inappropriate interpersonal behavior can absolutely warrant banning people from attending events, and this whole situation has given me more respect for how CEA strikes this balance with respect to EAGs.
I was mainly responding to the point that “we might come up with ideas that let each side get more of what they want at a smaller cost to what the other side wants,” by suggesting that, at a minimum, the organizers could’ve done things that would’ve involved ~no costs.
I apologize if I did not characterize the fears correctly
I think you didn’t. My fear isn’t, first and foremost, about some theoretical future backsliding, creating safe spaces, or protecting reputations (although given the TESCREAL discourse, I think these are issues). My fear is:
Multiple people at Manifest witnessed and/or had racist encounters.
Racism has been, and continues to be, very insidious and very harmful.
EA is meant to be a force for good in the world; even more than that, EA aims to benefit others as much as possible.
So the bar for EA needs to be a lot higher than “only some of our ‘special guests’ say racist stuff on a regular basis” and “not everyone experienced racism at our event.”
I am bolstered by the fact that Manifest is not Rationalism and Rationalism is not EA. But I am frustrated that articulating the above position is seen as even remotely in the realm of “pushing society in a direction that leads to things like… the thought police from 1984.” This strikes me as uncharitable pearl-clutching, given that organizers have an easy, non-speech-infringing way of reducing the likelihood that their events elicit and incite racism: not listing Hanania, who wasn’t even a speaker, as a special guest on their website, while still allowing him to attend if he so chooses.
Without being able to comment on your specific situation, I would strongly discourage almost anyone who wants to have a highly impactful career from dropping out of college (assuming you don’t have an excellent outside option).
There is sometimes a tendency within EA and adjacent communities to critique the value of formal education, or to at least suggest that most of the value of a college education comes via its signaling power. I think this is mistaken, but I also suspect the signaling power of a college degree may increase—rather than decrease—as AI becomes more capable, since it may become harder to use things like work tests to assess differences in applicants’ abilities (because the floor will be higher).
This isn’t to dismiss your concerns about the relevance of the skills you will cultivate in college to a world dominated by AI; as someone who has spent the last several years doing a PhD that I suspect will soon be able to be done by AI, I sympathize. Rather, a few quick thoughts:
Reading the new 80k career guide, which touches on this to some extent (and seeking 80k advising, as I suspect they are fielding these concerns a lot).
Identifying skills at the intersection of your interests, abilities, and things that seem harder for AI to replace. For instance, if you were considering medicine, it might make more sense to pursue surgery rather than radiology.
Taking classes where professors are explicitly thinking about and engaging with these concerns, and thoughtfully designing syllabi accordingly.