Strong downvoted for hostile tone, e.g. “In a similar vein to my previous comment, I’d be curious where you’re getting your data from, and would love if you could publicise the survey you must undoubtedly have done to come to these conclusions, since you are not yourself at Atlas fellow!”
(Applications Open!) UChicago XLab Summer Research Fellowship 2024
Applications open! UChicago Existential Risk Laboratory’s 2023 Summer Research Fellowship
Strongly agree here. Simply engaging with the community seems far better than silence. I think the object-level details of FTX are less important than making the community not feel like it has been thrown to the wolves.
I appreciate this post, but pretty strongly disagree. The EA I’ve experienced seems to be at most a loose but mutually supportive coalition motivated by trying to most effectively do good in the world. It seems pretty far from being a monolith or from having unaccountable leaders setting some agenda.
While there are certainly things I don’t love, such as treating EAGs as mostly opportunities to hang out and things like MacAskill’s seemingly very expensive and opaque book press tour, your recommendations seem like they would mostly hinder efforts to address the causes the community has identified as particularly important to work on.
For instance, they’d dramatically increase the transaction costs for advocacy efforts (e.g. most college groups) aimed at introducing people to these issues and giving them an opportunity to consider working on solving them. One of the benefits of EA groups is that they allow a critical mass of people to become involved where there might not be enough interest to sustain clubs for individual causes (and, again, they avoid the costs of people needing to organize multiple groups). In effect, this would mostly just cede ground and attention to things like consulting, finance, and tech firms.
Similarly, we shouldn’t discount the (imo enormous) value of having people (often very senior people) willing to offer substantial help/advice on projects they aren’t involved with simply because the other person/group is part of the same community and legibly motivated for similar reasons. I can also see ways in which a loss of community would lead to reduced cooperation between orgs and competition over resources. It seems important to note too that being part of a cause-neutral community makes people more able to change priorities when new evidence/arguments emerge (as the EA community has done several times since I’ve been involved).
I think proposals of this kind really ought to be grounded in saying how the arguments the community has endorsed for some particular strategy are flawed, e.g. showing how community building is not in fact impactful. We generally seem to be over-updating on a single failure (even allowing that the failure was particularly harmful).
Note: wrote this fairly quickly, so it’s probably not the most organized collection of thoughts.
Writing since I haven’t seen this mentioned elsewhere, but it seems like it might be a good idea to do (and announce that you are doing) a rapid evaluation of grantee organizations that received a majority of their funding from FF, in order to provide emergency funding to the most promising and avoid the loss of institutions. If this is something OP plans on doing, it should do so quickly and unambiguously.
I’m imagining something like: a potentially important org has lost its funding, and employees will soon begin looking for and accepting other opportunities. If they do leave, it could be very difficult to get them back or find suitable replacements. If whole organizations cease operations, it could set back work in their areas substantially: momentum will be lost, future organizations will have to deal with answering why this similar org didn’t work out, the ability to make credible commitments in the org’s given field will be at risk if they suddenly drop projects, and institutional knowledge will be lost. This would be similar to how other countries supplemented employee salaries during the pandemic instead of taking the US’s unemployment insurance approach.
Also, for disclosure: I haven’t received any FF funding, nor do I work at an org that did.
I ran the UChicago x-risk fellowship this summer (we’d already started by the time I learned there was a joint ERI survey, so we decided to stick with our original survey form).
I just wanted to note that, for the fellows who weren’t previously aware of x-risk, we observed a dramatic increase in how important they thought x-risk work was and in their reported familiarity with x-risk. Many also indicated in their written responses an intention to work on x-risk-related topics in the future, where they previously hadn’t when answering the same question. We exclusively advertised to UChicago students for this iteration, and about 2⁄3 of our fellows were new to EA/x-risk.
A few questions mostly not relevant to me:
i) If I imagine I’m still leading a student group, a few things come to mind:
What does a full-time equivalent mean? For instance, I’m skeptical that most undergrads are capable of putting in a full 40 hours/week, but (a) the part-time organizer option is not emphasized other than in passing to say compensation would be pro-rated, and (b) the part-time hours they do put in carry a higher opportunity cost than they would for an organizer not actively taking courses.
How should I know if I’m a good fit for this fellowship or whether I should apply? For continuity reasons, leadership is often passed to rising second years when the previous leadership graduates. But it’s a weird position for an 18- or 19-year-old who just took on this leadership role, and who is of the mindset that it seems presumptuous to ask for compensation (if not somehow wrong to be paid for an altruistic effort), to apply for payment. This seems especially true for the people for whom funding would actually make a difference in how much effort they’re able to put in. It might be worth adding a section describing what a good fit looks like or some sort of referral system so that others can suggest good grantees to you.
How many organizers per group are you willing to fund? E.g. if a group has three main leaders/organizers, presumably splitting responsibilities because it’s a lot of work for just one person, how should one of them navigate applying for funding when the others aren’t, or when one of them has already been funded? (This goes back to the presumptuousness point above.)
ii) For the Century Fellowship:
If someone is considering applying, it seems likely that they’ve already made fairly substantial progress on their project. How should they navigate whether to apply for the fellowship or to include personal compensation in the budget when seeking funding for their project as a whole?
A referral system seems like a good idea here too for similar reasons.
Funding private versions of Longtermist Political Institutions to lay groundwork for government versions
Some of the seemingly most promising and tractable ways to reduce short-termist incentives for legislators are Posterity Impact Assessments (PIAs) and Futures Assemblies (see Tyler John’s work). But it isn’t clear just how PIAs would actually work, e.g. what would qualify as an appropriate triggering mechanism, what evaluative approaches would be employed to judge policies, and how far into the future policies can be evaluated. It seems like it would be relatively inexpensive to fund an organization to do PIAs in order to build a framework which a potential in-government research institute could adopt instead of having to start from scratch. The precedent set by this organization also seems like it would contribute to reducing the difficulty of advocating for longtermist agencies/research institutes within government.
Similarly, it would be reasonably affordable to run a trial Futures Assembly, wherein a representative sample of a country’s population is convened to deliberate over how and to what extent policymakers should consider the interests of future persons/generations. This would provide a precedent for potential government-funded versions as well as a democratically legitimate advocate for longtermist policy decisions. Basically, EAs could lay the groundwork for some of the most promising/feasible longtermist political institutions without first needing to get legislation passed.
Strong-upvoted. I made a graph with it for a paper I intend to use for my summer research project, and I quickly found other papers I was unaware of which I expect will be helpful.
I thought Open Phil’s Criminal Justice Reform efforts would include work in this area, and it seems they’ve done some research into this. Some links from a quick Google search for interested persons:
https://www.openphilanthropy.org/research/cause-reports/cannabis-policy
That 11,000 children died yesterday, will die today, and are going to die tomorrow from preventable causes. (I’m not sure if that number is correct, but it’s the one that comes to mind most readily.)
TLDR: Very helpful post. Do you have any rough thoughts on how someone would pursue moral weighting research?
Wanted to say, first of all, that I found this post really helpful in crystallizing some thoughts I’ve had for a while. I’ve spent about a year researching population axiologies (admittedly at the undergrad level) and have concluded that something like a critical-level utilitarian view is close enough to a correct view that there’s not much left to say. So, in trying to figure out where to go from there (and especially whether to pursue a career in philosophy), I’ve been trying to think of just what sort of questions would make a substantive difference in how we ought to approach EA goals. I couldn’t think of anything, but it still seemed like there was some gap between the plausible arguments that have been presented so far and how to actually go about accomplishing those goals. I think you’ve clarified here, with “moral weighting,” the gap that was bothering me. It seems similar to the “neutrality intuition” Broome talks about, where we don’t want to (but basically have to) say there’s a discrete threshold where a life goes from worth living to not.
At any rate, moral weighting is the sort of work I hope to be able to contribute to. Are there any other articles/papers/posts you think would be relevant to the topic? Do you have any rough thoughts on the sort of considerations that would be operative here? Do any particular fields seem closest to you? I had been considering wellbeing metrics like the QALY or DALY in public health (a la the work Derek Foster posted a little while ago) to be a promising direction.
Thanks!
Call for Early Career Speakers
I’m mostly using “person” to be a stand-in for that thing in virtue of which something has rights or whatever. So if preference satisfaction turns out to be the person-making feature, then having the ability to have preferences satisfied is just what it is to be a person. In which case, not appropriately considering such a trait in non-humans would be prima facie wrong (and possibly arbitrary).
I’m familiar with the general argument, but I find it persuasive in the other direction. That is, I find it plausible that there are human animals for whom personhood fails to pertain, so ~(2). [Disclaimer: I’m not making any further claim to know what sort of humans those might be nor even that coming to know the fact of the matter in a given case is within our powers.] I don’t know if consciousness is the right feature, but I worry that my intuitive judgements on these sorts of features are ad hoc (and will just pick out whatever group I already think qualifies).
Just to respond to the conclusion of that article, it doesn’t seem at all obvious that humans should be treated equally despite having different abilities, at least in contexts where those abilities are relevant. They also seem to equivocate a bit on treatment/respect. I can hold that persons should be treated with equal respect or equitably (or whatever) without holding that they should be treated equally. It also seems to me like personhood would be a binary feature. I don’t think it makes sense to say that someone is more of a person than another and is thus deserving of more person privileges.
Yes! It’s much more conducive to conversation now, and I’ve changed my vote accordingly.
To actually engage with your question: I personally find (1) to be the most motivating reason to adopt a more vegetarian diet, since I’m more compelled by the idea that my actions might be harming other persons. Regardless, (1) and (2) are both grounded in empirical observations (and both are seriously questionable in how much of a difference they make in the individual case: see this, and the number of confounding factors in claims that veg diets cause better health).
I personally reject (3) because animals don’t fall, in my ontology, under the category of morally significant beings (neither argument nor experience has yet made me think animals possess whatever it is that makes us consider, at least most, humans as persons). I take this to be a morally relevant difference. (Though I would endorse many efforts to improve animal welfare for reasons ultimately grounded in the welfare of human persons.)
Moreover, regarding changing behavior, I can think of a number of additional reasons someone might not change their behavior that aren’t related to empathy, e.g. they might find it supererogatory, they might have ingrained cultural reasons, they might not think they’ll be able to make a difference, and reasons to do with poverty and food injustice.
Thus for me, an answer to (a) and (b) would be a convincing theory of personhood and a further convincing argument that animals share that person-making feature (or other moral relevance-making feature).
“(3) The ethical argument: killing or abusing an animal for culinary enjoyment is morally unsound”
I’m understanding abuse as being wrong by definition, a la how murder is by definition a wrongful killing. (3) seems to transparently be a case of arguing that something that is wrong is thus wrong. But, I agree, this by itself wouldn’t warrant downvoting so much as the generally dismissive tone of the writing, which came off as assuming some moral high ground, e.g. “to accept that this being with no identity, little conceivable intellect, and no means of advocating for itself or expressing relief or gratitude is suffering to an extent that is not justified by the mere desire of taste,” “too inconvenient,” “culinary enjoyment.”
I felt I should comment instead of anonymously downvoting in case it was just a misunderstanding.
Downvoted for question-begging in the way you phrased the “ethical argument,” and descriptions like “the mere desire of taste.” [Edit: I changed my vote based on changes made.]
Hey, Zack from XLab here. I’d be happy to provide a couple of sentences of feedback on your application if you send me an email.
The most common reasons for rejection before an interview were things like no indication of having US citizenship or a student visa, ChatGPT-seeming responses, responses to the exercise that didn’t clearly and compellingly indicate how it was relevant to global catastrophic risk mitigation, or a lack of clarity on how mission-aligned the applicant was.
We appreciate the feedback, though.