Happy to help! Another thing that strikes me is that in my experience (which is in the U.S.), running an academic research team at a university (i.e., being the principal investigator on the team’s grants) seems to have a lot in common with running a startup (you have a lot of autonomy/flexibility in how you spend your time; your efficacy is largely determined by how good you are at coordinating other people’s efforts and setting their priorities for them; you spend a lot of time coordinating with external stakeholders and pitching your value-add; you have authority over your organization’s general direction; etc.). This seems relevant because I think a lot of the top university economics research groups in the U.S. have a pretty substantial impact on policy (e.g., consider Opportunity Insights), and the same may well be true in the U.K. It seems to me that other avenues toward impacting policy (e.g., working in the government or for major, established advocacy organizations) are considerably less entrepreneurial in nature. Of course, you could also found your own advocacy organization to push for policy change, but 1) I think it’s generally easier to get funding for research than for work along these lines (especially as a newcomer), in part because the advocacy space is already so crowded, and 2) founding an advocacy organization seems like the kind of thing one might do through Charity Entrepreneurship, which you seem less excited about. If you’re mainly attracted to entrepreneurship by tight feedback loops, however, academia is probably the wrong way to go, as it definitely does not have those.
It sounds, based on your description, like a fairly straightforward step would be for you to try to set up calls with 1) someone on the Charity Entrepreneurship leadership team, and 2) some of the founders of their incubated charities. This would help you to evaluate whether it would be a good idea for you to apply to the CE program at some point, as well as to refine your sense of which aspects of entrepreneurship you’re particularly suited to (so that if entrepreneurship doesn’t work out, perhaps because you discover other aspects of it that seem less appealing, you’ll be able to look for the parts you care about in positions with more established organizations). If you came out of those calls convinced that you might want to apply to Charity Entrepreneurship down the road, it seems to me that a logical next step would be to start reading up on potential causes and interventions that you might want your charity to pursue. You could also, I’m sure, do volunteer work for existing, newly launched CE charities; given that most of them have only two staff members, you’d probably be given a fair amount of responsibility and would be able to develop useful insights into the entrepreneurial process. For you, the value of information from doing that seems like it might be quite high.
That seems like a sound line of reasoning to me — best of luck with the rest of your degree!
I think this is a really hard question, and the right answer to it likely depends to a very significant degree on precisely what you’re likely to want to do professionally in the near and medium-term. I recently graduated from a top U.S. university, and my sense is that the two most significant benefits I reaped from where I went to school were:
Having that name brand on my resume definitely opened doors for me when applying for jobs during my senior year. I’m actually fairly confident that I would not have gotten my first job out of college had I gone to a less prestigious school, though I think this only really applies to positions at a fairly narrow set of financial services firms and consulting firms, as well as in certain corners of academic research.
I think I personally benefited from a significant peer effect. My specific social circle pushed me to challenge myself academically more than I likely otherwise would have (in ways that probably hurt my GPA but served me well all things considered). That said, I know that the academic research on peer effects in education is mixed to say the least, so I’d be hesitant to extrapolate much from my own experience.
I’m not sure how to weigh the importance of the first of those considerations. On the one hand, your first job is just that: your first job. It doesn’t necessarily mean anything about where you’ll end up at age 35. On the other hand, I do feel like I have observed this phenomenon of smart people graduating from relatively unknown universities and really struggling to find interesting work during their first several years out of college and then eventually resigning themselves to getting a master’s degree from a more well-known school (sometimes in a field where the educational benefit of the degree is relatively low) just so that they can get in the door to interview for jobs in their field of choice. This obviously comes at a significant cost, both in terms of time and—often but not always—in terms of money. That said, in some fields, you just do need a master’s to get in the door for a lot of roles, no matter where you went to undergrad or what you did while you were there, and maybe that’s all that’s really behind this.
Another thing potentially worth noting is that, in my experience, it seems as if U.S. research universities are most usefully divisible into three categories with respect to their undergraduate job placement: universities that “high-prestige” employers are unlikely to have heard of, universities that “high-prestige” employers are likely to have heard of and have vaguely positive associations with, and finally, the set of Harvard, Princeton, Yale, MIT, and Stanford (these are distinguished not only by their name brands but also by the extent of their funding and support for undergraduate research and internships, the robustness of their undergraduate advising, and other more “experiential” factors). There are certainly exceptions to this breakdown (the financial services and consulting firms mentioned above definitely differentiate between Penn and Michigan), but by and large, my sense has been that controlling for “ability,” the difference in early-career outcomes between a Harvard graduate and a Penn graduate is significantly larger than the difference in early-career outcomes between a Penn graduate and a Michigan graduate (note: the specific schools chosen as examples within each cohort here are completely arbitrary). Accordingly, I don’t think that very many people generally have a strong professional reason to transfer from UCLA to Brown or from the University of Virginia to Dartmouth, etc. However, I buy that those at lesser-known schools may, in many circumstances, have a strong professional reason to transfer to their flagship state school.
Other good reasons to transfer, I think, include transferring for the purpose of getting to a particular city where you know you want to work when you graduate, with an eye toward spending a portion of the remainder of your time in college networking or interning in your field of choice. In particular, I think that if you want to work in U.S. (national) policy after graduation, transferring to a school in the Washington, DC Metropolitan Area can be hugely beneficial. The same goes for financial services in the New York City Metropolitan Area, entertainment in Los Angeles, and (perhaps, though I am less sure about this) tech in the San Francisco Bay Area. In your case, it might be worthwhile to submit a transfer application to Georgetown with the aim of trying to forge some connections at the Center for Security and Emerging Technology (or perhaps the Center for Global Health Science and Security if you are interested in biosecurity policy), both of which are housed there. One other very strong reason to transfer, it seems to me, would be if you wanted to work on AI but your current school didn’t have a computer science department (as is the case at a local state school near where I grew up). I assume from your post that that isn’t your situation, though.
Finally, I wouldn’t underestimate the importance of mental health considerations, to the extent that those may be at all relevant to your choice. Mental health during college can have a huge impact on GPA, and while where you go to undergrad will only really be a factor in determining your grad school prospects for a relatively narrow set of programs (mainly, I think, via the way it affects the kinds of research jobs you can get during and post-college), GPA is a huge determinant of grad school admissions across basically every field, so that is important to bear in mind. The transfer experience, from what I have heard, is not always easy, especially, I imagine, in academic environments that are already very high-pressure.
If you’d like to talk through this at greater length, feel free to DM me. To the extent that my perspective might be useful, I’d be more than happy to offer it, and if you’d just like someone to bounce ideas off of, I’d be happy to fill that role, as well.
I really like this. To me, it emphasizes that moral reason is a species of practical reason more generally and that the way moral reasons make themselves heard to us is through the generic architecture of practical reasoning. More precisely: Acting in a manner consistent with one’s moral duties is not about setting one’s preferences aside and living a life of self-denial; it’s about being sufficiently attentive to one’s moral world that one’s preferences naturally evolve in response to sound moral reasons, such that satisfying those preferences and fulfilling one’s duties are one and the same.
This is a fascinating argument — thank you for sharing it! I think it’s particularly interesting to consider it in the context of metaethical theories that don’t fall neatly within the realist paradigm but share some of its features, like R.M. Hare’s universal prescriptivism (see Freedom and Reason [1963] and Moral Thinking [1981]). However, I also think this probably shouldn’t lead most discounting realists to abandon their moral view. My biggest issue with the argument is that I suspect (though I am still thinking this through) that there exist parallel arguments of this form that would purport to disprove all of philosophical realism (i.e. including realism about empirical descriptions of the natural world). I think statements rejecting philosophical realism are pretty epistemically fraught (maybe impossible to believe with justification), which leaves me suspicious of your argument. (It’s worth noting here that special relativity itself is an empirical description of the natural world.)
I have a feeling that the right way of thinking about this is that the rise of relativistic physics changed the conventional meaning of a “fact” into something like: a true statement whose truth cannot depend upon the person thinking it within a particular inertial frame of reference. Otherwise, I think we would be forced to admit that there are no facts about the order in which events occur in time, and that seems quite obviously inconsistent with the ordinary language meanings of several common concepts to me. I know that relativity teaches that statements about time and duration are not objective descriptions of reality but are instead indexical reports of “where the speaker is” relative to a particular object, similar to “Derek Parfit’s cat is to my left,” but (for basically Wittgensteinian reasons) I do not think that this is actually what these statements mean.
Ultimately, if you’re someone who, like me, believes that a correct analysis of the question, “What is the right thing to do?” must start with a correct analysis of the logical properties of the concepts invoked in that sentence (see R.M. Hare, especially Sorting Out Ethics [1997]), and you believe that those logical properties are determined by the way in which those concepts are used (see Wittgenstein’s Philosophical Investigations [1953]), then I think this argument is mainly good evidence that the proper understanding of what moral realism means today is the following: “Moral realism holds that moral statements are facts, and the truth of a fact must be universal within the inertial frame of reference in which that fact exists; that is, that truth cannot depend upon the person thinking the fact within that inertial frame of reference.”
Glad to hear it helped! Of course, usual caveats apply about the possibility that your field is quite different from mine, so I wouldn’t stop looking for advice here, but hopefully, this gives you a decent starting point!
Regarding the data-driven policy path, my sense is that unfortunately, most policy work in the U.S. today is not that data-driven, though there’s no doubt that that’s in part attributable to human capital constraints. Two exceptions do come to mind, though:
Macroeconomic stabilization policy (which is one of Open Philanthropy’s priority areas) definitely fits the bill. Much of the work on this in the U.S. occurs in the research and statistics and forecasting groups of various parts of the Federal Reserve System (especially New York, the Board of Governors in D.C., Boston, Chicago, and San Francisco). These groups employ mathematical tools like DSGE and HANK models to predict the effects of various (mainly but not exclusively monetary) policy regimes on the macroeconomy. Staff economists working on this modeling regularly produce research that makes it onto the desks of members of the Federal Open Market Committee and even gets cited in Committee meetings (where U.S. monetary policy is determined). To succeed on this path in the long term, you would need to get a PhD in economics, which probably has many of the same downsides as a PhD in computer science/AI, but the path might have other advantages, depending on your personal interests, skills, values, motivations, etc. One thing I would note is that it is probably easier to get into econ PhD programs with a math-CS bachelor’s than you would think (though still very competitive, etc.). The top U.S. economics programs expect an extensive background in pure math (real analysis, abstract algebra, etc.), which is more common among people who studied math in undergrad than among people who studied economics alone. A good friend of mine actually just started her PhD in economics at MIT after getting her bachelor’s in math and computer science and doing two years of research at the Fed. This is not a particularly unusual path. If you’re interested and have any questions about it, feel free to DM me.
At least until the gutting of the CDC under our current presidential administration, it employed research teams full of specialists in the epidemiology of infectious disease who made use of fairly sophisticated mathematical models in their work. I would consider this work to be highly quantitative/data-driven, and it’s obviously pertinent to the mitigation of biorisks. To do it long-term, you would need a PhD in epidemiology (ideally) or a related field (biostatistics, computational biology, health data science, public health, etc.). These programs are also definitely easier to get into with your background than you would expect. They need people with strong technical skills, and no one leaves undergrad with a bachelor’s in epidemiology. You would probably have to get some relevant domain experience before applying to an epi PhD program, though, likely either by working on the research staff at someplace like the Harvard Center for Communicable Disease Dynamics or by getting an MS in epidemiology first (you would have no trouble gaining admission to one of those programs with your background). One big advantage of epidemiology relative to macroeconomics and AI is that (my sense is) it’s a much less competitive field (or at least it certainly was pre-pandemic), which probably has lots of benefits in terms of odds of success, risk of burnout, etc. Once again, feel free to DM me if this sounds interesting to you and you have any questions; I know people who have gone this route, as well.
I think a lot of the day-to-day feelings of fulfillment in high-impact jobs come from either: 1) being part of a workplace community of people who really believe in the value of the work, or 2) seeing first-hand the way in which your work directly helped someone. I don’t really think the feelings of fulfillment typically come from the particular functional category of your role or the set of tasks that you perform during the workday, so I wonder how informative your experiments with data science, for instance, would be with respect to the question of identifying the thing that you feel you “must do,” as you put it. If I had to guess, I’d speculate that the feeling you’re looking for will be more specific to a particular organization or organizational mission than to the role you’d be filling for organizations generally.
If you’re committed to using data science to address public policy questions in the U.S. (either in government or a think tank-type organization), I suspect you’d be best-served by a program like one of these:
https://mccourt.georgetown.edu/master-of-science-in-data-science-for-public-policy/
https://harris.uchicago.edu/academics/degrees/ms-computational-analysis-public-policy-mscapp
This is all fantastic information to have — thank you so much for explaining it! I’m really glad to have improved my understanding of this.
Yes, that argument for veg*anism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values weigh out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step,” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. I think the main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were:
TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than that organizations directly fighting, say, the global malaria burden do. [I have updated somewhat against this view on account of Jon’s comments.]
If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it most likely would get filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for achieving aims unrelated to having a positive impact. For instance, I think Peter Singer’s work is likely highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able, if he wished, to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence inside of TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully, it gets the point across.
Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.
I don’t know anything about the norms and expectations in CS, but in my field (a quantitative social science), it is basically impossible to get into PhD programs without research experience of some kind, and you would likely be advised, first and foremost, to seek a master’s as preparation, and if it went well, apply to PhD programs thereafter. The master’s programs that would be recommended would be designed for people interested in transitioning from industry to academia, and someone like you would probably have a good shot at getting in. They can be expensive, though. If you wanted to avoid that, you would need to come up with some other way of demonstrating research acumen. This could mean transitioning into an academic research staff role somewhere, which (in my field, though maybe not yours) would help your odds of admission a ton. It could also mean reconnecting with an old college professor about your interests and aspirations and seeing if they’d be willing to work on a paper with you (I know someone who did this successfully; the professor likely agreed to it because she judged that my friend’s work had a high chance of being published). Finally, you could just try to write a publishable research paper on your own. In my field, this seems to me like it would be very hard to do, especially without prior research experience, but even if it didn’t turn into a publication, if the paper were solid, you could submit it as a supplemental writing sample with your applications, and it would likely help to compensate for weaknesses in your research background (for what it’s worth on this point, a friend-of-a-friend of mine was a music conservatory student who was admitted to a philosophy doctoral program after self-studying philosophy entirely on her own).
I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus. You’re right that under some circumstances, a risk-neutral expected value calculus will favor small donors donating to “step-functional” charities that can’t scale their operations smoothly per marginal dollar donated, but my argument was that in the specific case of multiplier charities, the odds of a small-dollar donation being counterfactually responsible for moving the organization up an impact step are particularly infinitesimal (or at least that this is the most reasonable thing for small-dollar donors without inside information to believe). The fact that impact in this context is step-functional is a part of the explanation for the argument, not the conclusion of the argument.
With respect to the question of “relative step-functionality,” though, it’s also not clear to me why, compared to a multiplier charity, one would think that giving to GiveWell’s Maximum Impact Fund would be any more step-functional on the margin. It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table. Moreover, I find this particularly confusing in the case of the Maximum Impact Fund, which allocates grants to target specific funding gaps, often corresponding to very well-defined organizational initiatives (e.g. expansions into new regions), the individual cost-effectiveness of which GiveWell has modeled. It’s obviously true that regardless of whether one gives to a multiplier charity or to the Maximum Impact Fund, there is some chance that one’s donations either A) languish unused in a bank account, or B) counterfactually cause something hugely impactful to happen, but given that in the case of GiveWell, we know the end recipients have a specific, highly cost-effective use already planned out for this particular chunk of money (and if they have extra, they can just put it toward… more nets), whereas in the multiplier charity case, we don’t have any reason to believe they could use these specific funds at all (not to mention productively), doesn’t it seem like the balance of expected values here favors going with GiveWell?
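To make the shape of that expected-value comparison concrete, here is a purely illustrative back-of-the-envelope calculation; every number in it is a made-up assumption of mine, not an estimate of TLYCS’s or GiveWell’s actual figures. Suppose a $100 gift to a step-functional multiplier charity has a one-in-100,000 chance of being the donation that pushes the organization up a step worth $1,000,000 in equivalent impact, while the same $100 granted through the Maximum Impact Fund simply purchases roughly $100 of already-modeled, cost-effective direct impact:

\[
\mathbb{E}[\text{multiplier gift}] = \frac{1}{100{,}000} \times \$1{,}000{,}000 = \$10
\qquad \text{vs.} \qquad
\mathbb{E}[\text{Maximum Impact Fund gift}] \approx \$100.
\]

Under these assumptions, the step-functional gift loses by an order of magnitude; if you instead thought the pivotal probability were closer to one in a thousand, the comparison would flip, which is exactly why the probability estimate is where the real disagreement lies.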
Finally, while it is obviously true that most nets don’t save lives, I fail to see how that bears on the question at hand. We both agree that this is reflected in GiveWell’s cost-effectiveness analysis, which we (presumably) both agree that we have strong reason to trust. We have no such independent cost-effectiveness analysis of any multiplier charity. And the fact that most nets don’t save lives certainly isn’t a reason why the impact of donations to the AMF would not rise by some smooth function of dollars donated. The only premise on which that argument depends is that if they don’t have anything else good to do with my money (which presumably, they do, having earned a grant from the Maximum Impact Fund), they can always just buy more nets. Given the current scale of global net distribution relative to the total malaria burden, it seems wildly unlikely that a much larger percentage of those nets would fail to save lives than was the case during previous net distributions.
It sounds like you’re doing some awesome work, and these are great questions, but I very seriously doubt you will be able to get good answers to them from anyone without domain expertise in your field, so this may not be the best place to look. I personally have some very cursory exposure to biostatistics and health data science (definitely less than you), but I imagine I have significantly more familiarity with the area, especially in the U.S., than most people on the EA Forum, and I have zero clue about the answers to your questions.
I may be missing or misunderstanding something, but it seems like your worries/roadblocks about your option 1 all pertain specifically to the MBA/MPA component. If that is the case, and you think you really might want to work in tech, I’d encourage you to consider trying to transition directly to a tech company without first getting another degree. Anecdotally, my sense is that MBAs and MPAs are useful mainly for networking and allowing you to command a higher starting salary in many roles, not for what you learn during the degree (though this depends somewhat on your prior academic and professional background, as well as on the specific program you’re enrolled in, of course).
I imagine that the main reason you’ve been considering getting an MBA or an MPA is that you have a sense that you need it to make a significant career shift. I’m not so sure. I don’t know how easy it is to spend two years as a software engineer at a tech company (instead of spending those two years in grad school) and then transition into a product management role, but I imagine that particularly at smaller or medium-sized tech companies, this must be a thing that happens. And even if I’m wrong about that, I know people who went straight from coding roles at professional services firms to product manager-track positions (e.g. product data analyst) at medium-sized tech companies (admittedly, outside of Silicon Valley). I imagine these people will become product managers faster this way than they would have if they’d gotten an MBA in the middle. Finally, regarding going into debt for an MPA, you should consider applying to Princeton’s program; it’s free to everyone who is admitted!
Thank you — please do!
I’m glad to hear you found my reasoning useful, and I appreciate your explanation of where you think it may go astray. I’m a fairly marginal actor in the grand scheme of the EA community and don’t feel I am anywhere close to having a clear view on whether the returns to adding further vetting or oversight structures would outweigh the costs. Naïvely, it seems to me that some kinds of organizational transparency are pretty cheap. However, it occurs to me that even though I’ve spent a fair bit of time on the TLYCS website over the past several years and gave to your COVID-19 response fund back in the spring, I honestly have no recollection of the extent of your transparency in the status quo. In a similar vein, to put it more flippantly than you deserve, I don’t think most people I know in the community (myself included) really understand what you do. I was even unaware of how high your estimated multiplier is (if you had asked me to guess prior to your comment, there’s no way I would’ve gone higher than 4x), and now, I am quite curious about how you’re estimating that and what you think is driving such a high return. I expect this is probably my fault for not seriously investigating “multiplier charities” when deciding where to give and instead presuming that they likely aren’t a good fit for small donors like me for the reasons I explained. However, I also think I am exactly the kind of persuadable small donor you would want to be reaching with whatever outreach or marketing you’re doing, so maybe there’s room for improvement on your part there, as well.
For what it’s worth, if you were going to invest in adding some kind of vetting or oversight structure, here are a few questions that—inspired by your comment—I would most want it to answer before making a determination about whether to give to TLYCS:
1. Why have TLYCS’s expenses tripled since 2016? Other than the website overhaul and the book launch, what have you been spending on? Are you aiming to engage in similar (financial) growth again in the near term? If not, would you be if you had more support from small donors?
2. What do you mean by “communicate with more donors”? What does that involve? How costly is it on a per-donor basis? How scalable is it?
3. When you spend more money (beyond your basic operating expenses: salaries, office space if you have it, etc.), and that spending seems to be associated with an increase in donor interest in your recommended charities, what do you think generally explains that relationship, and how do you determine that such an increase in donor interest was counterfactually caused by the increase in spending?
4. More generally, and this may be an extremely dumb question/something you have explained at length elsewhere, how do you arrive at your “money moved” estimates, and how do you ensure that they are counterfactually valid?
5. Do you personally believe that TLYCS will hit diminishing marginal returns on investments in growing its base of donors to its recommended charities sometime in the near or intermediate term?
You obviously do not have to answer these questions here or at all. I wrote them out only to provide a sense of what information I feel I am missing.
I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.
Similarly, I disagree with your contention that morality isn’t, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”
Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside.
If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.
That makes perfect sense! I agree that CE probably isn’t the best fit for people most interested in doing EA work to mitigate existential risks. Feel free to shoot me a DM if you’d ever like to talk any of this through at greater length, but otherwise, it seems to me like you’re approaching these decisions in a very sensible way.