I am a PhD candidate in Economics at Stanford University. Within effective altruism, I am interested in broad longtermism, long-term institutions and values, and animal welfare. In economics, my areas of interest include political economy, behavioral economics, and public economics.
zdgroff
It doesn’t seem all that relevant to me whether traders have a probability like that in their heads. Whether they have a low probability or are not thinking about it, they’re approximately leaving money on the table in a short-timelines world, which should be surprising. People have a large incentive to hunt for important probabilities they’re ignoring.
Of course, there are examples (cf. behavioral economics) of systematic biases in markets. But even within behavioral economics, it’s widely recognized that ongoing, large-scale biases are hard to find in financial markets.
Do you have a sense of whether the case is any stronger for specifically using cortical and pallial neurons? That’s the approach Romain Espinosa takes in this paper, which is among the best work in economics on animal welfare.
My husband and I are planning to donate to Wild Animal Initiative and Animal Charity Evaluators; we’ve also supported a number of political candidates this year (not tax deductible) who share our values.
We’ve been donating to WAI for a while, as we think they have a thoughtful, skilled team tackling a problem of sweeping scale that receives scant attention.
We also support ACE’s work to evaluate and support effective ways to help animals. I’m on the board there, and we’re excited about ACE’s new approach to evaluations and trajectory for the coming years.
Yes, and thank you for the detailed private proposal you sent the research team. I didn’t see it but heard about it, and it seems like it was a huge help and just a massive amount of volunteer labor. I know they really appreciated it.
I’m an ACE board member, so full disclosure on that, though what I say here is in my personal capacity.
I’m very glad about a number of improvements to the eval process that are not obvious from this post. In particular, there are now numeric cost-effectiveness ratings that I found clarifying, overall explanations for each recommendation, and clearer delineation of the roles the “programs” and “cost-effectiveness” sections play in the reviews. I expect these changes to make recommendations more scope sensitive. This leaves me grateful for and confident in the new review framework.
As I noted on the nuclear post, I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben’s back-and-forth below). That seems likely to me to bias the cost-effectiveness downward.
Like Fin, I’m very surprised by how well this performs given takes in other places (e.g. The Precipice) on how asteroid prevention compares to other x-risk work.
Worth flagging that I believe this is based on a (loosely speaking) person-affecting view (mentioned in Joel and Ben’s back-and-forth below). That seems to me to bias the cost-effectiveness of anything that poses a sizable extinction risk dramatically downward.
At the same time, I find both the empirical work and the inside-view thinking here very impressive for a week’s work, and it seems like even those without a person-affecting view can learn a lot from this.
Thanks for writing this. I think about these sorts of things a lot. Given the title, do you know of examples of movements that did not start academic disciplines and appear to have suffered as a result?
The Global Priorities Institute and the research clusters around it do work in economics, including welfare economics. I’d also be curious to hear what you think they should do differently.
I’m toying with a project to gather reference classes for AGI-induced extinction and AGI takeover. If someone would like to collaborate, please get in touch.
(I’m aware of and giving thought to reference class tennis concerns but still think something like this is neglected.)
I don’t think it’s right that the broad project of alignment would look the same with and without considering religion. I’m curious what your reasoning is here and if I’m mistaken.
One way of reading this comment is that it’s a semantic disagreement about what alignment means. The OP seems to be talking about the problem of getting an AI to do the right thing, writ large, which may encompass a broader set of topics than alignment research as you define it.
Two other ways of reading it are that (a) solving the problem the OP is addressing (getting an AI to do the right thing, writ large) does not depend on values, or (b) solving the alignment problem will necessarily solve the value problem. I don’t entirely see how you can justify (a) without a claim like (b), though I’m curious if there’s a way.
You might justify (b) via the argument that solving alignment involves coming up with a way to extrapolate values. Perhaps it is irrelevant which particular person you start with, because the extrapolation process will end up at the same point. To me this seems quite dubious. We have no such method and observe deep disagreement in the world. Which methods we use to resolve disagreement, and how we determine whose values to include, seem themselves to involve questions of values. And from my lay sense, the alignment methods that are currently most discussed involve aligning an AI with specific preferences.
One thing that’s sad and perhaps not obvious to people is that, as I understand it, Nathan Robinson was initially sympathetic to EA (and this played a role in his at-times vocal advocacy for animals). I don’t know that there’s much to be done about this. I think the course of events was perhaps inevitable, but that’s relevant context for other Forum readers who see this.
And worth noting that Ben Franklin was involved in the constitution, so at least some of his longtermist time seems to have been well spent.
I don’t have a strong view on the original setup, but I can clarify what the argument is. For the first point, that we maximize P(E + S): the idea is that we want to maximize the likelihood that the organism chooses the action that leads to enjoyment (the one being selected for). That probability is a function of how much better it is to choose that action than the alternative. So if you get E from choosing that action and lose S from choosing the alternative, the benefit of choosing that action is E - (-S) = E + S. However, you only pay to produce the experience of the action you actually take. That is why the costs are weighted by probability, while the benefits, which are only about the anticipation of the experience you would get conditional on your action, are not.
It occurs to me that a fuller model might endogenize n, i.e. be something like max P(E(C_E) + S(C_S)) s.t. P(.) C_E + (1 - P(.)) C_S = M. (Replacing n with 1 - P here so it’s a rate, not a level. Also, perhaps this reduces to the same thing based on the envelope theorem.)
And on the last point, that point is relevant for the interpretation of the model (e.g. choosing the value of n), but it is not an assumption of the model.
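To make the endogenized version above concrete, here’s a minimal numerical sketch (my own, not from the original post). The logistic form for P, the square-root experience intensities, and the budget M = 1 are all assumptions chosen purely for illustration.

```python
# Minimal sketch of the endogenized model: maximize P(E(C_E) + S(C_S))
# subject to the expected-cost budget P*C_E + (1 - P)*C_S = M.
# Functional forms (logistic P, square-root intensities) and M are assumed.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

M = 1.0  # assumed metabolic budget

def P(x):
    # Probability of choosing the selected-for action, increasing in E + S.
    return 1.0 / (1.0 + np.exp(-x))

def E(c_e):
    # Enjoyment intensity produced by spending c_e (assumed form).
    return np.sqrt(c_e)

def S(c_s):
    # Suffering intensity produced by spending c_s (assumed form).
    return np.sqrt(c_s)

def objective(c):
    c_e, c_s = c
    return -P(E(c_e) + S(c_s))  # negative because we minimize

def expected_cost(c):
    c_e, c_s = c
    p = P(E(c_e) + S(c_s))
    # You only pay for the experience you actually have, hence the weights.
    return p * c_e + (1.0 - p) * c_s

budget = NonlinearConstraint(expected_cost, M, M)
res = minimize(objective, x0=[0.5, 0.5],
               bounds=[(0.0, None), (0.0, None)], constraints=[budget])
c_e_star, c_s_star = res.x
print(f"C_E* = {c_e_star:.3f}, C_S* = {c_s_star:.3f}, "
      f"P* = {P(E(c_e_star) + S(c_s_star)):.3f}")
```

Comparing this optimum to the fixed-n version would be one way to check whether the envelope-theorem intuition holds.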
Like others, I really appreciate these thoughts, and it resonates with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the “EA needs megaprojects” thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there’s a good chance EA is around for a while and quite large and powerful. We need to make sure we put these tools to good use and retain the right values.
> EA spending is often perceived as wasteful and self-serving
It’s interesting how far this is from the original version of EA and the criticisms it drew, e.g. that EA set an unrealistic standard that involved sacrificing one’s identity and sense of companionship for an ascetic universalism.
I think the old perception is likely still more common, but it’s probably a matter of time (which means there’s likely still time to change it). And I think you described the tensions brilliantly.
Yes, that’s an accurate characterization of my suggestion. Re: digital sentience, intuitively something in the 80-90% range?
Yes, all those first points make sense. I just wanted to point to where I see the most likely cruxes.
Re: neuron count, the idea would be to use various transformations of neuron counts, or of a particular type of neuron. I think it’s a judgment call whether to leave it to the readers to judge; I would prefer giving what one thinks is the most plausible benchmark way of counting and then giving the tools to adjust from there, but your approach is sensible too.
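To illustrate the kind of adjustment tools I have in mind, here is a toy sketch (mine, purely illustrative): weight each individual by its neuron count raised to a power alpha, where alpha = 0 gives equal weights and alpha = 1 gives linear neuron-count weighting. The counts below are rough whole-brain figures and stand in for whatever benchmark counts (e.g. cortical or pallial neurons) one prefers.

```python
# Toy sensitivity check: how relative welfare weights change under different
# transformations of neuron counts. Counts are rough whole-brain figures
# used only as placeholders for a preferred benchmark.
NEURONS = {
    "human": 86e9,     # approx. whole-brain neuron count
    "chicken": 2.2e8,  # approx. whole-brain neuron count
}

def welfare_weight(neurons: float, alpha: float) -> float:
    # alpha = 0: equal weights; alpha = 0.5: square root; alpha = 1: linear.
    return neurons ** alpha

for alpha in (0.0, 0.5, 1.0):
    weights = {sp: welfare_weight(n, alpha) for sp, n in NEURONS.items()}
    ratio = weights["human"] / weights["chicken"]
    print(f"alpha = {alpha}: human/chicken weight ratio = {ratio:,.1f}")
```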
Thanks for writing this post. I have similar concerns and am glad to see this composed. I particularly like the note about the initial design of space colonies. A couple things:
My sense is that the dominance of digital minds (which you mention as a possible issue) is actually the main reason many longtermists think factory farming is likely to be small relative to the size of the future. You’re right to note that this means future human welfare is also relatively unimportant, and my sense is that most would admit that. Humanity is instrumentally important, however, since it will create those digital minds. I do think it’s an issue that a lot of discussion of the future treats it as the future “of humanity” when that’s not really what it’s about. I suspect that part of this is just a matter of avoiding overly weird messaging.
It would be good to explore how your argument changes when you weight animals in different ways, e.g. by neuron count, since that [does appear to change things](https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive). I think we should probably take a variety of approaches and place some weight on each, although there’s a sort of Pascalian problem with the possibility that each animal mind has equal weight: it feels somewhat plausible but also leads to wild and seemingly wrong conclusions (e.g. that it’s all about insect larvae). But in general, this seems like a central issue worth adjusting for.
Research institute focused on civilizational lock-in
Values and Reflective Processes, Economic Growth, Space Governance, Effective Altruism
One source of long-term risks and potential levers to positively shape the future is the possibility that certain values or social structures get locked in, such as via global totalitarianism, self-replicating colonies, or widespread dominance of a single set of values. Though there are organizations dedicated to working on risks of human extinction, we would like to see an academic or independent institute focused on other events that could have an impact on the order of millions of years or more. Are such events plausible, and which ones should be of most interest and concern? Such an institute might be similar in structure to FHI, GPI, or CSER, drawing on the social sciences, history, philosophy, and mathematics.
Consulting on best practices around info hazards
Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve
Information about ways to influence the long-term future can in some cases give rise to information hazards, where true information can cause harm. Typical examples concern research into existential risks, such as around potential powerful weapons or algorithms prone to misuse. Other risks exist, however, and may also be especially important for longtermists. For example, better understanding of ways social structures and values can get locked in may help powerful actors achieve deeply misguided objectives.
We would like to support an organization that can develop a set of best practices and consult with important institutions, companies, and longtermist organizations on how best to manage information hazards. We would like to see work to help organizations think about the tradeoffs in sharing information. How common are info hazards? Are there ways to eliminate or minimize downsides? Is it typically the case that the downsides to information sharing are much smaller than upsides or vice versa?
I’m trying to make sure I understand: Is this (a more colorful version) of the same point as the OP makes at the end of “Bet on real rates rising”?