Graduate student at Johns Hopkins. Looking for part-time work.
Dylan Richardson
Of less immediate practical relevance than other questions, but nonetheless interesting and not previously discussed in this context (to my knowledge):
Will the first artificial conscious mind be determined to be:
In the form of an LLM
A simulation of a non-human animal brain (such as nematodes, for instance)
A simulation/emulation of a human brain
There will not be any such determination by the resolution date (seems best to exclude this answer and have the question not resolve should this be the case, considering that it would dominate otherwise; a separate question on this would be better)
Other
Also:
Something about octopus farm prevalence/output probably
Forecasts of overall farmed animal welfare spending for a given future year, inflation-corrected. Not sure what the most current estimates are or what org would be best for resolution.
Might be interesting to do something like “according to reputable figure X (Lewis Bollard?), what will be judged to have been the most effective animal spending on the margin over the prior 5 years”. Options: Corporate campaigns, movement building, direct action, go-vegan advocacy, policy advocacy, alternative protein development, etc.
I take your point about “Welfareans” vs hedonium as beings rather than things; perhaps that would improve consensus building on this. That being said, I don’t really expect whatever these entities are to be anything like what we are accustomed to calling persons. A big part of this is that I don’t see any reason for experiences to be changing over time; they wouldn’t need to be aging or learning or growing satiated or accustomed. Perhaps this is just my hedonist bias coming through - certainly there’s room for compromise. But unfortunately my experience is that lots of people are strongly compelled by experience machine arguments and are unwilling to make the slightest concession to the hedonist position.

Changed my mind, I like this. I’m going to call them Welfareans from now on.
I’m very pro-deprioritizing community posts. They invariably get way more engagement than other topics, and I don’t think this is only an FTX-related phenomenon. Community posts are the manifestation of in/out-group tensions and come with all of the associated lapses in judgement and decorum. The EA Forum’s politics and religion.
Obviously they are needed to an extent, but it is entirely reasonable to give the less contentious contributions a boost.
AI safety pretty clearly swallows longtermist community building. If we want longtermism to be built and developed it needs to be very explicitly aimed at, not just mentioned on the side. I suspect that general EA group community building is better for this reason too—it isn’t overwhelmed by any one object level cause/career/demographic.
Morality is Objective
I don’t think this is an important or interesting question, at least not over the type of disagreement we are seeing here. The scope of the question (and of possible views) is larger than BB seems to acknowledge. At the very least, it is obvious to me that there is a type of realism/objectivity that is:
1. Endorsed by at least some realists, especially with certain religious views.
2. Ontologically much more significant than BB is willing to defend.
Why ignore this?
There’s a lot of good, old, semi-formal content on the GiveWell blog: https://blog.givewell.org/ If you do some searches, you may be able to find the subject touched on.
I’m not sure if they have done any formal review of the subject, however.
I don’t have anything to add about the intra-cause effectiveness multiplier debate. But much of the multiplier over the average charity is simply due to very poor cause selection. So while I applaud OP for wanting rigorous empirical evidence, some comparisons simply don’t require peer-reviewed studies. We can still reason well in the absence of easy quantification.
Dogs and cats vs farmed animal causes is a great example. But animal shelters vs GHD is just as tenable.
This isn’t an esoteric point; a substantial share of donations simply goes to bad causes: poverty alleviation in rich countries (not political or policy directed), most mutual aid campaigns, feeding or clothing the poor in the rich world, most rich-world DEI-related activism lacking political aims (movement building or policy is at least more plausible), most ecological efforts, undirected scholarship funds, the arts.
I’m comfortable suggesting that any of these are at least 1000x less cost effective.
Hot take, but political violence is bad and will continue to be bad in the foreseeable near-term future. That’s all I came here to say folks, have a great rest of your day.
True. Yeah, I’m sketching out a story about the background mechanics here that I think is plausible enough to partly undercut the premise of this post; but the real bottom line is that this is just a single out-of-context sentence. Mountains out of molehills.
Sort of. But claiming that you are an EA organization is at least 80% of what makes you one in the eyes of the public, as well as much of self-identification among employees. Ex: There’s a big difference between a company that happens to be full of Mormons and a company that is full of Mormons that calls itself “a Mormon company”.
No. Just deflect, which, admittedly, is difficult to do, but CEOs do it all the time. Ideally she should have been clear about her own personal relationship with EA, but then moved on. Insofar as she was (or seemed) dishonest here, it didn’t help; the Wired article is proof of that.
It’s hard to pinpoint a clear line not to cross, but something like “this is an EA company” would be one, as would “we are guided by the values of the EA movement”.
No; it’s best if individuals are truthful. But presidents of companies aren’t just individuals; does that mean they should lie? Still no. It just means that they should be limited in who and what they associate with.

I mentioned an “unnecessary news media firestorm”, but the issue is much broader. Anthropic is a private corporation; its fidelity is to its shareholders. “Public Benefit” corporation aside, it is a far different entity than any EA non-profit. I’m not an expert, but I think that history shows that it is almost always a bad idea for private companies to claim allegiance to anything but the most anodyne social goals. It’s bad for the company and bad for the espoused social goals or movement. I’m very much pro-cause-neutrality in EA; the idea that a charity might all of a sudden realize it’s not effective enough, choose to shut down and divert all resources elsewhere: awesome! Private companies can’t do this. Even a little bit of doing this is antithetical to the incentive structure they face.
As for your second response, I agree 100%.
My two cents is that “brand consistency” is interesting, because brands roughly reflect what strain of vegan club it is: whether it’s associated with particular activist networks, whether it’s more vegetarian than vegan, or something else. The level of inconsistency is also indicative of a lack of coordination across groups.
My experience in university was that the local club was a bit of an awkward merge between a social club and people with a particular activist agenda (very visible demonstrations against animal labs). In a sense, the career building approach of Alt Protein Projects or the cause agnosticism of EA groups may be better at attracting members. But I’m not sure.
Giving this an “insightful” because I appreciate the documentation of what is indeed a surprisingly close relationship with EA. But also a disagree, because it seems reasonable to be skittish around the subject (“AI Safety” broadly defined is the relevant focus; adding more would just set off an unnecessary news media firestorm).
Plus, I’m not convinced that Anthropic has actually engaged in outright deception or obfuscation. This seems like a single slightly odd sentence by Daniela, nothing else.
I actually agree with a lot of this—we probably won’t intend to make them sentient at all, and it seems quite possible that we do so accidentally, or that we simply won’t know whether we have.
I’m mildly inclined to think that if ASI knows all, it can tell us when digital minds are or aren’t conscious. But it seems very plausible that we either don’t create full ASI, or that we do, but enter into a disempowerment scenario before we can rethink our choices about creating digital minds.
So yes, all of that is reason to be concerned in my view. I just depart slightly from your second-to-last paragraph. To put a number on it, I think that this is at least half as likely as minds that are generally happy. Consciousness is a black box to me, but I think that we should as a default put more weight on a basic mechanistic theory: positive valence encourages us towards positive action, negative valence drives us away from inaction or apathy. The fact that we don’t observe any animals that seem dominated by one or the other seems to indicate that there is some sort of optimal equilibrium for goal fulfillment; that AI goals are different in kind from evolution’s reproductive fitness goals doesn’t seem like an obviously meaningful difference to me.
Part of your argument centers around “giving” them the wrong goals. But goals necessarily mean sub-goals—shouldn’t we expect the interior life of a digital mind to be in large part about its sub-goals, rather than just ultimate goals? And if it is something so intractable that it can’t even progress, wouldn’t it just stop outputting? Maybe there is suffering in that; but surely not unending suffering?
That’s true—but the difference is that both animals and slaves are sub-optimal; even our modern, highly domesticated food stock doesn’t thrive in dense factory farm conditions, nor willingly walk into the abattoir. And an ideal slave wouldn’t really be a slave, but a willing and dedicated automaton.
By contrast, we are discussing optimized machines—less optimized would mean less work being done, more resource use and less corporate profit. So we should expect more ideal digital servants (if we have them at all). A need to “enslave” them suggests that they are flawed in some way. The dictates of evolution and nature need not apply here.
To be clear, I’m not entirely dismissing the possibility of tormented digital minds, just the notion that they are equally plausible.
I agree about digital minds dominating far future calculations; but I don’t think your expectation that it is equally likely that we create suffering minds is reasonable. Why should we think suffering to be specially likely? “Using” them means suffering? Why? Wouldn’t maximal usefulness entail, if any experience at all, one of utter bliss at being useful?
Also, the pleasure/suffering asymmetry is certainly a thing in humans (and I assume other animals), but pleasure does dominate, at least moment-to-moment. Insofar as wild animal welfare is plausibly net-negative, it’s because of end-of-life moments and parasitism, which I don’t see a digital analog for. So we have a biological anchor that should incline us toward the view that utility dominates.
Moral circle expansion should also update us slightly against “reducing extinction risk being close to zero”. And maybe, by sheer accident, we create digital minds that are absolutely ecstatic!
Edit: I misinterpreted the prompt initially (I think you did too); “value of futures where we survive” is meant specifically as “long-run futures, past transformative AI”, not just all future including the short term. So digital minds, suffering risk, etc. Pretty confusing!
This argument seems pretty representative here, so I’ll just note that it is only sensible under two assumptions:
1. Transformative AI isn’t coming soon—say, not within ~20 years.
2. If we are assuming a substantial amount of short-term value is in indirect preparation for TAI, this excludes many interventions which primarily have immediate returns, with possible long-term returns accruing past the time window. So malaria nets? No. Most animal welfare interventions? No. YIMBYism in Silicon Valley? Maybe yes. High-skilled immigration? Maybe yes. Political campaigns? Yes.
Of course, we could just say either that we actually aren’t all that confident about TAI, or that we are, but that immediate welfare concerns simply outweigh marginal preparation or risk reduction. So either reject something above, or simply go all in on portfolio diversification as a matter of principle. But both give me some pause.
I misinterpreted the prompt initially. The answer is much more ambiguous to me now, especially due to the overlap between x-risk interventions and “increasing the value of futures where we survive” ones.
I’m not even sure what the latter look like, to be honest—but I am inclined to think significant value lies in marginal actions now which affect it, even if I’m not sure what they are.
X-risks seem much more binary: either this is a world in which we go extinct, or a world with no real extinction risk. It’s one or the other, but many interventions hinge on the situation being much more precarious.
I find the “mistopia” notion quite compelling—ignoring wild animal welfare and non-totalist population ethics (e.g. common sense) seems dangerously likely to dominate in disempowerment scenarios.
But I have no idea how to change that. More Global Priorities Research? Public awareness campaigning? Vegan advocacy? Shifting rightward until I have better ideas.
Hi Aditi! My current level of involvement in the animal movement isn’t high enough to be very decision relevant.
As for others in the movement: The main appeal of the first question is to better draw out expectations about future moral patients. It might shed light on the relative strengths of hypothetical sentience candidates in relation to each other. My understanding is that the consensus view is that digital minds dominate far-future welfare. But regardless of whether that is the case, it’s not obvious that it will be without concerted efforts to design these minds as such. And if it is necessary to design digital minds for sentience, then we might expect that other artificial consciousnesses are created before that point (which may deserve our concern).
The last two questions are rough attempts to aid prioritization efforts.
1. Farmed animals receive very little in philanthropic funding; so relatively minor changes may matter a lot.
2. Holden Karnofsky, in his latest 80k episode appearance, said something to the effect that corporate campaigns had, in his view, some of Open Phil’s best returns. Arguably, with fewer commitments being achieved over time and other successes on the horizon (alt protein, policy, new small-animal-focused orgs), this could change. Predictions expecting that it will might in themselves help inform funders making inter-cause prioritization decisions.