RobertM (LessWrong dev & admin as of July 5th, 2022)
My claim is something closer to “experts in the field will correctly recognize them as obviously much smarter than +2 SD”, rather than “they have impressive credentials” (which is missing the critically important part where the person is actually much smarter than +2 SD).
I don’t think reputation has anything to do with titotal’s original claim and wasn’t trying to make any arguments in that direction.
Also… putting that aside, that is one bullet point from my list, and everyone else except Qiaochu has a Wikipedia entry, which is not a criterion I was tracking when I wrote the list but which I think decisively refutes the claim that the list includes many people who are not publicly-legible intellectual powerhouses. (And, sure, I could list Dan Hendrycks. I could probably come up with another twenty such names, even though I think they’d be worse at supporting the point I was trying to make.)
This still feels wrong to me: if they’re so smart, where are the Nobel laureates? The famous physicists?
I think expecting Nobel laureates is a bit much, especially given the demographics (these people are relatively young). But if you’re looking for people who are publicly-legible intellectual powerhouses, I think you can find a reasonable number:
Wei Dai
Hal Finney (RIP)
Scott Aaronson
Qiaochu Yuan[1]
Many historical MIRI researchers (random examples: Scott Garrabrant, Abram Demski, Jessica Taylor)
Paul Christiano (also formerly did research at MIRI)
(Many more not listed, including non-central examples like Robin Hanson, Vitalik Buterin, Shane Legg, and Yoshua Bengio[2].)
And, like, idk, man. 130 is pretty smart but not “famous for their public intellectual output” level smart. There are a bunch of STEM PhDs, a bunch of software engineers, some successful entrepreneurs, and about the number of “really very smart” people you’d expect in a community of this size.
- ^
He might disclaim any current affiliation, but for this purpose I think he obviously counts.
- ^
Who sure is working on AI x-risk and collaborating with much more central rats/EAs, but only came into it relatively recently, which is both evidence in favor of one of the core claims of the post but also evidence against what I read as the broader vibes.
first-hand accounts of people experiencing/overhearing racist exchanges
Sorry, I still can’t seem to find any of these, can you link me to such an account? I have seen one report that might be a second-hand account, though it could have been a non-racial slur.
(I’m generally not a fan of this much meta, but I consider the fact that this was strong downvoted by someone to be egregious. Most of the comment is reasonable speculation that turned out to be right, and the last sentence is a totally normal opinion to have, which might justify a disagree vote at worst.)
And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it’s informative that they did so, but can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
People who’d prefer to not have them platformed at an event somewhat connected to EA don’t seem to think this is a trade off.
Optimizing for X means optimizing against not-X. (Well, at the Pareto frontier, which we aren’t at, but it’s usually true for humans, anyways.) You will generate two different lists of people for two different values of X. Ergo, there is a trade-off.
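A toy illustration (hypothetical names and scores, not reflecting any real people or selection procedure): ranking the same pool of candidates under two different objectives generally produces two different shortlists, which is the trade-off in question.

```python
# Toy sketch: selecting a shortlist under two different objectives.
# Candidates and scores are made up purely for illustration.
candidates = {
    "A": {"truth_seeking": 0.9, "respectability": 0.3},
    "B": {"truth_seeking": 0.7, "respectability": 0.9},
    "C": {"truth_seeking": 0.8, "respectability": 0.5},
    "D": {"truth_seeking": 0.4, "respectability": 0.8},
}

def top_k(objective: str, k: int = 2) -> list[str]:
    """Return the k candidates scoring highest on the given objective."""
    return sorted(candidates, key=lambda name: candidates[name][objective], reverse=True)[:k]

print(top_k("truth_seeking"))   # ['A', 'C']
print(top_k("respectability"))  # ['B', 'D']
# Different values of X -> different lists; optimizing for one trades off against the other.
```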
Anecdotally, a major reason I created this post was because the amount of very edgy people was significantly higher than the baseline for non-EA large events. I can’t think of another event that I have attended where people would’ve felt comfortable saying the stuff that was being said.
Note that these two sentences are saying very different things. The first one is about the percentage of attendees that have certain views, and I am pretty confident that it is false (except in a trivial sense, where people at non-EA events might have different “edgy” views). If you think that the percentage of the general population that holds views at least as backwards as “typical racism” is less than whatever it was at Manifest (where I would bet very large amounts of money the median attendee was much more egalitarian than average for their reference class)...
The second one is about what was said at the event, and so far I haven’t seen anyone describe an explicit instance of racism or bigotry by an attendee (invited speaker or not). There were no sessions about “race science”, so I am left at something of a loss to explain how that is a subject that could continue to come up, unless someone happened to accidentally wander into multiple ongoing conversations about the subject. Absent affirmative confirmation of such an event, my current belief is that much more innocuous things are being lumped in under a much more disparaging label.
Your comment seems to be pretty straightforwardly advocating for optimizing for very traditional political considerations (appearance of respectability, relationships with particular interest groups, etc) by very traditional political means (disassociating with unfavorables). The more central this is to how “EA” operates, the more fair it is to call it a political project.
I agree that many rationalists have been alienated by wokeness/etc. I disagree that much of what’s being discussed today is well-explained by a reactionary leaning-in to edginess, and think that the explanation offered—that various people were invited on the basis of their engagement with concepts central to Manifest, or for specific panels not related to their less popular views—is sufficient to explain their presence.
With that said, I think Austin is not enormously representative of the rationalist community, and it’s pretty off-target to chalk this up as an epistemic win for the EA cultural scene over the rationalist cultural scene. Observe that it is here, on the EA forum, that a substantial fraction of commenters are calling for conference organizers to avoid inviting people for reasons that explicitly trade off against truth-seeking considerations. Notably, there are people who I wouldn’t have invited, if I were running this kind of event, specifically because I think they either have very bad epistemics or are habitual liars, such that it would be an epistemic disservice to other attendees to give those people any additional prominence.
I think that if relevant swathes of the population avoid engaging with e.g. prediction markets on the basis of the people invited to Manifest, this will be substantially an own-goal, where people with 2nd-order concerns (such as anticipated reputational risk) signal boost this and cause the very problem they’re worried about. (This is a contingent, empirical prediction, though unfortunately one that’s hard to test.) Separately, if someone avoided attending Manifest because they anticipated unpleasantness stemming from the presence of these attendees, they either had wildly miscalibrated expectations about what Manifest would be like, or (frankly) they might benefit from asking themselves what is different about attending Manifest vs. attending any other similarly large social event (nearly all of which have invited people with similarly unpalatable views), and whether they endorse letting the mere physical presence of people they could choose to pretend don’t exist stop them from going.
Huh, it’s a bit surprising to me that people disagree so strongly with this comment, which seems to be (uncharitably but not totally inaccurately) paraphrasing the parent, which has much more agreement.
(Maybe most people are taking it literally, rather than interpreting it as a snipe?)
Perhaps it’s missing from the summary, but there is trivially a much stronger argument that doesn’t seem addressed here.
Humans must be pretty close to the stupidest possible things that could design things smarter than them.
This is especially true when it comes to the domain of scientific R&D, where we only have even our minimal level of capabilities because it turns out that intelligence generalizes from e.g. basic tool-use and social modeling to other things.
We know that we can pretty reliably create systems that are superhuman in various domains when we figure out a proper training regime for those domains. e.g. AlphaZero is vastly superhuman in chess/go/etc, GPT-3 is superhuman at next token prediction (to say nothing of GPT-4 or subsequent systems), etc.
The nature of intelligent search processes is to route around bottlenecks. The argument re: bottlenecks proves too much, and doesn’t even seem to stand up historically. Why did bottlenecks fail to stymie superhuman capabilities in the domains where we’ve achieved them?
Humanity, today, could[1] embark on a moderately expensive project to enable wide-scale genomic selection for intelligence, which within a single generation would probably produce a substantial number of humans smarter than any who’ve ever lived. Humans are not exactly advantaged in their ability to iterate here, compared to AI.
The general shape of Thorstad’s argument doesn’t really make it clear what sort of counterargument he would admit as valid. Like, yes, humans have not (yet) kicked off any process of obvious, rapid, recursive self-improvement. That is indeed evidence that it might take humans a few decades after they invent computing technology to do so. What evidence, short of us stumbling into the situation under discussion, would be convincing?
- ^
(Social and political bottlenecks do exist, but the technology is pretty straightforward.)
I’ve spent some time thinking about the same question and I’m glad that there’s some multiple discovery; the AI Control agenda seems relevant here.
I think that neither of those are selective uses of analogies. They do point to similarities between things we have access to and future ASI that you might not think are valid similarities, but that is one thing that makes analogies useful—they can make locating disagreements in people’s models very fast, since they’re structurally meant to transmit information in a highly compressed fashion.
There is no button you can press on demand to publish an article in either a peer-reviewed journal or a mainstream media outlet.
Publishing pieces in the media (with minimal 3rd-party editing) is at least tractable on the scale of weeks, if you have a friendly journalist. The academic game is one to two orders of magnitude slower than that. If you want to communicate your views in real-time, you need to stick to platforms which allow that.
I do think media comms is a complementary strategy to direct comms (which MIRI has been using, to some degree). But it’s difficult to escape the fact that information posted on LW, the EA forum, or Twitter (by certain accounts) makes its way down the grapevine to relevant decision-makers surprisingly often, given how little overhead is involved.
ETA: feel free to ignore the below, given your caveat, though you may find it helpful if you choose to write an expanded form of any of the arguments later to have some early objections.
Correct me if I’m wrong, but it seems like most of these reasons boil down to not expecting AI to be superhuman in any relevant sense (since if it is, effectively all of them break down as reasons for optimism)? To wit:
Resource allocation is relatively equal (and relatively free of violence) among humans because even humans that don’t very much value the well-being of others don’t have the power to actually expropriate everyone else’s resources by force. (We have evidence of what happens when those conditions break down to any meaningful degree; it isn’t super pretty.)
I do not think GPT-4 is meaningful evidence about the difficulty of value alignment. In particular, the claim that “GPT-4 seems to be honest, kind, and helpful after relatively little effort” seems to be treating GPT-4’s behavior as meaningfully reflecting its internal preferences or motivations, which I think is “not even wrong”. I think it’s extremely unlikely that GPT-4 has preferences over world states in a way that most humans would consider meaningful, and in the very unlikely event that it does, those preferences almost certainly aren’t centrally pointed at being honest, kind, and helpful.
re: endogenous response to AI—I don’t see how this is relevant once you have ASI. To the extent that it might be relevant, it’s basically conceding the argument: that the reason we’ll be safe is that we’ll manage to avoid killing ourselves by moving too quickly. (Note that we are currently moving at pretty close to max speed, so this is a prediction that the future will be different from the past. One that some people are actively optimizing for, but also one that other people are optimizing against.)
re: perfectionism—I would not be surprised if many current humans, given superhuman intelligence and power, created a pretty terrible future. Current power differentials do not meaningfully let individual players flip every single other player the bird at the same time. Assuming that this will continue to be true is again assuming the conclusion (that AI will not be superhuman in any relevant sense). I also feel like there’s an implicit argument here about how value isn’t fragile that I disagree with, but I might be reading into it.
I’m not totally sure what analogy you’re trying to rebut, but I think that human treatment of animal species, as a piece of evidence for how we might be treated by future AI systems that are analogously more powerful than we are, is extremely negative, not positive. Human efforts to preserve animal species are a drop in the bucket compared to the casual disregard with which we optimize over them and their environments for our benefit. I’m sure animals sometimes attempt to defend their territory against human encroachment. Has the human response to this been to shrug and back off? Of course, there are some humans who do care about animals having fulfilled lives by their own values. But even most of those humans do not spend their lives tirelessly optimizing for their best understanding of the values of animals.
I think the modal no-Anthropic counterfactual does not have an alignment-agnostic AI company that’s remotely competitive with OpenAI, which means there’s no external target for this Amazon investment. It’s not an accident that Anthropic was founded by former OpenAI staff who were substantially responsible for OpenAI’s earlier GPT scaling successes.
I don’t know if it’s commonly agreed upon; that’s just my current belief based on available evidence (to the extent that the claim is even philosophically sound enough to be pointing at a real thing).
Re: ontological shifts, see this arbital page: https://arbital.com/p/ontology_identification.
The fact that natural selection produced species with different goals/values/whatever isn’t evidence that that’s the only way to get those values, because “selection pressure” isn’t a mechanistic explanation. You need more info about how values are actually implemented to rule out that a proposed alternative route to natural selection succeeds in reproducing them.
I’m not claiming that evolution is the only way to get those values, merely that there’s no reason to expect you’ll get them by default by a totally different mechanism. The fact that we don’t have a good understanding of how values form even in the biological domain is a reason for pessimism, not optimism.
At best, these theory-first efforts did very little to improve our understanding of how to align powerful AI. And they may have been net negative, insofar as they propagated a variety of actively misleading ways of thinking both among alignment researchers and the broader public. Some examples include the now-debunked analogy from evolution, the false distinction between “inner” and “outer” alignment, and the idea that AIs will be rigid utility maximizing consequentialists (here, here, and here).
Random aside, but I think this paragraph is unjustified: its core argument (that the referenced theory-first efforts propagated actively misleading ways of thinking about alignment) doesn’t hold up, and none of the citations provide the claimed support.
The first post (re: the evolutionary analogy as evidence for a sharp left turn) sees substantial pushback in the comments, and that pushback seems more correct to me than not; in any case, the post seems to misunderstand the position it’s arguing against.
The second post presents an interesting case for a set of claims that are different from “there is no distinction between inner and outer alignment”; I do not consider it to be a full refutation of that conceptual distinction. (See also Steven Byrnes’ comment.)
The third post is at best playing games with the definitions of words (or misunderstanding the thing it’s arguing against), at worst is just straightforwardly wrong.
I have less context on the fourth post, but from a quick skim of both the post and the comments, I think the way it’s most relevant here is as a demonstration of how important it is to be careful and precise with one’s claims. (The post is not making an argument about whether AIs will be “rigid utility maximizing consequentialists”; it is making a variety of arguments about whether coherence theorems necessarily require that whatever ASI we might build will behave in a goal-directed way. Relatedly, a comment Rohin wrote a year after that post indicated that he thinks we’re likely to develop goal-directed agents; he just doesn’t think that’s entailed by arguments from coherence theorems, which may or may not have been made by e.g. Eliezer in other essays.)
My guess is that you did not include the fifth post as a smoke test to see if anyone was checking your citations, but I am having trouble coming up with a charitable explanation for its inclusion in support of your argument.
I’m not really sure what my takeaway is here, except that I didn’t go scouring the essay for mistakes—the citation of Quintin’s post was just the first thing that jumped out at me, since that wasn’t all that long ago. I think the claims made in the paragraph are basically unsupported by the evidence, and the evidence itself is substantially mischaracterized. Based on other comments it looks like this is true of a bunch of other substantial claims and arguments in the post:
- ^
Though I’m sort of confused about what this back-and-forth is talking about, since it’s referencing behind-the-scenes stuff that I’m not privy to.
Please stop saying that mind-space is an “enormously broad space.” What does that even mean? How have you established a measure on mind-space that isn’t totally arbitrary?
Why don’t you make the positive case for the space of possible (or, if you wish, likely) minds being minds which have values compatible with the fulfillment of human values? I think we have pretty strong evidence that not all minds are like this even within the space of minds produced by evolution.
What if concepts and values are convergent when trained on similar data, just like we see convergent evolution in biology?
Concepts do seem to be convergent to some degree (though note that ontological shifts at increasing levels of intelligence seem likely), but I do in fact think that evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species.
The argument w.r.t. capabilities is disanalogous.
Yes, the training process is running a search where our steering is (sort of) effective for getting capabilities—though note that with e.g. LLMs we have approximately zero ability to reliably translate known inputs [X] into known capabilities [Y].
We are not doing the same thing to select for alignment, because “alignment” is:
an internal representation that depends on multiple unsolved problems in philosophy, decision theory, epistemology, math, etc, rather than “observable external behavior” (which is what we use to evaluate capabilities & steer training)
something that might be inextricably tied to the form of general intelligence which by default puts us in the “dangerous capabilities” regime, or if not strongly bound in theory, then strongly bound in practice
I do think this disagreement is substantially downstream of a disagreement about what “alignment” represents, i.e. I think that you might attempt outer alignment of GPT-4 but not inner alignment, because GPT-4 doesn’t have the internal bits which make inner alignment a relevant concern.
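A minimal sketch of the asymmetry I have in mind (the functions, model interface, and benchmark here are hypothetical stand-ins, not any real API): we can select for capabilities by scoring observable outputs, but there is no analogous evaluator for the internal property we actually care about when we say “alignment”.

```python
# Sketch of the evaluation asymmetry between capabilities and alignment.
# `model` and `benchmark` are hypothetical stand-ins for illustration only.

def evaluate_capability(model, benchmark) -> float:
    """Capabilities can be selected on via observable external behavior:
    run the model on tasks and score its outputs."""
    return sum(model(task) == answer for task, answer in benchmark) / len(benchmark)

def evaluate_alignment(model) -> float:
    """No analogous check exists: the property of interest is an internal
    representation (goals/values under distribution shift), not a score over
    visible outputs, and we currently lack the theory to read it off the weights."""
    raise NotImplementedError("no known observable proxy for internal goals")
```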
No easily summarizable comment on the rest of it, but as a LessWrong dev I do think the addition of Quick Takes to the front page of LW was very good—my sense is that it’s counterfactually responsible for a pretty substantial amount of high quality discussion. (I haven’t done any checking of ground-truth metrics, this is just my gestalt impression as a user of the site.)