OpenAI taking security more seriously seems good, and I also expect it to be good for reducing race dynamics (the less that US adversaries are able to hack US labs, the less tight I expect a race to be).
I think there’s a decently-strong argument for there being some cultural benefits from AI-focused companies (or at least AGI-focused ones) – namely, because they are taking the idea of AGI seriously, they’re more likely to understand and take seriously AGI-specific concerns like deceptive misalignment or the sharp left turn. Empirically, I claim this is true – Anthropic and OpenAI, for instance, seem to take these sorts of concerns much more seriously than do, say, Meta AI or (pre-Google DeepMind) Google Brain.
Speculating, perhaps the ideal setup would be for an established organization to swallow an AGI-focused effort, as with Google DeepMind (or for an AGI-focused company to be nationalized and placed under a government agency with a strong safety culture).
I’m pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal suffering vegans by tabooing poultry at the expense of beef.
Generally disagree, because the meat eaters don’t get anything out of this agreement. “We’ll both agree to eat beef but not poultry” doesn’t benefit the meat eater. The one major possible exception imho is people in relationships – I could imagine a couple where one person is vegan and the other is a meat eater, where they decide both doing this is a Pareto improvement.
I think it is worth at least a few hours of every person’s time to help people during a war and humanitarian crisis.
I don’t think this is true, and I don’t see an a priori reason to expect cause prioritization research to result in that conclusion. I also find it a little weird how often people make this sort of generalized argument for focusing on this particular conflict, when such a generalized statement should apply equally well to many more conflicts that are much more neglected and lower salience but where people rarely make this sort of argument (it feels like some sort of selective invocation of a generalizable principle).
My personal view is that being an EA implies spending some significant portion of your efforts being (or aspiring to be) particularly effective in your altruism, but it doesn’t by any means demand you spend all your efforts doing so. I’d seriously worry about the movement if there was some expectation that EAs devote themselves completely to EA projects and neglect things like self-care and personal connections (even if there was an exception for self-care & connections insofar as they help one be more effective in their altruism).
It sounds like you developed a personal connection with this particular dog rather quickly, and while this might be unusual, I wouldn’t consider it a fault. At the same time, while I don’t see a problem with EAs engaging in that sort of partiality with those they connect with, I would worry a bit if you were making the case that this sort of behavior was in itself an act of effective altruism, as I think prioritization, impartiality, and good epistemics are really important to exhibit when engaged in EA projects. (Incidentally, this is one further reason I’d worry if there was an expectation that EAs devote themselves completely to EA projects – I think this would lead to more backwards rationalizations about why various acts people want to do are actually EA projects when they’re not, and this would hurt epistemics and so on.) But you don’t really seem to be doing that.
IIRC it took me about a minute or two. But I already had high context and knew how I wanted to vote, so after getting oriented I didn’t have to spend time learning more or thinking through tradeoffs.
I am curious to know how many Americans were consulted about the decision to spend about $10,000 per tax-payer on upgrading nuclear weapons… surely this is a decision that American voters should have been deeply involved in, given that it impacts both their taxes and their chance of being obliterated in a nuclear apocalypse.
I think there’s a debate to be had about when it’s best for political decisions to be decided directly by what the public wants, vs. when it’s better for the public to elect representatives who make decisions based on a combination of their personal judgment and deference to domain experts. I don’t think this is obviously a case where the former makes more sense.
It feels like that much money could be much better spent in other areas.
Sure, but the alternative isn’t the money being spent half on AMF and half on the LTFF – it’s instead some combination of other USG spending, lower US taxes, and lower US deficits. I suspect the more important factor in whether this is good or bad will instead be the direct effects of this on nuclear risk (I assume some parts of the upgrade will reduce nuclear risk – for instance, better sensors might reduce the chances of a false positive of incoming nuclear weapons – while other parts will increase the risk).
Isn’t there a contradiction between the idea that nuclear weapons serve as a deterrent and the idea that we need to upgrade them? The implication would seem to be that the largest nuclear missile stockpile on the planet still isn’t a sufficient deterrent, in which case what exactly would constitute a deterrent?
Not necessarily – the upgrade likely includes many aspects for reducing the chances that a first-strike from adversaries could nullify the US stockpile (efforts towards this goal could include both hardening and redundancy), thus preserving US second-strike capabilities.
More to the point, is this decision being taken by people who see nuclear war as a zero-sum game—we win or we lose
I’m sure ~everyone involved considers nuclear war a negative-sum game. (They likely still think it’s preferable to win a nuclear war than to lose it, but they presumably think the “winner” doesn’t gain as much as the “loser” loses.)
If the US truly needs to upgrade its nuclear arsenal, then surely the same is true of Russia
Yeah, my sense is multiple countries will upgrade their arsenals soon. I’m legitimately uncertain whether this will on net increase or decrease nuclear risk (largely I’m just ignorant here – there may be an expert consensus that I’m unaware of, but I don’t think the immediate reaction of “spending further money on nukes increases nuclear risk” is obviously correct). Even if it would be better for everyone not to, it may be hard to coordinate to avoid doing so (though it may still be worth trying).
Given the success of Oppenheimer and the spectre of nuclear annihilation that has been raised by the war in Ukraine, this might be the moment to get the public behind such an initiative.
I think it’s not crazy to think there might be a relative policy window now to change course, given these reasons.
I don’t have any strong views on whether this user should have been given a temporary ban vs a warning, but (unless the ban was for a comment which is now deleted or a private message, which are each possible, and feel free to correct me if so), from reading their public comments, I think it’s inaccurate (or at least misleading) to describe them as “promoting violence”. Specifically, they do not seem to have been advocating that anyone actually use violence, which is what I think the most natural interpretation of “promoting violence” would be. Instead, they appear to have been expressing that they’d emotionally desire that people who hypothetically did the thing in question would face violence, that (in the hypothetical example) they’d feel the urge to use violence, and so on.
I’m not defending their behavior, but it does feel importantly less bad than what I initially assumed from the moderator comment, and I think it’s important to use precise language when making these sorts of public accusations.
Worth noting that in humans (and unlike in most other primates), status isn’t determined solely by dominance (e.g., control via coercion), but is also significantly influenced by prestige (e.g., voluntary deference due to admiration). While both dominance and prestige play a large role in determining status among humans, if anything prestige probably plays the larger role.
(Note – I’m not an expert in anthropology, and anyone who is can chime in, but this is my understanding given my amount of knowledge in the area.)
Note to Israelis who may be reading this: I did not upvote/downvote this post and I do not intend to vote on such posts going forward. I think you should do the same.
You’re free to vote (or refrain from voting) how you want, but the suggestion to others feels illiberal to me in a way that I think is problematic. Would you also suggest that any Palestinians reading this post refrain from voting on it? (Or, going a step further, would you suggest Kenyan EAs refrain from voting on posts about GiveDirectly?) Personally, I think both Israeli EAs and Palestinian EAs should feel comfortable voting on posts like this, and I’d worry about the norms in the community if we tell people not to vote/otherwise voice their perspective based on demographics (even more so if these suggestions are asymmetrical instead of universal).
Another group that naturally could be in a coalition with those 2 – parents who just want clean air for their children to breathe from a pollution perspective, unrelated to covid. (In principle, I think many ordinary adults should also want clean air for themselves to breathe due to the health benefits, but in practice I expect a much stronger reaction from parents who want to protect their children’s lungs.)
My problem with the post wasn’t that it used subpar prose or “could be written better”, it’s that it uses rhetorical techniques that make actual exchange of ideas and truth-seeking harder. This isn’t about “argument style points”, it’s about cultivating norms in the community that make it easier for us to converge on truth, even on hard topics.
The reason I didn’t personally engage with the object level is that I didn’t feel like I had anything particularly valuable to say on the topic. I wasn’t avoiding saying my object-level views (if he had written a similar post with a style I didn’t take issue with, I wouldn’t have responded at all), and I don’t want other people in the community to avoid engaging with the ideas either.
I feel like this post is doing something I really don’t like, which I’d categorize as something like “instead of trying to persuade with arguments, using rhetorical tricks to define terms in such a way that the other side is stuck defending a loaded concept and has an unjustified uphill battle.”
For instance:
let us be clear: hiding your beliefs, in ways that predictably leads people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.
I mean, no, that’s just not how the term is usually used. It’s misleading to hide your beliefs in that way, and you could argue it’s dishonest, but it’s not generally what people would call a “lie” (or if they did, they’d use the phrase “lie by omission”). One could argue that lies by omission are no less bad than lies by commission, but I think this is at least nonobvious, and also a view that I’m pretty sure most people don’t hold. You could have written this post with words like “mislead” or “act coyly about true beliefs” instead of “lie”, and I think that would have made this post substantially better.
I also feel like the piece weirdly implies that it’s dishonest to advocate for a policy that you think is second best. Like, this just doesn’t follow – someone could, for instance, want a $20/hr minimum wage, and advocate for a $15/hr minimum wage based on the idea that it’s more politically feasible, and this isn’t remotely dishonest unless they’re being dishonest about their preference for $20/hr in other communications. You say:
many AI Safety people being much more vocal about their endorsement of RSPs than their private belief that in a saner world, all AGI progress should stop right now.
but this simply isn’t contradictory – you could think a perfect society would pause but that RSPs are still good and make more sense to advocate for given the political reality of our society.
That’s fair. I also don’t think simply putting a post on the forum is in itself enough to constitute a group being an EA group.
I don’t think that’s enough to consider an org an EA org. Specifically, if that was all it took for an org to be considered an EA org, I’d worry about how it could be abused by anyone who wanted to get an EA stamp of approval (which might have been what happened here – note that the post is the founders’ only post on the forum).
[Just commenting on the part you copied]
Feels way too overconfident. Would the cultures diverge due to communication constraints? Seems likely, though I could also imagine pathways by which it wouldn’t happen significantly, such as if a singleton had already been reached.
Would technological development diverge significantly, conditional on the above? Not necessarily, imho. If we don’t have a self-sufficient colony on Mars before we reach “technological maturity” (e.g., with APM and ASI), then presumably no (tech would hardly progress further at that point).
Would tech divergence imply each world can’t truly track whatever weapons the other world had? Again, not necessarily. Perhaps one world had better tech and could just surveil the other.
Would there be a for-sure first-strike advantage? Again, seems debatable.
Etcetera.
I was also surprised by how highly the EMH post was received, for a completely different reason – the fact that markets aren’t expecting AGI in the next few decades seems unbelievably obvious, even before we look at interest rates. If markets were expecting AGI, AI stocks would presumably be much more to the moon than they are now (at least compared to non-AI stocks), and market analysts would presumably (at least occasionally) cite the possibility of AGI as the reason why. But we weren’t seeing any of that, and we already knew from just general observation of the zeitgeist that, until a few months ago, the prospect of AGI was overwhelmingly not taken seriously outside of a few niche sub-communities and AI labs (how to address this reality has been a consistent, well-known hurdle within the AI safety community).
So I’m a little confused at what exactly judges thought was the value provided by the post – did they previously suspect that markets were taking AGI seriously, and this post significantly updated them towards thinking markets weren’t? Maybe instead judges thought that the post was valuable for some other reason unrelated to the main claim of “either reject EMH or reject AGI in the next few decades”, in which case I’d be curious to hear about what that reason is (e.g., if the post causes OP to borrow a bunch of money, that would be interesting to know).
Granted, it’s an interesting analysis, but that seems like a different question, and many of the other entries (including both those that did and didn’t win prizes) strike me as having advanced the discourse more, at least if we’re focusing on the main claims.
Did Eric Drexler not describe ideas like this in Engines of Creation? Either way, I would guess that Drexler has thought of similar ideas before (sans the phrase “diamondoid bacteria”) and has also likely communicated these ideas to Eliezer (albeit perhaps in an informal context). Though it’s also possible Eliezer came up with it independently, as it seems like a relatively natural idea to consider once you already assume diamondoid mechanosynthesis can create atomically precise nanobots.
I think my introductory explainer on the topic is a pretty good resource for that sort of audience:
https://medium.com/@daniel_eth/ai-alignment-explained-in-5-points-95e7207300e3