Note that much of the strongest opposition to Anthropic is also associated with EA, so it’s not obvious that the EA community has been an uncomplicated good for the company, though I think it likely has been fairly helpful on net (especially if one measures EA’s contribution to Anthropic’s mission of making transformative AI go well for the world rather than its contribution to the company’s bottom line). I do think it would be better if Anthropic comms were less evasive about the degree of their entanglement with EA.
(I work at Anthropic, though I don’t claim any particular insight into the views of the cofounders. For my part I’ll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in that paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation, and I wouldn’t personally have said them, but I think “a journalist goes through your public statements looking for the most damning or hypocritical things you’ve ever said out of context” is an incredibly tricky situation to come out of looking good, and many of the comments here seem a bit uncharitable given that.)
Thanks so much for this post—I’m going to adjust my buying habits from now on!
My impression is that e.g. Vital Farms is still substantially better than conventional egg brands, and if I need to buy eggs in a store that doesn’t offer these improved options it still probably cuts suffering per egg in half or more relative to a cheaper alternative. Does that seem right to you?
What are the limitations of the rodent studies? Two ways I could imagine them being inadequate:
(1) Rodent eyes are smaller, and the physical scale of the relevant features matters a lot for how damaging far UV-C is (although I would naively guess that smaller eyes are if anything worse for this, so if the rodents do fine then I’d think the humans would too).
(2) Rodents can’t follow detailed instructions or provide subjective reports, so there are some kinds of subtle vision impairment we wouldn’t be able to notice.
Do either of these apply, or are the limitations in these studies from other factors?
Since no one has said anything in reply to this comment yet: I suspect it is getting downvotes because it doesn’t seem especially relevant to the current discussion and feels like it would fit better as a standalone post or an Intercom message or something.
I’m lazy; I am not immune to the phenomenon where users reliably fail to optimize their use of a website, even though their experience improves when such changes are made for them. (I suspect this perspective is underrepresented in the comments because fewer people are willing to admit it, and it’s probably more common among lurkers.)
I consume content weighted in large part by how many upvotes it has, because that’s where the discussion is and it’s what people will be talking about. (Also because in my case most of my EA Forum reading comes from karma-gated RSS feeds, though I expect this to be uncommon.) This means that in an equilibrium where most attention goes to community posts, I’ll read more of them, but I would be happy with a state of affairs that shifted the equilibrium toward object-level posts.
I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but as a call for a norm that, on the forum, the time and place to write sentences you do not think are actually true as stated is “never (except maybe April Fools)”.
The change I’d like to see in this post isn’t a five-paragraph footnote on morality, but just the replacement of a sentence I don’t think they actually believe with one they do. I think environments where it is considered a faux pas to point out “actually, I don’t think you can have a justified belief in the thing you said” are extremely corrosive to the epistemics of the community that hosts them, and it’s worth pushing back on them pretty strongly.
“it doesn’t seem good for people to face hardship as a result of this”
I agree, but the tradeoff is not between “someone with a grant faces hardship” and “no one faces hardship”, it’s between “someone with a grant faces hardship” and “someone with deposits at FTX faces hardship”.
I expect that the person with the grant is likely to put that money to much better use for the world, and that’s a valid reason not to return it! But in terms of the direct harm to the person deprived of the money, I’d guess the median person who lost $10,000 to unrecoverable FTX deposits is made a fair bit worse off by that than the median person with a $10,000 Future Fund grant would be by returning it.
I assume you mean something like “return the money to FTX such that it gets used to pay out customer balances”, but I don’t actually know how I’d go about doing this as an individual. It seems like if this were a thing lots of people wished to do, we’d need some infrastructure to make it happen, and doing so in a way that gave the funds the correct legal status to be transferred back to customers might be nontrivial.
(Or not; I’m definitely not an expert here. Happy to hear from someone with more knowledge!)
What level of feedback detail do applicants currently receive? I would expect that giving a few more bits beyond a simple yes/no would have a good ROI, e.g. at least having the grantmaker tick some boxes on a dropdown menu.
“No because we think your approach has a substantial chance of doing harm”, “no because your application was confusing and we didn’t have the time to figure out what it was saying”, and “no because we think another funder is better able to evaluate this proposal, so if they didn’t fund it we’ll defer to their judgment” seem like useful distinctions to applicants without requiring much time from grantmakers.
Opening with a strong claim, making your readers scroll through a lot of introductory text, and ending abruptly with “but I don’t feel like justifying my point in any way, so come up with your own arguments” is not a very good look on this forum.
Insightful criticism of the capital allocation dynamics in EA is a valuable and worthwhile thing that I expect most EA Forum readers would like to see! But this is not that, and the extent to which it appears to be that for several minutes of the reader’s attention comes across as rather rude. My gut reaction to this kind of rhetorical strategy is “if even the author doesn’t want to put forth the effort to make this into a coherent argument, why should I?”
[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]
Alexey Guzey has posted a very critical review of Why We Sleep—I haven’t deeply investigated the resulting debate, but my impression from what I’ve seen thus far is that the book should be read with a healthy dose of skepticism.
If one doesn’t have strong time discounting in favor of the present, the vast majority of the value that can be theoretically realized exists in the far future.
As a toy model, suppose the world is habitable for a billion years, but there is an extinction risk in 100 years which requires substantial effort to avert.
If resources are dedicated entirely to mitigating extinction risk, there is net −1 utility each year for 100 years, but a 90% chance that the world can be at +5 utility every year afterwards, once these resources are freed up for direct work. (In the extinction case, there is no more utility to be had by anyone.)
If resources are split between extinction risk and improving current subjective experience, there is net +2 utility each year for 100 years, and a 50% chance that the world survives to the positive long-term future state above. It’s not hard to see that the former case has massively higher total utility, and remains so under almost any numbers in the model, so long as we can expect billions of years of potential future good.
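For concreteness, here is a minimal sketch of that arithmetic, using the numbers above (the function and variable names are just illustrative):

```python
# Rough expected-utility arithmetic for the toy model above.
HORIZON = 1_000_000_000   # years the world remains habitable
RISK_PERIOD = 100         # years until the extinction risk is resolved

def expected_total_utility(utility_now, survival_prob, utility_later=5):
    """Utility accrued during the risk period, plus the long-run payoff
    weighted by the probability of surviving to enjoy it."""
    near_term = utility_now * RISK_PERIOD
    long_term = survival_prob * utility_later * (HORIZON - RISK_PERIOD)
    return near_term + long_term

all_in_on_risk = expected_total_utility(utility_now=-1, survival_prob=0.9)
split_effort = expected_total_utility(utility_now=2, survival_prob=0.5)

print(f"all-in on x-risk: {all_in_on_risk:.2e}")  # ~4.5e9
print(f"split effort:     {split_effort:.2e}")    # ~2.5e9
```

The long-run term dwarfs the near-term one, which is why the comparison is so insensitive to the particular numbers chosen for the first 100 years.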
A model like this relies crucially on the idea that at some point we can stop diverting resources to global catastrophic risk, or at least do so less intensively, but I think that’s a reasonable assumption. We currently live in an unusually risk-prone world; it seems very plausible that pandemic risk, nuclear warfare, catastrophic climate change, unfriendly AGI, etc. could all be safely dealt with within a few centuries, if modern civilization endures long enough to keep working on them.
One’s priorities can change over time as their marginal value shifts; ignoring other considerations for the moment doesn’t preclude focusing on them once we’ve passed various x-risk hurdles.
It seems to me that there are quite low odds of 4000-qubit computers being deployed without proper preparations? There are very strong incentives for cryptography-using organizations of almost any stripe to transition to post-quantum encryption algorithms as soon as they expect such algorithms to become necessary in the near future, for instance as soon as they catch wind of 200-, 500-, and 1000-qubit quantum computers. Given that post-quantum algorithms already exist, it doesn’t take long to go from worrying about better quantum computers to protecting against them.
In particular, it seems like the only plausible route by which many current or recent communications get decrypted by large quantum computers is one in which a large amount of quantum computation is suddenly directed toward that goal without prior warning. This seems to require both (1) an incredible series of theoretical and engineering accomplishments produced entirely in secret, perhaps on the scale of the Manhattan Project, and (2) that this work be done by an organization which is either malicious in its own right or distributes the machines publicly to other such actors.
(1) is not inconceivable (the Manhattan Project did happen*), but (2) seems less likely; in particular, the most malicious organizations I can think of with the resources to pull off (1) are something like the NSA, and I think there is a pretty hard upper bound on how bad their actions can be (in particular, “global financial collapse from bank fraud” doesn’t seem like a possibility). Also, the NSA has already broken various cryptographic schemes in secret, and the results seem to have been far from catastrophic.
I don’t see a route by which generic actors could acquire RSA-breaking quantum tech without the users of RSA being able to see it coming months if not years in advance.
*Though note that there were no corporations working to develop nuclear bombs, while there are various tech giants looking at ways of developing quantum computers, so the competition is greater.
Oh, definitely agreed—I think effects like “EA counterfactually causes a person to work at Anthropic” are straightforwardly good for Anthropic. Almost all of the bad-for-Anthropic effects from EA that I expect come from people who have never worked there.
(Though again, I think even the all-things-considered effect of EA has been substantially positive for the company, and I agree that it would probably be virtue-ethically better for Anthropic to express more of the value they’ve gotten from that commons.)