I follow Crocker’s rules.
niplav
(Note: did not downvote)
Your central question appears interesting and important to me: Has Anthropic joined the arms race for advanced AI? If yes, why?
(And taking a conflict-theoretic stance by default toward new AI startups is perhaps good, based on the evidence one has received via DeepMind/OpenAI.)
So, I’d join in the call for asking e.g. Anthropic (but also other startups like Conjecture, Adept AI and Aligned AI) about their plans to avoid race dynamics, and about how they are currently implementing those plans. However, I believe it’s not very likely that Anthropic in particular will comment on this.
However, your post is mostly not fleshing out the question, but instead not-quite-attacking-but-also-not-not-attacking Anthropic (“Even when I’m mostly talking to AI ethicists now, I still regarded Anthropic as something not evil”), not fully fleshing out the reasons why you’re asking the question (“I feel there’s a no-confidence case for us trusting Anthropic to do what they are doing well”), and instead talking a lot about your emotional state. (I don’t think that talking about your emotional state is necessarily bad, but I’d like accusations, questions and statements about emotions to be separated if possible.)
See my thread for more questions. I feel traumatized by EA, by this duplicity (that I have seen “rising up” before this, see my other threads). I’m searching for a job and I’m scared of people. Because this is not the first time, not at all. Somehow tech people are “number one” at this. And EA/tech people seem to be “number 0”, even better at Machiavellianism and duplicity than Peter Thiel or Musk. At least, Musk openly says he’s “red-pilled” and talks to Putin. What EA/safety is doing is kinda similar but hidden under the veil of “safety”.
I don’t understand this paragraph, for example. Why do you believe that EA/tech people are better at Machiavellianism than those two? And who exactly are the “EA/tech people” here? That would be good to know.
I have a slightly negative reaction to this kind of thinking.
At the limit, there is a trade-off between reporting my beliefs without bias in the sampling (i.e. without lies by omission) and trying to convince people. If I mainly talk about how recommender systems are having bad effects on the discourse landscape because they are misaligned, I am filtering evidence (and thereby imposing very high epistemic costs on my discussion partner in the process!).
In the process of doing so, I would potentially not only be making the outside epistemic environment worse, but also damaging my own epistemics (or those of the EA community), via Elephant-in-the-Brain-like dynamics or via the conjecture that if you say something long enough, you become more likely to believe it yourself.
A good idea that came out of the discussion (point 3, “Bayesian Honesty”) around Meta-Honesty was the heuristic that, when talking to another person, one shouldn’t give information that would, in expectation, cause the other person to update in the wrong direction. I think the above proposals would sometimes skirt this line (and cross it when considering beliefs about the EA community, such as “EA mainly worries about recommender systems increasing political polarization”).
Perhaps this is just a good reason for me not to be a spokesperson about AI risk (being probably inappropriately married to the idea that truth is to be valued above everything else), but I wish that people would be very thoughtful about reporting misleading reasons for why large parts of the EA community are extremely freaked out about AI (and not, as the examples would suggest, just a bit worried).
I’d be curious about a list of topics they would like others to investigate/continue investigating, or a list of the most important open questions.
The last person to have a case of smallpox, Ali Maow Maalin, dedicated years of his life to eradicating polio in the region.
On July 22nd 2013, he died of malaria while traveling again after polio had been reintroduced.
Value of information seems to exceed the potential damage done at these sample sizes for me.
Maybe we don’t just want to optimize the messaging, but also the messengers: having charismatic & likeable people talk about this stuff might be good (to what extent is this already happening? Are MacAskill & Ord as good at being spokespeople as they are at being researchers?).
Furthermore, taking the WaitButWhy approach, with easily understandable visualizations, sounds good to me, I agree.
Another way to do this is the way the rationality community does it: its highest-status members are often pseudonymous internet writers with sometimes no visible credentials and sometimes active disdain for credentials (with the observation that argument screens off authority).
Gwern has no (visible) credentials (unless you count the huge & excellent website as one), Yudkowsky disdains them, Scott Alexander sometimes brings them up, Applied Divinity Studies and dynomight and Fantastic Anachronism are all pseudonymous and probably prefer to keep it that way…
I think it’s much easier to be heard & respected in the EA community purely through online writing & content production (for which you “only” need intelligence, conscientiousness & time, but rarely connections) than in most other communities (and especially academia).
I strongly agree with you, and would add that long content like Gwern’s (or Essays on Reducing Suffering or PredictionBook or Wikipedia etc.) is important as epistemic infrastructure: it has the added value of constant maintenance, which allows it to achieve depth and scope that is usually not found in blogs. I think this kind of maintenance is really, really important, especially when considering long-term content. I mourn the times when people would put serious effort into putting together an FAQ for things—truly weapons from a more civilized age.
I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned—and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible ROI for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts—works in progress.
—Gwern, “About This Website”, 2021
On the other hand, most blogs to me seem to be epistemic fireworks (or, put more nicely, epistemic tinder that sparks a conversation): read mostly when released, and then slowly bit-rotting away until the link goes stale. (Why don’t people care more about their content when they put so much effort into producing it‽)
I find it ironic that the FTX Long Term Future Fund is giving out a prize to a medium that is so often so ephemeral, so much not long-term, as blogs (what value can I gain from reading the whole archive of Marginal Revolution? A lot, probably, but extremely little value per post; I’m likely better off reading Wikipedia). What’s next? The $10k prize for the best Discord message about longtermism? The best tweet? (“It’s about the outreach! Many more people read tweets and Discord messages!”)
I find this comment much more convincing than the top-level post.
Note that it’s much easier to improve existing pages than to add new ones.
More EA-relevant Wikipedia articles that don’t yet exist:
Place premium
Population Ethics pages
Sadistic conclusion
Critical-threshold approaches
Cantril Ladder
Axelrod’s Meta-Norm
Open-source game theory
Humane Technology
Chris Olah
Machine Learning Interpretability
Circuit
Induction head
Lottery Ticket Hypothesis
Grokking
Deep Double Descent
Nanosystems: Molecular Machinery, Manufacturing, and Computation
Global Priorities Institute
Scaling Laws for Large Language Models
Some of these articles are about AI capabilities, so perhaps not as great to write about.
Additionally, the following EA-relevant articles could be greatly improved:
Causal Model (in general, the causal inference articles on Wikipedia are surprisingly lacking)
I strongly agree with the compliments thing!
This sequence is spam and should be deleted.
This solidifies a conclusion for me: when talking about AI risk, the best/most rigorous resources aren’t the ones which are most widely shared/recommended (rigorous resources are e.g. Ajeya Cotra’s report on AI timelines, Carlsmith’s report on power-seeking AI, Superintelligence by Bostrom or (to a lesser extent) Human Compatible by Russell).
Those might still not be satisfying to skeptics, but are probably more satisfying than “short stories by Eliezer Yudkowsky” (though one can take an alternative angle: skeptics wouldn’t bother reading a >100 page report, and I think the complaint that it’s all short stories by Yudkowsky comes from the fact that that’s what people actually read).
Additionally, there appears to be a perception that AI safety research is limited to MIRI & related organisations, which definitely doesn’t reflect the state of the field—but from the outside this multipolarity might be hard to discover (outgroup-ish homogeneity bias strikes again).
What do you mean by “accurate estimate”? The more sophisticated version would be to create a probability distribution over the value of the marginal win, as well as for the intervention, and then perform a Monte-Carlo analysis, possibly with a sensitivity analysis.
But I imagine your disagreement goes deeper than that?
In general, I agree with the just estimate everything approach, but I imagine you have some arguments here.
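To make that concrete, here is a minimal sketch of the more sophisticated version described above; all distributions, parameters and names below are illustrative assumptions, not anyone’s actual model:

```python
# Minimal Monte-Carlo sketch: compare the value of a "marginal win" against an
# intervention under uncertainty, with a crude sensitivity analysis.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Assumed log-normal beliefs over the two quantities (arbitrary value units).
value_marginal_win = rng.lognormal(mean=0.0, sigma=1.0, size=n_samples)
value_intervention = rng.lognormal(mean=0.5, sigma=1.5, size=n_samples)

ratio = value_intervention / value_marginal_win
print("P(intervention > marginal win):", (ratio > 1).mean())
print("5th/50th/95th percentile of the ratio:", np.percentile(ratio, [5, 50, 95]))

# Crude sensitivity analysis: vary the uncertainty about the intervention's
# value and check how robust the conclusion is.
for sigma in (0.5, 1.0, 1.5, 2.0):
    v = rng.lognormal(mean=0.5, sigma=sigma, size=n_samples)
    print(f"sigma={sigma}: P(intervention > marginal win) = {(v / value_marginal_win > 1).mean():.2f}")
```

The point is just that the output is a distribution (and its sensitivity to the assumptions), rather than a single point estimate.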
Very cool! How does this compare to this earlier set of flashcards? Are the two disjoint, is one a subset of the other, or is the relation unknown?
I agree with this; the success rate for wikis appears to be fairly low, at least in my anecdotal experience: who has read articles on the Cause Prioritization wiki or the LessDead wiki or the LessWrong wiki? Even the EA Forum wiki or the LessWrong tags are barely read or updated (perhaps a merge of the two would be helpful?).
Unfortunately, the wildest inclusionists have lost, so we can’t just put everything onto Wikipedia, which would be the best option.
Good content gathers dust on those isolated wikis, sadly.
I wonder whether the lives of those moths were net negative. If the population was rising, then the number of moths dying as larvae might’ve been fairly small. I assume that OP’s apartment doesn’t have many predatory insects or animals that eat insects, so the risk of predation was fairly small. That leaves five causes of death: old age, hunger, thirst, disease and crushing.
Death by old age for moths is probably not that bad? They don’t have a very long life, so the process of dying also doesn’t seem very long to me, and couldn’t offset the quality of their life.
Hunger and thirst are likely worse, but I don’t know by how much, do starved moths die from heart problems? (Do moths have hearts?)
Disease in house moth colonies is probably fairly rare.
Crushing can be very fast or lead to a long, painful death. It seems like the worst of those options.
I think those moths probably had a better life than they would have had outside, just given the number of predatory insects out there; but I don’t think that this was enough to make their lives net-positive. It’s been a while since I’ve read about insect welfare, though, so if most young insects die by predation, I’d increase my credence in those moths having had net-positive lives.
More:
See also Tomasik 2017.
Assuming that interventions have log-normally distributed impact, compromising on interventions for the sake of public perception is not worth it unless it brings in exponentially more people.
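A minimal sketch of the intuition behind that claim, with purely illustrative numbers and the simplifying assumption that total impact is per-person impact times the number of people brought in:

```python
# Illustrative sketch: if per-person impact across interventions is log-normally
# distributed, the best intervention typically beats a more "palatable" one by a
# large multiplicative factor, so the compromise has to multiply the number of
# people brought in by at least that factor to break even.
import numpy as np

rng = np.random.default_rng(0)
impacts = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=1_000))[::-1]

best = impacts[0]
palatable = impacts[9]  # hypothetical: the compromise lands on the 10th-best intervention
print(f"best / 10th-best impact ratio: {best / palatable:.0f}")
print(f"people needed for the compromise to break even: {best / palatable:.0f}x as many")
```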
I would very much prefer it if one didn’t appeal to the consequences of believing in animal moral patienthood, and instead argued about whether animals in fact are moral patients or not, or whether the question is well-posed.
For this reason, I have strong-downvoted your comment.