I follow Crocker’s rules.
niplav
Why I don’t write as much as I used to (Brian Tomasik, 2022)
(Note: did not downvote)
Your central question appears interesting and important to me: Has Anthropic joined the arms race for advanced AI? If yes, why?
(And taking a conflict-theoretic stance by default toward new AI startups is perhaps good, given the evidence one has received via DeepMind/OpenAI.)
So, I’d join in the call to ask e.g. Anthropic (but also other startups like Conjecture, Adept AI and Aligned AI) for their plans to avoid race dynamics, and for their current implementation of those plans. However, I believe it’s not very likely that Anthropic in particular will comment on this.
However, your post mostly doesn’t flesh out the question. Instead, it not-quite-attacks-but-also-doesn’t-not-attack Anthropic (“Even when I’m mostly talking to AI ethicists now, I still regarded Anthropic as something not evil”), doesn’t fully flesh out the reasons why you’re asking the question (“I feel there’s a no-confidence case for us trusting Anthropic to do what they are doing well”), and talks a lot about your emotional state. (I don’t think that talking about your emotional state is necessarily bad, but I’d like accusations, questions and statements about emotion to be separated if possible.)
See my thread for more questions. I feel traumatized by EA, by this duplicity (that I have seen “rising up” before this, see my other threads). I’m searching for a job and I’m scared of people. Because this is not the first time, not at all. Somehow tech people are “number one” at this. And EA/tech people seem to be “number 0”, even better at Machiavellianism and duplicity than Peter Thiel or Musk. At least, Musk openly says he’s “red-pilled” and talks to Putin. What EA/safety is doing is kinda similar but hidden under the veil of “safety”.
I don’t understand this paragraph, for example. Why do you believe that EA/tech people are better at Machiavellianism than those two? And who exactly are the “EA/tech people” here? That would be good to know.
Iqisa: A Library For Handling Forecasting Datasets
[UNENDORSED] Reward Long Content
There is Little Evidence on Question Decomposition
I have a slightly negative reaction to this kind of thinking.
At the limit, there is a trade-off between reporting my beliefs without bias in the sampling (i.e., without lies by omission) and trying to convince people. If I mainly talk about how recommender systems are having bad effects on the discourse landscape because they are misaligned, I am filtering evidence (and thereby imposing very high epistemic costs on my discussion partner in the process!).
In doing so, I would not only potentially be making the outside epistemic environment worse, but might also be damaging my own epistemics (or those of the EA community), via Elephant-in-the-Brain-like dynamics or via the conjecture that if you say something long enough, you become more likely to believe it yourself.
A good idea that came out of the discussion (point 3, “Bayesian Honesty”) around Meta-Honesty was the heuristic that, when talking to another person, one shouldn’t give information that would, in expectation, cause the other person to update in the wrong direction. I think the above proposals would sometimes skirt this line (and cross it when considering beliefs about the EA community, such as “EA mainly worries about recommender systems increasing political polarization”).
Perhaps this is just a good reason for me not to be a spokesperson about AI risk (being, probably inappropriately, married to the idea that truth is to be valued above everything else), but I wish that people would be very thoughtful about reporting misleading reasons why large parts of the EA community are extremely freaked out about AI (and not, as the examples would suggest, just a bit worried).
Range and Forecasting Accuracy
I’d be curious about a list of topics they would like others to investigate/continue investigating, or a list of the most important open questions.
The last person to have a case of smallpox, Ali Maow Maalin, dedicated years of his life to eradicating polio in the region.
On July 22nd, 2013, he died of malaria while traveling again after polio had been reintroduced.
Value of information seems to exceed the potential damage done at these sample sizes for me.
Two Reasons For Restarting the Testing of Nuclear Weapons
Maybe we don’t just want to optimize the messaging, but also the messengers: having charismatic & likeable people talk about this stuff might be good (to what extent is this already happening? Are MacAskill & Ord as good spokespeople as they are researchers?).
Furthermore, taking the WaitButWhy approach, with easily understandable visualizations, sounds like a good approach, I agree.
Another way to do this is the way the rationality community does it: its highest-status members are often pseudonymous internet writers, sometimes with no visible credentials and sometimes with active disdain for credentials (following the observation that argument screens off authority).
Gwern has no (visible) credentials (unless you count the huge & excellent website as one), Yudkowsky disdains them, Scott Alexander sometimes brings them up, Applied Divinity Studies and dynomight and Fantastic Anachronism are all pseudonymous and probably prefer to keep it that way…
I think it’s much easier to be heard & respected in the EA community purely through online writing & content production (for which you “only” need intelligence, conscientiousness & time, but rarely connections) than in most other communities (and especially academia).
I strongly agree with you, and would add that long content like Gwern’s (or Essays on Reducing Suffering, or PredictionBook, or Wikipedia, etc.) is important as epistemic infrastructure: it has the added value of constant maintenance, which allows it to achieve depth and scope usually not found in blogs. I think this kind of maintenance is really, really important, especially for long-term content. I mourn the times when people would put serious effort into putting together an FAQ for things—truly weapons from a more civilized age.
I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned—and perhaps as Assange says, not a moment too soon. (But isn’t that sad? Isn’t it a terrible ROI for one’s time?) On the other hand, the best blogs always seem to be building something: they are rough drafts—works in progress.
—Gwern, “About This Website”, 2021
On the other hand, most blogs seem to me to be epistemic fireworks (or, more charitably, epistemic tinder that sparks a conversation): read mostly when released, and then slowly bit-rotting away until the link goes stale. (Why don’t people care more about their content when they put so much effort into producing it‽)
I find it ironic that the FTX Long Term Future Fund is giving out a prize to a medium that is so often so ephemeral, so not long-term, as blogs (what value can I gain from reading the whole archive of Marginal Revolution? A lot, probably, but extremely little value per post; I’m likely better off reading Wikipedia). What’s next? A $10k prize for the best Discord message about longtermism? The best tweet? (“It’s about the outreach! Many more people read tweets and Discord messages!”)
I find this comment much more convincing than the top-level post.
Note that it’s much easier to improve existing pages than to add new ones.
More EA-relevant Wikipedia articles that don’t yet exist:
Place premium
Population Ethics pages
Sadistic conclusion
Critical-threshold approaches
Cantril Ladder
Axelrod’s Meta-Norm
Open-source game theory
Humane Technology
Chris Olah
Machine Learning Interpretability
Circuit
Induction head
Lottery Ticket Hypothesis
Grokking
Deep Double Descent
Nanosystems: Molecular Machinery Manufacturing and Computation
Global Priorities Institute
Scaling Laws for Large Language Models
Some of these articles are about AI capabilities, so perhaps not as great to write about.
Additionally, the following EA-relevant articles could be greatly improved:
Causal Model (in general, the causal inference articles on Wikipedia are surprisingly lacking)
I strongly agree with the compliments thing!
Forecasters: What Do They Know? Do They Know Things?? Let’s Find Out!
This sequence is spam and should be deleted.
I would very much prefer it if one didn’t appeal to the consequences of believing in animal moral patienthood, and instead argued about whether animals in fact are moral patients, or whether the question is well-posed.
For this reason, I have strong-downvoted your comment.