It doesn't follow from there being no clear definition of something that there aren't clear positive and negative cases of it, only that it's blurry at the boundaries. For example, suppose the only things that existed were humans, rocks, and lab-grown human food. There still wouldn't be a clear definition of "conscious", but it would be clear that only humans were conscious, since lab-grown meat and veg and rocks clearly don't count on any interpretation of "consciousness". Maybe all mites obviously don't count too. I agree with you that BB can't just assume that about mites, though, and needs to provide an argument.
What about the argument that there are so many of them that even a tiny chance they are conscious is super-important?
Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but that even a very low X-risk is very bad. (By very low, I mean something like at least 1 in 1,000, not 1 in 10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
I think you are pointing at a real tension though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough, and raised enough by acceleration, that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against risks of echo chambers when hiring people or funding projects.
*I'm less sure that merely catastrophic biorisk from human misuse is low, sadly.
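To illustrate where the parenthetical threshold above comes from, here is a toy sketch in Python; the population figure is roughly right, but the probabilities are just the two thresholds mentioned, not actual risk estimates:

```python
# Toy sketch: expected deaths from extinction at two very different risk levels.
# Population is roughly the current world population; the probabilities are
# illustrative thresholds only, not actual estimates.
population = 8e9

for p_xrisk in (1e-3, 1e-17):
    expected_deaths = p_xrisk * population
    print(f"P(extinction) = {p_xrisk:g} -> expected deaths = {expected_deaths:,.8f}")

# At 1 in 1,000 the expected toll is in the millions, so treating the risk as
# "very bad" looks sensible even before counting lost future generations.
# At 1 in 10^17 it is a tiny fraction of one life, which is the regime where
# raw expected-value reasoning starts to look like a Pascal's mugging.
```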
I don't think you can possibly know whether they really are thinking of the unconditional probabilities, or whether they just have very different opinions and instincts from you about the whole domain, which make very different, genuinely conditional probabilities seem reasonable to them.
I don't find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple, formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the events are independent. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don't think "you are committing a fallacy" is a very good or fair way to describe "I disagree with your probabilities and I have some unevidenced speculation about why you are giving probabilities that are wrong".
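To make the distinction concrete, here is a minimal sketch (in Python, with made-up numbers that have nothing to do with the paper's actual estimates) of the difference between multiplying genuinely conditional probabilities, which is just the chain rule, and multiplying unconditional ones, which only gives the joint probability when the events are independent:

```python
# Chain rule: P(A and B and C) = P(A) * P(B | A) * P(C | A and B).
# This is always valid, however correlated the steps are.
p_a          = 0.5   # P(A) - made-up number
p_b_given_a  = 0.8   # P(B | A) - made-up number
p_c_given_ab = 0.9   # P(C | A and B) - made-up number
joint = p_a * p_b_given_a * p_c_given_ab
print(f"Chain-rule joint probability: {joint:.3f}")   # 0.360

# The fallacy being alleged would be using *marginal* probabilities in place
# of conditional ones. If the steps are positively correlated, P(B) < P(B | A),
# so the naive product understates the joint probability.
p_b_marginal = 0.5   # suppose B is much less likely if A has not happened
p_c_marginal = 0.6
naive = p_a * p_b_marginal * p_c_marginal
print(f"Naive product of marginals:   {naive:.3f}")   # 0.150
```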
"A fraudulent charity" does not sound to me much like "a charity that knowingly used a mildly overoptimistic figure for the benefits of one of its programs, even after admitting under pressure that it was wrong". Rather, I think the rhetorical force of the phrase comes mostly from the fact that, to any normal English speaker, it conjures up the image of a charity that is a scam in the sense that it is taking money, not doing charitable work with it, and instead just putting it into the CEO's (or whoever's) personal bank account. My feeling on this isn't really affected by whether the first thing meets the legal definition of fraud; probably it does. My guess is that many charities that almost no one would describe as "fraudulent organizations" have done something like this, or something equivalently bad, at some point in their histories, probably including some pretty effective ones.
Not that I think that means Singeria have done nothing wrong. If they agree the figure is clearly overoptimistic, they should change it. Not doing so is deceptive, and probably illegal. But I find it a bit irritating that you are using what seems to me to be somewhat deceptive rhetoric whilst attacking them for being deceptive.
They seem quite different to me: one is about AIs being able to talk like a smart human, and the other is about their ability to actually do novel scientific research and other serious intellectual tasks.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then there are something like 90-something to 30-something agree votes and 200 karma on Yarrow's comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people really dropped the ball here; people were struggling honestly to take accusations of bad behaviour seriously without getting into witch hunt dynamics.
"Because once a country embraces Statism, it usually begins an irreversible process of turning into a 'shithole country', as Trump himself eloquently put it."
Ignoring tiny islands (some of them with dubious levels of independence from the US), the 10 nations where government revenue is the largest share of GDP include Finland, France, Belgium and Austria, although also, yes, Libya and Lesotho. In general, the top of the list for government revenue as a % of GDP seems to be a mixture of small islands, petro states, and European welfare-state democracies, not places that are particularly impoverished or authoritarian: https://en.wikipedia.org/wiki/List_of_countries_by_government_spending_as_percentage_of_GDP#List_of_countries_(2024)
Meanwhile, the countries with the lowest levels of government revenue as a % of GDP that aren't currently having some kind of civil war are places like Bangladesh, Sri Lanka, Iran and (weirdly) Venezuela.
This isn't a perfect proxy for "statism" obviously, but I think it shows that things are more complicated than a simplistic libertarian analysis would suggest. Big states (in purely monetary terms) often seem to be a consequence of success. Maybe they also hold back further success, of course, but countries don't seem to actively degenerate once big states arrive (i.e. growth might slow, but they are not in permanent recession).
I'd distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they've always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil's website and so far this year it seems well down from 2018 but also well up from 2015, though two months isn't much of a sample.) Not that that means you're not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
It's worth saying that the fact that most arrows go up on the OWiD chart could just reflect two independent trends, one of growth rising almost everywhere and another of happiness rising almost everywhere, for two completely independent reasons. Without cases where negative or zero growth persists for a long time, it's hard to rule this out.
Those are reasonable points, but I'm not sure they are enough to overcome the generally reasonable heuristic that dramatic events will go better if the people involved anticipate them and have had a chance to think about them and plan responses beforehand, than if the events take them by surprise.
"Real gdp, adjusted for variable inflation, shows dead even growth." I asked about GDP per capita right now, not growth rates over time. Do you have a source showing that the US doesn't actually have higher GDP per capita?
Inequality is probably part of the story, but I had a vague sense that median real wages are higher in the US. Do you have a source saying that's wrong? Or that the difference goes away when you adjust for purchasing power?
Usually we are the ones accused (not always unfairly, to be honest, given Yudkowsky's TIME article) of being so fanatical we'd risk nuclear war to further our nefarious long-term goals. The claim that nuclear war is preferable to us is novel, at least.
Also, I don't like Scott Alexander's politics at all, but in the interests of strict accuracy I don't think he is a monarchist, or particularly sympathetic to monarchism (except insofar as he finds some individuals with far-right views who like monarchy kind of endearing). If anything, I had the impression that whilst Scott has certainly been influenced by and promoted the far right in many ways, a view that monarchism is just really, really silly was one of the things that genuinely kept him from regarding himself as fully in sympathy with the neo-reactionaries.
"In response, Epoch AI created Frontier Math, a benchmark of insanely hard mathematical problems. The easiest 25% are similar to Olympiad-level problems. The most difficult 25% are, according to Fields Medalist Terence Tao, 'extremely challenging,' and would typically need an expert in that branch of mathematics to solve them.
Previous models, including GPT-o1, could hardly solve any of these questions.[20] In December 2024, OpenAI claimed that GPT-o3 could solve 25%."
I think if you're going to mention the seemingly strong performance of GPT-o3 on Frontier Math, it's worth pointing out the extremely poor performance of all the LLMs tested when they were given Math Olympiad questions more recently, though they did use o3-mini rather than o3, so I guess it's not a direct comparison: https://garymarcus.substack.com/p/reports-of-llms-mastering-math-have
"The USA Math Olympiad is an extremely challenging math competition for the top US high school students; the top scorers get prizes and an invitation to the International Math Olympiad. The USAMO was held this year March 19-20. Hours after it was completed, so there could be virtually no chance of data leakage, a team of scientists gave the problems to some of the top large language models, whose mathematical and reasoning abilities have been loudly proclaimed: o3-Mini, o1-Pro, DeepSeek R1, QwQ-32B, Gemini-2.0-Flash-Thinking-Exp, and Claude-3.7-Sonnet-Thinking. The proofs output by all these models were evaluated by experts. The results were dismal: None of the AIs scored higher than 5% overall."
What's DEAM?
Another, very obvious reason is just that more EA people are near real power now than in 2018, and with serious involvement in power and politics come tactical incentives to avoid saying what you actually think. I think that is probably a lot of what is going on with Anthropic people playing down their EA connections.
I don't think it's absolutely clear from the one-sentence quote alone that Amanda was claiming a personal lack of knowledge of EA (which would obviously be deceptive if she was), though I agree that is one reasonable reading. She has her GWWC membership fairly prominently displayed on her personal website, so if she's trying to hide being or having been EA, she's not doing so very strongly.
What would be evidence for sentience in your view?